id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1704.02197 | Amarjot Singh | Vibin Vijay, Raghunath Vp, Amarjot Singh, SN Omar | Variance Based Moving K-Means Algorithm | Accepted at the 7th IEEE International Advance Computing Conference
(IACC-2017) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is a useful exploratory data analysis method with wide applicability in
multiple fields. However, data clustering relies greatly on the initialization of
cluster centers, which can result in large intra-cluster variance and dead
centers, thereby leading to sub-optimal solutions. This paper proposes a
novel variance based version of the conventional Moving K-Means (MKM) algorithm
called Variance Based Moving K-Means (VMKM) that can partition data into
optimal homogeneous clusters, irrespective of cluster initialization. The
algorithm utilizes a novel distance metric and a unique data element selection
criterion to transfer the selected elements between clusters to achieve low
intra-cluster variance and subsequently avoid dead centers. Quantitative and
qualitative comparison with various clustering techniques is performed on four
datasets selected from image processing, bioinformatics, remote sensing, and the
stock market. An extensive analysis highlights the superior
performance of the proposed method over other techniques.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 12:10:39 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2017 13:03:54 GMT"
}
] | 2017-05-15T00:00:00 | [
[
"Vijay",
"Vibin",
""
],
[
"Vp",
"Raghunath",
""
],
[
"Singh",
"Amarjot",
""
],
[
"Omar",
"SN",
""
]
] | TITLE: Variance Based Moving K-Means Algorithm
ABSTRACT: Clustering is a useful exploratory data analysis method with wide applicability in
multiple fields. However, data clustering relies greatly on the initialization of
cluster centers, which can result in large intra-cluster variance and dead
centers, thereby leading to sub-optimal solutions. This paper proposes a
novel variance based version of the conventional Moving K-Means (MKM) algorithm
called Variance Based Moving K-Means (VMKM) that can partition data into
optimal homogeneous clusters, irrespective of cluster initialization. The
algorithm utilizes a novel distance metric and a unique data element selection
criterion to transfer the selected elements between clusters to achieve low
intra-cluster variance and subsequently avoid dead centers. Quantitative and
qualitative comparison with various clustering techniques is performed on four
datasets selected from image processing, bioinformatics, remote sensing, and the
stock market. An extensive analysis highlights the superior
performance of the proposed method over other techniques.
| no_new_dataset | 0.952486 |
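The abstract above does not spell out VMKM's distance metric or element-selection criterion, but the quantity it optimizes is standard. The sketch below, a minimal NumPy illustration rather than the paper's algorithm, computes the per-cluster intra-cluster variance that MKM-style methods drive down by transferring elements between clusters.

```python
import numpy as np

def intra_cluster_variance(X, labels, centers):
    """Mean squared distance of each cluster's members to its center."""
    return np.array([
        np.mean(np.sum((X[labels == k] - centers[k]) ** 2, axis=1))
        if np.any(labels == k) else 0.0
        for k in range(len(centers))
    ])

# Toy data: two well-separated blobs, clustered with plain Lloyd iterations.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(10):
    labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(intra_cluster_variance(X, labels, centers))  # low values = tight clusters
```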
1705.04641 | Hassan Foroosh | Marjaneh Safaei and Hassan Foroosh | Single Image Action Recognition by Predicting Space-Time Saliency | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach based on deep Convolutional Neural Networks (CNN)
to recognize human actions in still images by predicting the future motion, and
detecting the shape and location of the salient parts of the image. We make the
following major contributions to this important area of research: (i) We use
the predicted future motion in the static image (Walker et al., 2015) as a
means of compensating for the missing temporal information, while using the
saliency map to represent the spatial information in the form of location
and shape of what is predicted as significant. (ii) We cast action
classification in static images as a domain adaptation problem by transfer
learning. We first map the input static image to a new domain that we refer to
as the Predicted Optical Flow-Saliency Map domain (POF-SM), and then fine-tune
the layers of a deep CNN model trained on classifying the ImageNet dataset to
perform action classification in the POF-SM domain. (iii) We tested our method
on the popular Willow dataset. But unlike existing methods, we also tested on a
more realistic and challenging dataset of over 2M still images that we
collected and labeled by taking random frames from the UCF-101 video dataset.
We call our dataset the UCF Still Image dataset or UCFSI-101 in short. Our
results outperform the state of the art.
| [
{
"version": "v1",
"created": "Fri, 12 May 2017 16:03:33 GMT"
}
] | 2017-05-15T00:00:00 | [
[
"Safaei",
"Marjaneh",
""
],
[
"Foroosh",
"Hassan",
""
]
] | TITLE: Single Image Action Recognition by Predicting Space-Time Saliency
ABSTRACT: We propose a novel approach based on deep Convolutional Neural Networks (CNN)
to recognize human actions in still images by predicting the future motion, and
detecting the shape and location of the salient parts of the image. We make the
following major contributions to this important area of research: (i) We use
the predicted future motion in the static image (Walker et al., 2015) as a
means of compensating for the missing temporal information, while using the
saliency map to represent the spatial information in the form of location
and shape of what is predicted as significant. (ii) We cast action
classification in static images as a domain adaptation problem by transfer
learning. We first map the input static image to a new domain that we refer to
as the Predicted Optical Flow-Saliency Map domain (POF-SM), and then fine-tune
the layers of a deep CNN model trained on classifying the ImageNet dataset to
perform action classification in the POF-SM domain. (iii) We tested our method
on the popular Willow dataset. But unlike existing methods, we also tested on a
more realistic and challenging dataset of over 2M still images that we
collected and labeled by taking random frames from the UCF-101 video dataset.
We call our dataset the UCF Still Image dataset or UCFSI-101 in short. Our
results outperform the state of the art.
| no_new_dataset | 0.592142 |
1602.00554 | Björn Weghenkel | Björn Weghenkel and Asja Fischer and Laurenz Wiskott | Graph-based Predictable Feature Analysis | null | null | 10.1007/s10994-017-5632-x | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose graph-based predictable feature analysis (GPFA), a new method for
unsupervised learning of predictable features from high-dimensional time
series, where high predictability is understood very generically as low
variance in the distribution of the next data point given the previous ones. We
show how this measure of predictability can be understood in terms of graph
embedding as well as how it relates to the information-theoretic measure of
predictive information in special cases. We confirm the effectiveness of GPFA
on different datasets, comparing it to three existing algorithms with similar
objectives---namely slow feature analysis, forecastable component analysis, and
predictable feature analysis---to which GPFA shows very competitive results.
| [
{
"version": "v1",
"created": "Mon, 1 Feb 2016 15:11:48 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2017 12:41:25 GMT"
}
] | 2017-05-12T00:00:00 | [
[
"Weghenkel",
"Björn",
""
],
[
"Fischer",
"Asja",
""
],
[
"Wiskott",
"Laurenz",
""
]
] | TITLE: Graph-based Predictable Feature Analysis
ABSTRACT: We propose graph-based predictable feature analysis (GPFA), a new method for
unsupervised learning of predictable features from high-dimensional time
series, where high predictability is understood very generically as low
variance in the distribution of the next data point given the previous ones. We
show how this measure of predictability can be understood in terms of graph
embedding as well as how it relates to the information-theoretic measure of
predictive information in special cases. We confirm the effectiveness of GPFA
on different datasets, comparing it to three existing algorithms with similar
objectives---namely slow feature analysis, forecastable component analysis, and
predictable feature analysis---to which GPFA shows very competitive results.
| no_new_dataset | 0.948202 |
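The notion of predictability in the abstract above, low variance of the next data point given the previous ones, can be made concrete with a one-step linear predictor. The sketch below illustrates that measure only, not the paper's graph-embedding formulation; the variable names are ours.

```python
import numpy as np

def prediction_variance(Z):
    """Residual variance of a least-squares one-step predictor Z[t] -> Z[t+1].

    Low values mean a feature is highly predictable, the quantity that
    GPFA-style objectives seek to minimize.
    """
    past, future = Z[:-1], Z[1:]
    W, *_ = np.linalg.lstsq(past, future, rcond=None)
    residuals = future - past @ W
    return residuals.var(axis=0)

rng = np.random.default_rng(1)
t = np.arange(500)
Z = np.stack([np.sin(0.05 * t),               # smooth, predictable signal
              rng.normal(size=500)], axis=1)  # white noise, unpredictable
print(prediction_variance(Z))  # first entry far smaller than the second
```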
1705.03865 | Akshay Gupta | Akshay Kumar Gupta | Survey of Visual Question Answering: Datasets and Techniques | 10 pages, 3 figures, 3 tables Added references, corrected typos, made
references less wordy | null | null | null | cs.CL cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual question answering (or VQA) is a new and exciting problem that
combines natural language processing and computer vision techniques. We present
a survey of the various datasets and models that have been used to tackle this
task. The first part of the survey details the various datasets for VQA and
compares them along some common factors. The second part of this survey details
the different approaches for VQA, classified into four types: non-deep learning
models, deep learning models without attention, deep learning models with
attention, and other models which do not fit into the first three. Finally, we
compare the performances of these approaches and provide some directions for
future work.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 17:30:17 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2017 06:46:52 GMT"
}
] | 2017-05-12T00:00:00 | [
[
"Gupta",
"Akshay Kumar",
""
]
] | TITLE: Survey of Visual Question Answering: Datasets and Techniques
ABSTRACT: Visual question answering (or VQA) is a new and exciting problem that
combines natural language processing and computer vision techniques. We present
a survey of the various datasets and models that have been used to tackle this
task. The first part of the survey details the various datasets for VQA and
compares them along some common factors. The second part of this survey details
the different approaches for VQA, classified into four types: non-deep learning
models, deep learning models without attention, deep learning models with
attention, and other models which do not fit into the first three. Finally, we
compare the performances of these approaches and provide some directions for
future work.
| no_new_dataset | 0.942295 |
1705.04003 | Hoang Pham | Thai-Hoang Pham, Phuong Le-Hong | Content-based Approach for Vietnamese Spam SMS Filtering | 4 pages, IALP 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Short Message Service (SMS) spam is a serious problem in Vietnam because of
the availability of very cheap pre-paid SMS packages. There are some systems to
detect and filter spam messages for English, most of which use machine learning
techniques to analyze the content of messages and classify them. For
Vietnamese, there is some research on spam email filtering but none focused on
SMS. In this work, we propose the first system for filtering Vietnamese spam
SMS. We first propose an appropriate preprocessing method since existing tools
for Vietnamese preprocessing cannot give good accuracy on our dataset. We then
experiment with vector representations and classifiers to find the best model
for this problem. Our system achieves an accuracy of 94% when labelling spam
messages while the misclassification rate of legitimate messages is relatively
small, only about 0.4%. This is an encouraging result compared to results for
English and can serve as a strong baseline for the future development of
Vietnamese SMS spam prevention systems.
| [
{
"version": "v1",
"created": "Thu, 11 May 2017 04:04:33 GMT"
}
] | 2017-05-12T00:00:00 | [
[
"Pham",
"Thai-Hoang",
""
],
[
"Le-Hong",
"Phuong",
""
]
] | TITLE: Content-based Approach for Vietnamese Spam SMS Filtering
ABSTRACT: Short Message Service (SMS) spam is a serious problem in Vietnam because of
the availability of very cheap pre-paid SMS packages. There are some systems to
detect and filter spam messages for English, most of which use machine learning
techniques to analyze the content of messages and classify them. For
Vietnamese, there is some research on spam email filtering but none focused on
SMS. In this work, we propose the first system for filtering Vietnamese spam
SMS. We first propose an appropriate preprocessing method since existing tools
for Vietnamese preprocessing cannot give good accuracy on our dataset. We then
experiment with vector representations and classifiers to find the best model
for this problem. Our system achieves an accuracy of 94% when labelling spam
messages while the misclassification rate of legitimate messages is relatively
small, only about 0.4%. This is an encouraging result compared to results for
English and can serve as a strong baseline for the future development of
Vietnamese SMS spam prevention systems.
| no_new_dataset | 0.930899 |
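The abstract describes a standard content-based pipeline: preprocess, vectorize, classify. A minimal hedged sketch with scikit-learn follows; the toy English corpus and model choice are stand-ins, since the paper's Vietnamese-specific preprocessing and dataset are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in corpus (1 = spam).
texts = ["win a free prize now", "are we meeting tomorrow",
         "claim your free credit today", "lunch at noon?"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)
print(model.predict(["free prize credit"]))  # -> [1]
```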
1705.04249 | Li Heng Liou | Cheng-Shang Chang, Chia-Tai Chang, Duan-Shin Lee and Li-Heng Liou | K-sets+: a Linear-time Clustering Algorithm for Data Points with a
Sparse Similarity Measure | null | null | null | null | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we first propose a new iterative algorithm, called the K-sets+
algorithm for clustering data points in a semi-metric space, where the distance
measure does not necessarily satisfy the triangular inequality. We show that
the K-sets+ algorithm converges in a finite number of iterations and it retains
the same performance guarantee as the K-sets algorithm for clustering data
points in a metric space. We then extend the applicability of the K-sets+
algorithm from data points in a semi-metric space to data points that only have
a symmetric similarity measure. Such an extension leads to a great reduction in
computational complexity. In particular, for an n * n similarity matrix with m
nonzero elements in the matrix, the computational complexity of the K-sets+
algorithm is O((Kn + m)I), where I is the number of iterations. The memory
complexity to achieve that computational complexity is O(Kn + m). As such, both
the computational complexity and the memory complexity are linear in n when the
n * n similarity matrix is sparse, i.e., m = O(n). We also conduct various
experiments to show the effectiveness of the K-sets+ algorithm by using a
synthetic dataset from the stochastic block model and a real network from the
WonderNetwork website.
| [
{
"version": "v1",
"created": "Thu, 11 May 2017 15:39:48 GMT"
}
] | 2017-05-12T00:00:00 | [
[
"Chang",
"Cheng-Shang",
""
],
[
"Chang",
"Chia-Tai",
""
],
[
"Lee",
"Duan-Shin",
""
],
[
"Liou",
"Li-Heng",
""
]
] | TITLE: K-sets+: a Linear-time Clustering Algorithm for Data Points with a
Sparse Similarity Measure
ABSTRACT: In this paper, we first propose a new iterative algorithm, called the K-sets+
algorithm for clustering data points in a semi-metric space, where the distance
measure does not necessarily satisfy the triangular inequality. We show that
the K-sets+ algorithm converges in a finite number of iterations and it retains
the same performance guarantee as the K-sets algorithm for clustering data
points in a metric space. We then extend the applicability of the K-sets+
algorithm from data points in a semi-metric space to data points that only have
a symmetric similarity measure. Such an extension leads to a great reduction in
computational complexity. In particular, for an n * n similarity matrix with m
nonzero elements in the matrix, the computational complexity of the K-sets+
algorithm is O((Kn + m)I), where I is the number of iterations. The memory
complexity to achieve that computational complexity is O(Kn + m). As such, both
the computational complexity and the memory complexity are linear in n when the
n * n similarity matrix is sparse, i.e., m = O(n). We also conduct various
experiments to show the effectiveness of the K-sets+ algorithm by using a
synthetic dataset from the stochastic block model and a real network from the
WonderNetwork website.
| no_new_dataset | 0.955486 |
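The linear-in-m cost claimed above comes from touching only the nonzero entries of the similarity matrix per iteration. The sketch below is a rough illustration rather than the K-sets+ algorithm itself: one pass computing point-to-cluster similarity sums over a sparse n x n matrix via a single sparse-dense product.

```python
import numpy as np
from scipy.sparse import random as sparse_random

n, K = 1000, 4
S = sparse_random(n, n, density=0.01, format="csr", random_state=0)
S = (S + S.T) / 2                      # symmetric similarity; m = S.nnz
labels = np.random.default_rng(0).integers(0, K, size=n)

M = np.zeros((n, K))                   # one-hot cluster membership
M[np.arange(n), labels] = 1.0

# Similarity of every point to every cluster in one pass over the
# nonzeros: S @ M costs O(K * m), i.e., linear in m for fixed K.
point_to_cluster = np.asarray(S @ M)
new_labels = point_to_cluster.argmax(axis=1)
print(new_labels[:10])
```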
1705.04258 | Alexander Kolesnikov | Amelie Royer, Alexander Kolesnikov, Christoph H. Lampert | Probabilistic Image Colorization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a probabilistic technique for colorizing grayscale natural images.
In light of the intrinsic uncertainty of this task, the proposed probabilistic
framework has numerous desirable properties. In particular, our model is able
to produce multiple plausible and vivid colorizations for a given grayscale
image and is one of the first colorization models to provide a proper
stochastic sampling scheme. Moreover, our training procedure is supported by a
rigorous theoretical framework that does not require any ad hoc heuristics and
allows for efficient modeling and learning of the joint pixel color
distribution. We demonstrate strong quantitative and qualitative experimental
results on the CIFAR-10 dataset and the challenging ILSVRC 2012 dataset.
| [
{
"version": "v1",
"created": "Thu, 11 May 2017 16:09:16 GMT"
}
] | 2017-05-12T00:00:00 | [
[
"Royer",
"Amelie",
""
],
[
"Kolesnikov",
"Alexander",
""
],
[
"Lampert",
"Christoph H.",
""
]
] | TITLE: Probabilistic Image Colorization
ABSTRACT: We develop a probabilistic technique for colorizing grayscale natural images.
In light of the intrinsic uncertainty of this task, the proposed probabilistic
framework has numerous desirable properties. In particular, our model is able
to produce multiple plausible and vivid colorizations for a given grayscale
image and is one of the first colorization models to provide a proper
stochastic sampling scheme. Moreover, our training procedure is supported by a
rigorous theoretical framework that does not require any ad hoc heuristics and
allows for efficient modeling and learning of the joint pixel color
distribution. We demonstrate strong quantitative and qualitative experimental
results on the CIFAR-10 dataset and the challenging ILSVRC 2012 dataset.
| no_new_dataset | 0.949295 |
1705.04288 | Hokchhay Tann | Hokchhay Tann, Soheil Hashemi, Iris Bahar, Sherief Reda | Hardware-Software Codesign of Accurate, Multiplier-free Deep Neural
Networks | 6 pages | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While Deep Neural Networks (DNNs) push the state-of-the-art in many machine
learning applications, they often require millions of expensive floating-point
operations for each input classification. This computation overhead limits the
applicability of DNNs to low-power, embedded platforms and incurs high cost in
data centers. This motivates recent interest in designing low-power,
low-latency DNNs based on fixed-point, ternary, or even binary data precision.
While recent works in this area offer promising results, they often lead to
large accuracy drops when compared to the floating-point networks. We propose a
novel approach to map floating-point based DNNs to 8-bit dynamic fixed-point
networks with integer power-of-two weights with no change in network
architecture. Our dynamic fixed-point DNNs allow different radix points between
layers. During inference, power-of-two weights allow multiplications to be
replaced with arithmetic shifts, while the 8-bit fixed-point representation
simplifies both the buffer and adder design. In addition, we propose a hardware
accelerator design to achieve low-power, low-latency inference with
insignificant degradation in accuracy. Using our custom accelerator design with
the CIFAR-10 and ImageNet datasets, we show that our method achieves
significant power and energy savings while increasing the classification
accuracy.
| [
{
"version": "v1",
"created": "Thu, 11 May 2017 17:01:44 GMT"
}
] | 2017-05-12T00:00:00 | [
[
"Tann",
"Hokchhay",
""
],
[
"Hashemi",
"Soheil",
""
],
[
"Bahar",
"Iris",
""
],
[
"Reda",
"Sherief",
""
]
] | TITLE: Hardware-Software Codesign of Accurate, Multiplier-free Deep Neural
Networks
ABSTRACT: While Deep Neural Networks (DNNs) push the state-of-the-art in many machine
learning applications, they often require millions of expensive floating-point
operations for each input classification. This computation overhead limits the
applicability of DNNs to low-power, embedded platforms and incurs high cost in
data centers. This motivates recent interest in designing low-power,
low-latency DNNs based on fixed-point, ternary, or even binary data precision.
While recent works in this area offer promising results, they often lead to
large accuracy drops when compared to the floating-point networks. We propose a
novel approach to map floating-point based DNNs to 8-bit dynamic fixed-point
networks with integer power-of-two weights with no change in network
architecture. Our dynamic fixed-point DNNs allow different radix points between
layers. During inference, power-of-two weights allow multiplications to be
replaced with arithmetic shifts, while the 8-bit fixed-point representation
simplifies both the buffer and adder design. In addition, we propose a hardware
accelerator design to achieve low-power, low-latency inference with
insignificant degradation in accuracy. Using our custom accelerator design with
the CIFAR-10 and ImageNet datasets, we show that our method achieves
significant power and energy savings while increasing the classification
accuracy.
| no_new_dataset | 0.944587 |
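The key trick in the abstract, integer power-of-two weights so that multiplications become arithmetic shifts, is easy to demonstrate. The sketch below is a simplified illustration under our own conventions (no per-layer radix points, no 8-bit saturation), not the paper's quantization procedure.

```python
import numpy as np

def to_power_of_two(w):
    """Quantize real weights to the nearest signed power of two."""
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + 1e-12)).astype(int)
    return sign, exp

def shift_multiply(x_fixed, sign, exp):
    """Compute x * w for w = sign * 2**exp using only shifts:
    a left shift when exp >= 0, a right shift when exp < 0."""
    shifted = np.where(exp >= 0,
                       x_fixed << np.maximum(exp, 0),
                       x_fixed >> np.maximum(-exp, 0))
    return sign * shifted

w = np.array([0.24, -0.5, 1.9])        # quantizes to 0.25, -0.5, 2.0
sign, exp = to_power_of_two(w)
x = np.array([64, 64, 64])             # integer fixed-point activations
print(shift_multiply(x, sign, exp))    # [ 16. -32. 128.]
```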
1408.5286 | Lorenzo Livi | Lorenzo Livi | Designing labeled graph classifiers by exploiting the Rényi entropy of
the dissimilarity representation | Revised version | null | 10.3390/e19050216 | null | cs.CV cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing patterns as labeled graphs is becoming increasingly common in
the broad field of computational intelligence. Accordingly, a wide repertoire
of pattern recognition tools, such as classifiers and knowledge discovery
procedures, are nowadays available and tested for various datasets of labeled
graphs. However, the design of effective learning procedures operating in the
space of labeled graphs is still a challenging problem, especially from the
computational complexity viewpoint. In this paper, we present a major
improvement of a general-purpose classifier for graphs, which is built on an
interplay among dissimilarity representation, clustering,
information-theoretic techniques, and evolutionary optimization algorithms. The
improvement focuses on a specific key subroutine devised to compress the input
data. We prove different theorems which are fundamental to the setting of the
parameters controlling such a compression operation. We demonstrate the
effectiveness of the resulting classifier by benchmarking the developed
variants on well-known datasets of labeled graphs, considering as distinct
performance indicators the classification accuracy, computing time, and
parsimony in terms of structural complexity of the synthesized classification
models. The results show state-of-the-art standards in terms of test set
accuracy and a considerable speed-up in computing time.
| [
{
"version": "v1",
"created": "Fri, 22 Aug 2014 13:03:00 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jan 2015 15:29:31 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Jan 2016 23:44:25 GMT"
},
{
"version": "v4",
"created": "Fri, 11 Mar 2016 13:18:17 GMT"
},
{
"version": "v5",
"created": "Fri, 31 Mar 2017 19:26:16 GMT"
},
{
"version": "v6",
"created": "Tue, 4 Apr 2017 20:48:07 GMT"
},
{
"version": "v7",
"created": "Thu, 20 Apr 2017 14:40:11 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Livi",
"Lorenzo",
""
]
] | TITLE: Designing labeled graph classifiers by exploiting the Rényi entropy of
the dissimilarity representation
ABSTRACT: Representing patterns as labeled graphs is becoming increasingly common in
the broad field of computational intelligence. Accordingly, a wide repertoire
of pattern recognition tools, such as classifiers and knowledge discovery
procedures, are nowadays available and tested for various datasets of labeled
graphs. However, the design of effective learning procedures operating in the
space of labeled graphs is still a challenging problem, especially from the
computational complexity viewpoint. In this paper, we present a major
improvement of a general-purpose classifier for graphs, which is built on an
interplay among dissimilarity representation, clustering,
information-theoretic techniques, and evolutionary optimization algorithms. The
improvement focuses on a specific key subroutine devised to compress the input
data. We prove different theorems which are fundamental to the setting of the
parameters controlling such a compression operation. We demonstrate the
effectiveness of the resulting classifier by benchmarking the developed
variants on well-known datasets of labeled graphs, considering as distinct
performance indicators the classification accuracy, computing time, and
parsimony in terms of structural complexity of the synthesized classification
models. The results show state-of-the-art standards in terms of test set
accuracy and a considerable speed-up in computing time.
| no_new_dataset | 0.942135 |
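For readers unfamiliar with the quantity in the title: the Rényi entropy of order alpha generalizes Shannon entropy and is cheap to compute from a probability vector. A minimal sketch follows; how the paper derives that vector from the dissimilarity representation is not reproduced here.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """H_alpha(p) = log(sum_i p_i**alpha) / (1 - alpha).

    Recovers Shannon entropy in the limit alpha -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p + 1e-12))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

print(renyi_entropy([0.25, 0.25, 0.25, 0.25], alpha=2))  # log 4 ~= 1.386
```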
1509.08267 | Nandini Singhal | Nandini Singhal, Sathya Peri, Subrahmanyam Kalyanasundaram | Multi-threaded Graph Coloring Algorithm for Shared Memory Architecture | null | null | 10.1145/3007748.3018281 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present multi-threaded algorithms for graph coloring
suitable to the shared memory programming model. We modify an existing
algorithm widely used in the literature and prove the correctness of the
modified algorithm. We also propose a new approach to solve the problem of
coloring using locks. Using datasets from real world graphs, we evaluate the
performance of the algorithms on the Intel platform. We compare the performance
of the sequential approach versus our proposed approach and analyze the speedup
obtained against the existing algorithm from the literature. The results show
that the speedup obtained is substantial. We also provide a direction for
future work towards improving the performance further in terms of different
metrics.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 11:03:43 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Oct 2015 17:27:14 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Singhal",
"Nandini",
""
],
[
"Peri",
"Sathya",
""
],
[
"Kalyanasundaram",
"Subrahmanyam",
""
]
] | TITLE: Multi-threaded Graph Coloring Algorithm for Shared Memory Architecture
ABSTRACT: In this paper, we present multi-threaded algorithms for graph coloring
suitable to the shared memory programming model. We modify an existing
algorithm widely used in the literature and prove the correctness of the
modified algorithm. We also propose a new approach to solve the problem of
coloring using locks. Using datasets from real world graphs, we evaluate the
performance of the algorithms on the Intel platform. We compare the performance
of the sequential approach versus our proposed approach and analyze the speedup
obtained against the existing algorithm from the literature. The results show
that the speedup obtained is substantial. We also provide a direction for
future work towards improving the performance further in terms of different
metrics.
| no_new_dataset | 0.949809 |
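The abstract does not include the modified algorithm itself, so as a point of reference, here is the classic sequential greedy coloring that shared-memory schemes typically parallelize; the paper's lock-based variant is not reproduced.

```python
def greedy_coloring(adj):
    """Assign each vertex the smallest color absent among its neighbors."""
    colors = {}
    for v in adj:
        taken = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

# A 4-cycle is 2-colorable.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_coloring(adj))  # {0: 0, 1: 1, 2: 0, 3: 1}
```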
1608.02117 | Ivan Vuli\'c | Ivan Vuli\'c, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen | HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce HyperLex - a dataset and evaluation resource that quantifies the
extent of semantic category membership, that is, the type-of relation, also
known as the hyponymy-hypernymy or lexical entailment (LE) relation, between 2,616
concept pairs. Cognitive psychology research has established that typicality
and category/class membership are computed in human semantic memory as a
gradual rather than binary relation. Nevertheless, most NLP research, and
existing large-scale inventories of concept category membership (WordNet,
DBPedia, etc.) treat category membership and LE as binary. To address this, we
asked hundreds of native English speakers to indicate typicality and strength
of category membership between a diverse range of concept pairs on a
crowdsourcing platform. Our results confirm that category membership and LE are
indeed more gradual than binary. We then compare these human judgements with
the predictions of automatic systems, which reveals a huge gap between human
performance and state-of-the-art LE, distributional and representation learning
models, and substantial differences between the models themselves. We discuss a
pathway for improving semantic models to overcome this discrepancy, and
indicate future application areas for improved graded LE systems.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2016 15:29:34 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2017 15:07:53 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Vulić",
"Ivan",
""
],
[
"Gerz",
"Daniela",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Hill",
"Felix",
""
],
[
"Korhonen",
"Anna",
""
]
] | TITLE: HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment
ABSTRACT: We introduce HyperLex - a dataset and evaluation resource that quantifies the
extent of semantic category membership, that is, the type-of relation, also
known as the hyponymy-hypernymy or lexical entailment (LE) relation, between 2,616
concept pairs. Cognitive psychology research has established that typicality
and category/class membership are computed in human semantic memory as a
gradual rather than binary relation. Nevertheless, most NLP research, and
existing large-scale inventories of concept category membership (WordNet,
DBPedia, etc.) treat category membership and LE as binary. To address this, we
asked hundreds of native English speakers to indicate typicality and strength
of category membership between a diverse range of concept pairs on a
crowdsourcing platform. Our results confirm that category membership and LE are
indeed more gradual than binary. We then compare these human judgements with
the predictions of automatic systems, which reveals a huge gap between human
performance and state-of-the-art LE, distributional and representation learning
models, and substantial differences between the models themselves. We discuss a
pathway for improving semantic models to overcome this discrepancy, and
indicate future application areas for improved graded LE systems.
| new_dataset | 0.972598 |
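Comparing automatic systems against graded human judgements, as described above, usually reduces to rank correlation between the two score lists. A toy sketch (the numbers are invented, not HyperLex data):

```python
from scipy.stats import spearmanr

human = [9.2, 7.5, 4.0, 1.3]      # graded LE ratings for four concept pairs
model = [0.91, 0.66, 0.48, 0.05]  # a system's lexical-entailment scores
rho, _ = spearmanr(human, model)
print(f"Spearman rho = {rho:.2f}")  # 1.00: the toy ranking agrees perfectly
```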
1611.05923 | Michael Wojnowicz | Mike Wojnowicz, Ben Cruz, Xuan Zhao, Brian Wallace, Matt Wolff, Jay
Luan, and Caleb Crable | "Influence Sketching": Finding Influential Samples In Large-Scale
Regressions | fixed additional typos | Big Data (Big Data), 2016 IEEE International Conference on, pp.
3601 - 3612. IEEE, 2016 | 10.1109/BigData.2016.7841024 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an especially strong need in modern large-scale data analysis to
prioritize samples for manual inspection. For example, the inspection could
target important mislabeled samples or key vulnerabilities exploitable by an
adversarial attack. In order to solve the "needle in the haystack" problem of
which samples to inspect, we develop a new scalable version of Cook's distance,
a classical statistical technique for identifying samples which unusually
strongly impact the fit of a regression model (and its downstream predictions).
In order to scale this technique up to very large and high-dimensional
datasets, we introduce a new algorithm which we call "influence sketching."
Influence sketching embeds random projections within the influence computation;
in particular, the influence score is calculated using the randomly projected
pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We
validate that influence sketching can reliably and successfully discover
influential samples by applying the technique to a malware detection dataset of
over 2 million executable files, each represented with almost 100,000 features.
For example, we find that randomly deleting approximately 10% of training
samples reduces predictive accuracy only slightly from 99.47% to 99.45%,
whereas deleting the same number of samples with high influence sketch scores
reduces predictive accuracy all the way down to 90.24%. Moreover, we find that
influential samples are especially likely to be mislabeled. In the case study,
we manually inspect the most influential samples, and find that influence
sketching pointed us to new, previously unidentified pieces of malware.
| [
{
"version": "v1",
"created": "Thu, 17 Nov 2016 22:23:08 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Dec 2016 20:15:16 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2017 05:55:24 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Wojnowicz",
"Mike",
""
],
[
"Cruz",
"Ben",
""
],
[
"Zhao",
"Xuan",
""
],
[
"Wallace",
"Brian",
""
],
[
"Wolff",
"Matt",
""
],
[
"Luan",
"Jay",
""
],
[
"Crable",
"Caleb",
""
]
] | TITLE: "Influence Sketching": Finding Influential Samples In Large-Scale
Regressions
ABSTRACT: There is an especially strong need in modern large-scale data analysis to
prioritize samples for manual inspection. For example, the inspection could
target important mislabeled samples or key vulnerabilities exploitable by an
adversarial attack. In order to solve the "needle in the haystack" problem of
which samples to inspect, we develop a new scalable version of Cook's distance,
a classical statistical technique for identifying samples which unusually
strongly impact the fit of a regression model (and its downstream predictions).
In order to scale this technique up to very large and high-dimensional
datasets, we introduce a new algorithm which we call "influence sketching."
Influence sketching embeds random projections within the influence computation;
in particular, the influence score is calculated using the randomly projected
pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We
validate that influence sketching can reliably and successfully discover
influential samples by applying the technique to a malware detection dataset of
over 2 million executable files, each represented with almost 100,000 features.
For example, we find that randomly deleting approximately 10% of training
samples reduces predictive accuracy only slightly from 99.47% to 99.45%,
whereas deleting the same number of samples with high influence sketch scores
reduces predictive accuracy all the way down to 90.24%. Moreover, we find that
influential samples are especially likely to be mislabeled. In the case study,
we manually inspect the most influential samples, and find that influence
sketching pointed us to new, previously unidentified pieces of malware.
| no_new_dataset | 0.927888 |
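The core idea, Cook's distance computed on a randomly projected design matrix, can be sketched in a few lines. Below is a simplified linear-regression illustration with our own dimensions and variable names; the paper applies the projection to the pseudo-dataset of a converged GLM, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 200, 20                    # sketch p features down to k
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)
y[0] += 25.0                              # plant one influential outlier

R = rng.normal(size=(p, k)) / np.sqrt(k)  # Gaussian random projection
Xs = X @ R                                # sketched design matrix

H = Xs @ np.linalg.solve(Xs.T @ Xs, Xs.T) # hat matrix of the sketch
h = np.diag(H)
resid = y - H @ y
s2 = resid @ resid / (n - k)
cooks = resid**2 * h / (k * s2 * (1 - h) ** 2)  # Cook's-distance-style score
print(cooks.argmax())                     # expected: 0, the planted outlier
```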
1612.03969 | Mikael Henaff | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes and Yann
LeCun | Tracking the World State with Recurrent Entity Networks | null | ICLR 2017 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 23:29:40 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2017 03:05:14 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2017 16:52:56 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Henaff",
"Mikael",
""
],
[
"Weston",
"Jason",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Bordes",
"Antoine",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Tracking the World State with Recurrent Entity Networks
ABSTRACT: We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass.
| no_new_dataset | 0.942082 |
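The parallel-update property highlighted above means every memory slot is written in the same vectorized step. The sketch below is a simplified NumPy rendering, with our own variable names, of the update described in the EntNet paper: a gate from input-slot affinity, a gated candidate write, and normalization acting as soft forgetting.

```python
import numpy as np

def entnet_step(H, keys, s, U, V, W):
    """One memory update: H is (slots, d), keys is (slots, d), s is (d,)."""
    gate = 1.0 / (1.0 + np.exp(-(H @ s + keys @ s)))     # (slots,)
    cand = np.tanh(H @ U.T + keys @ V.T + s @ W.T)       # (slots, d)
    H = H + gate[:, None] * cand                          # all slots at once
    return H / np.linalg.norm(H, axis=1, keepdims=True)   # forget via norm

d, slots = 8, 4
rng = np.random.default_rng(0)
H = entnet_step(rng.normal(size=(slots, d)), rng.normal(size=(slots, d)),
                rng.normal(size=d),
                *(rng.normal(size=(d, d)) for _ in range(3)))
print(H.shape)  # (4, 8)
```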
1705.01842 | Ran Breuer | Ran Breuer and Ron Kimmel | A Deep Learning Perspective on the Origin of Facial Expressions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial expressions play a significant role in human communication and
behavior. Psychologists have long studied the relationship between facial
expressions and emotions. Paul Ekman et al. devised the Facial Action Coding
System (FACS) to taxonomize human facial expressions and model their behavior.
The ability to recognize facial expressions automatically enables novel
applications in fields like human-computer interaction, social gaming, and
psychological research. There has been tremendously active research in this
field, with several recent papers utilizing convolutional neural networks (CNN)
for feature extraction and inference. In this paper, we employ CNN
understanding methods to study the relation between the features these
computational networks are using, the FACS and Action Units (AU). We verify our
findings on the Extended Cohn-Kanade (CK+), NovaEmotions and FER2013 datasets.
We apply these models to various tasks and tests using transfer learning,
including cross-dataset validation and cross-task performance. Finally, we
exploit the nature of the FER based CNN models for the detection of
micro-expressions and achieve state-of-the-art accuracy using a simple
long-short-term-memory (LSTM) recurrent neural network (RNN).
| [
{
"version": "v1",
"created": "Thu, 4 May 2017 13:59:07 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2017 13:05:00 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Breuer",
"Ran",
""
],
[
"Kimmel",
"Ron",
""
]
] | TITLE: A Deep Learning Perspective on the Origin of Facial Expressions
ABSTRACT: Facial expressions play a significant role in human communication and
behavior. Psychologists have long studied the relationship between facial
expressions and emotions. Paul Ekman et al. devised the Facial Action Coding
System (FACS) to taxonomize human facial expressions and model their behavior.
The ability to recognize facial expressions automatically enables novel
applications in fields like human-computer interaction, social gaming, and
psychological research. There has been tremendously active research in this
field, with several recent papers utilizing convolutional neural networks (CNN)
for feature extraction and inference. In this paper, we employ CNN
understanding methods to study the relation between the features these
computational networks are using, the FACS and Action Units (AU). We verify our
findings on the Extended Cohn-Kanade (CK+), NovaEmotions and FER2013 datasets.
We apply these models to various tasks and tests using transfer learning,
including cross-dataset validation and cross-task performance. Finally, we
exploit the nature of the FER based CNN models for the detection of
micro-expressions and achieve state-of-the-art accuracy using a simple
long-short-term-memory (LSTM) recurrent neural network (RNN).
| no_new_dataset | 0.941761 |
1705.03531 | David Barina | Stanislav Svoboda, David Barina | New Transforms for JPEG Format | preprint submitted to SCCG 2017 | null | null | null | cs.MM cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two-dimensional discrete cosine transform (DCT) can be found in the heart
of many image compression algorithms. Specifically, the JPEG format uses a
lossy form of compression based on that transform. Since the standardization of
the JPEG, many other transforms become practical in lossy data compression.
This article aims to analyze the use of these transforms as the DCT replacement
in the JPEG compression chain. Each transform is examined for different image
datasets and subsequently compared to other transforms using the peak
signal-to-noise ratio (PSNR). Our experiments show that an overlapping
variation of the DCT, the local cosine transform (LCT), overcame the original
block-wise transform at low bitrates. At high bitrates, the discrete wavelet
transform employing the Cohen-Daubechies-Feauveau 9/7 wavelet offers about the
same compression performance as the DCT.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 20:34:44 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Svoboda",
"Stanislav",
""
],
[
"Barina",
"David",
""
]
] | TITLE: New Transforms for JPEG Format
ABSTRACT: The two-dimensional discrete cosine transform (DCT) can be found at the heart
of many image compression algorithms. Specifically, the JPEG format uses a
lossy form of compression based on that transform. Since the standardization of
JPEG, many other transforms have become practical in lossy data compression.
This article aims to analyze the use of these transforms as the DCT replacement
in the JPEG compression chain. Each transform is examined for different image
datasets and subsequently compared to other transforms using the peak
signal-to-noise ratio (PSNR). Our experiments show that an overlapping
variation of the DCT, the local cosine transform (LCT), overcame the original
block-wise transform at low bitrates. At high bitrates, the discrete wavelet
transform employing the Cohen-Daubechies-Feauveau 9/7 wavelet offers about the
same compression performance as the DCT.
| no_new_dataset | 0.949576 |
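PSNR, the comparison metric named above, is simple enough to state in code. A minimal sketch (an 8-bit peak value is assumed):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

img = np.random.default_rng(0).integers(0, 256, (64, 64))
noisy = np.clip(img + np.random.default_rng(1).normal(0, 5, (64, 64)), 0, 255)
print(f"{psnr(img, noisy):.1f} dB")  # about 34 dB for sigma = 5 noise
```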
1705.03550 | Vincenzo Lomonaco | Vincenzo Lomonaco and Davide Maltoni | CORe50: a New Dataset and Benchmark for Continuous Object Recognition | null | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continuous/Lifelong learning of high-dimensional data streams is a
challenging research problem. In fact, fully retraining models each time new
data become available is infeasible, due to computational and storage issues,
while naïve incremental strategies have been shown to suffer from
catastrophic forgetting. In the context of real-world object recognition
applications (e.g., robotic vision), where continuous learning is crucial, very
few datasets and benchmarks are available to evaluate and compare emerging
techniques. In this work we propose a new dataset and benchmark CORe50,
specifically designed for continuous object recognition, and introduce baseline
approaches for different continuous learning scenarios.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 21:32:19 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Lomonaco",
"Vincenzo",
""
],
[
"Maltoni",
"Davide",
""
]
] | TITLE: CORe50: a New Dataset and Benchmark for Continuous Object Recognition
ABSTRACT: Continuous/Lifelong learning of high-dimensional data streams is a
challenging research problem. In fact, fully retraining models each time new
data become available is infeasible, due to computational and storage issues,
while naïve incremental strategies have been shown to suffer from
catastrophic forgetting. In the context of real-world object recognition
applications (e.g., robotic vision), where continuous learning is crucial, very
few datasets and benchmarks are available to evaluate and compare emerging
techniques. In this work we propose a new dataset and benchmark CORe50,
specifically designed for continuous object recognition, and introduce baseline
approaches for different continuous learning scenarios.
| new_dataset | 0.956513 |
1705.03590 | Peng Wu | Peng Wu, Li Pan | Mining Target Attribute Subspace and Set of Target Communities in Large
Attributed Networks | 25 pages, 7 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection provides invaluable help for various applications, such
as marketing and product recommendation. Traditional community detection
methods designed for plain networks may not be able to detect communities with
homogeneous internal attributes on attributed networks, which carry attribute
information. Most recent attribute community detection methods may fail to
capture the requirements of a specific application and may be unable to mine the
set of required communities for a specific application. In this paper, we aim
to detect the set of target communities in the target subspace which has some
focus attributes with large importance weights satisfying the requirements of a
specific application. In order to improve the universality of the problem, we
address the problem in an extreme case where only two sample nodes in any
potential target community are provided. A Target Subspace and Communities
Mining (TSCM) method is proposed. In TSCM, a sample information extension
method is designed to extend the two sample nodes to a set of exemplar nodes
from which the target subspace is inferred. Then the set of target communities
are located and mined based on the target subspace. Experiments on synthetic
datasets demonstrate the effectiveness and efficiency of our method and
applications on real-world datasets show its application value.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 02:29:44 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Wu",
"Peng",
""
],
[
"Pan",
"Li",
""
]
] | TITLE: Mining Target Attribute Subspace and Set of Target Communities in Large
Attributed Networks
ABSTRACT: Community detection provides invaluable help for various applications, such
as marketing and product recommendation. Traditional community detection
methods designed for plain networks may not be able to detect communities with
homogeneous internal attributes on attributed networks, which carry attribute
information. Most recent attribute community detection methods may fail to
capture the requirements of a specific application and may be unable to mine the
set of required communities for a specific application. In this paper, we aim
to detect the set of target communities in the target subspace which has some
focus attributes with large importance weights satisfying the requirements of a
specific application. In order to improve the universality of the problem, we
address the problem in an extreme case where only two sample nodes in any
potential target community are provided. A Target Subspace and Communities
Mining (TSCM) method is proposed. In TSCM, a sample information extension
method is designed to extend the two sample nodes to a set of exemplar nodes
from which the target subspace is inferred. Then the set of target communities
are located and mined based on the target subspace. Experiments on synthetic
datasets demonstrate the effectiveness and efficiency of our method and
applications on real-world datasets show its application value.
| no_new_dataset | 0.947186 |
1705.03592 | Peng Wu | Peng Wu, Li Pan | Mining Application-aware Community Organization with Expanded Feature
Subspaces from Concerned Attributes in Social Networks | 21 pages, 2 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks are typical attributed networks with node attributes.
Different from the traditional attribute community detection problem, which
aims at obtaining the whole set of communities in the network, we study an
application-oriented problem of mining an application-aware community
organization with respect to specific concerned attributes. The concerned
attributes are designated based on the requirements of any application by a
user in advance. The application-aware community organization w.r.t. concerned
attributes consists of the communities with feature subspaces containing these
concerned attributes. Besides concerned attributes, feature subspace of each
required community may contain some other relevant attributes. All relevant
attributes of a feature subspace jointly describe and determine the community
embedded in such subspace. Thus the problem includes two subproblems, i.e., how
to expand the set of concerned attributes to complete feature subspaces and how
to mine the communities embedded in the expanded subspaces. Two subproblems are
jointly solved by optimizing a quality function called subspace fitness. An
algorithm called ACM is proposed. In order to locate the communities
potentially belonging to the application-aware community organization, cohesive
parts of a network backbone composed of nodes with similar concerned attributes
are detected and set as the community seeds. The set of concerned attributes is
set as the initial subspace for all community seeds. Then each community seed
and its attribute subspace are adjusted iteratively to optimize the subspace
fitness. Extensive experiments on synthetic datasets demonstrate the
effectiveness and efficiency of our method and applications on real-world
networks show its application value.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 02:31:47 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Wu",
"Peng",
""
],
[
"Pan",
"Li",
""
]
] | TITLE: Mining Application-aware Community Organization with Expanded Feature
Subspaces from Concerned Attributes in Social Networks
ABSTRACT: Social networks are typical attributed networks with node attributes.
Different from the traditional attribute community detection problem, which
aims at obtaining the whole set of communities in the network, we study an
application-oriented problem of mining an application-aware community
organization with respect to specific concerned attributes. The concerned
attributes are designated based on the requirements of any application by a
user in advance. The application-aware community organization w.r.t. concerned
attributes consists of the communities with feature subspaces containing these
concerned attributes. Besides concerned attributes, feature subspace of each
required community may contain some other relevant attributes. All relevant
attributes of a feature subspace jointly describe and determine the community
embedded in such subspace. Thus the problem includes two subproblems, i.e., how
to expand the set of concerned attributes to complete feature subspaces and how
to mine the communities embedded in the expanded subspaces. Two subproblems are
jointly solved by optimizing a quality function called subspace fitness. An
algorithm called ACM is proposed. In order to locate the communities
potentially belonging to the application-aware community organization, cohesive
parts of a network backbone composed of nodes with similar concerned attributes
are detected and set as the community seeds. The set of concerned attributes is
set as the initial subspace for all community seeds. Then each community seed
and its attribute subspace are adjusted iteratively to optimize the subspace
fitness. Extensive experiments on synthetic datasets demonstrate the
effectiveness and efficiency of our method and applications on real-world
networks show its application value.
| no_new_dataset | 0.948155 |
1705.03607 | Riku Shigematsu | Riku Shigematsu, David Feng, Shaodi You, Nick Barnes | Learning RGB-D Salient Object Detection using background enclosure,
depth contrast, and top-down features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong
performance on RGB salient object detection. Although depth information can
help improve detection results, the exploration of CNNs for RGB-D salient
object detection remains limited. Here we propose a novel deep CNN architecture
for RGB-D salient object detection that exploits high-level, mid-level, and
low-level features. Further, we present novel depth features that capture the ideas
of background enclosure and depth contrast that are suitable for a learned
approach. We show improved results compared to state-of-the-art RGB-D salient
object detection methods. We also show that the low-level and mid-level depth
features both contribute to improvements in the results. In particular, the
F-score of our method is 0.848 on the RGBD1000 dataset, 10.7% better than the
second-best method.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 05:12:45 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Shigematsu",
"Riku",
""
],
[
"Feng",
"David",
""
],
[
"You",
"Shaodi",
""
],
[
"Barnes",
"Nick",
""
]
] | TITLE: Learning RGB-D Salient Object Detection using background enclosure,
depth contrast, and top-down features
ABSTRACT: Recently, deep Convolutional Neural Networks (CNN) have demonstrated strong
performance on RGB salient object detection. Although depth information can
help improve detection results, the exploration of CNNs for RGB-D salient
object detection remains limited. Here we propose a novel deep CNN architecture
for RGB-D salient object detection that exploits high-level, mid-level, and
low-level features. Further, we present novel depth features that capture the ideas
of background enclosure and depth contrast that are suitable for a learned
approach. We show improved results compared to state-of-the-art RGB-D salient
object detection methods. We also show that the low-level and mid-level depth
features both contribute to improvements in the results. In particular, the
F-score of our method is 0.848 on the RGBD1000 dataset, 10.7% better than the
second-best method.
| no_new_dataset | 0.947866 |
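The F-score reported above is, in saliency work, usually the weighted F-measure with beta^2 = 0.3 to emphasize precision; that convention is our assumption here, since the abstract does not state it. A one-line sketch:

```python
def f_measure(precision, recall, beta2=0.3):
    """Weighted F-measure; beta2 = 0.3 is the common saliency convention
    (an assumption, not stated in the abstract)."""
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

print(round(f_measure(0.87, 0.80), 3))  # 0.853 for illustrative P/R values
```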
1705.03645 | Shantanu Kumar | Shantanu Kumar | A Survey of Deep Learning Methods for Relation Extraction | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relation Extraction is an important sub-task of Information Extraction which
has the potential of employing deep learning (DL) models with the creation of
large datasets using distant supervision. In this review, we compare the
contributions and pitfalls of the various DL models that have been used for the
task, to help guide the path ahead.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 08:05:44 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Kumar",
"Shantanu",
""
]
] | TITLE: A Survey of Deep Learning Methods for Relation Extraction
ABSTRACT: Relation Extraction is an important sub-task of Information Extraction which
has the potential of employing deep learning (DL) models with the creation of
large datasets using distant supervision. In this review, we compare the
contributions and pitfalls of the various DL models that have been used for the
task, to help guide the path ahead.
| no_new_dataset | 0.944842 |
1705.03678 | Babak Ehteshami Bejnordi | Babak Ehteshami Bejnordi, Guido Zuidhof, Maschenka Balkenhol, Meyke
Hermsen, Peter Bult, Bram van Ginneken, Nico Karssemeijer, Geert Litjens,
Jeroen van der Laak | Context-aware stacked convolutional neural networks for classification
of breast carcinomas in whole-slide histopathology images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated classification of histopathological whole-slide images (WSI) of
breast tissue requires analysis at very high resolutions with a large
contextual area. In this paper, we present context-aware stacked convolutional
neural networks (CNN) for classification of breast WSIs into normal/benign,
ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first
train a CNN using high pixel resolution patches to capture cellular level
information. The feature responses generated by this model are then fed as
input to a second CNN, stacked on top of the first. Training of this stacked
architecture with large input patches enables learning of fine-grained
(cellular) details and global interdependence of tissue structures. Our system
is trained and evaluated on a dataset containing 221 WSIs of H&E stained breast
tissue specimens. The system achieves an AUC of 0.962 for the binary
classification of non-malignant and malignant slides and obtains a three class
accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC,
demonstrating its potential for routine diagnostics.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 10:05:06 GMT"
}
] | 2017-05-11T00:00:00 | [
[
"Bejnordi",
"Babak Ehteshami",
""
],
[
"Zuidhof",
"Guido",
""
],
[
"Balkenhol",
"Maschenka",
""
],
[
"Hermsen",
"Meyke",
""
],
[
"Bult",
"Peter",
""
],
[
"van Ginneken",
"Bram",
""
],
[
"Karssemeijer",
"Nico",
""
],
[
"Litjens",
"Geert",
""
],
[
"van der Laak",
"Jeroen",
""
]
] | TITLE: Context-aware stacked convolutional neural networks for classification
of breast carcinomas in whole-slide histopathology images
ABSTRACT: Automated classification of histopathological whole-slide images (WSI) of
breast tissue requires analysis at very high resolutions with a large
contextual area. In this paper, we present context-aware stacked convolutional
neural networks (CNN) for classification of breast WSIs into normal/benign,
ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first
train a CNN using high pixel resolution patches to capture cellular level
information. The feature responses generated by this model are then fed as
input to a second CNN, stacked on top of the first. Training of this stacked
architecture with large input patches enables learning of fine-grained
(cellular) details and global interdependence of tissue structures. Our system
is trained and evaluated on a dataset containing 221 WSIs of H&E stained breast
tissue specimens. The system achieves an AUC of 0.962 for the binary
classification of non-malignant and malignant slides and obtains a three class
accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC,
demonstrating its potential for routine diagnostics.
| no_new_dataset | 0.829354 |
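The stacking idea (feature maps of a patch-level CNN fed as input to a second, context-level CNN) can be sketched as follows; layer widths and module names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    # First stage: dense feature responses from high-resolution patches.
    def __init__(self, out_ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.features(x)

class ContextCNN(nn.Module):
    # Second stage, stacked on the first stage's feature maps, summarises a
    # large input patch into a 3-way prediction (normal/benign, DCIS, IDC).
    def __init__(self, in_ch=32, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )
    def forward(self, f):
        return self.net(f)

stage1, stage2 = PatchCNN(), ContextCNN()
x = torch.randn(2, 3, 256, 256)      # large RGB input patches
print(stage2(stage1(x)).shape)       # torch.Size([2, 3])
```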
1604.03505 | Prithvijit Chattopadhyay Chattopadhyay | Prithvijit Chattopadhyay, Ramakrishna Vedantam, Ramprasaath R.
Selvaraju, Dhruv Batra, and Devi Parikh | Counting Everyday Objects in Everyday Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in counting the number of instances of object classes in
natural, everyday images. Previous counting approaches tackle the problem in
restricted domains such as counting pedestrians in surveillance videos. Counts
can also be estimated from outputs of other vision tasks like object detection.
In this work, we build dedicated models for counting designed to tackle the
large variance in counts, appearances, and scales of objects found in natural
scenes. Our approach is inspired by the phenomenon of subitizing - the ability
of humans to make quick assessments of counts given a perceptual signal, for
small count values. Given a natural scene, we employ a divide and conquer
strategy while incorporating context across the scene to adapt the subitizing
idea to counting. Our approach offers consistent improvements over numerous
baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets.
Subsequently, we study how counting can be used to improve object detection. We
then show a proof-of-concept application of our counting methods to the task of
Visual Question Answering, by studying the `how many?' questions in the VQA and
COCO-QA datasets.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2016 18:31:43 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 17:34:20 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2017 03:24:40 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Chattopadhyay",
"Prithvijit",
""
],
[
"Vedantam",
"Ramakrishna",
""
],
[
"Selvaraju",
"Ramprasaath R.",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
]
] | TITLE: Counting Everyday Objects in Everyday Scenes
ABSTRACT: We are interested in counting the number of instances of object classes in
natural, everyday images. Previous counting approaches tackle the problem in
restricted domains such as counting pedestrians in surveillance videos. Counts
can also be estimated from outputs of other vision tasks like object detection.
In this work, we build dedicated models for counting designed to tackle the
large variance in counts, appearances, and scales of objects found in natural
scenes. Our approach is inspired by the phenomenon of subitizing - the ability
of humans to make quick assessments of counts given a perceptual signal, for
small count values. Given a natural scene, we employ a divide and conquer
strategy while incorporating context across the scene to adapt the subitizing
idea to counting. Our approach offers consistent improvements over numerous
baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets.
Subsequently, we study how counting can be used to improve object detection. We
then show a proof-of-concept application of our counting methods to the task of
Visual Question Answering, by studying the `how many?' questions in the VQA and
COCO-QA datasets.
| no_new_dataset | 0.948775 |
1607.07034 | Aarti Sathyanarayana | Aarti Sathyanarayana, Shafiq Joty, Luis Fernandez-Luque, Ferda Ofli,
Jaideep Srivastava, Ahmed Elmagarmid, Shahrad Taheri, Teresa Arora | Impact of Physical Activity on Sleep:A Deep Learning Based Exploration | null | JMIR Mhealth Uhealth 2016;4(4):e125 | 10.2196/mhealth.6562 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The importance of sleep is paramount for maintaining physical, emotional and
mental wellbeing. Though the relationship between sleep and physical activity
is known to be important, it is not yet fully understood. The explosion in
popularity of actigraphy and wearable devices provides a unique opportunity to
understand this relationship. Leveraging this information source requires new
tools to be developed to facilitate data-driven research for sleep- and
activity-related patient recommendations.
In this paper we explore the use of deep learning to build sleep quality
prediction models based on actigraphy data. We first use deep learning as a
pure model building device by performing human activity recognition (HAR) on
raw sensor data, and using deep learning to build sleep prediction models. We
compare the deep learning models with those built using classical approaches,
i.e. logistic regression, support vector machines, random forest and adaboost.
Second, we exploit deep learning's ability to handle high-dimensional datasets.
We explore several deep learning models on the raw
wearable sensor output without performing HAR or any other feature extraction.
Our results show that using a convolutional neural network on the raw
wearables output improves the predictive value of sleep quality from physical
activity, by an additional 8% compared to state-of-the-art non-deep learning
approaches, which itself shows a 15% improvement over current practice.
Moreover, utilizing deep learning on raw data eliminates the need for data
pre-processing and simplifies the overall workflow to analyze actigraphy data
for sleep and physical activity research.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 12:12:03 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Sathyanarayana",
"Aarti",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Fernandez-Luque",
"Luis",
""
],
[
"Ofli",
"Ferda",
""
],
[
"Srivastava",
"Jaideep",
""
],
[
"Elmagarmid",
"Ahmed",
""
],
[
"Taheri",
"Shahrad",
""
],
[
"Arora",
"Teresa",
""
]
] | TITLE: Impact of Physical Activity on Sleep:A Deep Learning Based Exploration
ABSTRACT: The importance of sleep is paramount for maintaining physical, emotional and
mental wellbeing. Though the relationship between sleep and physical activity
is known to be important, it is not yet fully understood. The explosion in
popularity of actigraphy and wearable devices provides a unique opportunity to
understand this relationship. Leveraging this information source requires new
tools to be developed to facilitate data-driven research for sleep- and
activity-related patient recommendations.
In this paper we explore the use of deep learning to build sleep quality
prediction models based on actigraphy data. We first use deep learning as a
pure model building device by performing human activity recognition (HAR) on
raw sensor data, and using deep learning to build sleep prediction models. We
compare the deep learning models with those built using classical approaches,
i.e. logistic regression, support vector machines, random forest and adaboost.
Second, we exploit deep learning's ability to handle high-dimensional datasets.
We explore several deep learning models on the raw
wearable sensor output without performing HAR or any other feature extraction.
Our results show that using a convolutional neural network on the raw
wearables output improves the predictive value of sleep quality from physical
activity, by an additional 8% compared to state-of-the-art non-deep learning
approaches, which itself shows a 15% improvement over current practice.
Moreover, utilizing deep learning on raw data eliminates the need for data
pre-processing and simplifies the overall workflow to analyze actigraphy data
for sleep and physical activity research.
| no_new_dataset | 0.945197 |
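The classical baselines named above are straightforward to reproduce on extracted features with scikit-learn; the feature matrix below is a random stand-in for per-night actigraphy features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Random stand-in: rows are per-night actigraphy feature vectors,
# labels mark good (1) vs. poor (0) sleep quality.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC()),
                  ("random forest", RandomForestClassifier(n_estimators=100)),
                  ("adaboost", AdaBoostClassifier())]:
    print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```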
1611.00135 | Jia Li | Jia Li, Changqun Xia and Xiaowu Chen | A Benchmark Dataset and Saliency-guided Stacked Autoencoders for
Video-based Salient Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image-based salient object detection (SOD) has been extensively studied in
the past decades. However, video-based SOD is much less explored, since there
is a lack of large-scale video datasets within which salient objects are
unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD
dataset that consists of 200 videos (64 minutes). In constructing the dataset,
we manually annotate all objects and regions over 7,650 uniformly sampled
keyframes and collect the eye-tracking data of 23 subjects who free-view all
videos. From the user data, we find salient objects in video can be defined as
objects that consistently pop-out throughout the video, and objects with such
attributes can be unambiguously annotated by combining manually annotated
object/region masks with eye-tracking data of multiple subjects. To the best of
our knowledge, it is currently the largest dataset for video-based salient
object detection.
Based on this dataset, this paper proposes an unsupervised baseline approach
for video-based SOD by using saliency-guided stacked autoencoders. In the
proposed approach, multiple spatiotemporal saliency cues are first extracted at
pixel, superpixel and object levels. With these saliency cues, stacked
autoencoders are constructed without supervision, automatically inferring a
saliency score for each pixel by progressively encoding the high-dimensional
saliency cues gathered from the pixel and its spatiotemporal neighbors.
Experimental results show that the proposed unsupervised approach outperforms
30 state-of-the-art models on the proposed dataset, including 19 image-based &
classic (unsupervised or non-deep learning), 6 image-based & deep learning, and
5 video-based & unsupervised. Moreover, benchmarking results show that the
proposed dataset is very challenging and has the potential to boost the
development of video-based SOD.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2016 05:48:05 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2017 07:38:17 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Li",
"Jia",
""
],
[
"Xia",
"Changqun",
""
],
[
"Chen",
"Xiaowu",
""
]
] | TITLE: A Benchmark Dataset and Saliency-guided Stacked Autoencoders for
Video-based Salient Object Detection
ABSTRACT: Image-based salient object detection (SOD) has been extensively studied in
the past decades. However, video-based SOD is much less explored, since there
is a lack of large-scale video datasets within which salient objects are
unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD
dataset that consists of 200 videos (64 minutes). In constructing the dataset,
we manually annotate all objects and regions over 7,650 uniformly sampled
keyframes and collect the eye-tracking data of 23 subjects who free-view all
videos. From the user data, we find salient objects in video can be defined as
objects that consistently pop-out throughout the video, and objects with such
attributes can be unambiguously annotated by combining manually annotated
object/region masks with eye-tracking data of multiple subjects. To the best of
our knowledge, it is currently the largest dataset for video-based salient
object detection.
Based on this dataset, this paper proposes an unsupervised baseline approach
for video-based SOD by using saliency-guided stacked autoencoders. In the
proposed approach, multiple spatiotemporal saliency cues are first extracted at
pixel, superpixel and object levels. With these saliency cues, stacked
autoencoders are constructed without supervision, automatically inferring a
saliency score for each pixel by progressively encoding the high-dimensional
saliency cues gathered from the pixel and its spatiotemporal neighbors.
Experimental results show that the proposed unsupervised approach outperforms
30 state-of-the-art models on the proposed dataset, including 19 image-based &
classic (unsupervised or non-deep learning), 6 image-based & deep learning, and
5 video-based & unsupervised. Moreover, benchmarking results show that the
proposed dataset is very challenging and has the potential to boost the
development of video-based SOD.
| new_dataset | 0.872184 |
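Greedy layer-wise stacking of autoencoders on cue vectors, the core of the baseline above, can be sketched as follows; dimensions, the optimiser, and the final scalar readout are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

def train_layer(X, hidden, epochs=200, lr=1e-2):
    # Train one autoencoder layer to reconstruct its input, then return the
    # encoder and the encoded representation (greedy layer-wise stacking).
    enc = nn.Sequential(nn.Linear(X.shape[1], hidden), nn.Sigmoid())
    dec = nn.Linear(hidden, X.shape[1])
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(X)), X)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return enc, enc(X)

# Toy stand-in for per-pixel spatiotemporal saliency-cue vectors.
cues = torch.rand(1000, 12)
enc1, h1 = train_layer(cues, 8)
enc2, h2 = train_layer(h1, 4)    # the second layer encodes the first's codes
saliency = h2.mean(dim=1)        # crude scalar score per pixel from the top code
print(saliency.shape)            # torch.Size([1000])
```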
1611.08215 | Andrea Palazzi | Andrea Palazzi, Francesco Solera, Simone Calderara, Stefano Alletto,
Rita Cucchiara | Learning Where to Attend Like a Human Driver | To appear in IEEE Intelligent Vehicles Symposium 2017 | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the advent of autonomous cars, it's likely - at least in the near
future - that human attention will still maintain a central role as a guarantee
in terms of legal responsibility during the driving task. In this paper we
study the dynamics of the driver's gaze and use it as a proxy to understand
related attentional mechanisms. First, we build our analysis upon two
questions: where and what the driver is looking at? Second, we model the
driver's gaze by training a coarse-to-fine convolutional network on short
sequences extracted from the DR(eye)VE dataset. Experimental comparison against
different baselines reveal that the driver's gaze can indeed be learnt to some
extent, despite i) being highly subjective and ii) having only one driver's
gaze available for each sequence due to the irreproducibility of the scene.
Eventually, we advocate for a new assisted driving paradigm which suggests to
the driver, with no intervention, where she should focus her attention.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 15:14:23 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2017 16:24:16 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Palazzi",
"Andrea",
""
],
[
"Solera",
"Francesco",
""
],
[
"Calderara",
"Simone",
""
],
[
"Alletto",
"Stefano",
""
],
[
"Cucchiara",
"Rita",
""
]
] | TITLE: Learning Where to Attend Like a Human Driver
ABSTRACT: Despite the advent of autonomous cars, it's likely - at least in the near
future - that human attention will still maintain a central role as a guarantee
in terms of legal responsibility during the driving task. In this paper we
study the dynamics of the driver's gaze and use it as a proxy to understand
related attentional mechanisms. First, we build our analysis upon two
questions: where and what is the driver looking at? Second, we model the
driver's gaze by training a coarse-to-fine convolutional network on short
sequences extracted from the DR(eye)VE dataset. Experimental comparison against
different baselines reveals that the driver's gaze can indeed be learnt to some
extent, despite i) being highly subjective and ii) having only one driver's
gaze available for each sequence due to the irreproducibility of the scene.
Eventually, we advocate for a new assisted driving paradigm which suggests to
the driver, with no intervention, where she should focus her attention.
| no_new_dataset | 0.939582 |
1612.01465 | Eldar Insafutdinov | Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang,
Evgeny Levinkov, Bjoern Andres, Bernt Schiele | ArtTrack: Articulated Multi-person Tracking in the Wild | Accepted to CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose an approach for articulated tracking of multiple
people in unconstrained videos. Our starting point is a model that resembles
existing architectures for single-frame pose estimation but is substantially
faster. We achieve this in two ways: (1) by simplifying and sparsifying the
body-part relationship graph and leveraging recent methods for faster
inference, and (2) by offloading a substantial share of computation onto a
feed-forward convolutional architecture that is able to detect and associate
body joints of the same person even in clutter. We use this model to generate
proposals for body joint locations and formulate articulated tracking as
spatio-temporal grouping of such proposals. This allows us to jointly solve the
association problem for all people in the scene by propagating evidence from
strong detections through time and enforcing constraints that each proposal can
be assigned to one person only. We report results on a public MPII Human Pose
benchmark and on a new MPII Video Pose dataset of image sequences with multiple
people. We demonstrate that our model achieves state-of-the-art results while
using only a fraction of the time and is able to leverage temporal information
to improve the state of the art for crowded scenes.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 18:38:56 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2016 11:49:21 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2017 09:56:46 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Insafutdinov",
"Eldar",
""
],
[
"Andriluka",
"Mykhaylo",
""
],
[
"Pishchulin",
"Leonid",
""
],
[
"Tang",
"Siyu",
""
],
[
"Levinkov",
"Evgeny",
""
],
[
"Andres",
"Bjoern",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: ArtTrack: Articulated Multi-person Tracking in the Wild
ABSTRACT: In this paper we propose an approach for articulated tracking of multiple
people in unconstrained videos. Our starting point is a model that resembles
existing architectures for single-frame pose estimation but is substantially
faster. We achieve this in two ways: (1) by simplifying and sparsifying the
body-part relationship graph and leveraging recent methods for faster
inference, and (2) by offloading a substantial share of computation onto a
feed-forward convolutional architecture that is able to detect and associate
body joints of the same person even in clutter. We use this model to generate
proposals for body joint locations and formulate articulated tracking as
spatio-temporal grouping of such proposals. This allows us to jointly solve the
association problem for all people in the scene by propagating evidence from
strong detections through time and enforcing constraints that each proposal can
be assigned to one person only. We report results on a public MPII Human Pose
benchmark and on a new MPII Video Pose dataset of image sequences with multiple
people. We demonstrate that our model achieves state-of-the-art results while
using only a fraction of the time and is able to leverage temporal information
to improve the state of the art for crowded scenes.
| new_dataset | 0.954647 |
1612.02541 | Yuxin Peng | Jian Zhang and Yuxin Peng | Query-adaptive Image Retrieval by Deep Weighted Hashing | 13 pages, submitted to IEEE Transactions On Multimedia | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashing methods have attracted much attention for large scale image
retrieval. Some deep hashing methods have recently achieved promising results
by taking advantage of the strong representation power of deep networks.
However, existing deep hashing methods treat all hash bits equally. On one
hand, a large number of images share the same distance to a query image due to
the discrete Hamming distance, which raises a critical issue of image retrieval
where fine-grained rankings are very important. On the other hand, different
hash bits actually contribute to the image retrieval differently, and treating
them equally greatly harms image retrieval accuracy. To address the
above two problems, we propose the query-adaptive deep weighted hashing (QaDWH)
approach, which can perform fine-grained ranking for different queries by
weighted Hamming distance. First, a novel deep hashing network is proposed to
learn the hash codes and corresponding class-wise weights jointly, so that the
learned weights can reflect the importance of different hash bits for different
image classes. Second, a query-adaptive image retrieval method is proposed,
which rapidly generates hash bit weights for different query images by fusing
its semantic probability and the learned class-wise weights. Fine-grained image
retrieval is then performed by the weighted Hamming distance, which can provide
more accurate ranking than the traditional Hamming distance. Experiments on
four widely used datasets show that the proposed approach outperforms eight
state-of-the-art hashing methods.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 06:20:03 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2017 02:40:20 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Zhang",
"Jian",
""
],
[
"Peng",
"Yuxin",
""
]
] | TITLE: Query-adaptive Image Retrieval by Deep Weighted Hashing
ABSTRACT: Hashing methods have attracted much attention for large scale image
retrieval. Some deep hashing methods have recently achieved promising results
by taking advantage of the strong representation power of deep networks.
However, existing deep hashing methods treat all hash bits equally. On one
hand, a large number of images share the same distance to a query image due to
the discrete Hamming distance, which raises a critical issue of image retrieval
where fine-grained rankings are very important. On the other hand, different
hash bits actually contribute to the image retrieval differently, and treating
them equally greatly harms image retrieval accuracy. To address the
above two problems, we propose the query-adaptive deep weighted hashing (QaDWH)
approach, which can perform fine-grained ranking for different queries by
weighted Hamming distance. First, a novel deep hashing network is proposed to
learn the hash codes and corresponding class-wise weights jointly, so that the
learned weights can reflect the importance of different hash bits for different
image classes. Second, a query-adaptive image retrieval method is proposed,
which rapidly generates hash bit weights for different query images by fusing
its semantic probability and the learned class-wise weights. Fine-grained image
retrieval is then performed by the weighted Hamming distance, which can provide
more accurate ranking than the traditional Hamming distance. Experiments on
four widely used datasets show that the proposed approach outperforms eight
state-of-the-art hashing methods.
| no_new_dataset | 0.950641 |
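The weighted Hamming distance at the heart of the fine-grained ranking step is easy to state in code; the weights here are random stand-ins for the learned, query-adaptive bit weights.

```python
import numpy as np

def weighted_hamming(query_code, db_codes, bit_weights):
    # Weighted Hamming distance: disagreeing bits contribute their weight
    # instead of a flat 1, which breaks ties among items that share the same
    # plain Hamming distance and yields a fine-grained ranking.
    diff = query_code[None, :] != db_codes            # (N, B) bit disagreements
    return (diff * bit_weights[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
B, N = 16, 5
db = rng.integers(0, 2, size=(N, B))                  # database hash codes
q = rng.integers(0, 2, size=B)                        # query hash code
w = rng.random(B)                                     # query-adaptive bit weights
ranking = np.argsort(weighted_hamming(q, db, w))
print(ranking)                                        # database items, best first
```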
1705.02950 | Jan Hosang | Jan Hosang, Rodrigo Benenson, Bernt Schiele | Learning non-maximum suppression | Added "Supplementary material" title | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detectors have hugely profited from moving towards an end-to-end
learning paradigm: proposals, features, and the classifier becoming one neural
network improved results two-fold on general object detection. One
indispensable component is non-maximum suppression (NMS), a post-processing
algorithm responsible for merging all detections that belong to the same
object. The de facto standard NMS algorithm is still fully hand-crafted,
suspiciously simple, and -- being based on greedy clustering with a fixed
distance threshold -- forces a trade-off between recall and precision. We
propose a new network architecture designed to perform NMS, using only boxes
and their score. We report experiments for person detection on PETS and for
general object categories on the COCO dataset. Our approach shows promise,
providing improved localization and occlusion handling.
| [
{
"version": "v1",
"created": "Mon, 8 May 2017 16:16:28 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2017 12:52:04 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Hosang",
"Jan",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Learning non-maximum suppression
ABSTRACT: Object detectors have hugely profited from moving towards an end-to-end
learning paradigm: proposals, features, and the classifier becoming one neural
network improved results two-fold on general object detection. One
indispensable component is non-maximum suppression (NMS), a post-processing
algorithm responsible for merging all detections that belong to the same
object. The de facto standard NMS algorithm is still fully hand-crafted,
suspiciously simple, and -- being based on greedy clustering with a fixed
distance threshold -- forces a trade-off between recall and precision. We
propose a new network architecture designed to perform NMS, using only boxes
and their score. We report experiments for person detection on PETS and for
general object categories on the COCO dataset. Our approach shows promise,
providing improved localization and occlusion handling.
| no_new_dataset | 0.949949 |
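For reference, the hand-crafted greedy NMS baseline that the paper proposes to replace with a learned network looks like this (a minimal sketch with an illustrative IoU threshold):

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union of one box against many; boxes are (x1, y1, x2, y2).
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def greedy_nms(boxes, scores, thresh=0.5):
    # Keep the highest-scoring box, suppress everything overlapping it above
    # the fixed threshold, repeat: greedy clustering with a hard trade-off.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```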
1705.03004 | Hussam Qassim Mr. | Hussam Qassim, David Feinzimer, and Abhishek Verma | Residual Squeeze VGG16 | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has ushered in a new era of machine learning, in computer
vision and beyond. Convolutional neural networks have been implemented in image
classification, segmentation and object detection. Despite recent advancements,
we are still in the very early stages and have yet to settle on best practices
for network architecture in terms of depth of design, small model size and
short training time. In this work, we propose a very deep neural network
composed of 16 convolutional layers compressed with the Fire module adapted
from the SqueezeNet model. We also add residual connections to help counteract
degradation. This design can be applied to almost any neural network model,
with residual learning fully incorporated. The proposed model,
Residual-Squeeze-VGG16 (ResSquVGG16), was trained on the large-scale MIT
Places365-Standard scene dataset. In our tests, the model performed with
accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation
accuracy while also enjoying a 23.86% reduction in training time and an 88.4%
reduction in size. In our tests, this model was trained from scratch.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 23:46:26 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Qassim",
"Hussam",
""
],
[
"Feinzimer",
"David",
""
],
[
"Verma",
"Abhishek",
""
]
] | TITLE: Residual Squeeze VGG16
ABSTRACT: Deep learning has ushered in a new era of machine learning, in computer
vision and beyond. Convolutional neural networks have been implemented in image
classification, segmentation and object detection. Despite recent advancements,
we are still in the very early stages and have yet to settle on best practices
for network architecture in terms of depth of design, small model size and
short training time. In this work, we propose a very deep neural network
composed of 16 convolutional layers compressed with the Fire module adapted
from the SqueezeNet model. We also add residual connections to help counteract
degradation. This design can be applied to almost any neural network model,
with residual learning fully incorporated. The proposed model,
Residual-Squeeze-VGG16 (ResSquVGG16), was trained on the large-scale MIT
Places365-Standard scene dataset. In our tests, the model performed with
accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation
accuracy while also enjoying a 23.86% reduction in training time and an 88.4%
reduction in size. In our tests, this model was trained from scratch.
| no_new_dataset | 0.945901 |
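A minimal sketch of the building block the abstract describes, a SqueezeNet-style Fire module wrapped with an identity shortcut; channel sizes and class names are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    # SqueezeNet-style Fire module: a 1x1 "squeeze" followed by parallel
    # 1x1 and 3x3 "expand" convolutions whose outputs are concatenated.
    def __init__(self, in_ch, squeeze, expand):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze, 1), nn.ReLU())
        self.e1 = nn.Conv2d(squeeze, expand, 1)
        self.e3 = nn.Conv2d(squeeze, expand, 3, padding=1)
        self.relu = nn.ReLU()
    def forward(self, x):
        s = self.squeeze(x)
        return self.relu(torch.cat([self.e1(s), self.e3(s)], dim=1))

class ResidualFire(nn.Module):
    # A Fire module plus an identity shortcut; ch must equal 2 * expand so
    # the addition type-checks. Such shortcuts are what the design relies on
    # to counteract degradation in deep stacks.
    def __init__(self, ch, squeeze):
        super().__init__()
        self.fire = Fire(ch, squeeze, ch // 2)
    def forward(self, x):
        return x + self.fire(x)

x = torch.randn(1, 64, 56, 56)
print(ResidualFire(64, squeeze=16)(x).shape)  # torch.Size([1, 64, 56, 56])
```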
1705.03148 | Chen Chen | Ce Li, Chen Chen, Baochang Zhang, Qixiang Ye, Jungong Han, Rongrong Ji | Deep Spatio-temporal Manifold Network for Action Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual data such as videos are often sampled from complex manifold. We
propose leveraging the manifold structure to constrain the deep action feature
learning, thereby minimizing the intra-class variations in the feature space
and alleviating the over-fitting problem. Considering that manifold can be
transferred, layer by layer, from the data domain to the deep features, the
manifold priori is posed from the top layer into the back propagation learning
procedure of convolutional neural network (CNN). The resulting algorithm
--Spatio-Temporal Manifold Network-- is solved with the efficient Alternating
Direction Method of Multipliers and Backward Propagation (ADMM-BP). We
theoretically show that STMN recasts the problem as projection over the
manifold via an embedding method. The proposed approach is evaluated on two
benchmark datasets, showing significant improvements to the baselines.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 02:37:30 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Li",
"Ce",
""
],
[
"Chen",
"Chen",
""
],
[
"Zhang",
"Baochang",
""
],
[
"Ye",
"Qixiang",
""
],
[
"Han",
"Jungong",
""
],
[
"Ji",
"Rongrong",
""
]
] | TITLE: Deep Spatio-temporal Manifold Network for Action Recognition
ABSTRACT: Visual data such as videos are often sampled from complex manifold. We
propose leveraging the manifold structure to constrain the deep action feature
learning, thereby minimizing the intra-class variations in the feature space
and alleviating the over-fitting problem. Considering that manifold can be
transferred, layer by layer, from the data domain to the deep features, the
manifold priori is posed from the top layer into the back propagation learning
procedure of convolutional neural network (CNN). The resulting algorithm
--Spatio-Temporal Manifold Network-- is solved with the efficient Alternating
Direction Method of Multipliers and Backward Propagation (ADMM-BP). We
theoretically show that STMN recasts the problem as projection over the
manifold via an embedding method. The proposed approach is evaluated on two
benchmark datasets, showing significant improvements to the baselines.
| no_new_dataset | 0.94625 |
1705.03178 | Mayank Singh | Mayank Singh, Ajay Jaiswal, Priya Shree, Arindam Pal, Animesh
Mukherjee, Pawan Goyal | Understanding the Impact of Early Citers on Long-Term Scientific Impact | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores an interesting new dimension to the challenging problem
of predicting long-term scientific impact (LTSI) usually measured by the number
of citations accumulated by a paper in the long-term. It is well known that
early citations (within 1-2 years after publication) acquired by a paper
positively affects its LTSI. However, there is no work that investigates if the
set of authors who bring in these early citations to a paper also affect its
LTSI. In this paper, we demonstrate for the first time, the impact of these
authors whom we call early citers (EC) on the LTSI of a paper. Note that this
study of the complex dynamics of EC introduces a brand new paradigm in citation
behavior analysis. Using a massive computer science bibliographic dataset we
identify two distinct categories of EC - we call those authors who have high
overall publication/citation count in the dataset as influential and the rest
of the authors as non-influential. We investigate three characteristic
properties of EC and present an extensive analysis of how each category
correlates with LTSI in terms of these properties. In contrast to popular
perception, we find that influential EC negatively affects LTSI possibly owing
to attention stealing. To motivate this, we present several representative
examples from the dataset. A closer inspection of the collaboration network
reveals that this stealing effect is more profound if an EC is nearer to the
authors of the paper being investigated. As an intuitive use case, we show that
incorporating EC properties in the state-of-the-art supervised citation
prediction models leads to large performance gains. In closing, we present
an online portal to visualize EC statistics along with the prediction results
for a given query paper.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 05:14:46 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Singh",
"Mayank",
""
],
[
"Jaiswal",
"Ajay",
""
],
[
"Shree",
"Priya",
""
],
[
"Pal",
"Arindam",
""
],
[
"Mukherjee",
"Animesh",
""
],
[
"Goyal",
"Pawan",
""
]
] | TITLE: Understanding the Impact of Early Citers on Long-Term Scientific Impact
ABSTRACT: This paper explores an interesting new dimension to the challenging problem
of predicting long-term scientific impact (LTSI) usually measured by the number
of citations accumulated by a paper in the long-term. It is well known that
early citations (within 1-2 years after publication) acquired by a paper
positively affect its LTSI. However, there is no work that investigates if the
set of authors who bring in these early citations to a paper also affect its
LTSI. In this paper, we demonstrate, for the first time, the impact of these
authors whom we call early citers (EC) on the LTSI of a paper. Note that this
study of the complex dynamics of EC introduces a brand new paradigm in citation
behavior analysis. Using a massive computer science bibliographic dataset we
identify two distinct categories of EC - we call those authors who have high
overall publication/citation count in the dataset as influential and the rest
of the authors as non-influential. We investigate three characteristic
properties of EC and present an extensive analysis of how each category
correlates with LTSI in terms of these properties. In contrast to popular
perception, we find that influential EC negatively affects LTSI possibly owing
to attention stealing. To motivate this, we present several representative
examples from the dataset. A closer inspection of the collaboration network
reveals that this stealing effect is more profound if an EC is nearer to the
authors of the paper being investigated. As an intuitive use case, we show that
incorporating EC properties in the state-of-the-art supervised citation
prediction models leads to large performance gains. In closing, we present
an online portal to visualize EC statistics along with the prediction results
for a given query paper.
| no_new_dataset | 0.943971 |
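The early-citer split can be sketched directly from citation records; the two-year window and the influence threshold below are illustrative parameters, and all records are made up.

```python
from datetime import date

# Hypothetical citation records for one paper published on pub_date:
# (citing_author, citation_date); publication counts stand in for influence.
pub_date = date(2010, 1, 1)
citations = [("A. Influential", date(2010, 6, 1)),
             ("B. Junior", date(2011, 3, 1)),
             ("C. Late", date(2014, 1, 1))]
author_pubs = {"A. Influential": 120, "B. Junior": 4, "C. Late": 30}

def early_citers(citations, pub_date, window_years=2, influence_cut=50):
    # Early citers are authors citing within the first two years; split them
    # into influential vs. non-influential by overall publication count.
    early = [a for a, d in citations
             if (d - pub_date).days <= 365 * window_years]
    return ([a for a in early if author_pubs[a] >= influence_cut],
            [a for a in early if author_pubs[a] < influence_cut])

print(early_citers(citations, pub_date))
# (['A. Influential'], ['B. Junior'])  -- C. Late cites outside the window
```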
1705.03212 | San Jiang | San Jiang, Wanshou Jiang | Efficient Structure from Motion for Oblique UAV Images Based on Maximal
Spanning Tree Expansions | 33 pages, 66 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary contribution of this paper is an efficient Structure from Motion
(SfM) solution for oblique unmanned aerial vehicle (UAV) images. First, an
algorithm, considering spatial relationship constrains between image
footprints, is designed for match pair selection with assistant of UAV flight
control data and oblique camera mounting angles. Second, a topological
connection network (TCN), represented by an undirected weighted graph, is
constructed from initial match pairs, which encodes overlap area and
intersection angle into edge weights. Then, an algorithm, termed MST-Expansion,
is proposed to extract the match graph from the TCN, where the TCN is first
simplified by a maximum spanning tree (MST). By further analysis of local
structure in the MST, expansion operations are performed on the nodes of the
MST for match graph enhancement, which is achieved by introducing critical
connections in two expansion directions. Finally, guided by the match graph, an
efficient SfM solution is proposed, and it is validated through
comprehensive analysis and comparison using three UAV datasets captured with
different oblique multi-camera systems. Experimental results demonstrate that the
efficiency of image matching is improved with a speedup ratio ranging from 19
to 35, and competitive orientation accuracy is achieved from both relative
bundle adjustment (BA) without GCPs (Ground Control Points) and absolute BA
with GCPs. At the same time, images in the three datasets are successfully
oriented. For orientation of oblique UAV images, the proposed method can be a
more efficient solution.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 07:22:23 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Jiang",
"San",
""
],
[
"Jiang",
"Wanshou",
""
]
] | TITLE: Efficient Structure from Motion for Oblique UAV Images Based on Maximal
Spanning Tree Expansions
ABSTRACT: The primary contribution of this paper is an efficient Structure from Motion
(SfM) solution for oblique unmanned aerial vehicle (UAV) images. First, an
algorithm, considering spatial relationship constraints between image
footprints, is designed for match pair selection with the assistance of UAV flight
control data and oblique camera mounting angles. Second, a topological
connection network (TCN), represented by an undirected weighted graph, is
constructed from initial match pairs, which encodes overlap area and
intersection angle into edge weights. Then, an algorithm, termed MST-Expansion,
is proposed to extract the match graph from the TCN, where the TCN is first
simplified by a maximum spanning tree (MST). By further analysis of local
structure in the MST, expansion operations are performed on the nodes of the
MST for match graph enhancement, which is achieved by introducing critical
connections in two expansion directions. Finally, guided by the match graph, an
efficient SfM solution is proposed, and it is validated through
comprehensive analysis and comparison using three UAV datasets captured with
different oblique multi-camera systems. Experimental results demonstrate that the
efficiency of image matching is improved with a speedup ratio ranging from 19
to 35, and competitive orientation accuracy is achieved from both relative
bundle adjustment (BA) without GCPs (Ground Control Points) and absolute BA
with GCPs. At the same time, images in the three datasets are successfully
oriented. For orientation of oblique UAV images, the proposed method can be a
more efficient solution.
| no_new_dataset | 0.949059 |
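The MST simplification step is a one-liner with networkx once the TCN is built as a weighted graph; the node names and edge weights below are made-up stand-ins for the overlap-and-angle weights.

```python
import networkx as nx

# Toy TCN: nodes are images, edge weights combine overlap area and
# intersection angle (values here are invented for illustration).
G = nx.Graph()
G.add_weighted_edges_from([
    ("img1", "img2", 0.9), ("img1", "img3", 0.4),
    ("img2", "img3", 0.7), ("img2", "img4", 0.8),
    ("img3", "img4", 0.2),
])

# Simplify the TCN to its maximum spanning tree: keep the strongest
# connections while still touching every image, as a tree.
mst = nx.maximum_spanning_tree(G, weight="weight")
print(sorted(mst.edges(data="weight")))
# [('img1', 'img2', 0.9), ('img2', 'img3', 0.7), ('img2', 'img4', 0.8)]
```

Expansion operations on the MST nodes would then re-add a few critical non-tree edges; that enhancement step is omitted here.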
1705.03260 | Joshua Peterson | Joshua C. Peterson, Thomas L. Griffiths | Evidence for the size principle in semantic and perceptual domains | 6 pages, 4 figures, To appear in the Proceedings of the 39th Annual
Conference of the Cognitive Science Society | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shepard's Universal Law of Generalization offered a compelling case for the
first physics-like law in cognitive science that should hold for all
intelligent agents in the universe. Shepard's account is based on a rational
Bayesian model of generalization, providing an answer to the question of why
such a law should emerge. Extending this account to explain how humans use
multiple examples to make better generalizations requires an additional
assumption, called the size principle: hypotheses that pick out fewer objects
should make a larger contribution to generalization. The degree to which this
principle warrants similarly law-like status is far from conclusive. Typically,
evaluating this principle has not been straightforward, requiring additional
assumptions. We present a new method for evaluating the size principle that is
more direct, and apply this method to a diverse array of datasets. Our results
provide support for the broad applicability of the size principle.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 10:21:49 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Peterson",
"Joshua C.",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] | TITLE: Evidence for the size principle in semantic and perceptual domains
ABSTRACT: Shepard's Universal Law of Generalization offered a compelling case for the
first physics-like law in cognitive science that should hold for all
intelligent agents in the universe. Shepard's account is based on a rational
Bayesian model of generalization, providing an answer to the question of why
such a law should emerge. Extending this account to explain how humans use
multiple examples to make better generalizations requires an additional
assumption, called the size principle: hypotheses that pick out fewer objects
should make a larger contribution to generalization. The degree to which this
principle warrants similarly law-like status is far from conclusive. Typically,
evaluating this principle has not been straightforward, requiring additional
assumptions. We present a new method for evaluating the size principle that is
more direct, and apply this method to a diverse array of datasets. Our results
provide support for the broad applicability of the size principle.
| no_new_dataset | 0.951459 |
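The size principle itself is compact enough to state directly: with a uniform prior, the posterior over hypotheses weights each consistent hypothesis by (1/|h|)^n, so smaller hypotheses dominate as examples accumulate. A minimal number-game sketch (hypothesis sets are illustrative):

```python
import numpy as np

hypotheses = {"multiples_of_10": set(range(10, 101, 10)),
              "powers_of_2": {2, 4, 8, 16, 32, 64},
              "even": set(range(2, 101, 2))}

def posterior(examples, hypotheses):
    # Uniform prior; likelihood of n examples under h is (1/|h|)^n,
    # which is the size principle. Inconsistent hypotheses get zero.
    names = list(hypotheses)
    scores = np.array([
        (1.0 / len(hypotheses[n])) ** len(examples)
        if all(x in hypotheses[n] for x in examples) else 0.0
        for n in names
    ])
    return dict(zip(names, scores / scores.sum()))

print(posterior([16], hypotheses))        # broad "even" remains plausible
print(posterior([16, 8, 2], hypotheses))  # small "powers_of_2" dominates
```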
1705.03264 | Abhik Jana | Abhik Jana, Sruthi Mooriyath, Animesh Mukherjee, Pawan Goyal | WikiM: Metapaths based Wikification of Scientific Abstracts | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to disseminate the exponential extent of knowledge being produced in
the form of scientific publications, it would be best to design mechanisms that
connect it with the already existing rich repository of concepts -- Wikipedia.
Not only does it make scientific reading simple and easy (by connecting the
involved concepts used in the scientific articles to their Wikipedia
explanations) but it also improves the overall quality of the article. In this
paper, we present a novel metapath based method, WikiM, to efficiently wikify
scientific abstracts -- a topic that has been rarely investigated in the
literature. One of the prime motivations for this work comes from the
observation that, wikified abstracts of scientific documents help a reader to
decide better, in comparison to the plain abstracts, whether (s)he would be
interested to read the full article. We perform mention extraction mostly
through traditional tf-idf measures coupled with a set of smart filters. The
entity linking heavily leverages on the rich citation and author publication
networks. Our observation is that various metapaths defined over these networks
can significantly enhance the overall performance of the system. For mention
extraction and entity linking, we outperform most of the competing
state-of-the-art techniques by a large margin arriving at precision values of
72.42% and 73.8% respectively over a dataset from the ACL Anthology Network. In
order to establish the robustness of our scheme, we wikify three other datasets
and get precision values of 63.41%-94.03% and 67.67%-73.29% respectively for
the mention extraction and the entity linking phase.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 10:35:15 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Jana",
"Abhik",
""
],
[
"Mooriyath",
"Sruthi",
""
],
[
"Mukherjee",
"Animesh",
""
],
[
"Goyal",
"Pawan",
""
]
] | TITLE: WikiM: Metapaths based Wikification of Scientific Abstracts
ABSTRACT: In order to disseminate the exponential extent of knowledge being produced in
the form of scientific publications, it would be best to design mechanisms that
connect it with the already existing rich repository of concepts -- Wikipedia.
Not only does it make scientific reading simple and easy (by connecting the
involved concepts used in the scientific articles to their Wikipedia
explanations) but it also improves the overall quality of the article. In this
paper, we present a novel metapath based method, WikiM, to efficiently wikify
scientific abstracts -- a topic that has been rarely investigated in the
literature. One of the prime motivations for this work comes from the
observation that wikified abstracts of scientific documents help a reader to
decide better, in comparison to the plain abstracts, whether (s)he would be
interested in reading the full article. We perform mention extraction mostly
through traditional tf-idf measures coupled with a set of smart filters. The
entity linking heavily leverages the rich citation and author publication
networks. Our observation is that various metapaths defined over these networks
can significantly enhance the overall performance of the system. For mention
extraction and entity linking, we outperform most of the competing
state-of-the-art techniques by a large margin arriving at precision values of
72.42% and 73.8% respectively over a dataset from the ACL Anthology Network. In
order to establish the robustness of our scheme, we wikify three other datasets
and get precision values of 63.41%-94.03% and 67.67%-73.29% respectively for
the mention extraction and the entity linking phase.
| no_new_dataset | 0.952309 |
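The tf-idf scoring used for mention extraction can be reproduced with scikit-learn; the toy abstracts and the top-5 cut-off are illustrative, and the paper's smart filters are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "We present a metapath based method to wikify scientific abstracts.",
    "Entity linking connects mentions to Wikipedia pages.",
    "Citation networks improve entity linking for scientific documents.",
]

# Score candidate mentions (unigrams and bigrams) by tf-idf; a real pipeline
# would add stopword, part-of-speech and other filters before entity linking.
vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(abstracts)
terms = vec.get_feature_names_out()

doc = 0
row = X[doc].toarray().ravel()
top = row.argsort()[::-1][:5]
print([(terms[i], round(row[i], 3)) for i in top])  # top mention candidates
```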
1705.03345 | Emiliano De Cristofaro | Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Emiliano De
Cristofaro, Gianluca Stringhini, Athena Vakali | Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter | In 28th ACM Conference on Hypertext and Social Media (ACM HyperText
2017) | null | null | null | cs.SI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past few years, online bullying and aggression have become
increasingly prominent, and manifested in many different forms on social media.
However, there is little work analyzing the characteristics of abusive users
and what distinguishes them from typical social media users. In this paper, we
start addressing this gap by analyzing tweets containing a large amount of
abusive content. We focus on a Twitter dataset revolving around the Gamergate
controversy, which led to many incidents of cyberbullying and cyberaggression
on various gaming and social media platforms. We study the properties of the
users tweeting about Gamergate, the content they post, and the differences in
their behavior compared to typical Twitter users.
We find that while their tweets are often seemingly about aggressive and
hateful subjects, "Gamergaters" do not exhibit common expressions of online
anger, and in fact primarily differ from typical users in that their tweets are
less joyful. They are also more engaged than typical Twitter users, which is an
indication as to how and why this controversy is still ongoing. Surprisingly,
we find that Gamergaters are less likely to be suspended by Twitter, thus we
analyze their properties to identify differences from typical users and what
may have led to their suspension. We perform an unsupervised machine learning
analysis to detect clusters of users who, though currently active, could be
considered for suspension since they exhibit similar behaviors with suspended
users. Finally, we confirm the usefulness of our analyzed features by emulating
the Twitter suspension mechanism with a supervised learning method, achieving
very good precision and recall.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 14:25:01 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Chatzakou",
"Despoina",
""
],
[
"Kourtellis",
"Nicolas",
""
],
[
"Blackburn",
"Jeremy",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Stringhini",
"Gianluca",
""
],
[
"Vakali",
"Athena",
""
]
] | TITLE: Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter
ABSTRACT: Over the past few years, online bullying and aggression have become
increasingly prominent, and manifested in many different forms on social media.
However, there is little work analyzing the characteristics of abusive users
and what distinguishes them from typical social media users. In this paper, we
start addressing this gap by analyzing tweets containing a large amount of
abusive content. We focus on a Twitter dataset revolving around the Gamergate
controversy, which led to many incidents of cyberbullying and cyberaggression
on various gaming and social media platforms. We study the properties of the
users tweeting about Gamergate, the content they post, and the differences in
their behavior compared to typical Twitter users.
We find that while their tweets are often seemingly about aggressive and
hateful subjects, "Gamergaters" do not exhibit common expressions of online
anger, and in fact primarily differ from typical users in that their tweets are
less joyful. They are also more engaged than typical Twitter users, which is an
indication as to how and why this controversy is still ongoing. Surprisingly,
we find that Gamergaters are less likely to be suspended by Twitter, thus we
analyze their properties to identify differences from typical users and what
may have led to their suspension. We perform an unsupervised machine learning
analysis to detect clusters of users who, though currently active, could be
considered for suspension since they exhibit similar behaviors with suspended
users. Finally, we confirm the usefulness of our analyzed features by emulating
the Twitter suspension mechanism with a supervised learning method, achieving
very good precision and recall.
| no_new_dataset | 0.927429 |
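The unsupervised step, clustering user feature vectors and flagging clusters rich in already-suspended accounts, can be sketched with k-means; the features, the synthetic data, and the two-cluster choice are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy per-user feature vectors (e.g. posting rate, joy score, hashtag use)
# plus a flag for accounts the platform has already suspended.
rng = np.random.default_rng(0)
normal_users = rng.normal(0.0, 1.0, size=(80, 3))
abusive_users = rng.normal(3.0, 1.0, size=(20, 3))
X = np.vstack([normal_users, abusive_users])
suspended = np.array([0] * 80 + [0] * 10 + [1] * 10)  # half the abusive users

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for c in range(2):
    mask = km.labels_ == c
    # Active users in a cluster dominated by suspended accounts behave like
    # them and could be considered for review.
    print(f"cluster {c}: {mask.sum()} users, "
          f"suspended fraction {suspended[mask].mean():.2f}")
```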
1705.03372 | Zhiyuan Shi | Zhiyuan Shi, Timothy M. Hospedales, Tao Xiang | Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation | iccv 2013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of localisation of objects as bounding boxes in images
with weak labels. This weakly supervised object localisation problem has been
tackled in the past using discriminative models where each object class is
localised independently from other classes. We propose a novel framework based
on Bayesian joint topic modelling. Our framework has three distinctive
advantages over previous works: (1) All object classes and image backgrounds
are modelled jointly in a single generative model so that "explaining
away" inference can resolve ambiguity and lead to better learning and
localisation. (2) The Bayesian formulation of the model enables easy
integration of prior knowledge about object appearance to compensate for
limited supervision. (3) Our model can be learned with a mixture of weakly
labelled and unlabelled data, allowing the large volume of unlabelled images on
the Internet to be exploited for learning. Extensive experiments on the
challenging VOC dataset demonstrate that our approach outperforms the
state-of-the-art competitors.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 15:00:07 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Shi",
"Zhiyuan",
""
],
[
"Hospedales",
"Timothy M.",
""
],
[
"Xiang",
"Tao",
""
]
] | TITLE: Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation
ABSTRACT: We address the problem of localisation of objects as bounding boxes in images
with weak labels. This weakly supervised object localisation problem has been
tackled in the past using discriminative models where each object class is
localised independently from other classes. We propose a novel framework based
on Bayesian joint topic modelling. Our framework has three distinctive
advantages over previous works: (1) All object classes and image backgrounds
are modelled jointly in a single generative model so that "explaining
away" inference can resolve ambiguity and lead to better learning and
localisation. (2) The Bayesian formulation of the model enables easy
integration of prior knowledge about object appearance to compensate for
limited supervision. (3) Our model can be learned with a mixture of weakly
labelled and unlabelled data, allowing the large volume of unlabelled images on
the Internet to be exploited for learning. Extensive experiments on the
challenging VOC dataset demonstrate that our approach outperforms the
state-of-the-art competitors.
| no_new_dataset | 0.950088 |
1705.03419 | Ishan Jindal | Ishan Jindal, Matthew Nokleby and Xuewen Chen | Learning Deep Networks from Noisy Labels with Dropout Regularization | Published at 2016 IEEE 16th International Conference on Data Mining | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large datasets often have unreliable labels -- such as those obtained from
Amazon's Mechanical Turk or social media platforms -- and classifiers trained on
mislabeled datasets often exhibit poor performance. We present a simple,
effective technique for accounting for label noise when training deep neural
networks. We augment a standard deep network with a softmax layer that models
the label noise statistics. Then, we train the deep network and noise model
jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled)
dataset. The augmented model is overdetermined, so in order to encourage the
learning of a non-trivial noise model, we apply dropout regularization to the
weights of the noise model during training. Numerical experiments on noisy
versions of the CIFAR-10 and MNIST datasets show that the proposed dropout
technique outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 16:42:32 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Jindal",
"Ishan",
""
],
[
"Nokleby",
"Matthew",
""
],
[
"Chen",
"Xuewen",
""
]
] | TITLE: Learning Deep Networks from Noisy Labels with Dropout Regularization
ABSTRACT: Large datasets often have unreliable labels -- such as those obtained from
Amazon's Mechanical Turk or social media platforms -- and classifiers trained on
mislabeled datasets often exhibit poor performance. We present a simple,
effective technique for accounting for label noise when training deep neural
networks. We augment a standard deep network with a softmax layer that models
the label noise statistics. Then, we train the deep network and noise model
jointly via end-to-end stochastic gradient descent on the (perhaps mislabeled)
dataset. The augmented model is overdetermined, so in order to encourage the
learning of a non-trivial noise model, we apply dropout regularization to the
weights of the noise model during training. Numerical experiments on noisy
versions of the CIFAR-10 and MNIST datasets show that the proposed dropout
technique outperforms state-of-the-art methods.
| no_new_dataset | 0.949809 |
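A minimal sketch of the idea: a softmax noise layer stacked on the base classifier, with dropout applied to the noise weights during joint training. Layer sizes, the column-softmax parameterisation, and the identity initialisation are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class NoisyLabelModel(nn.Module):
    def __init__(self, n_features, n_classes, p_drop=0.5):
        super().__init__()
        self.base = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))
        # Noise layer: maps clean-class posteriors to noisy-label posteriors;
        # initialised near the identity (little assumed noise at the start).
        self.noise_weight = nn.Parameter(torch.eye(n_classes))
        self.drop = nn.Dropout(p_drop)

    def forward(self, x):
        clean = torch.softmax(self.base(x), dim=1)            # p(true class | x)
        # Dropout on the noise weights discourages the trivial solution where
        # the noise layer alone absorbs the labels; softmax over each column
        # keeps W a valid transition matrix p(noisy | true).
        W = torch.softmax(self.drop(self.noise_weight), dim=0)
        return clean @ W.t()                                  # p(noisy label | x)

model = NoisyLabelModel(n_features=20, n_classes=10)
x, noisy_y = torch.randn(4, 20), torch.randint(0, 10, (4,))
loss = nn.functional.nll_loss(model(x).clamp_min(1e-8).log(), noisy_y)
print(loss.item())  # train end-to-end on the (possibly mislabeled) dataset
```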
1705.03428 | Felix J\"aremo Lawin | Felix J\"aremo Lawin, Martin Danelljan, Patrik Tosteberg, Goutam Bhat,
Fahad Shahbaz Khan, Michael Felsberg | Deep Projective 3D Semantic Segmentation | Submitted to CAIP 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation of 3D point clouds is a challenging problem with
numerous real-world applications. While deep learning has revolutionized the
field of image semantic segmentation, its impact on point cloud data has been
limited so far. Recent attempts, based on 3D deep learning approaches
(3D-CNNs), have achieved below-expected results. Such methods require
voxelizations of the underlying point cloud data, leading to decreased spatial
resolution and increased memory consumption. Additionally, 3D-CNNs greatly
suffer from the limited availability of annotated datasets.
In this paper, we propose an alternative framework that avoids the
limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first
project the point cloud onto a set of synthetic 2D-images. These images are
then used as input to a 2D-CNN, designed for semantic segmentation. Finally,
the obtained prediction scores are re-projected to the point cloud to obtain
the segmentation results. We further investigate the impact of multiple
modalities, such as color, depth and surface normals, in a multi-stream network
architecture. Experiments are performed on the recent Semantic3D dataset. Our
approach sets a new state-of-the-art by achieving a relative gain of 7.9 %,
compared to the previous best approach.
| [
{
"version": "v1",
"created": "Tue, 9 May 2017 16:59:41 GMT"
}
] | 2017-05-10T00:00:00 | [
[
"Lawin",
"Felix Järemo",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Tosteberg",
"Patrik",
""
],
[
"Bhat",
"Goutam",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Felsberg",
"Michael",
""
]
] | TITLE: Deep Projective 3D Semantic Segmentation
ABSTRACT: Semantic segmentation of 3D point clouds is a challenging problem with
numerous real-world applications. While deep learning has revolutionized the
field of image semantic segmentation, its impact on point cloud data has been
limited so far. Recent attempts, based on 3D deep learning approaches
(3D-CNNs), have achieved below-expected results. Such methods require
voxelizations of the underlying point cloud data, leading to decreased spatial
resolution and increased memory consumption. Additionally, 3D-CNNs greatly
suffer from the limited availability of annotated datasets.
In this paper, we propose an alternative framework that avoids the
limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first
project the point cloud onto a set of synthetic 2D-images. These images are
then used as input to a 2D-CNN, designed for semantic segmentation. Finally,
the obtained prediction scores are re-projected to the point cloud to obtain
the segmentation results. We further investigate the impact of multiple
modalities, such as color, depth and surface normals, in a multi-stream network
architecture. Experiments are performed on the recent Semantic3D dataset. Our
approach sets a new state-of-the-art by achieving a relative gain of 7.9 %,
compared to the previous best approach.
| no_new_dataset | 0.949576 |
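A rough sketch of the project/re-project steps described in the record above, assuming an ideal pinhole camera with intrinsics K and omitting occlusion handling and the multi-stream 2D-CNN itself:

```python
import numpy as np

def project_points(points: np.ndarray, K: np.ndarray, h: int, w: int):
    """Project an (N, 3) camera-frame point cloud into an h x w image.
    Returns integer pixel coordinates and a visibility mask."""
    z = points[:, 2]
    valid = z > 1e-6                          # keep points in front of camera
    uv = (K @ points.T).T                     # homogeneous pixel coordinates
    px = np.zeros((points.shape[0], 2), dtype=int)
    px[valid] = np.round(uv[valid, :2] / uv[valid, 2:3]).astype(int)
    valid &= (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    return px, valid

def backproject_scores(px: np.ndarray, valid: np.ndarray, score_map: np.ndarray):
    """Gather per-pixel class scores (h, w, C) back onto the visible points."""
    scores = np.zeros((px.shape[0], score_map.shape[-1]))
    scores[valid] = score_map[px[valid, 1], px[valid, 0]]
    return scores
```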
1402.5500 | J\'er\^ome Kunegis | J\'er\^ome Kunegis | Handbook of Network Analysis [KONECT -- the Koblenz Network Collection] | 64 pages | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by-sa/4.0/ | This is the handbook for the KONECT project, the \emph{Koblenz Network
Collection}, a scientific project to collect, analyse, and provide network
datasets for researchers in all related fields of research, by the Namur Center
for Complex Systems (naXys) at the University of Namur, Belgium, with web
hosting provided by the Institute for Web Science and Technologies (WeST) at
the University of Koblenz--Landau, Germany.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2014 11:31:04 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Sep 2014 12:33:42 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Sep 2016 18:09:42 GMT"
},
{
"version": "v4",
"created": "Sat, 6 May 2017 13:21:52 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Kunegis",
"Jérôme",
""
]
] | TITLE: Handbook of Network Analysis [KONECT -- the Koblenz Network Collection]
ABSTRACT: This is the handbook for the KONECT project, the \emph{Koblenz Network
Collection}, a scientific project to collect, analyse, and provide network
datasets for researchers in all related fields of research, by the Namur Center
for Complex Systems (naXys) at the University of Namur, Belgium, with web
hosting provided by the Institute for Web Science and Technologies (WeST) at
the University of Koblenz--Landau, Germany.
| no_new_dataset | 0.912358 |
1606.07442 | Tom Charnock | Tom Charnock and Adam Moss | Deep Recurrent Neural Networks for Supernovae Classification | 9 pages, 4 figures | null | 10.3847/2041-8213/aa603d | null | astro-ph.IM astro-ph.CO cs.LG physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply deep recurrent neural networks, which are capable of learning
complex sequential information, to classify supernovae\footnote{Code available
at
\href{https://github.com/adammoss/supernovae}{https://github.com/adammoss/supernovae}}.
The observational time and filter fluxes are used as inputs to the network, but
since the inputs are agnostic additional data such as host galaxy information
can also be included. Using the Supernovae Photometric Classification Challenge
(SPCC) data, we find that deep networks are capable of learning about light
curves; however, the performance of the network is highly sensitive to the
amount of training data. For a training size of 50\% of the representational
SPCC dataset (around $10^4$ supernovae) we obtain a type-Ia vs. non-type-Ia
classification accuracy of 94.7\%, an area under the Receiver Operating
Characteristic curve AUC of 0.986 and a SPCC figure-of-merit $F_1=0.64$. When
using only the data for the early-epoch challenge defined by the SPCC we
achieve a classification accuracy of 93.1\%, AUC of 0.977 and $F_1=0.58$,
results almost as good as with the whole light-curve. By employing
bidirectional neural networks we can acquire impressive classification results
between supernovae types -I,~-II and~-III at an accuracy of 90.4\% and AUC of
0.974. We also apply a pre-trained model to obtain classification probabilities
as a function of time, and show it can give early indications of supernovae
type. Our method is competitive with existing algorithms and has applications
for future large-scale photometric surveys.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 20:00:02 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2017 18:57:31 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Charnock",
"Tom",
""
],
[
"Moss",
"Adam",
""
]
] | TITLE: Deep Recurrent Neural Networks for Supernovae Classification
ABSTRACT: We apply deep recurrent neural networks, which are capable of learning
complex sequential information, to classify supernovae\footnote{Code available
at
\href{https://github.com/adammoss/supernovae}{https://github.com/adammoss/supernovae}}.
The observational time and filter fluxes are used as inputs to the network, but
since the inputs are agnostic additional data such as host galaxy information
can also be included. Using the Supernovae Photometric Classification Challenge
(SPCC) data, we find that deep networks are capable of learning about light
curves; however, the performance of the network is highly sensitive to the
amount of training data. For a training size of 50\% of the representational
SPCC dataset (around $10^4$ supernovae) we obtain a type-Ia vs. non-type-Ia
classification accuracy of 94.7\%, an area under the Receiver Operating
Characteristic curve AUC of 0.986 and a SPCC figure-of-merit $F_1=0.64$. When
using only the data for the early-epoch challenge defined by the SPCC we
achieve a classification accuracy of 93.1\%, AUC of 0.977 and $F_1=0.58$,
results almost as good as with the whole light-curve. By employing
bidirectional neural networks we can acquire impressive classification results
between supernovae types -I,~-II and~-III at an accuracy of 90.4\% and AUC of
0.974. We also apply a pre-trained model to obtain classification probabilities
as a function of time, and show it can give early indications of supernovae
type. Our method is competitive with existing algorithms and has applications
for future large-scale photometric surveys.
| no_new_dataset | 0.944638 |
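A minimal sketch of a bidirectional recurrent classifier over light-curve sequences, in the spirit of the record above; the layer sizes and mean-pooling choice are assumptions, not the paper's configuration:

```python
import torch.nn as nn

class LightCurveClassifier(nn.Module):
    """Bidirectional LSTM over (batch, time, features) light-curve inputs,
    where features could be observation time plus per-filter fluxes."""

    def __init__(self, n_features: int, n_classes: int, hidden: int = 16):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # 2x for both directions

    def forward(self, x):
        out, _ = self.rnn(x)                   # (batch, time, 2 * hidden)
        return self.head(out.mean(dim=1))      # mean-pool over epochs
```

Because the inputs are agnostic, extra columns (e.g., host-galaxy features) can simply be appended to the per-epoch feature vector.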
1607.06694 | Mahdi Boloursaz Mashhadi | Mahdi Boloursaz Mashhadi, Maryam Fallah, and Farokh Marvasti | Interpolation of Sparse Graph Signals by Sequential Adaptive Thresholds | 12th International Conference on Sampling Theory and Applications
(SAMPTA 2017) | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of interpolating signals defined on graphs.
A major presumption considered by many previous approaches to this problem has
been lowpass/ band-limitedness of the underlying graph signal. However,
inspired by the findings on sparse signal reconstruction, we consider the graph
signal to be rather sparse/compressible in the Graph Fourier Transform (GFT)
domain and propose the Iterative Method with Adaptive Thresholding for Graph
Interpolation (IMATGI) algorithm for sparsity-promoting interpolation of the
underlying graph signal. We analytically prove convergence of the proposed
algorithm. We also demonstrate efficient performance of the proposed IMATGI
algorithm in reconstructing randomly generated sparse graph signals. Finally,
we consider the widely desirable application of recommendation systems and show
by simulations that IMATGI outperforms state-of-the-art algorithms on the
benchmark datasets in this application.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2016 14:40:33 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2016 19:15:24 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2016 13:44:48 GMT"
},
{
"version": "v4",
"created": "Sat, 6 May 2017 23:07:01 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Mashhadi",
"Mahdi Boloursaz",
""
],
[
"Fallah",
"Maryam",
""
],
[
"Marvasti",
"Farokh",
""
]
] | TITLE: Interpolation of Sparse Graph Signals by Sequential Adaptive Thresholds
ABSTRACT: This paper considers the problem of interpolating signals defined on graphs.
A major presumption considered by many previous approaches to this problem has
been lowpass/ band-limitedness of the underlying graph signal. However,
inspired by the findings on sparse signal reconstruction, we consider the graph
signal to be rather sparse/compressible in the Graph Fourier Transform (GFT)
domain and propose the Iterative Method with Adaptive Thresholding for Graph
Interpolation (IMATGI) algorithm for sparsity-promoting interpolation of the
underlying graph signal. We analytically prove convergence of the proposed
algorithm. We also demonstrate efficient performance of the proposed IMATGI
algorithm in reconstructing randomly generated sparse graph signals. Finally,
we consider the widely desirable application of recommendation systems and show
by simulations that IMATGI outperforms state-of-the-art algorithms on the
benchmark datasets in this application.
| no_new_dataset | 0.941547 |
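A rough sketch of iterative interpolation with a decaying threshold in the GFT domain, as the abstract above describes; the update form, step size, and exponential threshold schedule are assumptions rather than the authors' exact algorithm:

```python
import numpy as np

def imat_graph(y, sample_mask, U, iters=200, lam=1.0, beta=0.05, t0=1.0):
    """Interpolate a graph signal that is sparse in the GFT domain.

    y           -- observed signal (zeros off the sampled vertices)
    sample_mask -- 1.0 on sampled vertices, 0.0 elsewhere
    U           -- Laplacian eigenvector matrix (columns form the GFT basis)
    """
    x = np.zeros_like(y)
    for k in range(iters):
        s = x + lam * sample_mask * (y - x)    # enforce sample consistency
        coeffs = U.T @ s                        # forward GFT
        thr = t0 * np.exp(-beta * k)            # adaptively decaying threshold
        coeffs[np.abs(coeffs) < thr] = 0.0      # promote GFT-domain sparsity
        x = U @ coeffs                          # inverse GFT
    return x
```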
1608.03371 | Shenghua Liu | Shenghua Liu, Houdong Zheng, Huawei Shen, Xiangwen Liao, Xueqi Cheng | Learning Sentimental Influences from Users' Behaviors | 11 pages, related version is accepted by IJCAI 2017 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling interpersonal influence on different sentimental polarities is a
fundamental problem in opinion formation and viral marketing. No effective
solution for learning sentimental influences from users' behaviors has yet
been found. Previous work on information propagation directly defines the
interpersonal influence between each pair of users as a parameter that is
independent of all others, even when the influences come from or affect the
same user; moreover, influences are learned from users' propagation behaviors,
namely temporal cascades, without associating sentiments with them. We
therefore propose to model interpersonal influence by latent influence and
susceptibility matrices defined over individual users and sentiment polarities.
Such low-dimensional, distributed representations naturally couple the
interpersonal influences related to the same user and, in turn, reduce the
model complexity. Sentiments act on different rows of the parameter matrices,
capturing their effects on cascade modeling. Using an iterative optimization
algorithm of projected stochastic gradient descent over shuffled mini-batches
with the Adadelta update rule, negative cases are repeatedly sampled according
to the distribution of users' infection frequencies, reducing computation cost
and optimization imbalance. Experiments are conducted on a Microblog dataset.
The results show that our model achieves better performance than
state-of-the-art and pair-wise models. Moreover, analyzing the distribution of
the learned users' sentimental influences and susceptibilities yields some
interesting discoveries.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 05:18:36 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2017 16:58:22 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Liu",
"Shenghua",
""
],
[
"Zheng",
"Houdong",
""
],
[
"Shen",
"Huawei",
""
],
[
"Liao",
"Xiangwen",
""
],
[
"Cheng",
"Xueqi",
""
]
] | TITLE: Learning Sentimental Influences from Users' Behaviors
ABSTRACT: Modeling interpersonal influence on different sentimental polarities is a
fundamental problem in opinion formation and viral marketing. No effective
solution for learning sentimental influences from users' behaviors has yet
been found. Previous work on information propagation directly defines the
interpersonal influence between each pair of users as a parameter that is
independent of all others, even when the influences come from or affect the
same user; moreover, influences are learned from users' propagation behaviors,
namely temporal cascades, without associating sentiments with them. We
therefore propose to model interpersonal influence by latent influence and
susceptibility matrices defined over individual users and sentiment polarities.
Such low-dimensional, distributed representations naturally couple the
interpersonal influences related to the same user and, in turn, reduce the
model complexity. Sentiments act on different rows of the parameter matrices,
capturing their effects on cascade modeling. Using an iterative optimization
algorithm of projected stochastic gradient descent over shuffled mini-batches
with the Adadelta update rule, negative cases are repeatedly sampled according
to the distribution of users' infection frequencies, reducing computation cost
and optimization imbalance. Experiments are conducted on a Microblog dataset.
The results show that our model achieves better performance than
state-of-the-art and pair-wise models. Moreover, analyzing the distribution of
the learned users' sentimental influences and susceptibilities yields some
interesting discoveries.
| no_new_dataset | 0.9462 |
1608.05743 | Songze Li | Songze Li, Qian Yu, Mohammad Ali Maddah-Ali, A. Salman Avestimehr | A Scalable Framework for Wireless Distributed Computing | To appear in IEEE/ACM Transactions on Networking | null | null | null | cs.IT cs.DC math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a wireless distributed computing system, in which multiple mobile
users, connected wirelessly through an access point, collaborate to perform a
computation task. In particular, users communicate with each other via the
access point to exchange their locally computed intermediate computation
results, which is known as data shuffling. We propose a scalable framework for
this system, in which the required communication bandwidth for data shuffling
does not increase with the number of users in the network. The key idea is to
utilize a particular repetitive pattern of placing the dataset (thus a
particular repetitive pattern of intermediate computations), in order to
provide coding opportunities at both the users and the access point, which
reduce the required uplink communication bandwidth from users to access point
and the downlink communication bandwidth from access point to users by factors
that grow linearly with the number of users. We also demonstrate that the
proposed dataset placement and coded shuffling schemes are optimal (i.e.,
achieve the minimum required shuffling load) for both a centralized setting and
a decentralized setting, by developing tight information-theoretic lower
bounds.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 21:48:19 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2017 22:30:16 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Li",
"Songze",
""
],
[
"Yu",
"Qian",
""
],
[
"Maddah-Ali",
"Mohammad Ali",
""
],
[
"Avestimehr",
"A. Salman",
""
]
] | TITLE: A Scalable Framework for Wireless Distributed Computing
ABSTRACT: We consider a wireless distributed computing system, in which multiple mobile
users, connected wirelessly through an access point, collaborate to perform a
computation task. In particular, users communicate with each other via the
access point to exchange their locally computed intermediate computation
results, which is known as data shuffling. We propose a scalable framework for
this system, in which the required communication bandwidth for data shuffling
does not increase with the number of users in the network. The key idea is to
utilize a particular repetitive pattern of placing the dataset (thus a
particular repetitive pattern of intermediate computations), in order to
provide coding opportunities at both the users and the access point, which
reduce the required uplink communication bandwidth from users to access point
and the downlink communication bandwidth from access point to users by factors
that grow linearly with the number of users. We also demonstrate that the
proposed dataset placement and coded shuffling schemes are optimal (i.e.,
achieve the minimum required shuffling load) for both a centralized setting and
a decentralized setting, by developing tight information-theoretic lower
bounds.
| no_new_dataset | 0.94868 |
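A toy illustration of the coded-shuffling intuition in the record above for K = 3 users; the repetitive placement, the general scheme, and the optimality arguments of the paper are not reproduced, and all sizes here are arbitrary:

```python
import numpy as np

# Dataset split into 3 blocks; user i already stores every block except i.
blocks = [np.random.randint(0, 256, size=4, dtype=np.uint8) for _ in range(3)]

# Uncoded downlink shuffling would send 3 blocks; one XOR serves all users.
coded = blocks[0] ^ blocks[1] ^ blocks[2]

for i in range(3):
    known = [blocks[j] for j in range(3) if j != i]     # user i's side info
    decoded = coded ^ known[0] ^ known[1]               # cancel stored blocks
    assert np.array_equal(decoded, blocks[i])           # recovers its block
```

The saving comes from side information: each user's locally stored blocks let it cancel everything in the coded transmission except the block it actually needs.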
1609.09475 | Andy Zeng | Andy Zeng, Kuan-Ting Yu, Shuran Song, Daniel Suo, Ed Walker Jr.,
Alberto Rodriguez and Jianxiong Xiao | Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the
Amazon Picking Challenge | To appear at the International Conference on Robotics and Automation
(ICRA) 2017. Project webpage: http://apc.cs.princeton.edu/ | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot warehouse automation has attracted significant interest in recent
years, perhaps most visibly in the Amazon Picking Challenge (APC). A fully
autonomous warehouse pick-and-place system requires robust vision that reliably
recognizes and locates objects amid cluttered environments, self-occlusions,
sensor noise, and a large variety of objects. In this paper we present an
approach that leverages multi-view RGB-D data and self-supervised, data-driven
learning to overcome those difficulties. The approach was part of the
MIT-Princeton Team system that took 3rd and 4th place in the stowing and
picking tasks, respectively, at APC 2016. In the proposed approach, we segment
and label multiple views of a scene with a fully convolutional neural network,
and then fit pre-scanned 3D object models to the resulting segmentation to get
the 6D object pose. Training a deep neural network for segmentation typically
requires a large amount of training data. We propose a self-supervised method
to generate a large labeled dataset without tedious manual segmentation. We
demonstrate that our system can reliably estimate the 6D pose of objects under
a variety of scenarios. All code, data, and benchmarks are available at
http://apc.cs.princeton.edu/
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2016 19:39:13 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Oct 2016 00:24:29 GMT"
},
{
"version": "v3",
"created": "Sun, 7 May 2017 20:12:55 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Zeng",
"Andy",
""
],
[
"Yu",
"Kuan-Ting",
""
],
[
"Song",
"Shuran",
""
],
[
"Suo",
"Daniel",
""
],
[
"Walker",
"Ed",
"Jr."
],
[
"Rodriguez",
"Alberto",
""
],
[
"Xiao",
"Jianxiong",
""
]
] | TITLE: Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the
Amazon Picking Challenge
ABSTRACT: Robot warehouse automation has attracted significant interest in recent
years, perhaps most visibly in the Amazon Picking Challenge (APC). A fully
autonomous warehouse pick-and-place system requires robust vision that reliably
recognizes and locates objects amid cluttered environments, self-occlusions,
sensor noise, and a large variety of objects. In this paper we present an
approach that leverages multi-view RGB-D data and self-supervised, data-driven
learning to overcome those difficulties. The approach was part of the
MIT-Princeton Team system that took 3rd and 4th place in the stowing and
picking tasks, respectively, at APC 2016. In the proposed approach, we segment
and label multiple views of a scene with a fully convolutional neural network,
and then fit pre-scanned 3D object models to the resulting segmentation to get
the 6D object pose. Training a deep neural network for segmentation typically
requires a large amount of training data. We propose a self-supervised method
to generate a large labeled dataset without tedious manual segmentation. We
demonstrate that our system can reliably estimate the 6D pose of objects under
a variety of scenarios. All code, data, and benchmarks are available at
http://apc.cs.princeton.edu/
| no_new_dataset | 0.945851 |
1611.05118 | Mohit Iyyer | Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan
Boyd-Graber, Hal Daum\'e III, Larry Davis | The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels
in Comic Book Narratives | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual narrative is often a combination of explicit information and judicious
omissions, relying on the viewer to supply missing details. In comics, most
movements in time and space are hidden in the "gutters" between panels. To
follow the story, readers logically connect panels together by inferring unseen
actions through a process called "closure". While computers can now describe
what is explicitly depicted in natural images, in this paper we examine whether
they can understand the closure-driven narratives conveyed by stylized artwork
and dialogue in comic book panels. We construct a dataset, COMICS, that
consists of over 1.2 million panels (120 GB) paired with automatic textbox
transcriptions. An in-depth analysis of COMICS demonstrates that neither text
nor image alone can tell a comic book story, so a computer must understand both
modalities to keep up with the plot. We introduce three cloze-style tasks that
ask models to predict narrative and character-centric aspects of a panel given
n preceding panels as context. Various deep neural architectures underperform
human baselines on these tasks, suggesting that COMICS contains fundamental
challenges for both vision and language.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 02:16:09 GMT"
},
{
"version": "v2",
"created": "Sun, 7 May 2017 20:26:24 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Iyyer",
"Mohit",
""
],
[
"Manjunatha",
"Varun",
""
],
[
"Guha",
"Anupam",
""
],
[
"Vyas",
"Yogarshi",
""
],
[
"Boyd-Graber",
"Jordan",
""
],
[
"Daumé",
"Hal",
"III"
],
[
"Davis",
"Larry",
""
]
] | TITLE: The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels
in Comic Book Narratives
ABSTRACT: Visual narrative is often a combination of explicit information and judicious
omissions, relying on the viewer to supply missing details. In comics, most
movements in time and space are hidden in the "gutters" between panels. To
follow the story, readers logically connect panels together by inferring unseen
actions through a process called "closure". While computers can now describe
what is explicitly depicted in natural images, in this paper we examine whether
they can understand the closure-driven narratives conveyed by stylized artwork
and dialogue in comic book panels. We construct a dataset, COMICS, that
consists of over 1.2 million panels (120 GB) paired with automatic textbox
transcriptions. An in-depth analysis of COMICS demonstrates that neither text
nor image alone can tell a comic book story, so a computer must understand both
modalities to keep up with the plot. We introduce three cloze-style tasks that
ask models to predict narrative and character-centric aspects of a panel given
n preceding panels as context. Various deep neural architectures underperform
human baselines on these tasks, suggesting that COMICS contains fundamental
challenges for both vision and language.
| new_dataset | 0.961534 |
1704.00326 | Alessandro Lameiras Koerich | Fabio Dittrich and Luiz E. S. de Oliveira and Alceu S. Britto Jr. and
Alessandro L. Koerich | People Counting in Crowded and Outdoor Scenes using a Hybrid
Multi-Camera Approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents two novel approaches for people counting in crowded and
open environments that combine the information gathered by multiple views.
Multiple cameras are used to expand the field of view as well as to mitigate the
problem of occlusion that commonly affects the performance of counting methods
using single cameras. The first approach is regarded as a direct approach and
it attempts to segment and count each individual in the crowd. For such an aim,
two head detectors trained with head images are employed: one based on support
vector machines and another based on an Adaboost perceptron. The second approach,
regarded as an indirect approach, employs learning algorithms and statistical
analysis on the whole crowd to achieve counting. For such an aim, corner points
are extracted from groups of people in a foreground image and computed by a
learning algorithm which estimates the number of people in the scene. Both
approaches count the number of people on the scene and not only on a given
image or video frame of the scene. The experimental results obtained on the
benchmark PETS2009 video dataset show that the proposed indirect method surpasses
other methods with improvements of up to 46.7% and provides accurate counting
results for the crowded scenes. On the other hand, the direct method shows high
error rates due to the fact that the latter has much more complex problems to
solve, such as segmentation of heads.
| [
{
"version": "v1",
"created": "Sun, 2 Apr 2017 16:38:04 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2017 12:51:51 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Dittrich",
"Fabio",
""
],
[
"de Oliveira",
"Luiz E. S.",
""
],
[
"Britto",
"Alceu S.",
"Jr."
],
[
"Koerich",
"Alessandro L.",
""
]
] | TITLE: People Counting in Crowded and Outdoor Scenes using a Hybrid
Multi-Camera Approach
ABSTRACT: This paper presents two novel approaches for people counting in crowded and
open environments that combine the information gathered by multiple views.
Multiple cameras are used to expand the field of view as well as to mitigate the
problem of occlusion that commonly affects the performance of counting methods
using single cameras. The first approach is regarded as a direct approach and
it attempts to segment and count each individual in the crowd. For such an aim,
two head detectors trained with head images are employed: one based on support
vector machines and another based on an Adaboost perceptron. The second approach,
regarded as an indirect approach, employs learning algorithms and statistical
analysis on the whole crowd to achieve counting. For such an aim, corner points
are extracted from groups of people in a foreground image and computed by a
learning algorithm which estimates the number of people in the scene. Both
approaches count the number of people on the scene and not only on a given
image or video frame of the scene. The experimental results obtained on the
benchmark PETS2009 video dataset show that the proposed indirect method surpasses
other methods with improvements of up to 46.7% and provides accurate counting
results for the crowded scenes. On the other hand, the direct method shows high
error rates due to the fact that the latter has much more complex problems to
solve, such as segmentation of heads.
| no_new_dataset | 0.952397 |
1704.04743 | Roee Aharoni | Roee Aharoni and Yoav Goldberg | Towards String-to-Tree Neural Machine Translation | Accepted as a short paper in ACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a simple method to incorporate syntactic information about the
target language in a neural machine translation system by translating into
linearized, lexicalized constituency trees. An experiment on the WMT16
German-English news translation task resulted in an improved BLEU score when
compared to a syntax-agnostic NMT baseline trained on the same dataset. An
analysis of the translations from the syntax-aware system shows that it
performs more reordering during translation in comparison to the baseline. A
small-scale human evaluation also showed an advantage to the syntax-aware
system.
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2017 09:54:50 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 10:20:28 GMT"
},
{
"version": "v3",
"created": "Sat, 6 May 2017 07:25:19 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Aharoni",
"Roee",
""
],
[
"Goldberg",
"Yoav",
""
]
] | TITLE: Towards String-to-Tree Neural Machine Translation
ABSTRACT: We present a simple method to incorporate syntactic information about the
target language in a neural machine translation system by translating into
linearized, lexicalized constituency trees. An experiment on the WMT16
German-English news translation task resulted in an improved BLEU score when
compared to a syntax-agnostic NMT baseline trained on the same dataset. An
analysis of the translations from the syntax-aware system shows that it
performs more reordering during translation in comparison to the baseline. A
small-scale human evaluation also showed an advantage to the syntax-aware
system.
| no_new_dataset | 0.951142 |
1705.01908 | Yifan Liu | Yifan Liu, Zengchang Qin, Zhenbo Luo and Hua Wang | Auto-painter: Cartoon Image Generation from Sketch by Using Conditional
Generative Adversarial Networks | 12 pages, 7 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Recently, realistic image generation using deep neural networks has become a
hot topic in machine learning and computer vision. Images can be generated at
the pixel level by learning from a large collection of images. Learning to
generate colorful cartoon images from black-and-white sketches is not only an
interesting research problem, but also a potential application in digital
entertainment. In this paper, we investigate the sketch-to-image synthesis
problem by using conditional generative adversarial networks (cGAN). We propose
the auto-painter model which can automatically generate compatible colors for a
sketch. The new model is not only capable of painting hand-draw sketch with
proper colors, but also allowing users to indicate preferred colors.
Experimental results on two sketch datasets show that the auto-painter performs
better that existing image-to-image methods.
| [
{
"version": "v1",
"created": "Thu, 4 May 2017 17:04:28 GMT"
},
{
"version": "v2",
"created": "Sun, 7 May 2017 03:40:05 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Liu",
"Yifan",
""
],
[
"Qin",
"Zengchang",
""
],
[
"Luo",
"Zhenbo",
""
],
[
"Wang",
"Hua",
""
]
] | TITLE: Auto-painter: Cartoon Image Generation from Sketch by Using Conditional
Generative Adversarial Networks
ABSTRACT: Recently, realistic image generation using deep neural networks has become a
hot topic in machine learning and computer vision. Images can be generated at
the pixel level by learning from a large collection of images. Learning to
generate colorful cartoon images from black-and-white sketches is not only an
interesting research problem, but also a potential application in digital
entertainment. In this paper, we investigate the sketch-to-image synthesis
problem by using conditional generative adversarial networks (cGAN). We propose
the auto-painter model which can automatically generate compatible colors for a
sketch. The new model is not only capable of painting hand-draw sketch with
proper colors, but also allowing users to indicate preferred colors.
Experimental results on two sketch datasets show that the auto-painter performs
better that existing image-to-image methods.
| no_new_dataset | 0.951774 |
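A minimal sketch of the pix2pix-style conditional-GAN generator objective such a sketch-to-image model might use; the L1 weight and the discriminator interface (conditioning by channel-wise concatenation) are assumptions, not details from the record above:

```python
import torch
import torch.nn.functional as F

def generator_loss(disc, sketch, fake, real, l1_weight: float = 100.0):
    """Adversarial term plus an L1 color-fidelity term for the generator."""
    pred_fake = disc(torch.cat([sketch, fake], dim=1))   # condition on sketch
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))           # try to fool critic
    recon = F.l1_loss(fake, real)                        # stay close to truth
    return adv + l1_weight * recon
```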
1705.02429 | Peng Tang | Peng Tang, Xinggang Wang, Zilong Huang, Xiang Bai, Wenyu Liu | Deep Patch Learning for Weakly Supervised Object Classification and
Discovery | Accepted by Pattern Recognition | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patch-level image representation is very important for object classification
and detection, since it is robust to spatial transformation, scale variation,
and cluttered background. Many existing methods usually require fine-grained
supervisions (e.g., bounding-box annotations) to learn patch features, which
requires a great effort to label images and may limit their potential applications.
In this paper, we propose to learn patch features via weak supervisions, i.e.,
only image-level supervisions. To achieve this goal, we treat images as bags
and patches as instances to integrate the weakly supervised multiple instance
learning constraints into deep neural networks. Also, our method integrates the
traditional multiple stages of weakly supervised object classification and
discovery into a unified deep convolutional neural network and optimizes the
network in an end-to-end way. The network processes the two tasks of object
classification and discovery jointly, and shares hierarchical deep features.
Through this joint learning strategy, weakly supervised object classification
and discovery are beneficial to each other. We test the proposed method on the
challenging PASCAL VOC datasets. The results show that our method can obtain
state-of-the-art performance on object classification, and very competitive
results on object discovery, with faster testing speed than competitors.
| [
{
"version": "v1",
"created": "Sat, 6 May 2017 02:05:38 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Tang",
"Peng",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Huang",
"Zilong",
""
],
[
"Bai",
"Xiang",
""
],
[
"Liu",
"Wenyu",
""
]
] | TITLE: Deep Patch Learning for Weakly Supervised Object Classification and
Discovery
ABSTRACT: Patch-level image representation is very important for object classification
and detection, since it is robust to spatial transformation, scale variation,
and cluttered background. Many existing methods usually require fine-grained
supervisions (e.g., bounding-box annotations) to learn patch features, which
requires a great effort to label images and may limit their potential applications.
In this paper, we propose to learn patch features via weak supervisions, i.e.,
only image-level supervisions. To achieve this goal, we treat images as bags
and patches as instances to integrate the weakly supervised multiple instance
learning constraints into deep neural networks. Also, our method integrates the
traditional multiple stages of weakly supervised object classification and
discovery into a unified deep convolutional neural network and optimizes the
network in an end-to-end way. The network processes the two tasks of object
classification and discovery jointly, and shares hierarchical deep features.
Through this joint learning strategy, weakly supervised object classification
and discovery are beneficial to each other. We test the proposed method on the
challenging PASCAL VOC datasets. The results show that our method can obtain
state-of-the-art performance on object classification, and very competitive
results on object discovery, with faster testing speed than competitors.
| no_new_dataset | 0.949435 |
1705.02431 | He Zhang | He Zhang, Vishal M. Patel | Sparse Representation-based Open Set Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a generalized Sparse Representation-based Classification (SRC)
algorithm for open set recognition where not all classes presented during
testing are known during training. The SRC algorithm uses class reconstruction
errors for classification. As most of the discriminative information for open
set recognition is hidden in the tail part of the matched and sum of
non-matched reconstruction error distributions, we model the tail of those two
error distributions using the statistical Extreme Value Theory (EVT). Then we
simplify the open set recognition problem into a set of hypothesis testing
problems. The confidence scores corresponding to the tail distributions of a
novel test sample are then fused to determine its identity. The effectiveness
of the proposed method is demonstrated using four publicly available image and
object classification datasets and it is shown that this method can perform
significantly better than many competitive open set recognition algorithms.
Code is publicly available: https://github.com/hezhangsprinter/SROSR
| [
{
"version": "v1",
"created": "Sat, 6 May 2017 02:16:48 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Zhang",
"He",
""
],
[
"Patel",
"Vishal M.",
""
]
] | TITLE: Sparse Representation-based Open Set Recognition
ABSTRACT: We propose a generalized Sparse Representation-based Classification (SRC)
algorithm for open set recognition where not all classes presented during
testing are known during training. The SRC algorithm uses class reconstruction
errors for classification. As most of the discriminative information for open
set recognition is hidden in the tail part of the matched and sum of
non-matched reconstruction error distributions, we model the tail of those two
error distributions using the statistical Extreme Value Theory (EVT). Then we
simplify the open set recognition problem into a set of hypothesis testing
problems. The confidence scores corresponding to the tail distributions of a
novel test sample are then fused to determine its identity. The effectiveness
of the proposed method is demonstrated using four publicly available image and
object classification datasets and it is shown that this method can perform
significantly better than many competitive open set recognition algorithms.
Code is publicly available: https://github.com/hezhangsprinter/SROSR
| no_new_dataset | 0.94887 |
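A rough sketch of the EVT tail-modeling step described in the record above, assuming a Weibull fit to the largest reconstruction errors via SciPy; the tail fraction and the exact fusion of matched/non-matched scores are simplifications:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail(errors: np.ndarray, tail_frac: float = 0.1):
    """Fit a Weibull distribution to the upper tail of training errors."""
    k = max(3, int(tail_frac * len(errors)))   # keep at least a few samples
    tail = np.sort(errors)[-k:]
    return weibull_min.fit(tail)               # (shape, loc, scale)

def tail_confidence(err: float, params) -> float:
    """Survival probability of a test error under the fitted tail model;
    small values suggest the sample does not belong to a known class."""
    shape, loc, scale = params
    return float(weibull_min.sf(err, shape, loc=loc, scale=scale))
```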
1705.02447 | Yifan Liu | Yifan Liu, Zengchang Qin, Pengyu Li and Tao Wan | Stock Volatility Prediction Using Recurrent Neural Networks with
Sentiment Analysis | 10 pages, 5 figures and it is an extended vision of our conference
paper in IEA/AIE 2017 | null | null | null | cs.SI | http://creativecommons.org/licenses/by-sa/4.0/ | In this paper, we propose a model to analyze sentiment of online stock forum
and use the information to predict the stock volatility in the Chinese market.
We have labeled the sentiment of the online financial posts and make the
dataset public available for research. By generating a sentimental dictionary
based on financial terms, we develop a model to compute the sentimental score
of each online post related to a particular stock. Such sentimental information
is represented by two sentiment indicators, which are fused to market data for
stock volatility prediction by using the Recurrent Neural Networks (RNNs).
An empirical study shows that, compared to using the RNN alone, the model performs
significantly better with the sentiment indicators.
| [
{
"version": "v1",
"created": "Sat, 6 May 2017 05:13:50 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Liu",
"Yifan",
""
],
[
"Qin",
"Zengchang",
""
],
[
"Li",
"Pengyu",
""
],
[
"Wan",
"Tao",
""
]
] | TITLE: Stock Volatility Prediction Using Recurrent Neural Networks with
Sentiment Analysis
ABSTRACT: In this paper, we propose a model to analyze the sentiment of an online stock forum
and use the information to predict the stock volatility in the Chinese market.
We have labeled the sentiment of the online financial posts and made the
dataset publicly available for research. By generating a sentimental dictionary
based on financial terms, we develop a model to compute the sentimental score
of each online post related to a particular stock. Such sentimental information
is represented by two sentiment indicators, which are fused to market data for
stock volatility prediction by using the Recurrent Neural Networks (RNNs).
An empirical study shows that, compared to using the RNN alone, the model performs
significantly better with the sentiment indicators.
| no_new_dataset | 0.937669 |
1705.02499 | Mayank Singh | Mayank Singh, Abhishek Niranjan, Divyansh Gupta, Nikhil Angad Bakshi,
Animesh Mukherjee, Pawan Goyal | Citation sentence reuse behavior of scientists: A case study on massive
bibliographic text dataset of computer science | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our current knowledge of scholarly plagiarism is largely based on the
similarity between full text research articles. In this paper, we propose an
innovative and novel conceptualization of scholarly plagiarism in the form of
reuse of explicit citation sentences in scientific research articles. Note that
while full-text plagiarism is an indicator of a gross-level behavior, copying
of citation sentences is a more nuanced micro-scale phenomenon observed even
for well-known researchers. The current work poses several interesting
questions and attempts to answer them by empirically investigating a large
bibliographic text dataset from computer science containing millions of lines
of citation sentences. In particular, we report evidence of massive copying
behavior. We also present several striking real examples throughout the paper
to showcase widespread adoption of this undesirable practice. In contrast to
the popular perception, we find that copying tendency increases as an author
matures. The copying behavior is reported to exist in all fields of computer
science; however, the theoretical fields indicate more copying than the applied
fields.
| [
{
"version": "v1",
"created": "Sat, 6 May 2017 16:16:36 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Singh",
"Mayank",
""
],
[
"Niranjan",
"Abhishek",
""
],
[
"Gupta",
"Divyansh",
""
],
[
"Bakshi",
"Nikhil Angad",
""
],
[
"Mukherjee",
"Animesh",
""
],
[
"Goyal",
"Pawan",
""
]
] | TITLE: Citation sentence reuse behavior of scientists: A case study on massive
bibliographic text dataset of computer science
ABSTRACT: Our current knowledge of scholarly plagiarism is largely based on the
similarity between full text research articles. In this paper, we propose an
innovative and novel conceptualization of scholarly plagiarism in the form of
reuse of explicit citation sentences in scientific research articles. Note that
while full-text plagiarism is an indicator of a gross-level behavior, copying
of citation sentences is a more nuanced micro-scale phenomenon observed even
for well-known researchers. The current work poses several interesting
questions and attempts to answer them by empirically investigating a large
bibliographic text dataset from computer science containing millions of lines
of citation sentences. In particular, we report evidence of massive copying
behavior. We also present several striking real examples throughout the paper
to showcase widespread adoption of this undesirable practice. In contrast to
the popular perception, we find that copying tendency increases as an author
matures. The copying behavior is reported to exist in all fields of computer
science; however, the theoretical fields indicate more copying than the applied
fields.
| no_new_dataset | 0.890294 |
1705.02503 | Lamberto Ballan | Federico Bartoli, Giuseppe Lisanti, Lamberto Ballan, Alberto Del Bimbo | Context-Aware Trajectory Prediction | Submitted to BMVC 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human motion and behaviour in crowded spaces is influenced by several
factors, such as the dynamics of other moving agents in the scene, as well as
the static elements that might be perceived as points of attraction or
obstacles. In this work, we present a new model for human trajectory prediction
which is able to take advantage of both human-human and human-space
interactions. The future trajectories of humans are generated by observing their
past positions and interactions with the surroundings. To this end, we propose
a "context-aware" recurrent neural network LSTM model, which can learn and
predict human motion in crowded spaces such as a sidewalk, a museum or a
shopping mall. We evaluate our model on public pedestrian datasets, and we
contribute a new challenging dataset that collects videos of humans that
navigate in a (real) crowded space such as a big museum. Results show that our
approach can predict human trajectories better when compared to previous
state-of-the-art forecasting models.
| [
{
"version": "v1",
"created": "Sat, 6 May 2017 16:36:32 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Bartoli",
"Federico",
""
],
[
"Lisanti",
"Giuseppe",
""
],
[
"Ballan",
"Lamberto",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] | TITLE: Context-Aware Trajectory Prediction
ABSTRACT: Human motion and behaviour in crowded spaces is influenced by several
factors, such as the dynamics of other moving agents in the scene, as well as
the static elements that might be perceived as points of attraction or
obstacles. In this work, we present a new model for human trajectory prediction
which is able to take advantage of both human-human and human-space
interactions. The future trajectories of humans are generated by observing their
past positions and interactions with the surroundings. To this end, we propose
a "context-aware" recurrent neural network LSTM model, which can learn and
predict human motion in crowded spaces such as a sidewalk, a museum or a
shopping mall. We evaluate our model on public pedestrian datasets, and we
contribute a new challenging dataset that collects videos of humans that
navigate in a (real) crowded space such as a big museum. Results show that our
approach can predict human trajectories better when compared to previous
state-of-the-art forecasting models.
| new_dataset | 0.960547 |
1705.02518 | Subhabrata Mukherjee | Subhabrata Mukherjee, Kashyap Popat, Gerhard Weikum | Exploring Latent Semantic Factors to Find Useful Product Reviews | null | null | null | null | cs.AI cs.CL cs.IR cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reviews provided by consumers are a valuable asset for e-Commerce
platforms, influencing potential consumers in making purchasing decisions.
However, these reviews are of varying quality, with the useful ones buried deep
within a heap of non-informative reviews. In this work, we attempt to
automatically identify review quality in terms of its helpfulness to the end
consumers. In contrast to previous works in this domain exploiting a variety of
syntactic and community-level features, we delve deep into the semantics of
reviews as to what makes them useful, providing interpretable explanation for
the same. We identify a set of consistency and semantic factors, all from the
text, ratings, and timestamps of user-generated reviews, making our approach
generalizable across all communities and domains. We explore review semantics
in terms of several latent factors like the expertise of its author, his
judgment about the fine-grained facets of the underlying product, and his
writing style. These are cast into a Hidden Markov Model -- Latent Dirichlet
Allocation (HMM-LDA) based model to jointly infer: (i) reviewer expertise, (ii)
item facets, and (iii) review helpfulness. Large-scale experiments on five
real-world datasets from Amazon show significant improvement over
state-of-the-art baselines in predicting and ranking useful reviews.
| [
{
"version": "v1",
"created": "Sat, 6 May 2017 19:21:48 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Mukherjee",
"Subhabrata",
""
],
[
"Popat",
"Kashyap",
""
],
[
"Weikum",
"Gerhard",
""
]
] | TITLE: Exploring Latent Semantic Factors to Find Useful Product Reviews
ABSTRACT: Online reviews provided by consumers are a valuable asset for e-Commerce
platforms, influencing potential consumers in making purchasing decisions.
However, these reviews are of varying quality, with the useful ones buried deep
within a heap of non-informative reviews. In this work, we attempt to
automatically identify review quality in terms of its helpfulness to the end
consumers. In contrast to previous works in this domain exploiting a variety of
syntactic and community-level features, we delve deep into the semantics of
reviews as to what makes them useful, providing interpretable explanation for
the same. We identify a set of consistency and semantic factors, all from the
text, ratings, and timestamps of user-generated reviews, making our approach
generalizable across all communities and domains. We explore review semantics
in terms of several latent factors like the expertise of its author, his
judgment about the fine-grained facets of the underlying product, and his
writing style. These are cast into a Hidden Markov Model -- Latent Dirichlet
Allocation (HMM-LDA) based model to jointly infer: (i) reviewer expertise, (ii)
item facets, and (iii) review helpfulness. Large-scale experiments on five
real-world datasets from Amazon show significant improvement over
state-of-the-art baselines in predicting and ranking useful reviews.
| no_new_dataset | 0.947962 |
1705.02562 | Ajay Nagesh | Naveen Nair, Ajay Nagesh, Ganesh Ramakrishnan | Learning Discriminative Relational Features for Sequence Labeling | 13 pages, technical report | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering relational structure between input features in sequence labeling
models has been shown to improve their accuracy in several problem settings.
However, the search space of relational features is exponential in the number
of basic input features. Consequently, approaches that learn relational
features tend to follow a greedy search strategy. In this paper, we study the
possibility of optimally learning and applying discriminative relational
features for sequence labeling. For learning features derived from inputs at a
particular sequence position, we propose a Hierarchical Kernels-based approach
(referred to as Hierarchical Kernel Learning for Structured Output Spaces -
StructHKL). This approach optimally and efficiently explores the hierarchical
structure of the feature space for problems with structured output spaces such
as sequence labeling. Since the StructHKL approach has limitations in learning
complex relational features derived from inputs at relative positions, we
propose two solutions to learn relational features namely, (i) enumerating
simple component features of complex relational features and discovering their
compositions using StructHKL and (ii) leveraging relational kernels, that
compute the similarity between instances implicitly, in the sequence labeling
problem. We perform extensive empirical evaluation on publicly available
datasets and record our observations on settings in which certain approaches
are effective.
| [
{
"version": "v1",
"created": "Sun, 7 May 2017 04:37:53 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Nair",
"Naveen",
""
],
[
"Nagesh",
"Ajay",
""
],
[
"Ramakrishnan",
"Ganesh",
""
]
] | TITLE: Learning Discriminative Relational Features for Sequence Labeling
ABSTRACT: Discovering relational structure between input features in sequence labeling
models has been shown to improve their accuracy in several problem settings.
However, the search space of relational features is exponential in the number
of basic input features. Consequently, approaches that learn relational
features tend to follow a greedy search strategy. In this paper, we study the
possibility of optimally learning and applying discriminative relational
features for sequence labeling. For learning features derived from inputs at a
particular sequence position, we propose a Hierarchical Kernels-based approach
(referred to as Hierarchical Kernel Learning for Structured Output Spaces -
StructHKL). This approach optimally and efficiently explores the hierarchical
structure of the feature space for problems with structured output spaces such
as sequence labeling. Since the StructHKL approach has limitations in learning
complex relational features derived from inputs at relative positions, we
propose two solutions to learn relational features namely, (i) enumerating
simple component features of complex relational features and discovering their
compositions using StructHKL and (ii) leveraging relational kernels, that
compute the similarity between instances implicitly, in the sequence labeling
problem. We perform extensive empirical evaluation on publicly available
datasets and record our observations on settings in which certain approaches
are effective.
| no_new_dataset | 0.948632 |
1705.02583 | Xinyu Zhang | Xinyu Zhang, Srinjoy Das, Ojash Neopane and Ken Kreutz-Delgado | A Design Methodology for Efficient Implementation of Deconvolutional
Neural Networks on an FPGA | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years deep learning algorithms have shown extremely high
performance on machine learning tasks such as image classification and speech
recognition. In support of such applications, various FPGA accelerator
architectures have been proposed for convolutional neural networks (CNNs) that
enable high performance for classification tasks at lower power than CPU and
GPU processors. However, to date, there has been little research on the use of
FPGA implementations of deconvolutional neural networks (DCNNs). DCNNs, also
known as generative CNNs, encode high-dimensional probability distributions and
have been widely used for computer vision applications such as scene
completion, scene segmentation, image creation, image denoising, and
super-resolution imaging. We propose an FPGA architecture for deconvolutional
networks built around an accelerator which effectively handles the complex
memory access patterns needed to perform strided deconvolutions, and that
supports convolution as well. We also develop a three-step design optimization
method that systematically exploits statistical analysis, design space
exploration and VLSI optimization. To verify our FPGA deconvolutional
accelerator design methodology we train DCNNs offline on two representative
datasets using the generative adversarial network method (GAN) run on
Tensorflow, and then map these DCNNs to an FPGA DCNN-plus-accelerator
implementation to perform generative inference on a Xilinx Zynq-7000 FPGA. Our
DCNN implementation achieves a peak performance density of 0.012 GOPs/DSP.
| [
{
"version": "v1",
"created": "Sun, 7 May 2017 09:18:44 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Zhang",
"Xinyu",
""
],
[
"Das",
"Srinjoy",
""
],
[
"Neopane",
"Ojash",
""
],
[
"Kreutz-Delgado",
"Ken",
""
]
] | TITLE: A Design Methodology for Efficient Implementation of Deconvolutional
Neural Networks on an FPGA
ABSTRACT: In recent years deep learning algorithms have shown extremely high
performance on machine learning tasks such as image classification and speech
recognition. In support of such applications, various FPGA accelerator
architectures have been proposed for convolutional neural networks (CNNs) that
enable high performance for classification tasks at lower power than CPU and
GPU processors. However, to date, there has been little research on the use of
FPGA implementations of deconvolutional neural networks (DCNNs). DCNNs, also
known as generative CNNs, encode high-dimensional probability distributions and
have been widely used for computer vision applications such as scene
completion, scene segmentation, image creation, image denoising, and
super-resolution imaging. We propose an FPGA architecture for deconvolutional
networks built around an accelerator which effectively handles the complex
memory access patterns needed to perform strided deconvolutions, and that
supports convolution as well. We also develop a three-step design optimization
method that systematically exploits statistical analysis, design space
exploration and VLSI optimization. To verify our FPGA deconvolutional
accelerator design methodology we train DCNNs offline on two representative
datasets using the generative adversarial network method (GAN) run on
Tensorflow, and then map these DCNNs to an FPGA DCNN-plus-accelerator
implementation to perform generative inference on a Xilinx Zynq-7000 FPGA. Our
DCNN implementation achieves a peak performance density of 0.012 GOPs/DSP.
| no_new_dataset | 0.948394 |
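For reference, the strided deconvolution (transposed convolution) whose irregular memory access pattern the accelerator in the record above targets can be exercised in a few lines; the channel counts below are arbitrary and the FPGA mapping itself is of course not reproducible in Python:

```python
import torch
import torch.nn as nn

# Output size for input n, stride s, kernel k, padding p: s*(n-1) + k - 2p.
deconv = nn.ConvTranspose2d(in_channels=64, out_channels=32,
                            kernel_size=4, stride=2, padding=1)
x = torch.randn(1, 64, 16, 16)
y = deconv(x)
assert y.shape == (1, 32, 32, 32)   # 2x upsampling: 2*15 + 4 - 2 = 32
```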
1705.02668 | Subhabrata Mukherjee | Subhabrata Mukherjee, Sourav Dutta, Gerhard Weikum | Credible Review Detection with Limited Information using Consistency
Analysis | null | null | null | null | cs.AI cs.CL cs.IR cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reviews provide viewpoints on the strengths and shortcomings of
products/services, influencing potential customers' purchasing decisions.
However, the proliferation of non-credible reviews -- either fake (promoting/
demoting an item), incompetent (involving irrelevant aspects), or biased --
entails the problem of identifying credible reviews. Prior works involve
classifiers harnessing rich information about items/users -- which might not be
readily available in several domains -- that provide only limited
interpretability as to why a review is deemed non-credible. This paper presents
a novel approach to address the above issues. We utilize latent topic models
leveraging review texts, item ratings, and timestamps to derive consistency
features without relying on item/user histories, unavailable for "long-tail"
items/users. We develop models, for computing review credibility scores to
provide interpretable evidence for non-credible reviews, that are also
transferable to other domains -- addressing the scarcity of labeled data.
Experiments on real-world datasets demonstrate improvements over
state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Sun, 7 May 2017 17:43:01 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Mukherjee",
"Subhabrata",
""
],
[
"Dutta",
"Sourav",
""
],
[
"Weikum",
"Gerhard",
""
]
] | TITLE: Credible Review Detection with Limited Information using Consistency
Analysis
ABSTRACT: Online reviews provide viewpoints on the strengths and shortcomings of
products/services, influencing potential customers' purchasing decisions.
However, the proliferation of non-credible reviews -- either fake (promoting/
demoting an item), incompetent (involving irrelevant aspects), or biased --
entails the problem of identifying credible reviews. Prior works involve
classifiers harnessing rich information about items/users -- which might not be
readily available in several domains -- that provide only limited
interpretability as to why a review is deemed non-credible. This paper presents
a novel approach to address the above issues. We utilize latent topic models
leveraging review texts, item ratings, and timestamps to derive consistency
features without relying on item/user histories, unavailable for "long-tail"
items/users. We develop models, for computing review credibility scores to
provide interpretable evidence for non-credible reviews, that are also
transferable to other domains -- addressing the scarcity of labeled data.
Experiments on real-world datasets demonstrate improvements over
state-of-the-art baselines.
| no_new_dataset | 0.948106 |
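As a rough illustration of a consistency feature of the kind the abstract describes, one can compare each review's latent topic distribution against the aggregate distribution of all reviews for the same item; a large divergence suggests an off-topic (incompetent) review. This is a hedged sketch using plain LDA from scikit-learn, not the authors' exact model, and all names and hyperparameters here are assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["great battery life and screen", "fast shipping, bad seller",
           "battery drains quickly", "totally unrelated spam text here"]
vec = CountVectorizer()
X = vec.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X)          # per-review topic distributions
item_topics = theta.mean(axis=0)      # item-level aggregate distribution

def kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# Higher divergence -> review is less consistent with the item's topics.
scores = [kl(t, item_topics) for t in theta]
print(scores)
```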
1705.02735 | Amir Zadeh | Edmund Tong, Amir Zadeh, Cara Jones, Louis-Philippe Morency | Combating Human Trafficking with Deep Multimodal Models | ACL 2017 Long Paper | null | null | null | cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human trafficking is a global epidemic affecting millions of people across
the planet. Sex trafficking, the dominant form of human trafficking, has seen a
significant rise mostly due to the abundance of escort websites, where human
traffickers can openly advertise among at-will escort advertisements. In this
paper, we take a major step in the automatic detection of advertisements
suspected to pertain to human trafficking. We present a novel dataset called
Trafficking-10k, with more than 10,000 advertisements annotated for this task.
The dataset contains two sources of information per advertisement: text and
images. For the accurate detection of trafficking advertisements, we designed
and trained a deep multimodal model called the Human Trafficking Deep Network
(HTDN).
| [
{
"version": "v1",
"created": "Mon, 8 May 2017 03:48:01 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Tong",
"Edmund",
""
],
[
"Zadeh",
"Amir",
""
],
[
"Jones",
"Cara",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] | TITLE: Combating Human Trafficking with Deep Multimodal Models
ABSTRACT: Human trafficking is a global epidemic affecting millions of people across
the planet. Sex trafficking, the dominant form of human trafficking, has seen a
significant rise mostly due to the abundance of escort websites, where human
traffickers can openly advertise among at-will escort advertisements. In this
paper, we take a major step in the automatic detection of advertisements
suspected to pertain to human trafficking. We present a novel dataset called
Trafficking-10k, with more than 10,000 advertisements annotated for this task.
The dataset contains two sources of information per advertisement: text and
images. For the accurate detection of trafficking advertisements, we designed
and trained a deep multimodal model called the Human Trafficking Deep Network
(HTDN).
| new_dataset | 0.963916 |
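The abstract names the HTDN without detailing its layers; a generic late-fusion text-plus-image classifier in PyTorch conveys the overall shape of such a multimodal model. All dimensions and module choices below are placeholders, not the published architecture:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, vocab=5000, txt_dim=128, img_dim=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, txt_dim)
        self.text_rnn = nn.LSTM(txt_dim, txt_dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, txt_dim)  # pre-extracted CNN features
        self.head = nn.Sequential(nn.Linear(2 * txt_dim, hidden),
                                  nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, tokens, img_feat):
        _, (h, _) = self.text_rnn(self.embed(tokens))   # final hidden state
        fused = torch.cat([h[-1], self.img_proj(img_feat)], dim=1)
        return self.head(fused).squeeze(1)              # suspicion logit

model = LateFusionClassifier()
logit = model(torch.randint(0, 5000, (2, 30)), torch.randn(2, 512))
print(logit.shape)  # torch.Size([2])
```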
1705.02772 | Toshiki Nakamura | Toshiki Nakamura, Anna Zhu, Keiji Yanai and Seiichi Uchida | Scene Text Eraser | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The character information in natural scene images contains various personal
information, such as telephone numbers and home addresses. There is a high risk
of information leakage if such images are published. In this paper, we propose a
scene text erasing method to properly hide the information via an inpainting
convolutional neural network (CNN) model. The input is a scene text image, and
the output is a text-erased image in which all the character regions are filled
with the colors of the surrounding background pixels. This is accomplished by a
CNN model with a convolution-to-deconvolution structure and interconnections
between the two stages. The training samples and the corresponding inpainted
images are used as teaching signals for training. To evaluate the text erasing
performance, the output images are processed by a novel scene text detection
method. The same text detection measurement is then applied to the images of the
benchmark dataset ICDAR2013. Compared with direct text detection, the scene text
erasing process yields a drastic decrease in precision, recall and f-score,
which proves the effectiveness of the proposed method for erasing text in
natural scene images.
| [
{
"version": "v1",
"created": "Mon, 8 May 2017 08:28:34 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Nakamura",
"Toshiki",
""
],
[
"Zhu",
"Anna",
""
],
[
"Yanai",
"Keiji",
""
],
[
"Uchida",
"Seiichi",
""
]
] | TITLE: Scene Text Eraser
ABSTRACT: The character information in natural scene images contains various personal
information, such as telephone numbers and home addresses. There is a high risk
of information leakage if such images are published. In this paper, we propose a
scene text erasing method to properly hide the information via an inpainting
convolutional neural network (CNN) model. The input is a scene text image, and
the output is a text-erased image in which all the character regions are filled
with the colors of the surrounding background pixels. This is accomplished by a
CNN model with a convolution-to-deconvolution structure and interconnections
between the two stages. The training samples and the corresponding inpainted
images are used as teaching signals for training. To evaluate the text erasing
performance, the output images are processed by a novel scene text detection
method. The same text detection measurement is then applied to the images of the
benchmark dataset ICDAR2013. Compared with direct text detection, the scene text
erasing process yields a drastic decrease in precision, recall and f-score,
which proves the effectiveness of the proposed method for erasing text in
natural scene images.
| no_new_dataset | 0.956022 |
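A minimal sketch of a convolution-to-deconvolution network with an interconnection (skip) between the stages, in the spirit of the model described; layer counts and channel widths are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class TextEraser(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        # The decoder mirrors the encoder; the skip connection carries
        # encoder detail across (the "interconnection").
        self.dec2 = nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        return self.dec2(torch.cat([d1, e1], dim=1))  # text-erased image

net = TextEraser()
print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```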
1705.02875 | David Weyburne | David Weyburne | Inner/Outer Ratio Similarity Scaling for 2-D Wall-bounded Turbulent
Flows | 10 pages. arXiv admin note: text overlap with arXiv:1703.02092 | null | null | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The turbulent boundary layer scaling parameters for the velocity profile are
usually associated with either the inner viscous region or the outer boundary
layer region. It has been a long-held view that complete similarity of the
velocity profile can only occur if the inner and outer region scaling
parameters change proportionally as one moves from station to station along the
wall. However, it appears that complete similarity is not possible for the
wall-bounded turbulent boundary layer. Hence, the outer/inner ratio would seem
to be of little use. However, recent revelations revive the need for
identifying likely experimental datasets that display outer region similarity.
It is our contention that likely datasets can be identified by finding datasets
in which the inner-outer thickness ratio is almost constant. This inner-outer
thickness ratio is usually associated with the Rotta scaling ratio.
Unfortunately, the Rotta ratio proportional change condition has never been
shown to be a similarity requirement. In contrast, we show that a recently
developed thickness ratio based on the integral moment method must change
proportionately from station to station if similarity is present.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 13:50:15 GMT"
}
] | 2017-05-09T00:00:00 | [
[
"Weyburne",
"David",
""
]
] | TITLE: Inner/Outer Ratio Similarity Scaling for 2-D Wall-bounded Turbulent
Flows
ABSTRACT: The turbulent boundary layer scaling parameters for the velocity profile are
usually associated with either the inner viscous region or the outer boundary
layer region. It has been a long-held view that complete similarity of the
velocity profile can only occur if the inner and outer region scaling
parameters change proportionally as one moves from station to station along the
wall. However, it appears that complete similarity is not possible for the
wall-bounded turbulent boundary layer. Hence, the outer/inner ratio would seem
to be of little use. However, recent revelations revive the need for
identifying likely experimental datasets that display outer region similarity.
It is our contention that likely datasets can be identified by finding datasets
in which the inner-outer thickness ratio is almost constant. This inner-outer
thickness ratio is usually associated with the Rotta scaling ratio.
Unfortunately, the Rotta ratio proportional change condition has never been
shown to be a similarity requirement. In contrast, we show that a recently
developed thickness ratio based on the integral moment method must change
proportionately from station to station if similarity is present.
| no_new_dataset | 0.951549 |
1606.02838 | Nicolas Keriven | Nicolas Keriven (UR1, PANAMA), Anthony Bourrier (GIPSA-lab), R\'emi
Gribonval (PANAMA), Patrick P\'erez | Sketching for Large-Scale Learning of Mixture Models | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over $10^8$ training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 06:59:19 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2017 11:22:44 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Keriven",
"Nicolas",
"",
"UR1, PANAMA"
],
[
"Bourrier",
"Anthony",
"",
"GIPSA-lab"
],
[
"Gribonval",
"Rémi",
"",
"PANAMA"
],
[
"Pérez",
"Patrick",
""
]
] | TITLE: Sketching for Large-Scale Learning of Mixture Models
ABSTRACT: Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over $10^8$ training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
| no_new_dataset | 0.940517 |
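The sketch itself is cheap to state in code: with random Fourier features, the standard choice in this line of work, it is the empirical characteristic function of the data at m random frequencies, computable in one pass and trivially mergeable across partitions. A self-contained sketch computation (the frequency scale is an assumed hyperparameter):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 2, 50            # samples, dimension, sketch size
X = rng.normal(size=(n, d))        # stand-in for the training set

W = rng.normal(scale=1.0, size=(m, d))   # random frequencies (assumed scale)

# One pass over the data: m complex generalized moments of the
# empirical distribution, i.e. its characteristic function at W.
sketch = np.exp(1j * X @ W.T).mean(axis=0)
print(sketch.shape, sketch.dtype)        # (50,) complex128
```

Merging sketches from distributed partitions is just a sample-size-weighted average of the partial sketches, which is what makes the approach stream- and distribution-friendly.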
1607.06408 | Yongkang Wong | Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan
Kankanhalli | Multi-Camera Action Dataset for Cross-Camera Action Recognition
Benchmarking | null | null | 10.1109/WACV.2017.28 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action recognition has received increasing attention from the computer vision
and machine learning communities in the last decade. To enable the study of
this problem, there exist a vast number of action datasets, which are recorded
under controlled laboratory settings, real-world surveillance environments, or
crawled from the Internet. Apart from the "in-the-wild" datasets, the training
and test splits of conventional datasets often possess similar environmental
conditions, which leads to close-to-perfect performance on constrained
datasets. In this paper, we introduce a new dataset, namely Multi-Camera Action
Dataset (MCAD), which is designed to evaluate the open view classification
problem under the surveillance environment. In total, MCAD contains 14,298
action samples from 18 action categories, which are performed by 20 subjects
and independently recorded with 5 cameras. Inspired by the well-received
evaluation approach on the LFW dataset, we designed a standard evaluation
protocol and benchmarked MCAD under several scenarios. The benchmark shows that
while an average of 85% accuracy is achieved under the closed-view scenario,
the performance suffers from a significant drop under the cross-view scenario.
In the worst case scenario, the performance of 10-fold cross validation drops
from 87.0% to 47.4%.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 17:58:19 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 10:00:59 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2017 05:21:31 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Li",
"Wenhui",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Liu",
"An-An",
""
],
[
"Li",
"Yang",
""
],
[
"Su",
"Yu-Ting",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] | TITLE: Multi-Camera Action Dataset for Cross-Camera Action Recognition
Benchmarking
ABSTRACT: Action recognition has received increasing attention from the computer vision
and machine learning communities in the last decade. To enable the study of
this problem, there exist a vast number of action datasets, which are recorded
under controlled laboratory settings, real-world surveillance environments, or
crawled from the Internet. Apart from the "in-the-wild" datasets, the training
and test splits of conventional datasets often possess similar environmental
conditions, which leads to close-to-perfect performance on constrained
datasets. In this paper, we introduce a new dataset, namely Multi-Camera Action
Dataset (MCAD), which is designed to evaluate the open view classification
problem under the surveillance environment. In total, MCAD contains 14,298
action samples from 18 action categories, which are performed by 20 subjects
and independently recorded with 5 cameras. Inspired by the well-received
evaluation approach on the LFW dataset, we designed a standard evaluation
protocol and benchmarked MCAD under several scenarios. The benchmark shows that
while an average of 85% accuracy is achieved under the closed-view scenario,
the performance suffers from a significant drop under the cross-view scenario.
In the worst case scenario, the performance of 10-fold cross validation drops
from 87.0% to 47.4%.
| new_dataset | 0.957437 |
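The closed-view versus cross-view gap reported above follows from which camera supplies the training and test footage. A schematic of such a protocol, with random placeholder features and a plain linear SVM standing in for an actual recognizer (both are assumptions, not the benchmark's reference pipeline):

```python
import numpy as np
from sklearn.svm import LinearSVC

cams = 5
rng = np.random.default_rng(0)
# feats[c]: (n_c, d) descriptors and labels[c]: (n_c,) for camera c (placeholders)
feats = [rng.normal(size=(200, 64)) for _ in range(cams)]
labels = [rng.integers(0, 18, size=200) for _ in range(cams)]

acc = np.zeros((cams, cams))
for tr in range(cams):
    clf = LinearSVC().fit(feats[tr], labels[tr])
    for te in range(cams):
        acc[tr, te] = clf.score(feats[te], labels[te])

closed = np.mean(np.diag(acc))                           # same-camera train/test
cross = (acc.sum() - np.trace(acc)) / (cams**2 - cams)   # different-camera pairs
print(closed, cross)
```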
1609.03321 | Julius Hannink | Julius Hannink, Thomas Kautz, Cristian F. Pasluosta, Jens Barth,
Samuel Sch\"ulein, Karl-G\"unter Ga{\ss}mann, Jochen Klucken, Bjoern M.
Eskofier | Stride Length Estimation with Deep Learning | null | null | 10.1109/JBHI.2017.2679486 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate estimation of spatial gait characteristics is critical to assess
motor impairments resulting from neurological or musculoskeletal disease.
Currently, however, methodological constraints limit clinical applicability of
state-of-the-art double integration approaches to gait patterns with a clear
zero-velocity phase. We describe a novel approach to stride length estimation
that uses deep convolutional neural networks to map stride-specific inertial
sensor data to the resulting stride length. The model is trained on a publicly
available and clinically relevant benchmark dataset consisting of 1220 strides
from 101 geriatric patients. Evaluation is done in a 10-fold cross validation
and for three different stride definitions. Even though best results are
achieved with strides defined from mid-stance to mid-stance with average
accuracy and precision of 0.01 $\pm$ 5.37 cm, performance does not strongly
depend on stride definition. The achieved precision outperforms
state-of-the-art methods evaluated on this benchmark dataset by 3.0 cm (36%).
Due to the independence of stride definition, the proposed method is not
subject to the methodological constraints that limit applicability of
state-of-the-art double integration methods. Furthermore, precision on the
benchmark dataset could be improved. With more precise mobile stride length
estimation, new insights into the progression of neurological disease or early
indications might be gained. Due to the independence of stride definition,
previously uncharted diseases in terms of mobile gait analysis can now be
investigated by re-training and applying the proposed method.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 09:23:34 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2016 10:54:22 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Mar 2017 15:30:28 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Hannink",
"Julius",
""
],
[
"Kautz",
"Thomas",
""
],
[
"Pasluosta",
"Cristian F.",
""
],
[
"Barth",
"Jens",
""
],
[
"Schülein",
"Samuel",
""
],
[
"Gaßmann",
"Karl-Günter",
""
],
[
"Klucken",
"Jochen",
""
],
[
"Eskofier",
"Bjoern M.",
""
]
] | TITLE: Stride Length Estimation with Deep Learning
ABSTRACT: Accurate estimation of spatial gait characteristics is critical to assess
motor impairments resulting from neurological or musculoskeletal disease.
Currently, however, methodological constraints limit clinical applicability of
state-of-the-art double integration approaches to gait patterns with a clear
zero-velocity phase. We describe a novel approach to stride length estimation
that uses deep convolutional neural networks to map stride-specific inertial
sensor data to the resulting stride length. The model is trained on a publicly
available and clinically relevant benchmark dataset consisting of 1220 strides
from 101 geriatric patients. Evaluation is done in a 10-fold cross validation
and for three different stride definitions. Even though best results are
achieved with strides defined from mid-stance to mid-stance with average
accuracy and precision of 0.01 $\pm$ 5.37 cm, performance does not strongly
depend on stride definition. The achieved precision outperforms
state-of-the-art methods evaluated on this benchmark dataset by 3.0 cm (36%).
Due to the independence of stride definition, the proposed method is not
subject to the methodological constraints that limit applicability of
state-of-the-art double integration methods. Furthermore, precision on the
benchmark dataset could be improved. With more precise mobile stride length
estimation, new insights into the progression of neurological disease or early
indications might be gained. Due to the independence of stride definition,
previously uncharted diseases in terms of mobile gait analysis can now be
investigated by re-training and applying the proposed method.
| no_new_dataset | 0.785966 |
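A minimal 1-D convolutional regressor from a fixed-length, stride-segmented inertial window to a scalar stride length illustrates the mapping the abstract describes; channel counts, window length, and layers are illustrative assumptions, not the published network:

```python
import torch
import torch.nn as nn

class StrideNet(nn.Module):
    def __init__(self, channels=6, length=256):  # e.g. 3-axis accel + 3-axis gyro
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.out = nn.Linear(32, 1)   # regressed stride length (e.g. in cm)

    def forward(self, x):             # x: (batch, channels, length)
        return self.out(self.features(x).squeeze(-1)).squeeze(-1)

net = StrideNet()
print(net(torch.randn(8, 6, 256)).shape)   # torch.Size([8])
```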
1610.07940 | Albert Gordo | Albert Gordo and Jon Almazan and Jerome Revaud and Diane Larlus | End-to-end Learning of Deep Visual Representations for Image Retrieval | Accepted for publication at the International Journal of Computer
Vision (IJCV). Extended version of our ECCV2016 paper "Deep Image Retrieval:
Learning global representations for image search" | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While deep learning has become a key ingredient in the top performing methods
for many computer vision tasks, it has failed so far to bring similar
improvements to instance-level image retrieval. In this article, we argue that
reasons for the underwhelming results of deep methods on image retrieval are
threefold: i) noisy training data, ii) inappropriate deep architecture, and
iii) suboptimal training procedure. We address all three issues.
First, we leverage a large-scale but noisy landmark dataset and develop an
automatic cleaning method that produces a suitable training set for deep
retrieval. Second, we build on the recent R-MAC descriptor, show that it can be
interpreted as a deep and differentiable architecture, and present improvements
to enhance it. Last, we train this network with a siamese architecture that
combines three streams with a triplet loss. At the end of the training process,
the proposed architecture produces a global image representation in a single
forward pass that is well suited for image retrieval. Extensive experiments
show that our approach significantly outperforms previous retrieval approaches,
including state-of-the-art methods based on costly local descriptor indexing
and spatial verification. On Oxford 5k, Paris 6k and Holidays, we respectively
report 94.7, 96.6, and 94.8 mean average precision. Our representations can
also be heavily compressed using product quantization with little loss in
accuracy. For additional material, please see
www.xrce.xerox.com/Deep-Image-Retrieval.
| [
{
"version": "v1",
"created": "Tue, 25 Oct 2016 16:02:42 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2017 15:34:09 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Gordo",
"Albert",
""
],
[
"Almazan",
"Jon",
""
],
[
"Revaud",
"Jerome",
""
],
[
"Larlus",
"Diane",
""
]
] | TITLE: End-to-end Learning of Deep Visual Representations for Image Retrieval
ABSTRACT: While deep learning has become a key ingredient in the top performing methods
for many computer vision tasks, it has failed so far to bring similar
improvements to instance-level image retrieval. In this article, we argue that
reasons for the underwhelming results of deep methods on image retrieval are
threefold: i) noisy training data, ii) inappropriate deep architecture, and
iii) suboptimal training procedure. We address all three issues.
First, we leverage a large-scale but noisy landmark dataset and develop an
automatic cleaning method that produces a suitable training set for deep
retrieval. Second, we build on the recent R-MAC descriptor, show that it can be
interpreted as a deep and differentiable architecture, and present improvements
to enhance it. Last, we train this network with a siamese architecture that
combines three streams with a triplet loss. At the end of the training process,
the proposed architecture produces a global image representation in a single
forward pass that is well suited for image retrieval. Extensive experiments
show that our approach significantly outperforms previous retrieval approaches,
including state-of-the-art methods based on costly local descriptor indexing
and spatial verification. On Oxford 5k, Paris 6k and Holidays, we respectively
report 94.7, 96.6, and 94.8 mean average precision. Our representations can
also be heavily compressed using product quantization with little loss in
accuracy. For additional material, please see
www.xrce.xerox.com/Deep-Image-Retrieval.
| no_new_dataset | 0.943191 |
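The three-stream training objective described above is a triplet ranking loss: the anchor-positive distance must undercut the anchor-negative distance by a margin. A minimal sketch on L2-normalised global descriptors (the margin value is an assumption):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.1):
    """anchor/positive/negative: (batch, dim) global image descriptors."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    d_ap = (a - p).pow(2).sum(dim=1)   # squared Euclidean distances
    d_an = (a - n).pow(2).sum(dim=1)
    # Penalise triplets where the negative is not at least `margin` farther.
    return F.relu(margin + d_ap - d_an).mean()

loss = triplet_loss(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512))
print(loss.item())
```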
1701.08398 | Zhun Zhong | Zhun Zhong, Liang Zheng, Donglin Cao, Shaozi Li | Re-ranking Person Re-identification with k-reciprocal Encoding | To appear in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When considering person re-identification (re-ID) as a retrieval process,
re-ranking is a critical step to improve its accuracy. Yet in the re-ID
community, limited effort has been devoted to re-ranking, especially those
fully automatic, unsupervised solutions. In this paper, we propose a
k-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is
that if a gallery image is similar to the probe in the k-reciprocal nearest
neighbors, it is more likely to be a true match. Specifically, given an image,
a k-reciprocal feature is calculated by encoding its k-reciprocal nearest
neighbors into a single vector, which is used for re-ranking under the Jaccard
distance. The final distance is computed as the combination of the original
distance and the Jaccard distance. Our re-ranking method does not require any
human interaction or any labeled data, so it is applicable to large-scale
datasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW
datasets confirm the effectiveness of our method.
| [
{
"version": "v1",
"created": "Sun, 29 Jan 2017 16:31:51 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2017 14:53:20 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Mar 2017 12:57:33 GMT"
},
{
"version": "v4",
"created": "Fri, 5 May 2017 02:46:47 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Zhong",
"Zhun",
""
],
[
"Zheng",
"Liang",
""
],
[
"Cao",
"Donglin",
""
],
[
"Li",
"Shaozi",
""
]
] | TITLE: Re-ranking Person Re-identification with k-reciprocal Encoding
ABSTRACT: When considering person re-identification (re-ID) as a retrieval process,
re-ranking is a critical step to improve its accuracy. Yet in the re-ID
community, limited effort has been devoted to re-ranking, especially those
fully automatic, unsupervised solutions. In this paper, we propose a
k-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is
that if a gallery image is similar to the probe in the k-reciprocal nearest
neighbors, it is more likely to be a true match. Specifically, given an image,
a k-reciprocal feature is calculated by encoding its k-reciprocal nearest
neighbors into a single vector, which is used for re-ranking under the Jaccard
distance. The final distance is computed as the combination of the original
distance and the Jaccard distance. Our re-ranking method does not require any
human interaction or any labeled data, so it is applicable to large-scale
datasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW
datasets confirm the effectiveness of our method.
| no_new_dataset | 0.950503 |
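The core of the method is compact: a gallery image g is a k-reciprocal neighbor of probe p when each appears in the other's top-k list, and the final distance mixes the original distance with a Jaccard distance between these neighbor sets. A simplified sketch (the neighbor-set expansion and local-query refinements of the full method are omitted; lam plays the role of the mixing weight):

```python
import numpy as np

def k_reciprocal_rerank(dist, k=20, lam=0.3):
    """dist: (n, n) pairwise distance matrix over probe + gallery features."""
    n = dist.shape[0]
    knn = np.argsort(dist, axis=1)[:, :k]                 # top-k neighbor lists
    neigh = [set(knn[i]) for i in range(n)]
    # i and j are k-reciprocal neighbors if each is in the other's top-k.
    recip = [{j for j in neigh[i] if i in neigh[j]} for i in range(n)]

    jaccard = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            union = len(recip[i] | recip[j])
            if union:
                jaccard[i, j] = 1.0 - len(recip[i] & recip[j]) / union
    # Final distance: weighted combination of original and Jaccard distances.
    return lam * dist + (1.0 - lam) * jaccard

d = np.random.rand(6, 6); d = (d + d.T) / 2; np.fill_diagonal(d, 0)
print(k_reciprocal_rerank(d).shape)
```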
1704.03144 | Maziar Raissi | Maziar Raissi | Parametric Gaussian Process Regression for Big Data | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces the concept of parametric Gaussian processes (PGPs),
which is built upon the seemingly self-contradictory idea of making Gaussian
processes parametric. Parametric Gaussian processes, by construction, are
designed to operate in "big data" regimes where one is interested in
quantifying the uncertainty associated with noisy data. The proposed
methodology circumvents the well-established need for stochastic variational
inference, a scalable algorithm for approximating posterior distributions. The
effectiveness of the proposed approach is demonstrated using an illustrative
example with simulated data and a benchmark dataset in the airline industry
with approximately 6 million records.
| [
{
"version": "v1",
"created": "Tue, 11 Apr 2017 04:57:24 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2017 20:12:45 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Raissi",
"Maziar",
""
]
] | TITLE: Parametric Gaussian Process Regression for Big Data
ABSTRACT: This work introduces the concept of parametric Gaussian processes (PGPs),
which is built upon the seemingly self-contradictory idea of making Gaussian
processes parametric. Parametric Gaussian processes, by construction, are
designed to operate in "big data" regimes where one is interested in
quantifying the uncertainty associated with noisy data. The proposed
methodology circumvents the well-established need for stochastic variational
inference, a scalable algorithm for approximating posterior distributions. The
effectiveness of the proposed approach is demonstrated using an illustrative
example with simulated data and a benchmark dataset in the airline industry
with approximately 6 million records.
| new_dataset | 0.619241 |
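For context, the quantity any scalable GP method must approximate is the exact posterior, whose $[K(X,X) + \sigma^2 I]^{-1}$ term costs $O(n^3)$ in the number $n$ of training points; avoiding that cost is what motivates parametric or variational surrogates. In standard notation:

```latex
\mu_*(x) = k(x, X)\,\bigl[K(X,X) + \sigma^2 I\bigr]^{-1} y, \qquad
\sigma_*^2(x) = k(x, x) - k(x, X)\,\bigl[K(X,X) + \sigma^2 I\bigr]^{-1} k(X, x).
```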
1705.02009 | Hien To | Hien To, Sumeet Agrawal, Seon Ho Kim, Cyrus Shahabi | On Identifying Disaster-Related Tweets: Matching-based or
Learning-based? | null | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media such as tweets are emerging as platforms contributing to
situational awareness during disasters. Information shared on Twitter by both the
affected population (e.g., requesting assistance, warning) and those outside
the impact zone (e.g., providing assistance) would help first responders,
decision makers, and the public to understand the situation first-hand.
Effective use of such information requires timely selection and analysis of
tweets that are relevant to a particular disaster. Even though abundant tweets
are promising as a data source, it is challenging to automatically identify
relevant messages since tweet are short and unstructured, resulting to
unsatisfactory classification performance of conventional learning-based
approaches. Thus, we propose a simple yet effective algorithm to identify
relevant messages based on matching keywords and hashtags, and provide a
comparison between matching-based and learning-based approaches. To evaluate
the two approaches, we put them into a framework specifically proposed for
analyzing disaster-related tweets. Analysis results on eleven datasets with
various disaster types show that our technique provides relevant tweets of
higher quality and more interpretable results of sentiment analysis tasks when
compared to the learning-based approach.
| [
{
"version": "v1",
"created": "Thu, 4 May 2017 20:42:23 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"To",
"Hien",
""
],
[
"Agrawal",
"Sumeet",
""
],
[
"Kim",
"Seon Ho",
""
],
[
"Shahabi",
"Cyrus",
""
]
] | TITLE: On Identifying Disaster-Related Tweets: Matching-based or
Learning-based?
ABSTRACT: Social media such as tweets are emerging as platforms contributing to
situational awareness during disasters. Information shared on Twitter by both the
affected population (e.g., requesting assistance, warning) and those outside
the impact zone (e.g., providing assistance) would help first responders,
decision makers, and the public to understand the situation first-hand.
Effective use of such information requires timely selection and analysis of
tweets that are relevant to a particular disaster. Even though abundant tweets
are promising as a data source, it is challenging to automatically identify
relevant messages since tweets are short and unstructured, resulting in
unsatisfactory classification performance of conventional learning-based
approaches. Thus, we propose a simple yet effective algorithm to identify
relevant messages based on matching keywords and hashtags, and provide a
comparison between matching-based and learning-based approaches. To evaluate
the two approaches, we put them into a framework specifically proposed for
analyzing disaster-related tweets. Analysis results on eleven datasets with
various disaster types show that our technique provides relevant tweets of
higher quality and more interpretable results of sentiment analysis tasks when
compared to the learning-based approach.
| no_new_dataset | 0.947624 |
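The matching-based side of the comparison reduces to a transparent rule over curated keyword and hashtag lists; a sketch (the lists below are placeholders, not the paper's curated vocabulary):

```python
DISASTER_KEYWORDS = {"flood", "earthquake", "evacuate", "rescue", "damage"}
DISASTER_HASHTAGS = {"#harvey", "#nepalquake", "#sandy"}

def is_relevant(tweet: str) -> bool:
    tokens = tweet.lower().split()
    has_kw = any(t.strip(".,!?") in DISASTER_KEYWORDS for t in tokens)
    has_tag = any(t in DISASTER_HASHTAGS for t in tokens)
    return has_kw or has_tag

print(is_relevant("Please RT: need rescue near 5th street #harvey"))  # True
print(is_relevant("Great game last night!"))                          # False
```

A rule of this form needs no training data, which is also what makes its decisions easy to interpret.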
1705.02019 | Loukianos Spyrou | Loukianos Spyrou, Mario Parra and Javier Escudero | Complex tensor factorisation with PARAFAC2 for the estimation of brain
connectivity from the EEG | null | null | null | null | cs.CE q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: The coupling between neuronal populations and its magnitude have
been shown to be informative for various clinical applications. One method to
estimate brain connectivity is with electroencephalography (EEG) from which the
cross-spectrum between different sensor locations is derived. We wish to test
the efficacy of tensor factorisation in the estimation of brain connectivity.
Methods: Complex tensor factorisation based on PARAFAC2 is used to decompose
the EEG into scalp components described by the spatial, spectral, and complex
trial profiles. An EEG model in the complex domain was derived that shows the
suitability of PARAFAC2. A connectivity metric was also derived on the complex
trial profiles of the extracted components. Results: Results on a benchmark EEG
dataset confirmed that PARAFAC2 can estimate connectivity better than
traditional tensor analysis such as PARAFAC within a range of signal-to-noise
ratios. The analysis of EEG from patients with mild cognitive impairment or
Alzheimer's disease showed that PARAFAC2 identifies loss of brain connectivity
better than traditional approaches, in agreement with prior pathological
knowledge. Conclusion: The complex PARAFAC2 algorithm is suitable for EEG
connectivity estimation since it allows to extract meaningful coupled sources
and provides better estimates than complex PARAFAC. Significance: A new
paradigm that employs complex tensor factorisation has demonstrated to be
successful in identifying brain connectivity and the location of coupled
sources for both a benchmark and a real-world EEG dataset. This can enable
future applications and has the potential to solve some of the issues that
deteriorate the performance of traditional connectivity metrics.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 10:59:24 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Spyrou",
"Loukianos",
""
],
[
"Parra",
"Mario",
""
],
[
"Escudero",
"Javier",
""
]
] | TITLE: Complex tensor factorisation with PARAFAC2 for the estimation of brain
connectivity from the EEG
ABSTRACT: Objective: The coupling between neuronal populations and its magnitude have
been shown to be informative for various clinical applications. One method to
estimate brain connectivity is with electroencephalography (EEG) from which the
cross-spectrum between different sensor locations is derived. We wish to test
the efficacy of tensor factorisation in the estimation of brain connectivity.
Methods: Complex tensor factorisation based on PARAFAC2 is used to decompose
the EEG into scalp components described by the spatial, spectral, and complex
trial profiles. An EEG model in the complex domain was derived that shows the
suitability of PARAFAC2. A connectivity metric was also derived on the complex
trial profiles of the extracted components. Results: Results on a benchmark EEG
dataset confirmed that PARAFAC2 can estimate connectivity better than
traditional tensor analysis such as PARAFAC within a range of signal-to-noise
ratios. The analysis of EEG from patients with mild cognitive impairment or
Alzheimer's disease showed that PARAFAC2 identifies loss of brain connectivity
better than traditional approaches, in agreement with prior pathological
knowledge. Conclusion: The complex PARAFAC2 algorithm is suitable for EEG
connectivity estimation since it allows to extract meaningful coupled sources
and provides better estimates than complex PARAFAC. Significance: A new
paradigm that employs complex tensor factorisation has demonstrated to be
successful in identifying brain connectivity and the location of coupled
sources for both a benchmark and a real-world EEG dataset. This can enable
future applications and has the potential to solve some of the issues that
deteriorate the performance of traditional connectivity metrics.
| no_new_dataset | 0.938576 |
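For orientation, PARAFAC2 decomposes each frontal slice $X_k$ (here, the spectral data for trial $k$) with a slice-specific loading matrix constrained to be a rotation of a common factor, which is what distinguishes it from plain PARAFAC:

```latex
X_k \approx F_k \, D_k \, A^{\mathsf{T}}, \qquad
F_k = P_k F, \qquad P_k^{\mathsf{T}} P_k = I,
```

so that the cross-product $F_k^{\mathsf{T}} F_k = F^{\mathsf{T}} F$ is constant across slices, while the diagonal $D_k$ carries the per-trial (here complex) profile on which a connectivity metric can be defined.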
1705.02058 | Joshua Gluck Joshua Gluck | Joshua Gluck, Christian Koehler, Jennifer Mankoff, Anind Dey, Yuvraj
Agarwal | A Systematic Approach for Exploring Tradeoffs in Predictive HVAC Control
Systems for Buildings | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heating, Ventilation, and Cooling (HVAC) systems are often the most
significant contributor to the energy usage, and the operational cost, of large
office buildings. Therefore, to understand the various factors affecting the
energy usage, and to optimize the operational efficiency of building HVAC
systems, energy analysts and architects often create simulations (e.g.,
EnergyPlus or DOE-2) of buildings prior to construction or renovation to
determine energy savings and quantify the Return-on-Investment (ROI). While
useful, these simulations usually use static HVAC control strategies such as
lowering room temperature at night, or reactive control based on simulated room
occupancy. Recently, advances have been made in HVAC control algorithms that
predict room occupancy. However, these algorithms depend on costly sensor
installations and the tradeoffs between predictive accuracy, energy savings,
comfort and expenses are not well understood. Current simulation frameworks do
not support easy analysis of these tradeoffs. Our contribution is a simulation
framework that can be used to explore this design space by generating objective
estimates of the energy savings and occupant comfort for different levels of
HVAC prediction and control performance. We validate our framework on a
real-world occupancy dataset spanning 6 months for 235 rooms in a large
university office building. Using the gold standard of energy use modeling and
simulation (Revit and Energy Plus), we compare the energy consumption and
occupant comfort in 29 independent simulations that explore our parameter
space. Our results highlight a number of potentially useful tradeoffs with
respect to energy savings, comfort, and algorithmic performance among
predictive, reactive, and static schedules, for a stakeholder of our building.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 01:33:39 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Gluck",
"Joshua",
""
],
[
"Koehler",
"Christian",
""
],
[
"Mankoff",
"Jennifer",
""
],
[
"Dey",
"Anind",
""
],
[
"Agarwal",
"Yuvraj",
""
]
] | TITLE: A Systematic Approach for Exploring Tradeoffs in Predictive HVAC Control
Systems for Buildings
ABSTRACT: Heating, Ventilation, and Cooling (HVAC) systems are often the most
significant contributor to the energy usage, and the operational cost, of large
office buildings. Therefore, to understand the various factors affecting the
energy usage, and to optimize the operational efficiency of building HVAC
systems, energy analysts and architects often create simulations (e.g.,
EnergyPlus or DOE-2) of buildings prior to construction or renovation to
determine energy savings and quantify the Return-on-Investment (ROI). While
useful, these simulations usually use static HVAC control strategies such as
lowering room temperature at night, or reactive control based on simulated room
occupancy. Recently, advances have been made in HVAC control algorithms that
predict room occupancy. However, these algorithms depend on costly sensor
installations and the tradeoffs between predictive accuracy, energy savings,
comfort and expenses are not well understood. Current simulation frameworks do
not support easy analysis of these tradeoffs. Our contribution is a simulation
framework that can be used to explore this design space by generating objective
estimates of the energy savings and occupant comfort for different levels of
HVAC prediction and control performance. We validate our framework on a
real-world occupancy dataset spanning 6 months for 235 rooms in a large
university office building. Using the gold standard of energy use modeling and
simulation (Revit and Energy Plus), we compare the energy consumption and
occupant comfort in 29 independent simulations that explore our parameter
space. Our results highlight a number of potentially useful tradeoffs with
respect to energy savings, comfort, and algorithmic performance among
predictive, reactive, and static schedules, for a stakeholder of our building.
| no_new_dataset | 0.946399 |
1705.02077 | Mengxue Li | Mengxue Li, Shiqiang Geng, Yang Gao, Haijing Liu, Hao Wang | Crowdsourcing Argumentation Structures in Chinese Hotel Reviews | 6 pages,3 figures,This article has been submitted to "The 2017 IEEE
International Conference on Systems, Man, and Cybernetics (SMC2017)" | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argumentation mining aims at automatically extracting the premises-claim
discourse structures in natural language texts. There is a great demand for
argumentation corpora for customer reviews. However, due to the controversial
nature of the argumentation annotation task, there exist very few large-scale
argumentation corpora for customer reviews. In this work, we are the first to use the
crowdsourcing technique to collect argumentation annotations in Chinese hotel
reviews. As the first Chinese argumentation dataset, our corpus includes 4814
argument component annotations and 411 argument relation annotations, and its
annotation quality is comparable to that of some widely used argumentation corpora
in other languages.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 03:43:35 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Li",
"Mengxue",
""
],
[
"Geng",
"Shiqiang",
""
],
[
"Gao",
"Yang",
""
],
[
"Liu",
"Haijing",
""
],
[
"Wang",
"Hao",
""
]
] | TITLE: Crowdsourcing Argumentation Structures in Chinese Hotel Reviews
ABSTRACT: Argumentation mining aims at automatically extracting the premises-claim
discourse structures in natural language texts. There is a great demand for
argumentation corpora for customer reviews. However, due to the controversial
nature of the argumentation annotation task, there exist very few large-scale
argumentation corpora for customer reviews. In this work, we are the first to use the
crowdsourcing technique to collect argumentation annotations in Chinese hotel
reviews. As the first Chinese argumentation dataset, our corpus includes 4814
argument component annotations and 411 argument relation annotations, and its
annotation quality is comparable to that of some widely used argumentation corpora
in other languages.
| new_dataset | 0.949902 |
1705.02089 | D. Sam Paul | D. Sam Paul and N. Gautham | iMOLSDOCK : induced-fit docking using mutually orthogonal Latin squares
(MOLS) | null | J.Mol.Graph.Model. 74 (2017) 89-99 | 10.1016/j.jmgm.2017.03.008 | null | physics.bio-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have earlier reported the MOLSDOCK technique to perform rigid
receptor/flexible ligand docking. The method uses the MOLS method, developed in
our laboratory. In this paper we report iMOLSDOCK, the 'flexible receptor'
extension of the MOLSDOCK algorithm. iMOLSDOCK uses
mutually orthogonal Latin squares (MOLS) to sample the conformation and the
docking pose of the ligand and also the flexible residues of the receptor
protein. The method then uses a variant of the mean field technique to analyze
the sample to arrive at the optimum. We have benchmarked and validated
iMOLSDOCK with a dataset of 44 peptide-protein complexes. We have
also compared iMOLSDOCK with other flexible receptor docking tools GOLD v5.2.1
and AutoDock Vina. The results obtained show that the method works better than
these two algorithms, though it consumes more computer time.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 05:43:25 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Paul",
"D. Sam",
""
],
[
"Gautham",
"N.",
""
]
] | TITLE: iMOLSDOCK : induced-fit docking using mutually orthogonal Latin squares
(MOLS)
ABSTRACT: We have earlier reported the MOLSDOCK technique to perform rigid
receptor/flexible ligand docking. The method uses the MOLS method, developed in
our laboratory. In this paper we report iMOLSDOCK, the 'flexible receptor'
extension of the MOLSDOCK algorithm. iMOLSDOCK uses
mutually orthogonal Latin squares (MOLS) to sample the conformation and the
docking pose of the ligand and also the flexible residues of the receptor
protein. The method then uses a variant of the mean field technique to analyze
the sample to arrive at the optimum. We have benchmarked and validated
iMOLSDOCK with a dataset of 44 peptide-protein complexes. We have
also compared iMOLSDOCK with other flexible receptor docking tools GOLD v5.2.1
and AutoDock Vina. The results obtained show that the method works better than
these two algorithms, though it consumes more computer time.
| no_new_dataset | 0.614625 |
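For background on the sampling structure: a complete set of mutually orthogonal Latin squares exists for any prime order $p$, via the textbook construction $L_k(i,j) = (k\,i + j) \bmod p$ for $k = 1, \dots, p-1$. A small generator (this is the standard construction only, not the docking code):

```python
def mols(p):
    """All p-1 mutually orthogonal Latin squares of prime order p."""
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

for row in mols(5)[0]:
    print(row)
# Any two of these squares are orthogonal: superimposing them yields
# each ordered pair of symbols exactly once, which is what gives the
# MOLS sampling its even spread over the search space.
```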
1705.02131 | Minglan Li | Minglan Li, Yang Gao, Hui Wen, Yang Du, Haijing Liu and Hao Wang | Joint RNN Model for Argument Component Boundary Detection | 6 pages, 3 figures, submitted to IEEE SMC 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argument Component Boundary Detection (ACBD) is an important sub-task in
argumentation mining; it aims at identifying the word sequences that constitute
argument components, and is usually considered as the first sub-task in the
argumentation mining pipeline. Existing ACBD methods heavily depend on
task-specific knowledge, and require considerable human efforts on
feature-engineering. To tackle these problems, in this work, we formulate ACBD
as a sequence labeling problem and propose a variety of Recurrent Neural
Network (RNN) based methods, which do not use domain specific or handcrafted
features beyond the relative position of the sentence in the document. In
particular, we propose a novel joint RNN model that can predict whether
sentences are argumentative or not, and use the predicted results to more
precisely detect the argument component boundaries. We evaluate our techniques
on two corpora from two different genres; results suggest that our joint RNN
model obtains state-of-the-art performance on both datasets.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 08:49:14 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Li",
"Minglan",
""
],
[
"Gao",
"Yang",
""
],
[
"Wen",
"Hui",
""
],
[
"Du",
"Yang",
""
],
[
"Liu",
"Haijing",
""
],
[
"Wang",
"Hao",
""
]
] | TITLE: Joint RNN Model for Argument Component Boundary Detection
ABSTRACT: Argument Component Boundary Detection (ACBD) is an important sub-task in
argumentation mining; it aims at identifying the word sequences that constitute
argument components, and is usually considered as the first sub-task in the
argumentation mining pipeline. Existing ACBD methods heavily depend on
task-specific knowledge, and require considerable human efforts on
feature-engineering. To tackle these problems, in this work, we formulate ACBD
as a sequence labeling problem and propose a variety of Recurrent Neural
Network (RNN) based methods, which do not use domain specific or handcrafted
features beyond the relative position of the sentence in the document. In
particular, we propose a novel joint RNN model that can predict whether
sentences are argumentative or not, and use the predicted results to more
precisely detect the argument component boundaries. We evaluate our techniques
on two corpora from two different genres; results suggest that our joint RNN
model obtains state-of-the-art performance on both datasets.
| no_new_dataset | 0.950088 |
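Cast as sequence labeling, ACBD assigns each token a tag such as B/I/O over argument-component spans; a minimal bidirectional-LSTM tagger shows the backbone. Vocabulary size, dimensions, and the tag set are placeholders, and the joint argumentative-sentence branch of the proposed model is omitted:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab=10_000, emb=100, hidden=128, tags=3):  # B, I, O
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, tags)

    def forward(self, tokens):              # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                  # per-token tag logits

tagger = BiLSTMTagger()
print(tagger(torch.randint(0, 10_000, (2, 25))).shape)  # (2, 25, 3)
```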
1705.02145 | Fu-Qing Zhu | Fuqing Zhu, Xiangwei Kong, Liang Zheng, Haiyan Fu, Qi Tian | Part-based Deep Hashing for Large-scale Person Re-identification | 12 pages, 4 figures. IEEE Transactions on Image Processing, 2017 | null | 10.1109/TIP.2017.2695101 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale is a trend in person re-identification (re-id). It is important
that real-time search be performed in a large gallery. While previous methods
mostly focus on discriminative learning, this paper attempts to integrate
deep learning and hashing into one framework to evaluate the
efficiency and accuracy for large-scale person re-id. We integrate spatial
information for discriminative visual representation by partitioning the
pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing
(PDH) is proposed, in which batches of triplet samples are employed as the
input of the deep hashing architecture. Each triplet sample contains two
pedestrian images (or parts) with the same identity and one pedestrian image
(or part) of the different identity. A triplet loss function is employed with a
constraint that the Hamming distance of pedestrian images (or parts) with the
same identity is smaller than ones with the different identity. In the
experiment, we show that the proposed Part-based Deep Hashing method yields
very competitive re-id accuracy on the large-scale Market-1501 and
Market-1501+500K datasets.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 09:24:13 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Zhu",
"Fuqing",
""
],
[
"Kong",
"Xiangwei",
""
],
[
"Zheng",
"Liang",
""
],
[
"Fu",
"Haiyan",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: Part-based Deep Hashing for Large-scale Person Re-identification
ABSTRACT: Large-scale is a trend in person re-identification (re-id). It is important
that real-time search be performed in a large gallery. While previous methods
mostly focus on discriminative learning, this paper attempts to integrate
deep learning and hashing into one framework to evaluate the
efficiency and accuracy for large-scale person re-id. We integrate spatial
information for discriminative visual representation by partitioning the
pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing
(PDH) is proposed, in which batches of triplet samples are employed as the
input of the deep hashing architecture. Each triplet sample contains two
pedestrian images (or parts) with the same identity and one pedestrian image
(or part) of the different identity. A triplet loss function is employed with a
constraint that the Hamming distance of pedestrian images (or parts) with the
same identity is smaller than ones with the different identity. In the
experiment, we show that the proposed Part-based Deep Hashing method yields
very competitive re-id accuracy on the large-scale Market-1501 and
Market-1501+500K datasets.
| no_new_dataset | 0.949342 |
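At retrieval time the learned part embeddings are binarised and compared by Hamming distance, while training keeps the triplet constraint differentiable via a smooth relaxation of the sign function. A sketch of the binarisation and comparison (the tanh relaxation noted in the comment is a common convention, not necessarily the paper's exact choice):

```python
import torch

def to_codes(embeddings):
    """Binarise real-valued part embeddings into {0,1} hash codes."""
    return (embeddings > 0).to(torch.uint8)

def hamming(a, b):
    """Hamming distances between code batches: a (n, bits), b (m, bits)."""
    return (a.unsqueeze(1) != b.unsqueeze(0)).sum(dim=2)

query = to_codes(torch.randn(2, 48))     # e.g. 48-bit codes per image part
gallery = to_codes(torch.randn(5, 48))
print(hamming(query, gallery))           # (2, 5) integer distances

# Training-time relaxation: torch.tanh(embeddings) approximates the sign
# so a triplet loss over near-binary codes remains differentiable.
```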
1705.02148 | Noureldien Hussein | Noureldien Hussein, Efstratios Gavves and Arnold W.M. Smeulders | Unified Embedding and Metric Learning for Zero-Exemplar Event Detection | IEEE CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event detection in unconstrained videos is conceived as a content-based video
retrieval with two modalities: textual and visual. Given a text describing a
novel event, the goal is to rank related videos accordingly. This task is
zero-exemplar: no video examples of the novel event are given.
Related works train a bank of concept detectors on external data sources.
These detectors predict confidence scores for test videos, which are ranked and
retrieved accordingly. In contrast, we learn a joint space in which the visual
and textual representations are embedded. The space casts a novel event as a
probability of pre-defined events. Also, it learns to measure the distance
between an event and its related videos.
Our model is trained end-to-end on publicly available EventNet. When applied
to TRECVID Multimedia Event Detection dataset, it outperforms the
state-of-the-art by a considerable margin.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 09:45:58 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Hussein",
"Noureldien",
""
],
[
"Gavves",
"Efstratios",
""
],
[
"Smeulders",
"Arnold W. M.",
""
]
] | TITLE: Unified Embedding and Metric Learning for Zero-Exemplar Event Detection
ABSTRACT: Event detection in unconstrained videos is conceived as a content-based video
retrieval with two modalities: textual and visual. Given a text describing a
novel event, the goal is to rank related videos accordingly. This task is
zero-exemplar: no video examples of the novel event are given.
Related works train a bank of concept detectors on external data sources.
These detectors predict confidence scores for test videos, which are ranked and
retrieved accordingly. In contrast, we learn a joint space in which the visual
and textual representations are embedded. The space casts a novel event as a
probability of pre-defined events. Also, it learns to measure the distance
between an event and its related videos.
Our model is trained end-to-end on publicly available EventNet. When applied
to TRECVID Multimedia Event Detection dataset, it outperforms the
state-of-the-art by a considerable margin.
| no_new_dataset | 0.948585 |
1705.02156 | Samin Mohammadi | Samin Mohammadi, Reza Farahbakhsh, Noel Crespi | Popularity Evolution of Professional Users on Facebook | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Popularity in social media is an important objective for professional users
(e.g., companies, celebrities, and public figures). A simple yet prominent
metric utilized to measure the popularity of a user is the number of fans or
followers she succeeds in attracting to her page. Popularity is influenced by
several factors, and identifying them is an interesting research topic. This
paper aims to understand this phenomenon in social media by exploring the
popularity evolution of professional users on Facebook. To this end, we
implemented a crawler and monitored the popularity evolution of the 8k most
popular professional users on Facebook over a period of 14 months. The
collected dataset includes around 20 million popularity values and 43 million
posts. We characterized different popularity evolution patterns by clustering
the users' temporal fan counts and studied them from various perspectives,
including their categories and levels of activity. Our observations show that
being active and famous correlate positively with the popularity trend.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 10:01:29 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Mohammadi",
"Samin",
""
],
[
"Farahbakhsh",
"Reza",
""
],
[
"Crespi",
"Noel",
""
]
] | TITLE: Popularity Evolution of Professional Users on Facebook
ABSTRACT: Popularity in social media is an important objective for professional users
(e.g., companies, celebrities, and public figures). A simple yet prominent
metric utilized to measure the popularity of a user is the number of fans or
followers she succeeds in attracting to her page. Popularity is influenced by
several factors, and identifying them is an interesting research topic. This
paper aims to understand this phenomenon in social media by exploring the
popularity evolution of professional users on Facebook. To this end, we
implemented a crawler and monitored the popularity evolution of the 8k most
popular professional users on Facebook over a period of 14 months. The
collected dataset includes around 20 million popularity values and 43 million
posts. We characterized different popularity evolution patterns by clustering
the users' temporal fan counts and studied them from various perspectives,
including their categories and levels of activity. Our observations show that
being active and famous correlate positively with the popularity trend.
| new_dataset | 0.890818 |
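
The clustering of per-user temporal fan counts described in the record above
can be illustrated with a minimal k-means sketch; the record does not specify
the algorithm or its parameters, so k, the iteration count, and the synthetic
"fan count" curves below are illustrative assumptions.

import numpy as np

def kmeans(X, k=2, n_iters=50, seed=0):
    # Minimal Lloyd's k-means over rows of X (one time series per row).
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C

# Synthetic 14-point curves: five flat users, five steadily growing users.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 14)
flat = np.ones((5, 14)) + 0.02 * rng.normal(size=(5, 14))
grow = np.tile(t, (5, 1)) + 0.02 * rng.normal(size=(5, 14))
labels, _ = kmeans(np.vstack([flat, grow]), k=2)
print(labels)  # the two behaviour groups separate into two clusters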
1705.02175 | Nikos Katzouris | Nikos Katzouris, Alexander Artikis, Georgios Paliouras | Distributed Online Learning of Event Definitions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logic-based event recognition systems infer occurrences of events in time
using a set of event definitions in the form of first-order rules. The Event
Calculus is a temporal logic that has been used as a basis in event recognition
applications, providing among others, direct connections to machine learning,
via Inductive Logic Programming (ILP). OLED is a recently proposed ILP system
that learns event definitions in the form of Event Calculus theories, in a
single pass over a data stream. In this work we present a version of OLED that
allows for distributed, online learning. We evaluate our approach on a
benchmark activity recognition dataset and show that we can significantly
reduce training times, exchanging minimal information between processing nodes.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 11:40:11 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Katzouris",
"Nikos",
""
],
[
"Artikis",
"Alexander",
""
],
[
"Paliouras",
"Georgios",
""
]
] | TITLE: Distributed Online Learning of Event Definitions
ABSTRACT: Logic-based event recognition systems infer occurrences of events in time
using a set of event definitions in the form of first-order rules. The Event
Calculus is a temporal logic that has been used as a basis in event recognition
applications, providing among others, direct connections to machine learning,
via Inductive Logic Programming (ILP). OLED is a recently proposed ILP system
that learns event definitions in the form of Event Calculus theories, in a
single pass over a data stream. In this work we present a version of OLED that
allows for distributed, online learning. We evaluate our approach on a
benchmark activity recognition dataset and show that we can significantly
reduce training times, exchanging minimal information between processing nodes.
| no_new_dataset | 0.951818 |
1705.02304 | Chao Li | Chao Li, Xiaokong Ma, Bing Jiang, Xiangang Li, Xuewei Zhang, Xiao Liu,
Ying Cao, Ajay Kannan, Zhenyao Zhu | Deep Speaker: an End-to-End Neural Speaker Embedding System | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Deep Speaker, a neural speaker embedding system that maps
utterances to a hypersphere where speaker similarity is measured by cosine
similarity. The embeddings generated by Deep Speaker can be used for many
tasks, including speaker identification, verification, and clustering. We
experiment with ResCNN and GRU architectures to extract the acoustic features,
then mean pool to produce utterance-level speaker embeddings, and train using
triplet loss based on cosine similarity. Experiments on three distinct datasets
suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For
example, Deep Speaker reduces the verification equal error rate by 50%
(relatively) and improves the identification accuracy by 60% (relatively) on a
text-independent dataset. We also present results that suggest adapting from a
model trained with Mandarin can improve accuracy for English speaker
recognition.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 17:10:16 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Li",
"Chao",
""
],
[
"Ma",
"Xiaokong",
""
],
[
"Jiang",
"Bing",
""
],
[
"Li",
"Xiangang",
""
],
[
"Zhang",
"Xuewei",
""
],
[
"Liu",
"Xiao",
""
],
[
"Cao",
"Ying",
""
],
[
"Kannan",
"Ajay",
""
],
[
"Zhu",
"Zhenyao",
""
]
] | TITLE: Deep Speaker: an End-to-End Neural Speaker Embedding System
ABSTRACT: We present Deep Speaker, a neural speaker embedding system that maps
utterances to a hypersphere where speaker similarity is measured by cosine
similarity. The embeddings generated by Deep Speaker can be used for many
tasks, including speaker identification, verification, and clustering. We
experiment with ResCNN and GRU architectures to extract the acoustic features,
then mean pool to produce utterance-level speaker embeddings, and train using
triplet loss based on cosine similarity. Experiments on three distinct datasets
suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For
example, Deep Speaker reduces the verification equal error rate by 50%
(relatively) and improves the identification accuracy by 60% (relatively) on a
text-independent dataset. We also present results that suggest adapting from a
model trained with Mandarin can improve accuracy for English speaker
recognition.
| no_new_dataset | 0.951051 |
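
A minimal sketch of the cosine-similarity triplet loss mentioned in the Deep
Speaker record above; the margin value is an illustrative assumption, and the
random vectors merely stand in for utterance-level speaker embeddings.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(anchor, positive, negative, margin=0.1):
    # Hinge on the similarity gap: the anchor-positive cosine similarity
    # should exceed the anchor-negative one by at least `margin`.
    return max(0.0, cosine(anchor, negative) - cosine(anchor, positive) + margin)

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 16))  # stand-ins for speaker embeddings
print(triplet_loss(a, p, n))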
1705.02307 | Francesco Grassi | Francesco Grassi, Andreas Loukas, Nathana\"el Perraudin, Benjamin
Ricaud | A Time-Vertex Signal Processing Framework | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An emerging way to deal with high-dimensional non-euclidean data is to assume
that the underlying structure can be captured by a graph. Recently, ideas have
begun to emerge related to the analysis of time-varying graph signals. This
work aims to elevate the notion of joint harmonic analysis to a full-fledged
framework denoted as Time-Vertex Signal Processing, that links together the
time-domain signal processing techniques with the new tools of graph signal
processing. This entails three main contributions: (a) We provide a formal
motivation for harmonic time-vertex analysis as an analysis tool for the state
evolution of simple Partial Differential Equations on graphs. (b) We improve
the accuracy of joint filtering operators by up to two orders of magnitude. (c)
Using our joint filters, we construct time-vertex dictionaries analyzing the
different scales and the local time-frequency content of a signal. The utility
of our tools is illustrated in numerous applications and datasets, such as
dynamic mesh denoising and classification, still-video inpainting, and source
localization in seismic events. Our results suggest that joint analysis of
time-vertex signals can bring benefits to regression and learning.
| [
{
"version": "v1",
"created": "Fri, 5 May 2017 17:20:32 GMT"
}
] | 2017-05-08T00:00:00 | [
[
"Grassi",
"Francesco",
""
],
[
"Loukas",
"Andreas",
""
],
[
"Perraudin",
"Nathanaël",
""
],
[
"Ricaud",
"Benjamin",
""
]
] | TITLE: A Time-Vertex Signal Processing Framework
ABSTRACT: An emerging way to deal with high-dimensional non-euclidean data is to assume
that the underlying structure can be captured by a graph. Recently, ideas have
begun to emerge related to the analysis of time-varying graph signals. This
work aims to elevate the notion of joint harmonic analysis to a full-fledged
framework denoted as Time-Vertex Signal Processing, that links together the
time-domain signal processing techniques with the new tools of graph signal
processing. This entails three main contributions: (a) We provide a formal
motivation for harmonic time-vertex analysis as an analysis tool for the state
evolution of simple Partial Differential Equations on graphs. (b) We improve
the accuracy of joint filtering operators by up to two orders of magnitude. (c)
Using our joint filters, we construct time-vertex dictionaries analyzing the
different scales and the local time-frequency content of a signal. The utility
of our tools is illustrated in numerous applications and datasets, such as
dynamic mesh denoising and classification, still-video inpainting, and source
localization in seismic events. Our results suggest that joint analysis of
time-vertex signals can bring benefits to regression and learning.
| no_new_dataset | 0.948346 |
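
The joint harmonic analysis described above combines a graph Fourier
transform along the vertex dimension with a DFT along time; a minimal sketch
on a toy path graph follows (the graph and signal are illustrative, not taken
from the record).

import numpy as np

def joint_fourier_transform(X, U):
    # X: n_vertices x n_timesteps time-vertex signal.
    # U: graph Laplacian eigenvectors (columns), so U.T @ X is the GFT;
    # the FFT along axis=1 then transforms the time dimension.
    return np.fft.fft(U.T @ X, axis=1)

A = np.zeros((4, 4)); A[[0, 1, 2], [1, 2, 3]] = 1.0; A += A.T  # path graph
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)
X = np.random.default_rng(0).normal(size=(4, 16))
print(joint_fourier_transform(X, U).shape)  # (4, 16)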
1604.02182 | Joseph Robinson | Joseph P. Robinson, Ming Shao, Yue Wu, Yun Fu | Families in the Wild (FIW): Large-Scale Kinship Image Database and
Benchmarks | null | ACM MM (2016) 242-246 | 10.1145/2964284.2967219 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the largest kinship recognition dataset to date, Families in the
Wild (FIW). Motivated by the lack of a single, unified dataset for kinship
recognition, we aim to provide a dataset that captivates the interest of the
research community. With only a small team, we were able to collect, organize,
and label over 10,000 family photos of 1,000 families with our annotation tool
designed to mark complex hierarchical relationships and local label information
in a quick and efficient manner. We include several benchmarks for two
image-based tasks, kinship verification and family recognition. For this, we
incorporate several visual features and metric learning methods as baselines.
Also, we demonstrate that a pre-trained Convolutional Neural Network (CNN) as
an off-the-shelf feature extractor outperforms the other feature types. Then,
results were further boosted by fine-tuning two deep CNNs on FIW data: (1) for
kinship verification, a triplet loss function was learned on top of the network
of pre-trained weights; (2) for family recognition, a family-specific softmax
classifier was added to the network.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 21:45:53 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2017 03:15:48 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Robinson",
"Joseph P.",
""
],
[
"Shao",
"Ming",
""
],
[
"Wu",
"Yue",
""
],
[
"Fu",
"Yun",
""
]
] | TITLE: Families in the Wild (FIW): Large-Scale Kinship Image Database and
Benchmarks
ABSTRACT: We present the largest kinship recognition dataset to date, Families in the
Wild (FIW). Motivated by the lack of a single, unified dataset for kinship
recognition, we aim to provide a dataset that captivates the interest of the
research community. With only a small team, we were able to collect, organize,
and label over 10,000 family photos of 1,000 families with our annotation tool
designed to mark complex hierarchical relationships and local label information
in a quick and efficient manner. We include several benchmarks for two
image-based tasks, kinship verification and family recognition. For this, we
incorporate several visual features and metric learning methods as baselines.
Also, we demonstrate that a pre-trained Convolutional Neural Network (CNN) as
an off-the-shelf feature extractor outperforms the other feature types. Then,
results were further boosted by fine-tuning two deep CNNs on FIW data: (1) for
kinship verification, a triplet loss function was learned on top of the network
of pre-trained weights; (2) for family recognition, a family-specific softmax
classifier was added to the network.
| new_dataset | 0.963643 |
1606.07558 | Andrew Cotter | Gabriel Goh, Andrew Cotter, Maya Gupta, Michael Friedlander | Satisfying Real-world Goals with Dataset Constraints | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of minimizing misclassification error on a training set is often
just one of several real-world goals that might be defined on different
datasets. For example, one may require a classifier to also make positive
predictions at some specified rate for some subpopulation (fairness), or to
achieve a specified empirical recall. Other real-world goals include reducing
churn with respect to a previously deployed model, or stabilizing online
training. In this paper we propose handling multiple goals on multiple datasets
by training with dataset constraints, using the ramp penalty to accurately
quantify costs, and present an efficient algorithm to approximately optimize
the resulting non-convex constrained optimization problem. Experiments on both
benchmark and real-world industry datasets demonstrate the effectiveness of our
approach.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2016 03:42:41 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2017 23:02:56 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Goh",
"Gabriel",
""
],
[
"Cotter",
"Andrew",
""
],
[
"Gupta",
"Maya",
""
],
[
"Friedlander",
"Michael",
""
]
] | TITLE: Satisfying Real-world Goals with Dataset Constraints
ABSTRACT: The goal of minimizing misclassification error on a training set is often
just one of several real-world goals that might be defined on different
datasets. For example, one may require a classifier to also make positive
predictions at some specified rate for some subpopulation (fairness), or to
achieve a specified empirical recall. Other real-world goals include reducing
churn with respect to a previously deployed model, or stabilizing online
training. In this paper we propose handling multiple goals on multiple datasets
by training with dataset constraints, using the ramp penalty to accurately
quantify costs, and present an efficient algorithm to approximately optimize
the resulting non-convex constrained optimization problem. Experiments on both
benchmark and real-world industry datasets demonstrate the effectiveness of our
approach.
| no_new_dataset | 0.948298 |
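
A minimal sketch of the ramp penalty mentioned in the record above, used here
to bound a positive-prediction rate of the kind that appears in rate
constraints; the exact form in the paper may differ, so treat this purely as
an illustration of the idea.

def ramp(z):
    # Clipped hinge: 0 for z >= 1, 1 for z <= 0, linear in between.
    return min(1.0, max(0.0, 1.0 - z))

def positive_rate_bound(scores):
    # ramp(-s) upper-bounds the indicator 1[s > 0], so the mean bounds the
    # empirical positive-prediction rate used in rate constraints.
    return sum(ramp(-s) for s in scores) / len(scores)

print(positive_rate_bound([2.0, 0.3, -0.5, -3.0]))  # 0.625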
1609.08259 | Kristof Sch\"utt | Kristof T. Sch\"utt, Farhad Arbabzadah, Stefan Chmiela, Klaus R.
M\"uller, Alexandre Tkatchenko | Quantum-Chemical Insights from Deep Tensor Neural Networks | null | Nature Comm. 8, 13890 (2017) | 10.1038/ncomms13890 | null | physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning from data has led to paradigm shifts in a multitude of disciplines,
including web, text, and image search, speech recognition, as well as
bioinformatics. Can machine learning enable similar breakthroughs in
understanding quantum many-body systems? Here we develop an efficient deep
learning approach that enables spatially and chemically resolved insights into
quantum-mechanical observables of molecular systems. We unify concepts from
many-body Hamiltonians with purpose-designed deep tensor neural networks
(DTNN), which leads to size-extensive and uniformly accurate (1 kcal/mol)
predictions in compositional and configurational chemical space for molecules
of intermediate size. As an example of chemical relevance, the DTNN model
reveals a classification of aromatic rings with respect to their stability -- a
useful property that is not contained as such in the training dataset. Further
applications of DTNN for predicting atomic energies and local chemical
potentials in molecules, reliable isomer energies, and molecules with peculiar
electronic structure demonstrate the high potential of machine learning for
revealing novel insights into complex quantum-chemical systems.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2016 05:17:34 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2016 17:28:28 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Oct 2016 14:33:33 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Nov 2016 11:03:49 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Schütt",
"Kristof T.",
""
],
[
"Arbabzadah",
"Farhad",
""
],
[
"Chmiela",
"Stefan",
""
],
[
"Müller",
"Klaus R.",
""
],
[
"Tkatchenko",
"Alexandre",
""
]
] | TITLE: Quantum-Chemical Insights from Deep Tensor Neural Networks
ABSTRACT: Learning from data has led to paradigm shifts in a multitude of disciplines,
including web, text, and image search, speech recognition, as well as
bioinformatics. Can machine learning enable similar breakthroughs in
understanding quantum many-body systems? Here we develop an efficient deep
learning approach that enables spatially and chemically resolved insights into
quantum-mechanical observables of molecular systems. We unify concepts from
many-body Hamiltonians with purpose-designed deep tensor neural networks
(DTNN), which leads to size-extensive and uniformly accurate (1 kcal/mol)
predictions in compositional and configurational chemical space for molecules
of intermediate size. As an example of chemical relevance, the DTNN model
reveals a classification of aromatic rings with respect to their stability -- a
useful property that is not contained as such in the training dataset. Further
applications of DTNN for predicting atomic energies and local chemical
potentials in molecules, reliable isomer energies, and molecules with peculiar
electronic structure demonstrate the high potential of machine learning for
revealing novel insights into complex quantum-chemical systems.
| no_new_dataset | 0.949153 |
1612.04600 | Joerg Evermann | Joerg Evermann, Jana-Rebecca Rehse, Peter Fettke | Predicting Process Behaviour using Deep Learning | 34 pages, 10 figures | null | 10.1016/j.dss.2017.04.003 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting business process behaviour is an important aspect of business
process management. Motivated by research in natural language processing, this
paper describes an application of deep learning with recurrent neural networks
to the problem of predicting the next event in a business process. This is both
a novel method in process prediction, which has largely relied on explicit
process models, and also a novel application of deep learning methods. The
approach is evaluated on two real datasets and our results surpass the
state-of-the-art in prediction precision.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 12:33:28 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2017 17:22:08 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Evermann",
"Joerg",
""
],
[
"Rehse",
"Jana-Rebecca",
""
],
[
"Fettke",
"Peter",
""
]
] | TITLE: Predicting Process Behaviour using Deep Learning
ABSTRACT: Predicting business process behaviour is an important aspect of business
process management. Motivated by research in natural language processing, this
paper describes an application of deep learning with recurrent neural networks
to the problem of predicting the next event in a business process. This is both
a novel method in process prediction, which has largely relied on explicit
process models, and also a novel application of deep learning methods. The
approach is evaluated on two real datasets and our results surpass the
state-of-the-art in prediction precision.
| no_new_dataset | 0.949856 |
1703.08640 | Kun Yao | Kun Yao, John Herr, Seth Brown, John Parkhill | Bond Energies from a Diatomics-in-Molecules Neural Network | null | null | null | null | physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks are being used to make new types of empirical chemical models
as inexpensive as force fields, but with accuracy close to the ab-initio
methods used to build them. Besides modeling potential energy surfaces,
neural-nets can provide qualitative insights and make qualitative chemical
trends quantitatively predictable. In this work we present a neural-network
that predicts the energies of molecules as a sum of bond energies. The network
learns the total energies of the popular GDB9 dataset to a competitive MAE of
0.94 kcal/mol. The method is naturally linearly scaling, and applicable to
molecules of nanoscopic size. More importantly, it gives chemical insight into
the relative strengths of bonds as a function of their molecular environment,
despite only being trained on total energy information. We show that the
network makes predictions of relative bond strengths in good agreement with
measured trends and human predictions. We show that DIM-NN learns the same
heuristic trends in relative bond strength developed by expert synthetic
chemists, and ab-initio bond order measures such as NBO analysis.
| [
{
"version": "v1",
"created": "Sat, 25 Mar 2017 02:50:00 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 00:49:13 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2017 00:21:55 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Apr 2017 18:44:54 GMT"
},
{
"version": "v5",
"created": "Wed, 3 May 2017 19:46:16 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Yao",
"Kun",
""
],
[
"Herr",
"John",
""
],
[
"Brown",
"Seth",
""
],
[
"Parkhill",
"John",
""
]
] | TITLE: Bond Energies from a Diatomics-in-Molecules Neural Network
ABSTRACT: Neural networks are being used to make new types of empirical chemical models
as inexpensive as force fields, but with accuracy close to the ab-initio
methods used to build them. Besides modeling potential energy surfaces,
neural-nets can provide qualitative insights and make qualitative chemical
trends quantitatively predictable. In this work we present a neural-network
that predicts the energies of molecules as a sum of bond energies. The network
learns the total energies of the popular GDB9 dataset to a competitive MAE of
0.94 kcal/mol. The method is naturally linearly scaling, and applicable to
molecules of nanoscopic size. More importantly, it gives chemical insight into
the relative strengths of bonds as a function of their molecular environment,
despite only being trained on total energy information. We show that the
network makes predictions of relative bond strengths in good agreement with
measured trends and human predictions. We show that DIM-NN learns the same
heuristic trends in relative bond strength developed by expert synthetic
chemists, and ab-initio bond order measures such as NBO analysis.
| no_new_dataset | 0.950088 |
1704.04718 | Xu Youjun Xu Youjun | Youjun Xu, Jianfeng Pei, Luhua Lai | Deep Learning Based Regression and Multi-class Models for Acute Oral
Toxicity Prediction with Automatic Chemical Feature Extraction | 36 pages, 4 figures | null | null | null | stat.ML cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For quantitative structure-property relationship (QSPR) studies in
chemoinformatics, it is important to obtain an interpretable relationship between
chemical properties and chemical features. However, the predictive power and
interpretability of QSPR models are usually two different objectives that are
difficult to achieve simultaneously. A deep learning architecture using
molecular graph encoding convolutional neural networks (MGE-CNN) provided a
universal strategy to construct interpretable QSPR models with high predictive
power. Instead of using application-specific preset molecular descriptors or
fingerprints, the models can be resolved using raw and pertinent features
without manual intervention or selection. In this study, we developed acute
oral toxicity (AOT) models of compounds using the MGE-CNN architecture as a
case study. Three types of high-level predictive models: regression model
(deepAOT-R), multi-classification model (deepAOT-C) and multi-task model
(deepAOT-CR) for AOT evaluation were constructed. These models highly
outperformed previously reported models. For the two external datasets
containing 1673 (test set I) and 375 (test set II) compounds, the R2 and mean
absolute error (MAE) of deepAOT-R on the test set I were 0.864 and 0.195, and
the prediction accuracy of deepAOT-C was 95.5% and 96.3% on the test set I and
II, respectively. The two external prediction accuracies of deepAOT-CR are 95.0%
and 94.1%, while the R2 and MAE are 0.861 and 0.204 for test set I,
respectively.
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2017 04:17:32 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2017 02:10:10 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2017 09:52:38 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Xu",
"Youjun",
""
],
[
"Pei",
"Jianfeng",
""
],
[
"Lai",
"Luhua",
""
]
] | TITLE: Deep Learning Based Regression and Multi-class Models for Acute Oral
Toxicity Prediction with Automatic Chemical Feature Extraction
ABSTRACT: For quantitative structure-property relationship (QSPR) studies in
chemoinformatics, it is important to obtain an interpretable relationship between
chemical properties and chemical features. However, the predictive power and
interpretability of QSPR models are usually two different objectives that are
difficult to achieve simultaneously. A deep learning architecture using
molecular graph encoding convolutional neural networks (MGE-CNN) provided a
universal strategy to construct interpretable QSPR models with high predictive
power. Instead of using application-specific preset molecular descriptors or
fingerprints, the models can be resolved using raw and pertinent features
without manual intervention or selection. In this study, we developed acute
oral toxicity (AOT) models of compounds using the MGE-CNN architecture as a
case study. Three types of high-level predictive models: regression model
(deepAOT-R), multi-classification model (deepAOT-C) and multi-task model
(deepAOT-CR) for AOT evaluation were constructed. These models highly
outperformed previously reported models. For the two external datasets
containing 1673 (test set I) and 375 (test set II) compounds, the R2 and mean
absolute error (MAE) of deepAOT-R on the test set I were 0.864 and 0.195, and
the prediction accuracy of deepAOT-C was 95.5% and 96.3% on the test set I and
II, respectively. The two external prediction accuracies of deepAOT-CR are 95.0%
and 94.1%, while the R2 and MAE are 0.861 and 0.204 for test set I,
respectively.
| no_new_dataset | 0.952309 |
1705.01707 | Jan Svoboda | Jan Svoboda, Federico Monti, Michael M. Bronstein | Generative Convolutional Networks for Latent Fingerprint Reconstruction | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performance of fingerprint recognition depends heavily on the extraction of
minutiae points. Enhancement of the fingerprint ridge pattern is thus an
essential pre-processing step that noticeably reduces false positive and
negative detection rates. A particularly challenging setting is when the
fingerprint images are corrupted or partially missing. In this work, we apply
generative convolutional networks to denoise visible minutiae and predict the
missing parts of the ridge pattern. The proposed enhancement approach is tested
as a pre-processing step in combination with several standard feature
extraction methods such as MINDTCT, followed by biometric comparison using MCC
and BOZORTH3. We evaluate our method on several publicly available latent
fingerprint datasets captured using different sensors.
| [
{
"version": "v1",
"created": "Thu, 4 May 2017 05:29:23 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Svoboda",
"Jan",
""
],
[
"Monti",
"Federico",
""
],
[
"Bronstein",
"Michael M.",
""
]
] | TITLE: Generative Convolutional Networks for Latent Fingerprint Reconstruction
ABSTRACT: Performance of fingerprint recognition depends heavily on the extraction of
minutiae points. Enhancement of the fingerprint ridge pattern is thus an
essential pre-processing step that noticeably reduces false positive and
negative detection rates. A particularly challenging setting is when the
fingerprint images are corrupted or partially missing. In this work, we apply
generative convolutional networks to denoise visible minutiae and predict the
missing parts of the ridge pattern. The proposed enhancement approach is tested
as a pre-processing step in combination with several standard feature
extraction methods such as MINDTCT, followed by biometric comparison using MCC
and BOZORTH3. We evaluate our method on several publicly available latent
fingerprint datasets captured using different sensors.
| no_new_dataset | 0.951006 |
1705.01759 | Hou-Ning Hu | Hou-Ning Hu, Yen-Chen Lin, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju
Chang, Min Sun | Deep 360 Pilot: Learning a Deep Agent for Piloting through 360{\deg}
Sports Video | 13 pages, 8 figures, To appear in CVPR 2017 as an Oral paper. The
first two authors contributed equally to this work.
https://aliensunmin.github.io/project/360video/ | null | null | null | cs.CV cs.GR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Watching a 360{\deg} sports video requires a viewer to continuously select a
viewing angle, either through a sequence of mouse clicks or head movements. To
relieve the viewer from this "360 piloting" task, we propose "deep 360 pilot"
-- a deep learning-based agent for piloting through 360{\deg} sports videos
automatically. At each frame, the agent observes a panoramic image and has the
knowledge of previously selected viewing angles. The task of the agent is to
shift the current viewing angle (i.e. action) to the next preferred one (i.e.,
goal). We propose to directly learn an online policy of the agent from data. We
use the policy gradient technique to jointly train our pipeline: by minimizing
(1) a regression loss measuring the distance between the selected and ground
truth viewing angles, (2) a smoothness loss encouraging smooth transition in
viewing angle, and (3) maximizing an expected reward of focusing on a
foreground object. To evaluate our method, we build a new 360-Sports video
dataset consisting of five sports domains. We train domain-specific agents and
achieve the best performance on viewing angle selection accuracy and transition
smoothness compared to [51] and other baselines.
| [
{
"version": "v1",
"created": "Thu, 4 May 2017 09:26:58 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Hu",
"Hou-Ning",
""
],
[
"Lin",
"Yen-Chen",
""
],
[
"Liu",
"Ming-Yu",
""
],
[
"Cheng",
"Hsien-Tzu",
""
],
[
"Chang",
"Yung-Ju",
""
],
[
"Sun",
"Min",
""
]
] | TITLE: Deep 360 Pilot: Learning a Deep Agent for Piloting through 360{\deg}
Sports Video
ABSTRACT: Watching a 360{\deg} sports video requires a viewer to continuously select a
viewing angle, either through a sequence of mouse clicks or head movements. To
relieve the viewer from this "360 piloting" task, we propose "deep 360 pilot"
-- a deep learning-based agent for piloting through 360{\deg} sports videos
automatically. At each frame, the agent observes a panoramic image and has the
knowledge of previously selected viewing angles. The task of the agent is to
shift the current viewing angle (i.e. action) to the next preferred one (i.e.,
goal). We propose to directly learn an online policy of the agent from data. We
use the policy gradient technique to jointly train our pipeline: by minimizing
(1) a regression loss measuring the distance between the selected and ground
truth viewing angles, (2) a smoothness loss encouraging smooth transition in
viewing angle, and (3) maximizing an expected reward of focusing on a
foreground object. To evaluate our method, we build a new 360-Sports video
dataset consisting of five sports domains. We train domain-specific agents and
achieve the best performance on viewing angle selection accuracy and transition
smoothness compared to [51] and other baselines.
| new_dataset | 0.957794 |
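
A sketch of a combined objective in the spirit of the three terms listed in
the record above (angle regression, smoothness, expected reward); the
weighting coefficients and the toy angle sequences are assumptions, not
values from the paper.

import numpy as np

def pilot_objective(pred, gt, reward, lam_smooth=0.1, lam_reward=0.1):
    # pred, gt: sequences of selected / ground-truth viewing angles.
    regression = np.mean((pred - gt) ** 2)    # term (1): boundary regression
    smoothness = np.mean(np.diff(pred) ** 2)  # term (2): smooth transitions
    return regression + lam_smooth * smoothness - lam_reward * reward  # (3)

pred = np.array([0.0, 0.2, 0.5, 0.9])
gt = np.array([0.0, 0.3, 0.6, 0.9])
print(pilot_objective(pred, gt, reward=1.0))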
1705.01782 | Li Liu | Yang Long, Li Liu, Ling Shao, Fumin Shen, Guiguang Ding, Jungong Han | From Zero-shot Learning to Conventional Supervised Classification:
Unseen Visual Data Synthesis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust object recognition systems usually rely on powerful feature extraction
mechanisms from a large number of real images. However, in many realistic
applications, collecting sufficient images for ever-growing new classes is
unattainable. In this paper, we propose a new Zero-shot learning (ZSL)
framework that can synthesise visual features for unseen classes without
acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS)
algorithm, semantic attributes are effectively utilised as an intermediate clue
to synthesise unseen visual features at the training stage. Hereafter, ZSL
recognition is converted into the conventional supervised problem, i.e. the
synthesised visual features can be straightforwardly fed to typical classifiers
such as SVM. On four benchmark datasets, we demonstrate the benefit of using
synthesised unseen data. Extensive experimental results suggest that our
proposed approach significantly improves the state-of-the-art results.
| [
{
"version": "v1",
"created": "Thu, 4 May 2017 10:28:37 GMT"
}
] | 2017-05-05T00:00:00 | [
[
"Long",
"Yang",
""
],
[
"Liu",
"Li",
""
],
[
"Shao",
"Ling",
""
],
[
"Shen",
"Fumin",
""
],
[
"Ding",
"Guiguang",
""
],
[
"Han",
"Jungong",
""
]
] | TITLE: From Zero-shot Learning to Conventional Supervised Classification:
Unseen Visual Data Synthesis
ABSTRACT: Robust object recognition systems usually rely on powerful feature extraction
mechanisms from a large number of real images. However, in many realistic
applications, collecting sufficient images for ever-growing new classes is
unattainable. In this paper, we propose a new Zero-shot learning (ZSL)
framework that can synthesise visual features for unseen classes without
acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS)
algorithm, semantic attributes are effectively utilised as an intermediate clue
to synthesise unseen visual features at the training stage. Hereafter, ZSL
recognition is converted into the conventional supervised problem, i.e. the
synthesised visual features can be straightforwardly fed to typical classifiers
such as SVM. On four benchmark datasets, we demonstrate the benefit of using
synthesised unseen data. Extensive experimental results suggest that our
proposed approach significantly improves the state-of-the-art results.
| no_new_dataset | 0.949669 |
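
The conversion of zero-shot learning into a supervised problem can be
illustrated with a toy linear attribute-to-feature synthesizer followed by
nearest-prototype classification; the linear map and synthetic data below are
deliberate simplifications for illustration, not the UVDS algorithm itself.

import numpy as np

rng = np.random.default_rng(0)
a_dim, v_dim = 6, 12
A_seen = rng.normal(size=(5, a_dim))                   # seen-class attributes
W_true = rng.normal(size=(a_dim, v_dim))               # hidden generating map
V_seen = A_seen @ W_true + 0.05 * rng.normal(size=(5, v_dim))

# Fit an attribute -> visual-feature map on seen classes (least squares).
W, *_ = np.linalg.lstsq(A_seen, V_seen, rcond=None)

A_unseen = rng.normal(size=(2, a_dim))
prototypes = A_unseen @ W                              # synthesized unseen features
x = A_unseen[1] @ W_true + 0.05 * rng.normal(size=v_dim)  # test sample, class 1
print(int(np.argmin(((prototypes - x) ** 2).sum(axis=1))))  # -> 1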
1606.01379 | Hamed Azami | Hamed Azami, Mostafa Rostaghi, Daniel Abasolo, and Javier Escudero | Refined Composite Multiscale Dispersion Entropy and its Application to
Biomedical Signals | 8 pages, 6 figures | IEEE Transactions on Biomedical Engineering (2017) | 10.1109/TBME.2017.2679136 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiscale entropy (MSE) is a widely-used tool to analyze biomedical signals.
It was proposed to overcome the deficiencies of conventional entropy methods
when quantifying the complexity of time series. However, MSE is undefined for
very short signals and slow for real-time applications because of the use of
sample entropy (SampEn). To overcome these shortcomings, we introduce
multiscale dispersion entropy (DisEn - MDE) as a very fast and powerful method
to quantify the complexity of signals. MDE is based on our recently developed
DisEn, which has a computation cost of O(N), compared with O(N^2) for SampEn. We
also propose the refined composite MDE (RCMDE) to improve the stability of MDE.
We evaluate MDE, RCMDE, and refined composite MSE (RCMSE) on synthetic signals
and find that these methods have similar behaviors but the MDE and RCMDE are
significantly faster than MSE and RCMSE, respectively. The results also
illustrate that RCMDE is more stable than MDE for short and noisy signals,
which are common in biomedical applications. To evaluate the proposed methods
on real signals, we employ three biomedical datasets, including focal and
non-focal electroencephalograms (EEGs), blood pressure recordings in Fantasia
database, and resting-state EEGs activity in Alzheimer's disease (AD). The
results again demonstrate a similar behavior of RCMSE, MDE and RCMDE, although
the RCMDE and MDE are significantly faster and lead to larger differences
between physiological conditions known to alter the complexity of the
physiological recordings. To sum up, MDE and RCMDE are expected to be useful
for the analysis of physiological signals thanks to their ability to
distinguish different types of dynamics.
| [
{
"version": "v1",
"created": "Sat, 4 Jun 2016 13:54:09 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2016 15:34:44 GMT"
},
{
"version": "v3",
"created": "Wed, 3 May 2017 16:28:49 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Azami",
"Hamed",
""
],
[
"Rostaghi",
"Mostafa",
""
],
[
"Abasolo",
"Daniel",
""
],
[
"Escudero",
"Javier",
""
]
] | TITLE: Refined Composite Multiscale Dispersion Entropy and its Application to
Biomedical Signals
ABSTRACT: Multiscale entropy (MSE) is a widely-used tool to analyze biomedical signals.
It was proposed to overcome the deficiencies of conventional entropy methods
when quantifying the complexity of time series. However, MSE is undefined for
very short signals and slow for real-time applications because of the use of
sample entropy (SampEn). To overcome these shortcomings, we introduce
multiscale dispersion entropy (DisEn - MDE) as a very fast and powerful method
to quantify the complexity of signals. MDE is based on our recently developed
DisEn, which has a computation cost of O(N), compared with O(N^2) for SampEn. We
also propose the refined composite MDE (RCMDE) to improve the stability of MDE.
We evaluate MDE, RCMDE, and refined composite MSE (RCMSE) on synthetic signals
and find that these methods have similar behaviors but the MDE and RCMDE are
significantly faster than MSE and RCMSE, respectively. The results also
illustrate that RCMDE is more stable than MDE for short and noisy signals,
which are common in biomedical applications. To evaluate the proposed methods
on real signals, we employ three biomedical datasets, including focal and
non-focal electroencephalograms (EEGs), blood pressure recordings in Fantasia
database, and resting-state EEGs activity in Alzheimer's disease (AD). The
results again demonstrate a similar behavior of RCMSE, MDE and RCMDE, although
the RCMDE and MDE are significantly faster and lead to larger differences
between physiological conditions known to alter the complexity of the
physiological recordings. To sum up, MDE and RCMDE are expected to be useful
for the analysis of physiological signals thanks to their ability to
distinguish different types of dynamics.
| no_new_dataset | 0.945197 |
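
A compact sketch of dispersion entropy and the coarse-graining step behind
its multiscale variant; the embedding dimension m, class count c, and the
normalization used below are common choices assumed for illustration, not
parameters quoted in the record.

import math
import numpy as np

def dispersion_entropy(x, m=2, c=3):
    # Map samples to c classes via the normal CDF, count length-m
    # dispersion patterns, and take normalized Shannon entropy.
    x = np.asarray(x, dtype=float)
    y = np.array([0.5 * (1 + math.erf((v - x.mean()) / (x.std() * math.sqrt(2))))
                  for v in x])
    z = np.minimum(np.floor(c * y).astype(int) + 1, c)  # classes 1..c
    pats = np.array([z[i:i + m] for i in range(len(z) - m + 1)])
    _, counts = np.unique(pats, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / math.log(c ** m))

def coarse_grain(x, scale):
    # Non-overlapping means; the multiscale variant evaluates the entropy
    # of each coarse-grained series.
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

x = np.random.default_rng(0).normal(size=500)
for s in (1, 2, 5):
    print(s, round(dispersion_entropy(coarse_grain(x, s)), 3))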
1608.03983 | Ilya Loshchilov | Ilya Loshchilov and Frank Hutter | SGDR: Stochastic Gradient Descent with Warm Restarts | ICLR 2017 conference paper | null | null | null | cs.LG cs.NE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2016 13:46:05 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2016 13:05:07 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Feb 2017 14:33:00 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Mar 2017 13:06:59 GMT"
},
{
"version": "v5",
"created": "Wed, 3 May 2017 16:28:09 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Loshchilov",
"Ilya",
""
],
[
"Hutter",
"Frank",
""
]
] | TITLE: SGDR: Stochastic Gradient Descent with Warm Restarts
ABSTRACT: Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR
| no_new_dataset | 0.94801 |
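
The warm-restart schedule itself is easy to sketch: cosine annealing of the
learning rate that restarts at the start of each cycle, with cycle lengths
growing by a factor T_mult. The default hyperparameter values below are
illustrative assumptions, not the paper's settings.

import math

def sgdr_lr(epoch, eta_min=0.0, eta_max=0.1, T_0=10, T_mult=2):
    # Locate the restart cycle containing `epoch`, then cosine-anneal
    # from eta_max down toward eta_min within that cycle.
    T_i, start = T_0, 0
    while epoch >= start + T_i:
        start += T_i
        T_i *= T_mult
    t = (epoch - start) / T_i  # progress within the cycle, in [0, 1)
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t))

for e in (0, 5, 9, 10, 20, 29):
    print(e, round(sgdr_lr(e), 4))  # the rate jumps back up at each restart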
1704.05908 | Qizhe Xie | Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy | An Interpretable Knowledge Transfer Model for Knowledge Base Completion | Accepted by ACL 2017. Minor update | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge bases are important resources for a variety of natural language
processing tasks but suffer from incompleteness. We propose a novel embedding
model, \emph{ITransF}, to perform knowledge base completion. Equipped with a
sparse attention mechanism, ITransF discovers hidden concepts of relations and
transfer statistical strength through the sharing of concepts. Moreover, the
learned associations between relations and concepts, which are represented by
sparse attention vectors, can be interpreted easily. We evaluate ITransF on two
benchmark datasets---WN18 and FB15k for knowledge base completion and obtains
improvements on both the mean rank and Hits@10 metrics, over all baselines that
do not use additional information.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 19:35:54 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2017 05:20:09 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Xie",
"Qizhe",
""
],
[
"Ma",
"Xuezhe",
""
],
[
"Dai",
"Zihang",
""
],
[
"Hovy",
"Eduard",
""
]
] | TITLE: An Interpretable Knowledge Transfer Model for Knowledge Base Completion
ABSTRACT: Knowledge bases are important resources for a variety of natural language
processing tasks but suffer from incompleteness. We propose a novel embedding
model, \emph{ITransF}, to perform knowledge base completion. Equipped with a
sparse attention mechanism, ITransF discovers hidden concepts of relations and
transfer statistical strength through the sharing of concepts. Moreover, the
learned associations between relations and concepts, which are represented by
sparse attention vectors, can be interpreted easily. We evaluate ITransF on two
benchmark datasets---WN18 and FB15k for knowledge base completion and obtain
improvements on both the mean rank and Hits@10 metrics, over all baselines that
do not use additional information.
| no_new_dataset | 0.945349 |
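
A sketch of a translation-style triple score with sparse attention over
shared projection matrices, in the spirit of the ITransF record above; the
tensor shapes, the L2 norm, and the one-hot attention vector are assumptions
made for illustration, not the paper's exact formulation.

import numpy as np

def itransf_score(h, r, t, alpha_h, alpha_t, D):
    # D: (n_concepts, d, d) shared projections; alpha_*: attention vectors
    # that mix them into relation-specific head/tail projections.
    P_h = np.tensordot(alpha_h, D, axes=1)
    P_t = np.tensordot(alpha_t, D, axes=1)
    return -float(np.linalg.norm(P_h @ h + r - P_t @ t))

rng = np.random.default_rng(0)
d, m = 8, 4
D = rng.normal(size=(m, d, d))
h, r, t = rng.normal(size=(3, d))
alpha = np.zeros(m); alpha[1] = 1.0  # a sparse (here one-hot) attention vector
print(itransf_score(h, r, t, alpha, alpha, D))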
1705.00045 | Xinyu Hua | Xinyu Hua and Lu Wang | Understanding and Detecting Supporting Arguments of Diverse Types | This paper is accepted as a short paper in ACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of sentence-level supporting argument detection
from relevant documents for user-specified claims. A dataset containing claims
and associated citation articles is collected from the online debate website
idebate.org. We then manually label sentence-level supporting arguments from
the documents along with their types as study, factual, opinion, or reasoning.
We further characterize arguments of different types, and explore whether
leveraging type information can facilitate the supporting arguments detection
task. Experimental results show that LambdaMART (Burges, 2010) ranker that uses
features informed by argument types yields better performance than the same
ranker trained without type information.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2017 19:29:54 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2017 22:00:13 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Hua",
"Xinyu",
""
],
[
"Wang",
"Lu",
""
]
] | TITLE: Understanding and Detecting Supporting Arguments of Diverse Types
ABSTRACT: We investigate the problem of sentence-level supporting argument detection
from relevant documents for user-specified claims. A dataset containing claims
and associated citation articles is collected from the online debate website
idebate.org. We then manually label sentence-level supporting arguments from
the documents along with their types as study, factual, opinion, or reasoning.
We further characterize arguments of different types, and explore whether
leveraging type information can facilitate the supporting arguments detection
task. Experimental results show that LambdaMART (Burges, 2010) ranker that uses
features informed by argument types yields better performance than the same
ranker trained without type information.
| no_new_dataset | 0.931898 |
1705.01142 | Swetava Ganguli | Swetava Ganguli, Jared Dunnmon | Machine Learning for Better Models for Predicting Bond Prices | Submitted for publication | null | null | null | q-fin.ST cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bond prices are a reflection of extremely complex market interactions and
policies, making prediction of future prices difficult. This task becomes even
more challenging due to the dearth of relevant information, and accuracy is not
the only consideration--in trading situations, time is of the essence. Thus,
machine learning in the context of bond price predictions should be both fast
and accurate. In this course project, we use a dataset describing the previous
10 trades of a large number of bonds among other relevant descriptive metrics
to predict future bond prices. Each of 762,678 bonds in the dataset is
described by a total of 61 attributes, including a ground truth trade price. We
evaluate the performance of various supervised learning algorithms for
regression followed by ensemble methods, with feature and model selection
considerations being treated in detail. We further evaluate all methods on both
accuracy and speed. Finally, we propose a novel hybrid time-series aided
machine learning method that could be applied to such datasets in future work.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 15:12:49 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Ganguli",
"Swetava",
""
],
[
"Dunnmon",
"Jared",
""
]
] | TITLE: Machine Learning for Better Models for Predicting Bond Prices
ABSTRACT: Bond prices are a reflection of extremely complex market interactions and
policies, making prediction of future prices difficult. This task becomes even
more challenging due to the dearth of relevant information, and accuracy is not
the only consideration--in trading situations, time is of the essence. Thus,
machine learning in the context of bond price predictions should be both fast
and accurate. In this course project, we use a dataset describing the previous
10 trades of a large number of bonds among other relevant descriptive metrics
to predict future bond prices. Each of 762,678 bonds in the dataset is
described by a total of 61 attributes, including a ground truth trade price. We
evaluate the performance of various supervised learning algorithms for
regression followed by ensemble methods, with feature and model selection
considerations being treated in detail. We further evaluate all methods on both
accuracy and speed. Finally, we propose a novel hybrid time-series aided
machine learning method that could be applied to such datasets in future work.
| new_dataset | 0.96799 |
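
The accuracy-versus-speed evaluation described in the record above can be
sketched with scikit-learn on stand-in data; the 61 features mirror the
record's attribute count, but the models, data sizes, and metric choices
below are assumptions for illustration.

import time
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=61, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Compare each regressor on both error and wall-clock training+prediction time.
for model in (Ridge(), RandomForestRegressor(n_estimators=50, random_state=0)):
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{type(model).__name__}: MAE={mae:.2f}, {time.perf_counter() - t0:.2f}s")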
1705.01156 | Balazs Kovacs | Balazs Kovacs, Sean Bell, Noah Snavely, Kavita Bala | Shading Annotations in the Wild | CVPR 2017 | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding shading effects in images is critical for a variety of vision
and graphics problems, including intrinsic image decomposition, shadow removal,
image relighting, and inverse rendering. As is the case with other vision
tasks, machine learning is a promising approach to understanding shading - but
there is little ground truth shading data available for real-world images. We
introduce Shading Annotations in the Wild (SAW), a new large-scale, public
dataset of shading annotations in indoor scenes, comprised of multiple forms of
shading judgments obtained via crowdsourcing, along with shading annotations
automatically generated from RGB-D imagery. We use this data to train a
convolutional neural network to predict per-pixel shading information in an
image. We demonstrate the value of our data and network in an application to
intrinsic images, where we can reduce decomposition artifacts produced by
existing algorithms. Our database is available at
http://opensurfaces.cs.cornell.edu/saw/.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 19:54:31 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Kovacs",
"Balazs",
""
],
[
"Bell",
"Sean",
""
],
[
"Snavely",
"Noah",
""
],
[
"Bala",
"Kavita",
""
]
] | TITLE: Shading Annotations in the Wild
ABSTRACT: Understanding shading effects in images is critical for a variety of vision
and graphics problems, including intrinsic image decomposition, shadow removal,
image relighting, and inverse rendering. As is the case with other vision
tasks, machine learning is a promising approach to understanding shading - but
there is little ground truth shading data available for real-world images. We
introduce Shading Annotations in the Wild (SAW), a new large-scale, public
dataset of shading annotations in indoor scenes, comprised of multiple forms of
shading judgments obtained via crowdsourcing, along with shading annotations
automatically generated from RGB-D imagery. We use this data to train a
convolutional neural network to predict per-pixel shading information in an
image. We demonstrate the value of our data and network in an application to
intrinsic images, where we can reduce decomposition artifacts produced by
existing algorithms. Our database is available at
http://opensurfaces.cs.cornell.edu/saw/.
| new_dataset | 0.962743 |
1705.01180 | Jiyang Gao | Jiyang Gao, Zhenheng Yang, Ram Nevatia | Cascaded Boundary Regression for Temporal Action Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal action detection in long videos is an important problem.
State-of-the-art methods address this problem by applying action classifiers on
sliding windows. Although sliding windows may contain an identifiable portion
of the actions, they may not necessarily cover the entire action instance,
which would lead to inferior performance. We adapt a two-stage temporal action
detection pipeline with Cascaded Boundary Regression (CBR) model.
Class-agnostic proposals and specific actions are detected respectively in the
first and the second stage. CBR uses temporal coordinate regression to refine
the temporal boundaries of the sliding windows. The salient aspect of the
refinement process is that, inside each stage, the temporal boundaries are
adjusted in a cascaded way by feeding the refined windows back to the system
for further boundary refinement. We test CBR on THUMOS-14 and TVSeries, and
achieve state-of-the-art performance on both datasets. The performance gain is
especially remarkable under high IoU thresholds, e.g. mAP@tIoU=0.5 on THUMOS-14
is improved from 19.0% to 31.0%.
| [
{
"version": "v1",
"created": "Tue, 2 May 2017 21:45:21 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Gao",
"Jiyang",
""
],
[
"Yang",
"Zhenheng",
""
],
[
"Nevatia",
"Ram",
""
]
] | TITLE: Cascaded Boundary Regression for Temporal Action Detection
ABSTRACT: Temporal action detection in long videos is an important problem.
State-of-the-art methods address this problem by applying action classifiers on
sliding windows. Although sliding windows may contain an identifiable portion
of the actions, they may not necessarily cover the entire action instance,
which would lead to inferior performance. We adapt a two-stage temporal action
detection pipeline with Cascaded Boundary Regression (CBR) model.
Class-agnostic proposals and specific actions are detected respectively in the
first and the second stage. CBR uses temporal coordinate regression to refine
the temporal boundaries of the sliding windows. The salient aspect of the
refinement process is that, inside each stage, the temporal boundaries are
adjusted in a cascaded way by feeding the refined windows back to the system
for further boundary refinement. We test CBR on THUMOS-14 and TVSeries, and
achieve state-of-the-art performance on both datasets. The performance gain is
especially remarkable under high IoU thresholds, e.g. mAP@tIoU=0.5 on THUMOS-14
is improved from 19.0% to 31.0%.
| no_new_dataset | 0.94743 |
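
Temporal IoU and the cascaded feed-back refinement described above can be
sketched as follows; the toy regressor that moves a window halfway toward the
ground truth is purely illustrative, standing in for the learned boundary
regressor.

def t_iou(a, b):
    # Temporal IoU of two segments given as (start, end) pairs.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def cascade_refine(window, regressor, steps=2):
    # Feed the refined window back to the same boundary regressor.
    s, e = window
    for _ in range(steps):
        ds, de = regressor((s, e))
        s, e = s + ds, e + de
    return s, e

gt = (10.0, 20.0)
toy_regressor = lambda w: (0.5 * (gt[0] - w[0]), 0.5 * (gt[1] - w[1]))
w = (5.0, 28.0)
print(t_iou(w, gt), t_iou(cascade_refine(w, toy_regressor), gt))  # IoU improves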
1705.01253 | Hongyang Xue | Hongyang Xue, Zhou Zhao, Deng Cai | The Forgettable-Watcher Model for Video Question Answering | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of visual question answering approaches have been proposed recently,
aiming at understanding the visual scenes by answering the natural language
questions. While image question answering has drawn significant attention,
video question answering is largely unexplored.
Video-QA is different from Image-QA since the information and the events are
scattered among multiple frames. In order to better utilize the temporal
structure of the videos and the phrasal structures of the answers, we propose
two mechanisms: the re-watching and the re-reading mechanisms and combine them
into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video
question answering with the help of automatic question generation. Finally, we
evaluate the models on our dataset. The experimental results show the
effectiveness of our proposed models.
| [
{
"version": "v1",
"created": "Wed, 3 May 2017 04:46:33 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Xue",
"Hongyang",
""
],
[
"Zhao",
"Zhou",
""
],
[
"Cai",
"Deng",
""
]
] | TITLE: The Forgettable-Watcher Model for Video Question Answering
ABSTRACT: A number of visual question answering approaches have been proposed recently,
aiming at understanding the visual scenes by answering the natural language
questions. While image question answering has drawn significant attention,
video question answering is largely unexplored.
Video-QA is different from Image-QA since the information and the events are
scattered among multiple frames. In order to better utilize the temporal
structure of the videos and the phrasal structures of the answers, we propose
two mechanisms: the re-watching and the re-reading mechanisms and combine them
into the forgettable-watcher model. Then we propose a TGIF-QA dataset for video
question answering with the help of automatic question generation. Finally, we
evaluate the models on our dataset. The experimental results show the
effectiveness of our proposed models.
| new_dataset | 0.948822 |
1705.01371 | Fanyi Xiao | Fanyi Xiao, Leonid Sigal, Yong Jae Lee | Weakly-supervised Visual Grounding of Phrases with Linguistic Structures | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a weakly-supervised approach that takes image-sentence pairs as
input and learns to visually ground (i.e., localize) arbitrary linguistic
phrases, in the form of spatial attention masks. Specifically, the model is
trained with images and their associated image-level captions, without any
explicit region-to-phrase correspondence annotations. To this end, we introduce
an end-to-end model which learns visual groundings of phrases with two types of
carefully designed loss functions. In addition to the standard discriminative
loss, which enforces that attended image regions and phrases are consistently
encoded, we propose a novel structural loss which makes use of the parse tree
structures induced by the sentences. In particular, we ensure complementarity
among the attention masks that correspond to sibling noun phrases, and
compositionality of attention masks among the children and parent phrases, as
defined by the sentence parse tree. We validate the effectiveness of our
approach on the Microsoft COCO and Visual Genome datasets.
| [
{
"version": "v1",
"created": "Wed, 3 May 2017 11:53:33 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Xiao",
"Fanyi",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Lee",
"Yong Jae",
""
]
] | TITLE: Weakly-supervised Visual Grounding of Phrases with Linguistic Structures
ABSTRACT: We propose a weakly-supervised approach that takes image-sentence pairs as
input and learns to visually ground (i.e., localize) arbitrary linguistic
phrases, in the form of spatial attention masks. Specifically, the model is
trained with images and their associated image-level captions, without any
explicit region-to-phrase correspondence annotations. To this end, we introduce
an end-to-end model which learns visual groundings of phrases with two types of
carefully designed loss functions. In addition to the standard discriminative
loss, which enforces that attended image regions and phrases are consistently
encoded, we propose a novel structural loss which makes use of the parse tree
structures induced by the sentences. In particular, we ensure complementarity
among the attention masks that correspond to sibling noun phrases, and
compositionality of attention masks among the children and parent phrases, as
defined by the sentence parse tree. We validate the effectiveness of our
approach on the Microsoft COCO and Visual Genome datasets.
| no_new_dataset | 0.952131 |
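The two structural constraints can be read as differentiable penalties on
attention masks: sibling phrases should occupy disjoint regions, and a parent
phrase's mask should compose from its children's. The sketch below is a minimal
rendering of that reading, assuming soft masks in [0, 1]; the exact loss terms
in the paper may measure overlap and union differently.

```python
import torch

def sibling_complementarity(masks: torch.Tensor) -> torch.Tensor:
    """Penalize overlap between attention masks of sibling noun phrases.

    masks: (num_siblings, H, W), values in [0, 1].
    """
    n = masks.shape[0]
    loss = masks.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + (masks[i] * masks[j]).mean()  # overlapping mass
    return loss

def compositionality(parent: torch.Tensor, children: torch.Tensor) -> torch.Tensor:
    """Encourage the parent phrase's mask to match the union of its children.

    parent: (H, W); children: (num_children, H, W).
    """
    union = children.max(dim=0).values          # soft union of child masks
    return torch.mean((parent - union) ** 2)    # L2 mismatch

# Toy masks for a parent phrase with two sibling children.
kids = torch.rand(2, 14, 14)
par = torch.rand(14, 14)
print(sibling_complementarity(kids).item(), compositionality(par, kids).item())
```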
1705.01402 | Zhe Chen | Yongshuai Shao and Zhe Chen | Reconstruction of Missing Big Sensor Data | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With ubiquitous sensors continuously monitoring and collecting large amounts
of information, there is no doubt that this is an era of big data. One of the
important sources of scientific big data is the datasets collected by the
Internet of Things (IoT). These datasets are considered to contain highly
useful and
valuable information. For an IoT application to analyze big sensor data, it is
necessary that the data are clean and lossless. However, due to unreliable
wireless links or hardware failures in the nodes, data loss in IoT is very
common. To reconstruct the missing big sensor data, we first propose an
algorithm based on a matrix rank-minimization method. Then, we consider IoT with
multiple types of sensors in each node. Accounting for possible correlations
among multiple-attribute sensor data, we propose tensor-based methods to
estimate missing values. Moreover, effective solutions are proposed using the
alternating direction method of multipliers. Finally, we evaluate the
approaches using two real sensor datasets with two missing-data patterns, i.e.,
a random missing pattern and a consecutive missing pattern. The experiments with
real-world sensor data show the effectiveness of the proposed methods.
| [
{
"version": "v1",
"created": "Wed, 3 May 2017 13:17:49 GMT"
}
] | 2017-05-04T00:00:00 | [
[
"Shao",
"Yongshuai",
""
],
[
"Chen",
"Zhe",
""
]
] | TITLE: Reconstruction of Missing Big Sensor Data
ABSTRACT: With ubiquitous sensors continuously monitoring and collecting large amounts
of information, there is no doubt that this is an era of big data. One of the
important sources of scientific big data is the datasets collected by the
Internet of Things (IoT). These datasets are considered to contain highly
useful and
valuable information. For an IoT application to analyze big sensor data, it is
necessary that the data are clean and lossless. However, due to unreliable
wireless links or hardware failures in the nodes, data loss in IoT is very
common. To reconstruct the missing big sensor data, we first propose an
algorithm based on a matrix rank-minimization method. Then, we consider IoT with
multiple types of sensors in each node. Accounting for possible correlations
among multiple-attribute sensor data, we propose tensor-based methods to
estimate missing values. Moreover, effective solutions are proposed using the
alternating direction method of multipliers. Finally, we evaluate the
approaches using two real sensor datasets with two missing-data patterns, i.e.,
a random missing pattern and a consecutive missing pattern. The experiments with
real-world sensor data show the effectiveness of the proposed methods.
| no_new_dataset | 0.942612 |
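A minimal illustration of the rank-minimization idea is singular value
thresholding (SVT) for nuclear-norm matrix completion, sketched below with
NumPy. This is a generic low-rank completion routine under assumed parameters
(tau, step size, iteration count), not the paper's tensor-based or ADMM
formulation.

```python
import numpy as np

def svt_complete(M, observed, tau=5.0, step=1.2, iters=200):
    """Fill missing entries of a sensor matrix by singular value thresholding.

    M: (time, sensors) matrix with arbitrary values at unobserved entries.
    observed: boolean mask, True where M holds a real reading.
    """
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y = Y + step * observed * (M - X)               # correct observed entries
    return X

# Synthetic low-rank "sensor" data with roughly 40% of readings dropped.
rng = np.random.default_rng(0)
truth = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 20))
mask = rng.random(truth.shape) > 0.4
recovered = svt_complete(truth * mask, mask)
print(np.abs((recovered - truth)[~mask]).mean())  # error on the missing entries
```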
1411.6704 | Bikash Chandra | Bikash Chandra, Bhupesh Chawda, Biplab Kar, K. V. Maheshwara Reddy,
Shetal Shah, S. Sudarshan | Data Generation for Testing and Grading SQL Queries | 34 pages, The final publication is available at Springer via
http://dx.doi.org/10.1007/s00778-015-0395-0 | null | 10.1007/s00778-015-0395-0 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correctness of SQL queries is usually tested by executing the queries on one
or more datasets. Erroneous queries are often the results of small changes, or
mutations of the correct query. A mutation Q' of a query Q is killed by a
dataset D if Q(D) $\neq$ Q'(D). Earlier work on the XData system showed how to
generate datasets that kill all mutations in a class of mutations that included
join type and comparison operation mutations.
In this paper, we extend the XData data generation techniques to handle a
wider variety of SQL queries and a much larger class of mutations. We have also
built a system for grading SQL queries using the datasets generated by XData.
We present a study of the effectiveness of the datasets generated by the
extended XData approach, using a variety of queries including queries submitted
by students as part of a database course. We show that the XData datasets
outperform predefined datasets as well as manual grading done earlier by
teaching assistants, while also avoiding the drudgery of manual correction.
Thus, we believe that our techniques will be of great value to database course
instructors and TAs, particularly to those of MOOCs. It will also be valuable
to database application developers and testers for testing SQL queries.
| [
{
"version": "v1",
"created": "Tue, 25 Nov 2014 02:06:02 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Dec 2014 14:13:18 GMT"
},
{
"version": "v3",
"created": "Wed, 13 May 2015 04:33:40 GMT"
},
{
"version": "v4",
"created": "Mon, 13 Jul 2015 11:57:44 GMT"
},
{
"version": "v5",
"created": "Tue, 2 May 2017 10:46:40 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Chandra",
"Bikash",
""
],
[
"Chawda",
"Bhupesh",
""
],
[
"Kar",
"Biplab",
""
],
[
"Reddy",
"K. V. Maheshwara",
""
],
[
"Shah",
"Shetal",
""
],
[
"Sudarshan",
"S.",
""
]
] | TITLE: Data Generation for Testing and Grading SQL Queries
ABSTRACT: Correctness of SQL queries is usually tested by executing the queries on one
or more datasets. Erroneous queries are often the results of small changes, or
mutations of the correct query. A mutation Q' of a query Q is killed by a
dataset D if Q(D) $\neq$ Q'(D). Earlier work on the XData system showed how to
generate datasets that kill all mutations in a class of mutations that included
join type and comparison operation mutations.
In this paper, we extend the XData data generation techniques to handle a
wider variety of SQL queries and a much larger class of mutations. We have also
built a system for grading SQL queries using the datasets generated by XData.
We present a study of the effectiveness of the datasets generated by the
extended XData approach, using a variety of queries including queries submitted
by students as part of a database course. We show that the XData datasets
outperform predefined datasets as well as manual grading done earlier by
teaching assistants, while also avoiding the drudgery of manual correction.
Thus, we believe that our techniques will be of great value to database course
instructors and TAs, particularly to those of MOOCs. It will also be valuable
to database application developers and testers for testing SQL queries.
| no_new_dataset | 0.888566 |
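The killing criterion Q(D) != Q'(D) is easy to demonstrate concretely. The
sqlite3 snippet below uses an invented two-table schema: a student with no
matching takes row makes an inner-join query and its left-join mutation return
different results, so this small dataset kills the join-type mutation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student(id INTEGER, name TEXT);
    CREATE TABLE takes(student_id INTEGER, course TEXT);
    INSERT INTO student VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO takes VALUES (1, 'DB');  -- Bob takes no course
""")

q_correct = """SELECT name FROM student
               JOIN takes ON student.id = takes.student_id"""
q_mutant  = """SELECT name FROM student
               LEFT JOIN takes ON student.id = takes.student_id"""

r1 = conn.execute(q_correct).fetchall()
r2 = conn.execute(q_mutant).fetchall()
print(r1, r2)               # [('Ann',)] vs [('Ann',), ('Bob',)]
print("killed:", r1 != r2)  # the dataset kills the join-type mutation
```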
1507.06829 | Lisa Posch | Lisa Posch, Arnim Bleier, Philipp Schaer, Markus Strohmaier | The Polylingual Labeled Topic Model | Accepted for publication at KI 2015 (38th edition of the German
Conference on Artificial Intelligence) | null | 10.1007/978-3-319-24489-1_26 | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the Polylingual Labeled Topic Model, a model which
combines the characteristics of the existing Polylingual Topic Model and
Labeled LDA. The model accounts for multiple languages with separate topic
distributions for each language while restricting the permitted topics of a
document to a set of predefined labels. We explore the properties of the model
in a two-language setting on a dataset from the social science domain. Our
experiments show that our model outperforms LDA and Labeled LDA in terms of
their held-out perplexity and that it produces semantically coherent topics
which are well interpretable by human subjects.
| [
{
"version": "v1",
"created": "Fri, 24 Jul 2015 13:01:20 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Posch",
"Lisa",
""
],
[
"Bleier",
"Arnim",
""
],
[
"Schaer",
"Philipp",
""
],
[
"Strohmaier",
"Markus",
""
]
] | TITLE: The Polylingual Labeled Topic Model
ABSTRACT: In this paper, we present the Polylingual Labeled Topic Model, a model which
combines the characteristics of the existing Polylingual Topic Model and
Labeled LDA. The model accounts for multiple languages with separate topic
distributions for each language while restricting the permitted topics of a
document to a set of predefined labels. We explore the properties of the model
in a two-language setting on a dataset from the social science domain. Our
experiments show that our model outperforms LDA and Labeled LDA in terms of
their held-out perplexity and that it produces semantically coherent topics
which are well interpretable by human subjects.
| no_new_dataset | 0.95297 |
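The label restriction at the heart of the model can be sketched as a single
collapsed-Gibbs update in which only the document's label set receives
probability mass, while each language keeps its own topic-word counts. The code
below is a schematic of that one step under assumed hyperparameters, not the
full sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_topic(word_id, doc_labels, doc_topic, word_topic, topic_total,
                 alpha=0.1, beta=0.01):
    """One collapsed-Gibbs draw, restricted to the document's label set.

    Labeled-LDA-style restriction: only topics in doc_labels get nonzero
    probability. word_topic/topic_total are the counts for the language
    of this word, giving each language its own topic-word distribution
    as in the polylingual setting.
    """
    vocab_size = word_topic.shape[0]
    probs = np.zeros(len(topic_total))
    for k in doc_labels:
        probs[k] = (doc_topic[k] + alpha) * (word_topic[word_id, k] + beta) \
                   / (topic_total[k] + vocab_size * beta)
    probs /= probs.sum()
    return rng.choice(len(topic_total), p=probs)

# Toy state: 4 global topics, one language with 10 word types,
# and a document labeled with topics {0, 2} only.
word_topic = rng.integers(0, 5, size=(10, 4)).astype(float)
topic_total = word_topic.sum(axis=0)
doc_topic = np.array([3.0, 0.0, 1.0, 0.0])
print(sample_topic(word_id=7, doc_labels=[0, 2], doc_topic=doc_topic,
                   word_topic=word_topic, topic_total=topic_total))
```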
1603.03183 | Chunhua Shen | Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid | Exploring Context with Deep Structured models for Semantic Segmentation | 16 pages. Accepted to IEEE T. Pattern Analysis & Machine
Intelligence, 2017. Extended version of arXiv:1504.01013 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art semantic image segmentation methods are mostly based on
training deep convolutional neural networks (CNNs). In this work, we propose to
improve semantic segmentation with the use of contextual information. In
particular, we explore `patch-patch' context and `patch-background' context in
deep CNNs. We formulate deep structured models by combining CNNs and
Conditional Random Fields (CRFs) for learning the patch-patch context between
image regions. Specifically, we formulate CNN-based pairwise potential
functions to capture semantic correlations between neighboring patches.
Efficient piecewise training of the proposed deep structured model is then
applied in order to avoid repeated expensive CRF inference during the course of
back propagation. For capturing the patch-background context, we show that a
network design with traditional multi-scale image inputs and sliding pyramid
pooling is very effective for improving performance. We perform comprehensive
evaluation of the proposed method. We achieve new state-of-the-art performance
on a number of challenging semantic segmentation datasets including $NYUDv2$,
$PASCAL$-$VOC2012$, $Cityscapes$, $PASCAL$-$Context$, $SUN$-$RGBD$,
$SIFT$-$flow$, and $KITTI$ datasets. Particularly, we report an
intersection-over-union score of $77.8$ on the $PASCAL$-$VOC2012$ dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Mar 2016 08:34:19 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Mar 2016 12:24:30 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2017 08:06:42 GMT"
}
] | 2017-05-03T00:00:00 | [
[
"Lin",
"Guosheng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Exploring Context with Deep Structured models for Semantic Segmentation
ABSTRACT: State-of-the-art semantic image segmentation methods are mostly based on
training deep convolutional neural networks (CNNs). In this work, we propose to
improve semantic segmentation with the use of contextual information. In
particular, we explore `patch-patch' context and `patch-background' context in
deep CNNs. We formulate deep structured models by combining CNNs and
Conditional Random Fields (CRFs) for learning the patch-patch context between
image regions. Specifically, we formulate CNN-based pairwise potential
functions to capture semantic correlations between neighboring patches.
Efficient piecewise training of the proposed deep structured model is then
applied in order to avoid repeated expensive CRF inference during the course of
back propagation. For capturing the patch-background context, we show that a
network design with traditional multi-scale image inputs and sliding pyramid
pooling is very effective for improving performance. We perform comprehensive
evaluation of the proposed method. We achieve new state-of-the-art performance
on a number of challenging semantic segmentation datasets including $NYUDv2$,
$PASCAL$-$VOC2012$, $Cityscapes$, $PASCAL$-$Context$, $SUN$-$RGBD$,
$SIFT$-$flow$, and $KITTI$ datasets. Particularly, we report an
intersection-over-union score of $77.8$ on the $PASCAL$-$VOC2012$ dataset.
| no_new_dataset | 0.947914 |
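The patch-background component, sliding pyramid pooling, amounts to stride-1
average pooling at several window sizes concatenated back onto the feature map.
The sketch below assumes odd window sizes chosen for illustration; the paper's
actual window configuration may differ.

```python
import torch
import torch.nn.functional as F

def sliding_pyramid_pool(feat: torch.Tensor, windows=(5, 9, 13)) -> torch.Tensor:
    """Concatenate multi-window average pooling to a CNN feature map.

    feat: (batch, channels, H, W). Each odd window size is applied with
    stride 1 and "same" padding, so every spatial position receives
    background context at several scales.
    """
    pooled = [feat]
    for k in windows:
        pooled.append(F.avg_pool2d(feat, kernel_size=k, stride=1, padding=k // 2))
    return torch.cat(pooled, dim=1)

x = torch.randn(1, 64, 32, 32)
y = sliding_pyramid_pool(x)
print(y.shape)  # torch.Size([1, 256, 32, 32]) -- 64 * (1 + 3 windows)
```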