| id (stringlengths 9–10) | submitter (stringlengths 2–52, ⌀) | authors (stringlengths 4–6.51k) | title (stringlengths 4–246) | comments (stringlengths 1–523, ⌀) | journal-ref (stringlengths 4–345, ⌀) | doi (stringlengths 11–120, ⌀) | report-no (stringlengths 2–243, ⌀) | categories (stringlengths 5–98) | license (stringclasses 9 values) | abstract (stringlengths 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses 1 value) | probability (float64 0.95–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1806.04010
|
Max Frei
|
Max Frei and Frank Einar Kruis
|
Fully automated primary particle size analysis of agglomerates on
transmission electron microscopy images via artificial neural networks
| null |
Powder Technology, Volume 332, 2018, Pages 120-130
|
10.1016/j.powtec.2018.03.032
| null |
cs.CV cond-mat.mtrl-sci physics.ins-det
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a high demand for fully automated methods for the analysis of
primary particle size distributions of agglomerates on transmission electron
microscopy images. Therefore, a novel method based on artificial neural
networks was proposed, implemented and validated. The
training of the artificial neural networks requires large quantities (up to
several hundreds of thousands) of transmission electron microscopy images of
agglomerates consisting of primary particles with known sizes. Since the manual
evaluation of such large amounts of transmission electron microscopy images is
not feasible, a synthesis of lifelike transmission electron microscopy images
as training data was implemented. The proposed method can compete with
state-of-the-art automated imaging particle size methods like the Hough
transformation, ultimate erosion and watershed transformation and is in some
cases even able to outperform these methods. It is however still outperformed
by the manual analysis.
|
[
{
"version": "v1",
"created": "Fri, 8 Jun 2018 13:11:09 GMT"
}
] | 2018-06-12T00:00:00 |
[
[
"Frei",
"Max",
""
],
[
"Kruis",
"Frank Einar",
""
]
] |
new_dataset
| 0.995509 |
1806.04051
|
Dakai Jin
|
Dakai Jin and Ziyue Xu and Youbao Tang and Adam P. Harrison and Daniel
J. Mollura
|
CT-Realistic Lung Nodule Simulation from 3D Conditional Generative
Adversarial Networks for Robust Lung Segmentation
|
MICCAI 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data availability plays a critical role for the performance of deep learning
systems. This challenge is especially acute within the medical image domain,
particularly when pathologies are involved, due to two factors: 1) limited
number of cases, and 2) large variations in location, scale, and appearance. In
this work, we investigate whether augmenting a dataset with artificially
generated lung nodules can improve the robustness of the progressive
holistically nested network (P-HNN) model for pathological lung segmentation of
CT scans. To achieve this goal, we develop a 3D generative adversarial network
(GAN) that effectively learns lung nodule property distributions in 3D space.
In order to embed the nodules within their background context, we condition the
GAN based on a volume of interest whose central part containing the nodule has
been erased. To further improve realism and blending with the background, we
propose a novel multi-mask reconstruction loss. We train our method on over
1000 nodules from the LIDC dataset. Qualitative results demonstrate the
effectiveness of our method compared to the state of the art. We then use our GAN
to generate simulated training images where nodules lie on the lung border,
which are cases where the published P-HNN model struggles. Qualitative and
quantitative results demonstrate that armed with these simulated images, the
P-HNN model learns to better segment lung regions under these challenging
situations. As a result, our system provides a promising means to help overcome
the data paucity that commonly afflicts medical imaging.
|
[
{
"version": "v1",
"created": "Mon, 11 Jun 2018 15:19:36 GMT"
}
] | 2018-06-12T00:00:00 |
[
[
"Jin",
"Dakai",
""
],
[
"Xu",
"Ziyue",
""
],
[
"Tang",
"Youbao",
""
],
[
"Harrison",
"Adam P.",
""
],
[
"Mollura",
"Daniel J.",
""
]
] |
new_dataset
| 0.988598 |
1703.00154
|
Arno Solin
|
Arno Solin, Santiago Cortes, Esa Rahtu, Juho Kannala
|
Inertial Odometry on Handheld Smartphones
|
Appearing in Proceedings of the International Conference on
Information Fusion (FUSION 2018)
| null | null | null |
cs.CV stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building a complete inertial navigation system using the limited quality data
provided by current smartphones has been regarded as challenging, if not
impossible. This paper shows that by careful crafting and accounting for the
weak information in the sensor samples, smartphones are capable of pure
inertial navigation. We present a probabilistic approach for orientation and
use-case free inertial odometry, which is based on double-integrating rotated
accelerations. The strength of the model is in learning additive and
multiplicative IMU biases online. We are able to track the phone position,
velocity, and pose in real-time and in a computationally lightweight fashion by
solving the inference with an extended Kalman filter. The information fusion is
completed with zero-velocity updates (if the phone remains stationary),
altitude correction from barometric pressure readings (if available), and
pseudo-updates constraining the momentary speed. We demonstrate our approach
using an iPad and iPhone in several indoor dead-reckoning applications and in a
measurement tool setup.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2017 07:00:01 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2018 20:40:20 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Solin",
"Arno",
""
],
[
"Cortes",
"Santiago",
""
],
[
"Rahtu",
"Esa",
""
],
[
"Kannala",
"Juho",
""
]
] |
new_dataset
| 0.991447 |
1712.03942
|
Michael Tschannen
|
Michael Tschannen, Aran Khanna, Anima Anandkumar
|
StrassenNets: Deep Learning with a Multiplication Budget
|
ICML 2018. Code available at https://github.com/mitscha/strassennets
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A large fraction of the arithmetic operations required to evaluate deep
neural networks (DNNs) consists of matrix multiplications, in both convolution
and fully connected layers. We perform end-to-end learning of low-cost
approximations of matrix multiplications in DNN layers by casting matrix
multiplications as 2-layer sum-product networks (SPNs) (arithmetic circuits)
and learning their (ternary) edge weights from data. The SPNs disentangle
multiplication and addition operations and enable us to impose a budget on the
number of multiplication operations. Combining our method with knowledge
distillation and applying it to image classification DNNs (trained on ImageNet)
and language modeling DNNs (using LSTMs), we obtain a first-of-a-kind reduction
in the number of multiplications (over 99.5%) while maintaining the predictive
performance of the full-precision models. Finally, we demonstrate that the
proposed framework is able to rediscover Strassen's matrix multiplication
algorithm, learning to multiply $2 \times 2$ matrices using only 7
multiplications instead of 8.
|
[
{
"version": "v1",
"created": "Mon, 11 Dec 2017 18:49:07 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Feb 2018 12:59:10 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jun 2018 10:59:23 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Tschannen",
"Michael",
""
],
[
"Khanna",
"Aran",
""
],
[
"Anandkumar",
"Anima",
""
]
] |
new_dataset
| 0.993595 |
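The 7-multiplication identity mentioned in the abstract above is Strassen's classical 2x2 scheme. The sketch below is not taken from the paper or its code release; it simply spells out that classical construction in plain Python for reference.

```python
# Classical Strassen multiplication of two 2x2 matrices using 7 scalar
# multiplications instead of 8 (the identity the StrassenNets abstract says
# its learned SPNs can rediscover). Plain Python lists, no dependencies.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine the 7 products with additions only.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

if __name__ == "__main__":
    print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```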
1806.02851
|
Krzysztof Fleszar
|
Timothy M. Chan, Thomas C. van Dijk, Krzysztof Fleszar, Joachim
Spoerhase, Alexander Wolff
|
Stabbing Rectangles by Line Segments - How Decomposition Reduces the
Shallow-Cell Complexity
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We initiate the study of the following natural geometric optimization
problem. The input is a set of axis-aligned rectangles in the plane. The
objective is to find a set of horizontal line segments of minimum total length
so that every rectangle is stabbed by some line segment. A line segment stabs a
rectangle if it intersects its left and its right boundary. The problem, which
we call Stabbing, can be motivated by a resource allocation problem and has
applications in geometric network design. To the best of our knowledge, only
special cases of this problem have been considered so far.
Stabbing is a weighted geometric set cover problem, which we show to be
NP-hard. A constrained variant of Stabbing turns out to be even APX-hard. While
for general set cover the best possible approximation ratio is $\Theta(\log
n)$, it is an important field in geometric approximation algorithms to obtain
better ratios for geometric set cover problems. Chan et al. [SODA'12]
generalize earlier results by Varadarajan [STOC'10] to obtain sub-logarithmic
performances for a broad class of weighted geometric set cover instances that
are characterized by having low shallow-cell complexity. The shallow-cell
complexity of Stabbing instances, however, can be high so that a direct
application of the framework of Chan et al. gives only logarithmic bounds. We
still achieve a constant-factor approximation by decomposing general instances
into what we call laminar instances that have low enough complexity.
Our decomposition technique yields constant-factor approximations also for
the variant where rectangles can be stabbed by horizontal and vertical segments
and for two further geometric set cover problems.
|
[
{
"version": "v1",
"created": "Thu, 7 Jun 2018 18:20:53 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Chan",
"Timothy M.",
""
],
[
"van Dijk",
"Thomas C.",
""
],
[
"Fleszar",
"Krzysztof",
""
],
[
"Spoerhase",
"Joachim",
""
],
[
"Wolff",
"Alexander",
""
]
] |
new_dataset
| 0.984522 |
1806.02918
|
Maria Shugrina
|
Maria Shugrina, Amlan Kar, Karan Singh, Sanja Fidler
|
Color Sails: Discrete-Continuous Palettes for Deep Color Exploration
|
14 pages, 13 figures
| null | null | null |
cs.GR cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present color sails, a discrete-continuous color gamut representation that
extends the color gradient analogy to three dimensions and allows interactive
control of the color blending behavior. Our representation models a wide
variety of color distributions in a compact manner, and lends itself to
applications such as color exploration for graphic design, illustration and
similar fields. We propose a Neural Network that can fit a color sail to any
image. Then, the user can adjust color sail parameters to change the base
colors, their blending behavior and the number of colors, exploring a wide
range of options for the original design. In addition, we propose a Deep
Learning model that learns to automatically segment an image into
color-compatible alpha masks, each equipped with its own color sail. This
allows targeted color exploration by either editing their corresponding color
sails or using standard software packages. Our model is trained on a custom
diverse dataset of art and design. We provide both quantitative evaluations,
and a user study, demonstrating the effectiveness of color sail interaction.
Interactive demos are available at www.colorsails.com.
|
[
{
"version": "v1",
"created": "Thu, 7 Jun 2018 22:42:00 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Shugrina",
"Maria",
""
],
[
"Kar",
"Amlan",
""
],
[
"Singh",
"Karan",
""
],
[
"Fidler",
"Sanja",
""
]
] |
new_dataset
| 0.993647 |
1806.02951
|
Minjia Shi
|
Minjia Shi, Hongwei Zhu, Liqin Qian, Lin Sok, Patrick Sol\'e
|
On self-dual and LCD double circulant and double negacirculant codes
over $\mathbb{F}_q + u\mathbb{F}_q$
|
20 pages, submitted on 26 November, 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Double circulant codes of length $2n$ over the semilocal ring $R =
\mathbb{F}_q + u\mathbb{F}_q,\, u^2=u,$ are studied when $q$ is an odd prime
power, and $-1$ is a square in $\mathbb{F}_q.$ Double negacirculant codes of
length $2n$ are studied over $R$ when $n$ is even and $q$ is an odd prime
power. Exact enumeration of self-dual and LCD such codes for given length $2n$
is given. Employing a duality-preserving Gray map, self-dual and LCD codes of
length $4n$ over $\mathbb{F}_q$ are constructed. Using random coding and the
Artin conjecture, the relative distance of these codes is bounded below. The
parameters of examples of modest length are computed. Several such codes
are optimal.
|
[
{
"version": "v1",
"created": "Fri, 8 Jun 2018 02:48:18 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Zhu",
"Hongwei",
""
],
[
"Qian",
"Liqin",
""
],
[
"Sok",
"Lin",
""
],
[
"Solé",
"Patrick",
""
]
] |
new_dataset
| 0.999436 |
1806.02974
|
Ram Prakash Sharma Mr.
|
Ram Prakash Sharma and Somnath Dey
|
Fingerprint liveness detection using local quality features
|
21 pages, 11 figures, 7 Tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fingerprint-based recognition has been widely deployed in various
applications. However, current recognition systems are vulnerable to spoofing
attacks which make use of an artificial replica of a fingerprint to deceive the
sensors. In such scenarios, fingerprint liveness detection ensures the actual
presence of a real legitimate fingerprint in contrast to a fake
self-manufactured synthetic sample. In this paper, we propose a static
software-based approach using quality features to detect the liveness in a
fingerprint. We have extracted features from a single fingerprint image to
overcome the issues faced in dynamic software-based approaches which require
longer computational time and user cooperation. The proposed system extracts 8
sensor independent quality features on a local level containing minute details
of the ridge-valley structure of real and fake fingerprints. These local
quality features constitute a 13-dimensional feature vector. The system is
tested on a publicly available dataset of the LivDet 2009 competition. The
experimental results show the superiority of the proposed method over current
state-of-the-art approaches, providing the lowest average classification error of
5.3% for LivDet 2009. Additionally, the effectiveness of the best performing
features over LivDet 2009 is evaluated on the latest LivDet 2015 dataset, which
contains fingerprints fabricated using unknown spoof materials. An average
classification error rate of 4.22% is achieved in comparison with 4.49%
obtained by the LivDet 2015 winner. Further, the proposed system utilizes a
single fingerprint image, which results in faster implications and makes it
more user-friendly.
|
[
{
"version": "v1",
"created": "Fri, 8 Jun 2018 05:48:10 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Sharma",
"Ram Prakash",
""
],
[
"Dey",
"Somnath",
""
]
] |
new_dataset
| 0.969371 |
1806.03086
|
Damien Jacques
|
Damien C. Jacques
|
Mobile Phone Metadata for Development
|
28 pages, 8 figures, 1 table. Published in Ph.D. Thesis: Jacques,
D.C. Harnessing the Data Revolution for Food Security and Poverty Mapping:
synergies between Mobile Phone Data, Earth Observation and Official
Statistics in Senegal. May 2018
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile phones are now widely adopted by most of the world population. Each
time a call is made (or an SMS sent), a Call Detail Record (CDR) is generated
by the telecom companies for billing purposes. These metadata provide
information on when, how, from where and with whom we communicate.
Conceptually, they can be described as a geospatial, dynamic, weighted and
directed network. Applications of CDRs for development are numerous. They have
been used to model the spread of infectious diseases, study road traffic,
support electrification planning strategies or map socio-economic level of
population. While massive, CDRs are not statistically representative of the
whole population due to several sources of bias (market, usage, spatial and
temporal resolution). Furthermore, mobile phone metadata are held by telecom
companies. Consequently, their access is not necessarily straightforward and
can seriously hamper any operational application. Finally, a trade-off exists
between privacy and utility when using sensitive data like CDRs. New
initiatives such as Open Algorithm might help to deal with these fundamental
questions by allowing researchers to run algorithms on the data that remain
safely stored behind the firewall of the providers.
|
[
{
"version": "v1",
"created": "Fri, 8 Jun 2018 11:06:39 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Jacques",
"Damien C.",
""
]
] |
new_dataset
| 0.999304 |
1806.03108
|
Amir Kafshdar Goharshady
|
Krishnendu Chatterjee, Amir Kafshdar Goharshady, Rasmus Ibsen-Jensen,
Yaron Velner
|
Ergodic Mean-Payoff Games for the Analysis of Attacks in
Crypto-Currencies
|
Accepted to CONCUR 2018
| null | null | null |
cs.CR cs.GT cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crypto-currencies are digital assets designed to work as a medium of
exchange, e.g., Bitcoin, but they are susceptible to attacks (dishonest
behavior of participants). A framework for the analysis of attacks in
crypto-currencies requires (a) modeling of game-theoretic aspects to analyze
incentives for deviation from honest behavior; (b) concurrent interactions
between participants; and (c) analysis of long-term monetary gains. Traditional
game-theoretic approaches for the analysis of security protocols consider
either qualitative temporal properties such as safety and termination, or the
very special class of one-shot (stateless) games. However, to analyze general
attacks on protocols for crypto-currencies, both stateful analysis and
quantitative objectives are necessary. In this work our main contributions are
as follows: (a) we show how a class of concurrent mean-payoff games, namely
ergodic games, can model various attacks that arise naturally in
crypto-currencies; (b) we present the first practical implementation of
algorithms for ergodic games that scales to model realistic problems for
crypto-currencies; and (c) we present experimental results showing that our
framework can handle games with thousands of states and millions of
transitions.
|
[
{
"version": "v1",
"created": "Fri, 8 Jun 2018 12:12:27 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Chatterjee",
"Krishnendu",
""
],
[
"Goharshady",
"Amir Kafshdar",
""
],
[
"Ibsen-Jensen",
"Rasmus",
""
],
[
"Velner",
"Yaron",
""
]
] |
new_dataset
| 0.997502 |
1806.03223
|
Debanjan Ghosh
|
Elena Musi, Debanjan Ghosh, Smaranda Muresan
|
ChangeMyView Through Concessions: Do Concessions Increase Persuasion?
|
Dialogue and Discourse journal 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In discourse studies concessions are considered among those argumentative
strategies that increase persuasion. We aim to empirically test this hypothesis
by calculating the distribution of argumentative concessions in persuasive vs.
non-persuasive comments from the ChangeMyView subreddit. This constitutes a
challenging task since concessions are not always part of an argument. Drawing
from a theoretically-informed typology of concessions, we conduct an annotation
task to label a set of polysemous lexical markers as introducing an
argumentative concession or not and we observe their distribution in threads
that achieved and did not achieve persuasion. For the annotation, we used both
expert and novice annotators. With the ultimate goal of conducting the study on
large datasets, we present a self-training method to automatically identify
argumentative concessions using linguistically motivated features. We achieve a
moderate F1 of 57.4% on the development set and 46.0% on the test set via the
self-training method. These results are comparable to state of the art results
on similar tasks of identifying explicit discourse connective types from the
Penn Discourse Treebank. Our findings from the manual labeling and the
classification experiments indicate that the type of argumentative concessions
we investigated is almost equally likely to be used in winning and losing
arguments from the ChangeMyView dataset. While this result seems to contradict
theoretical assumptions, we provide some reasons for this discrepancy related
to the ChangeMyView subreddit.
|
[
{
"version": "v1",
"created": "Fri, 8 Jun 2018 15:38:04 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Musi",
"Elena",
""
],
[
"Ghosh",
"Debanjan",
""
],
[
"Muresan",
"Smaranda",
""
]
] |
new_dataset
| 0.999286 |
1806.03236
|
Mohammad Asadul Hoque
|
Noah Carter, Mohammad A. Hoque, Md Salman Ahmed
|
Simulating Vehicle Movement and Multi-Hop Connectivity from Basic Safety
Messages
| null |
Noah Carter, Mohammad A. Hoque, Md Salman Ahmed. "Simulating
Vehicle Movement and Multi-Hop Connectivity from Basic Safety Messages." IEEE
SoutheastCon 2018
| null | null |
cs.CY cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The Basic Safety Message (BSM) is a standardized communication packet that is
sent every tenth of a second between connected vehicles using Dedicated Short
Range Communication (DSRC). BSMs contain data about the sending vehicle's
state, such as speed, location, and the status of the turn signal. Currently,
many BSM datasets are available through the connected vehicle testbeds of U.S.
Department of Transportation from all over the country. However, without a
proper visualization tool, it is not possible to analyze or visually get an
overview of the spatio-temporal distribution of the data. With this goal, a web
application has been developed which can ingest a raw BSM dataset and display a
time-based simulation of vehicle movement. The simulation also displays
multi-hop vehicular network connectivity over DSRC. This paper gives details
about the application, including an explanation of the multi-hop partitioning
algorithm used to classify the vehicles into separate network partitions. A
performance analysis for the simulation is included, in which it is suggested
that calculating a connectivity matrix with the multi-hop partitioning
algorithm is computationally expensive for a large number of vehicles.
|
[
{
"version": "v1",
"created": "Sat, 5 May 2018 14:12:18 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Carter",
"Noah",
""
],
[
"Hoque",
"Mohammad A.",
""
],
[
"Ahmed",
"Md Salman",
""
]
] |
new_dataset
| 0.998813 |
1806.03243
|
Viktoras Veitas Mr.
|
Viktoras Kabir Veitas and Simon Delaere
|
In-vehicle data recording, storage and access management in autonomous
vehicles
|
Unpublished draft: 21 pages (with references); 8 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transport sector is in the process of being rapidly and fundamentally
reshaped by autonomous and collaborative driving technologies. This reshaping
promises huge economic and social benefits as well as challenges in terms of
developing and deploying secure and safe transportations systems, their smooth
integration to social fabric. We have employed Policy Scan and Technology
Strategy Design methodology in order to identify concrete societal expectations
and problems and map them with mitigating technological availabilities in the
domain of autonomous driving and smart mobility.
Event Data Recorder for Autonomous Driving (EDR/AD) is an envisioned
subsystem of a vehicular Controller Area Network which ensures the
confidentiality, integrity and availability of data related to operation of a
vehicle in order to permit recovery of exact situation following the occurrence
of an event or on demand. The exact technical and regulatory requirements for
the device are still under development internationally, but it is clear that
it will be included into vehicle type-approval requirements at UNECE level. We
present an analysis of the context of the usage of the EDR/AD in collaborative
intelligent transport systems, related security, data provenance and privacy,
other regulatory and technical issues considering many interest groups and
stakeholders involved. We present a concrete proposal for developing a EDR/AD
proof of the concept prototype with clear market deployment potential and urge
security researchers, vehicle manufacturers, and component suppliers to form a
collaboration towards implementing important technology for making future
autonomous vehicles more socially acceptable and legally compliant.
Furthermore, EDR/AD technology, apart from its immediate use in autonomous
driving and smart mobility domain has a potential to be extended to general
autonomous robot and AI applications.
|
[
{
"version": "v1",
"created": "Mon, 28 May 2018 17:46:54 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Veitas",
"Viktoras Kabir",
""
],
[
"Delaere",
"Simon",
""
]
] |
new_dataset
| 0.989975 |
1806.03246
|
Vicky Charisi
|
Ornella Mich, Roberto Tiella
|
ROBOTIKANDO: a Web Tool for Supporting Teacher Practicing Robotics in
Kindergarten
|
Presented at Interaction Design and Children (IDC-CRI2018) Workshop
(arXiv:submit/2277826)
| null | null |
IDC-CRI/2018/02
|
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes ROBOTIKANDO, a web application for supporting both
kindergarten teachers in planning educational robotics activities and
educational robotics experts who would like to share their knowledge and
experience. ROBOTIKANDO has been designed and implemented following a co-design
process, which devised a conceptual map aiming to connect educational robotics
and kindergarten education principles. As future work, we are planning a
longitudinal evaluation with preschool teachers. Moreover, we are thinking of
extending the application to teachers of primary and secondary schools.
|
[
{
"version": "v1",
"created": "Tue, 29 May 2018 17:14:09 GMT"
}
] | 2018-06-11T00:00:00 |
[
[
"Mich",
"Ornella",
""
],
[
"Tiella",
"Roberto",
""
]
] |
new_dataset
| 0.998611 |
1608.08474
|
Remi Chou
|
Remi A. Chou and Matthieu Bloch and Joerg Kliewer
|
Empirical and Strong Coordination via Soft Covering with Polar Codes
|
14 pages, two-column, 5 figures, accepted to IEEE Transactions on
Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We design polar codes for empirical coordination and strong coordination in
two-node networks. Our constructions hinge on the fact that polar codes enable
explicit low-complexity schemes for soft covering. We leverage this property to
propose explicit and low-complexity coding schemes that achieve the capacity
regions of both empirical coordination and strong coordination for sequences of
actions taking value in an alphabet of prime cardinality. Our results improve
previously known polar coding schemes, which (i) were restricted to uniform
distributions and to actions obtained via binary symmetric channels for strong
coordination, (ii) required a non-negligible amount of common randomness for
empirical coordination, and (iii) assumed that the simulation of discrete
memoryless channels could be perfectly implemented. As a by-product of our
results, we obtain a polar coding scheme that achieves channel resolvability
for an arbitrary discrete memoryless channel whose input alphabet has prime
cardinality.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2016 14:34:40 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Mar 2018 04:59:08 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jun 2018 23:06:06 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Chou",
"Remi A.",
""
],
[
"Bloch",
"Matthieu",
""
],
[
"Kliewer",
"Joerg",
""
]
] |
new_dataset
| 0.994688 |
1611.08198
|
Travis Gagie
|
Felipe A. Louza, Travis Gagie and Guilherme P. Telles
|
Burrows-Wheeler transform and LCP array construction in constant space
|
Accepted to JDA
|
Journal of Discrete Algorithms, 42 (2017) 14-22
|
10.1016/j.jda.2016.11.003
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article we extend the elegant in-place Burrows-Wheeler transform
(BWT) algorithm proposed by Crochemore et al. (Crochemore et al., 2015). Our
extension is twofold: we first show how to compute simultaneously the longest
common prefix (LCP) array as well as the BWT, using constant additional space;
we then show how to build the LCP array directly in compressed representation
using Elias coding, still using constant additional space and with no
asymptotic slowdown. Furthermore, we provide a time/space tradeoff for our
algorithm when additional memory is allowed. Our algorithm runs in quadratic
time, as does Crochemore et al.'s, and is supported by interesting properties
of the BWT and of the LCP array, contributing to our understanding of the
time/space tradeoff curve for building indexing structures.
|
[
{
"version": "v1",
"created": "Thu, 24 Nov 2016 14:43:57 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Louza",
"Felipe A.",
""
],
[
"Gagie",
"Travis",
""
],
[
"Telles",
"Guilherme P.",
""
]
] |
new_dataset
| 0.986789 |
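For readers unfamiliar with the two objects this paper constructs, the following naive Python sketch builds the BWT and LCP array of a string via explicit suffix sorting. It is purely illustrative and bears no relation to the constant-space, in-place algorithm described in the abstract.

```python
# Naive illustration of the BWT and LCP array (the outputs the paper builds
# in constant extra space). This version uses quadratic time and linear
# extra space and is NOT the paper's algorithm.
def bwt_and_lcp(text, sentinel="$"):
    s = text + sentinel                                  # sentinel sorts before all characters
    sa = sorted(range(len(s)), key=lambda i: s[i:])      # suffix array by brute force
    bwt = "".join(s[i - 1] for i in sa)                  # char preceding each sorted suffix
    lcp = [0] * len(sa)                                  # LCP of consecutive sorted suffixes
    for k in range(1, len(sa)):
        a, b = s[sa[k - 1]:], s[sa[k]:]
        n = 0
        while n < min(len(a), len(b)) and a[n] == b[n]:
            n += 1
        lcp[k] = n
    return bwt, lcp

if __name__ == "__main__":
    print(bwt_and_lcp("banana"))  # ('annb$aa', [0, 0, 1, 3, 0, 0, 2])
```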
1706.01580
|
Hasnain Vohra
|
Hasnain Vohra, Maxim Bazik, Matthew Antone, Joseph Mundy and William
Stephenson
|
Global-Local Airborne Mapping (GLAM): Reconstructing a City from Aerial
Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular visual SLAM has become an attractive practical approach for robot
localization and 3D environment mapping, since cameras are small, lightweight,
inexpensive, and produce high-rate, high-resolution data streams. Although
numerous robust tools have been developed, most existing systems are designed
to operate in terrestrial environments and at relatively small scale (a few
thousand frames) due to constraints on computation and storage.
In this paper, we present a feature-based visual SLAM system for aerial video
whose simple design permits near real-time operation, and whose scalability
permits large-area mapping using tens of thousands of frames, all on a single
conventional computer. Our approach consists of two parallel threads: the first
incrementally creates small locally consistent submaps and estimates camera
poses at video rate; the second aligns these submaps with one another to
produce a single globally consistent map via factor graph optimization over
both poses and landmarks. Scale drift is minimized through the use of
7-degree-of-freedom similarity transformations during submap alignment.
We quantify our system's performance on both simulated and real data sets,
and demonstrate city-scale map reconstruction accurate to within 2 meters using
nearly 90,000 aerial video frames - to our knowledge, the largest and fastest
such reconstruction to date.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2017 01:54:27 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2018 05:39:15 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Vohra",
"Hasnain",
""
],
[
"Bazik",
"Maxim",
""
],
[
"Antone",
"Matthew",
""
],
[
"Mundy",
"Joseph",
""
],
[
"Stephenson",
"William",
""
]
] |
new_dataset
| 0.952055 |
1710.00381
|
Mike Borowczak
|
Mike Borowczak and George Purdy
|
S-CHIRP: Secure Communication for Heterogeneous IoTs with Round-Robin
Protection
|
6 Pages, 4 figures, 2 tables, IEEE International Conference on
Consumer Electronics
| null |
10.1109/ICCE.2018.8326301
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work introduces CHIRP - an algorithm for communication between
ultra-portable heterogeneous IoT devices with a type of round-robin protection
mechanism. This algorithm is presented both in its basic form as well as in a
secured form in order to secure and maintain trust boundaries and communication
within specific groups of heterogeneous devices. The target
application scenarios include resource-constrained environments where a
co-located swarm of devices (adversarial in mission or objective) is also
present. CHIRP, and its secured version (S-CHIRP), enables complete
peer-to-peer communication of an $n$-agent network of devices in as few as $n$
rounds. In addition to the $n$-round cycle length, the proposed communication
mechanism has the following major properties: node communication is entirely
decentralized, communication is resilient to the loss of nodes, and
communication is resilient to the (re)-entry of nodes. Theoretical models show
that even the secure implementation of this mechanism is capable of scaling to
IoT swarms in the million-device range with memory constraints in the < 10 MB
range.
|
[
{
"version": "v1",
"created": "Sun, 1 Oct 2017 17:47:31 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Borowczak",
"Mike",
""
],
[
"Purdy",
"George",
""
]
] |
new_dataset
| 0.993371 |
1710.02025
|
Mike Borowczak
|
Adrian Barberis, Danny Radosevich, Wyatt Emery, and Mike Borowczak
|
Portable Tor Router: Easily Enabling Web Privacy for Consumers
|
6 pages, 5 figures, IEEE ICCE Conference
| null |
10.1109/ICCE.2018.8326333
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On-line privacy is of major public concern. Unfortunately, for the average
consumer, there is no simple mechanism to browse the Internet privately on
multiple devices. Most available Internet privacy mechanisms are either
expensive, not readily available, untrusted, or simply provide trivial
information masking. We propose that the simplest, most effective and
inexpensive way of gaining privacy, without sacrificing unnecessary amounts of
functionality and speed, is to mask the user's IP address while also encrypting
all data. We hypothesized that the Tor protocol is aptly suited to address
these needs. With this in mind we implemented a Tor router using a single board
computer and the open-source Tor protocol code. We found that our proposed
solution was able to meet five of our six goals soon after its implementation:
cost effectiveness, immediacy of privacy, simplicity of use, ease of execution,
and unimpaired functionality. Our final criterion of speed was sacrificed for
greater privacy but it did not fall so low as to impair day-to-day
functionality. With a total cost of roughly $100.00 USD and a speed cap of
around 2 Megabits per second we were able to meet our goal of an affordable,
convenient, and usable solution to increased on-line privacy for the average
consumer.
|
[
{
"version": "v1",
"created": "Thu, 5 Oct 2017 13:55:23 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Barberis",
"Adrian",
""
],
[
"Radosevich",
"Danny",
""
],
[
"Emery",
"Wyatt",
""
],
[
"Borowczak",
"Mike",
""
]
] |
new_dataset
| 0.99403 |
1804.02556
|
Thomas Debris-Alazard
|
Thomas Debris-Alazard and Jean-Pierre Tillich
|
Two attacks on rank metric code-based schemes: RankSign and an
Identity-Based-Encryption scheme
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RankSign [GRSZ14a] is a code-based signature scheme proposed to the NIST
competition for quantum-safe cryptography [AGHRZ17] and, moreover, is a
fundamental building block of a new Identity-Based-Encryption (IBE) [GHPT17a].
This signature scheme is based on the rank metric and enjoys remarkably small
key sizes, about 10KBytes for an intended level of security of 128 bits.
Unfortunately we will show that all the parameters proposed for this scheme in
[AGHRZ17] can be broken by an algebraic attack that exploits the fact that the
augmented LRPC codes used in this scheme have very low weight codewords.
Therefore, without RankSign the IBE cannot be instantiated at this time. As a
second contribution we will show that the problem is deeper than finding a new
signature scheme in rank-based cryptography: we also found an attack on the generic
problem upon which its security reduction relies. However, contrary to the
RankSign scheme, it seems that the parameters of the IBE scheme could be chosen
in order to avoid our attack. Finally, we have also shown that if one replaces
the rank metric in the [GHPT17a] IBE scheme by the Hamming metric, then a
devastating attack can be found.
|
[
{
"version": "v1",
"created": "Sat, 7 Apr 2018 13:05:43 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2018 07:31:55 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Debris-Alazard",
"Thomas",
""
],
[
"Tillich",
"Jean-Pierre",
""
]
] |
new_dataset
| 0.997618 |
1804.07893
|
Kumiko Tanaka-Ishii
|
Tatsuru Kobayashi, Kumiko Tanaka-Ishii
|
Taylor's law for Human Linguistic Sequences
|
11 pages, 16 figures, Accepted as ACL 2018 long paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Taylor's law describes the fluctuation characteristics underlying a system in
which the variance of an event within a time span grows by a power law with
respect to the mean. Although Taylor's law has been applied in many natural and
social systems, its application to language has been scarce. This article
describes a new quantification of Taylor's law in natural language and reports
an analysis of over 1100 texts across 14 languages. The Taylor exponents of
written natural language texts were found to exhibit almost the same value. The
exponent was also compared for other language-related data, such as
child-directed speech, music, and programming language code. The results show
how the Taylor exponent serves to quantify the fundamental structural
complexity underlying linguistic time series. The article also shows the
applicability of these findings in evaluating language models.
|
[
{
"version": "v1",
"created": "Sat, 21 Apr 2018 05:24:10 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2018 15:18:20 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Kobayashi",
"Tatsuru",
""
],
[
"Tanaka-Ishii",
"Kumiko",
""
]
] |
new_dataset
| 0.999052 |
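As background for the abstract above, the classical statement of Taylor's law can be written as below. This is only the generic form; the paper's exact estimator and windowing scheme are not reproduced here, and some linguistic studies fit the standard deviation rather than the variance against the mean.

```latex
% Generic Taylor's law: the variance of an event's count within a window
% grows as a power of its mean; b is the Taylor exponent.
\sigma^{2} = a\,\mu^{b}
\quad\Longleftrightarrow\quad
\log\sigma^{2} = \log a + b\,\log\mu
```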
1804.08972
|
Tiziano Portenier
|
Tiziano Portenier, Qiyang Hu, Attila Szab\'o, Siavash Arjomand
Bigdeli, Paolo Favaro, Matthias Zwicker
|
FaceShop: Deep Sketch-based Face Image Editing
|
13 pages, 20 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel system for sketch-based face image editing, enabling users
to edit images intuitively by sketching a few strokes on a region of interest.
Our interface features tools to express a desired image manipulation by
providing both geometry and color constraints as user-drawn strokes. As an
alternative to the direct user input, our proposed system naturally supports a
copy-paste mode, which allows users to edit a given image region by using parts
of another exemplar image without the need of hand-drawn sketching at all. The
proposed interface runs in real-time and facilitates an interactive and
iterative workflow to quickly express the intended edits. Our system is based
on a novel sketch domain and a convolutional neural network trained end-to-end
to automatically learn to render image regions corresponding to the input
strokes. To achieve high quality and semantically consistent results we train
our neural network on two simultaneous tasks, namely image completion and image
translation. To the best of our knowledge, we are the first to combine these
two tasks in a unified framework for interactive image editing. Our results
show that the proposed sketch domain, network architecture, and training
procedure generalize well to real user input and enable high quality synthesis
results without additional post-processing.
|
[
{
"version": "v1",
"created": "Tue, 24 Apr 2018 12:03:45 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2018 13:28:54 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Portenier",
"Tiziano",
""
],
[
"Hu",
"Qiyang",
""
],
[
"Szabó",
"Attila",
""
],
[
"Bigdeli",
"Siavash Arjomand",
""
],
[
"Favaro",
"Paolo",
""
],
[
"Zwicker",
"Matthias",
""
]
] |
new_dataset
| 0.997358 |
1805.09488
|
Yang Zhou
|
Yang Zhou, Zhan Xu, Chris Landreth, Evangelos Kalogerakis, Subhransu
Maji, Karan Singh
|
VisemeNet: Audio-Driven Animator-Centric Speech Animation
|
10 pages, 5 figures, to appear in SIGGRAPH 2018
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel deep-learning based approach to producing animator-centric
speech motion curves that drive a JALI or standard FACS-based production
face-rig, directly from input audio. Our three-stage Long Short-Term Memory
(LSTM) network architecture is motivated by psycho-linguistic insights:
segmenting speech audio into a stream of phonetic-groups is sufficient for
viseme construction; speech styles like mumbling or shouting are strongly
correlated with the motion of facial landmarks; and animator style is encoded in
viseme motion curve profiles. Our contribution is an automatic real-time
lip-synchronization from audio solution that integrates seamlessly into
existing animation pipelines. We evaluate our results by: cross-validation to
ground-truth data; animator critique and edits; visual comparison to recent
deep-learning lip-synchronization solutions; and showing our approach to be
resilient to diversity in speaker and language.
|
[
{
"version": "v1",
"created": "Thu, 24 May 2018 02:34:42 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Zhou",
"Yang",
""
],
[
"Xu",
"Zhan",
""
],
[
"Landreth",
"Chris",
""
],
[
"Kalogerakis",
"Evangelos",
""
],
[
"Maji",
"Subhransu",
""
],
[
"Singh",
"Karan",
""
]
] |
new_dataset
| 0.999696 |
1806.02366
|
Alex James Dr
|
Kamilya Smagulova and Kazybek Adam and Olga Krestinskaya and Alex
Pappachen James
|
Design of CMOS-memristor Circuits for LSTM architecture
| null |
IEEE International Conferences on Electron Devices and Solid-State
Circuits, 2018
| null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long Short-Term Memory (LSTM) architecture is a well-known approach for
building recurrent neural networks (RNNs) useful for sequential processing of
data in applications such as natural language processing. The near-sensor hardware
implementation of LSTM is challenging due to the large parallelism and complexity
involved. We propose a 0.18 µm CMOS, GST memristor LSTM hardware architecture for
near-sensor processing. The proposed system is validated on a forecasting
problem based on a Keras model.
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 18:14:59 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Smagulova",
"Kamilya",
""
],
[
"Adam",
"Kazybek",
""
],
[
"Krestinskaya",
"Olga",
""
],
[
"James",
"Alex Pappachen",
""
]
] |
new_dataset
| 0.960921 |
1806.02424
|
Hao Jiang
|
Quanzeng You, Hao Jiang
|
Action4D: Real-time Action Recognition in the Crowd and Clutter
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing every person's action in a crowded and cluttered environment is a
challenging task. In this paper, we propose a real-time action recognition
method, Action4D, which gives reliable and accurate results in the real-world
settings. We propose to tackle the action recognition problem using a holistic
4D "scan" of a cluttered scene to include every detail about the people and
environment. Recognizing multiple people's actions in the cluttered 4D
representation is a new problem. In this paper, we propose novel methods to
solve this problem. We propose a new method to track people in 4D, which can
reliably detect and follow each person in real time. We propose a new deep
neural network, the Action4D-Net, to recognize the action of each tracked
person. The Action4D-Net's novel structure uses both the global feature and the
focused attention to achieve state-of-the-art result. Our real-time method is
invariant to camera view angles, resistant to clutter and able to handle crowds.
The experimental results show that the proposed method is fast, reliable and
accurate. Our method paves the way to action recognition in the real-world
applications and is ready to be deployed to enable smart homes, smart factories
and smart stores.
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 20:59:40 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"You",
"Quanzeng",
""
],
[
"Jiang",
"Hao",
""
]
] |
new_dataset
| 0.995899 |
1806.02452
|
Tahsin Reasat
|
Samiul Alam, Tahsin Reasat, Rashed Mohammad Doha, Ahmed Imtiaz Humayun
|
NumtaDB - Assembled Bengali Handwritten Digits
|
6 pages, 12 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To benchmark Bengali digit recognition algorithms, a large publicly available
dataset is required which is free from biases originating from geographical
location, gender, and age. With this aim in mind, NumtaDB, a dataset consisting
of more than 85,000 images of hand-written Bengali digits, has been assembled.
This paper documents the collection and curation process of numerals along with
the salient statistics of the dataset.
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 23:02:06 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Alam",
"Samiul",
""
],
[
"Reasat",
"Tahsin",
""
],
[
"Doha",
"Rashed Mohammad",
""
],
[
"Humayun",
"Ahmed Imtiaz",
""
]
] |
new_dataset
| 0.999912 |
1806.02536
|
Duc-Phong Le
|
Duc-Phong Le, Nadia El Mrabet, Safia Haloui, Chik How Tan
|
On the near prime-order MNT curves
|
15 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In their seminal paper, Miyaji, Nakabayashi and Takano introduced the first
method to construct families of prime-order elliptic curves with small
embedding degrees, namely k = 3, 4, and 6. These curves, so-called MNT curves,
were then extended by Scott and Barreto, and also Galbraith, McKee and Valenca
to near prime-order curves with the same embedding degrees. In this paper, we
extend the method of Scott and Barreto to introduce an explicit and simple
algorithm that is able to generate all families of MNT curves with any given
cofactor. Furthermore, we analyze the number of potential families of these
curves that could be obtained for a given embedding degree $k$ and a cofactor
$h$. We then discuss the generalized Pell equations that allow us to construct
particular curves. Finally, we provide statistics of the near prime-order MNT
curves.
|
[
{
"version": "v1",
"created": "Thu, 7 Jun 2018 07:10:01 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Le",
"Duc-Phong",
""
],
[
"Mrabet",
"Nadia El",
""
],
[
"Haloui",
"Safia",
""
],
[
"Tan",
"Chik How",
""
]
] |
new_dataset
| 0.998711 |
1806.02681
|
Wanderson Ten\'orio
|
Carlos Munuera, Wanderson Ten\'orio, Fernando Torres
|
Locally Recoverable codes from algebraic curves with separated variables
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Locally Recoverable code is an error-correcting code such that any erasure
in a single coordinate of a codeword can be recovered from a small subset of
other coordinates. We study Locally Recoverable Algebraic Geometry codes
arising from certain curves defined by equations with separated variables. The
recovery of erasures is obtained by means of Lagrangian interpolation in
general, and simply by one addition in some particular cases.
|
[
{
"version": "v1",
"created": "Thu, 7 Jun 2018 13:48:35 GMT"
}
] | 2018-06-08T00:00:00 |
[
[
"Munuera",
"Carlos",
""
],
[
"Tenório",
"Wanderson",
""
],
[
"Torres",
"Fernando",
""
]
] |
new_dataset
| 0.999588 |
1609.08095
|
Ignasi Sau
|
Marin Bougeret, Ignasi Sau
|
How much does a treedepth modulator help to obtain polynomial kernels
beyond sparse graphs?
|
23 pages, 3 figures
| null | null | null |
cs.DS cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last years, kernelization with structural parameters has been an
active area of research within the field of parameterized complexity. As a
relevant example, Gajarský et al. [ESA 2013] proved that every graph
problem satisfying a property called finite integer index admits a linear
kernel on graphs of bounded expansion and an almost linear kernel on nowhere
dense graphs, parameterized by the size of a $c$-treedepth modulator, which is
a vertex set whose removal results in a graph of treedepth at most $c$, where
$c \geq 1$ is a fixed integer. The authors left as further research to
investigate this parameter on general graphs, and in particular to find
problems that, while admitting polynomial kernels on sparse graphs, behave
differently on general graphs.
In this article we answer this question by finding two very natural such
problems: we prove that Vertex Cover admits a polynomial kernel on general
graphs for any integer $c \geq 1$, and that Dominating Set does not for any
integer $c \geq 2$ even on degenerate graphs, unless $\text{NP} \subseteq
\text{coNP}/\text{poly}$. For the positive result, we build on the techniques
of Jansen and Bodlaender [STACS 2011], and for the negative result we use a
polynomial parameter transformation for $c\geq 3$ and an OR-cross-composition
for $c = 2$. As existing results imply that Dominating Set admits a polynomial
kernel on degenerate graphs for $c = 1$, our result provides a dichotomy about
the existence of polynomial kernels for Dominating Set on degenerate graphs
with this parameter.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2016 17:41:03 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jun 2018 10:44:14 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Bougeret",
"Marin",
""
],
[
"Sau",
"Ignasi",
""
]
] |
new_dataset
| 0.995634 |
1609.08436
|
Kangru Wang
|
Kangru Wang, Lei Qu, Lili Chen, Yuzhang Gu, DongChen zhu, Xiaolin
Zhang
|
Non-flat Ground Detection Based on A Local Descriptor
|
9 pages, submitted to IEICE Transactions on Information and Systems
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The detection of road and free space remains challenging for non-flat planes,
especially with varying latitudinal and longitudinal slopes or in the case
of multiple ground planes. In this paper, we propose a framework for ground
plane detection with stereo vision. The main contribution of this paper is a
newly proposed descriptor which is implemented on the disparity image to obtain
a disparity texture image. The ground plane regions can be distinguished from
their surroundings effectively in the disparity texture image. Because the
descriptor is implemented on a local area of the image, it addresses well
the problem of non-flat planes. We also present a complete framework to
detect the ground plane regions based on the disparity texture image with a
convolutional neural network architecture.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 13:41:04 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2016 16:16:37 GMT"
},
{
"version": "v3",
"created": "Sat, 28 Jan 2017 06:22:12 GMT"
},
{
"version": "v4",
"created": "Tue, 21 Feb 2017 11:49:17 GMT"
},
{
"version": "v5",
"created": "Tue, 7 Mar 2017 12:47:26 GMT"
},
{
"version": "v6",
"created": "Wed, 19 Apr 2017 16:20:42 GMT"
},
{
"version": "v7",
"created": "Sat, 22 Apr 2017 12:01:35 GMT"
},
{
"version": "v8",
"created": "Tue, 5 Jun 2018 07:20:42 GMT"
},
{
"version": "v9",
"created": "Wed, 6 Jun 2018 00:54:05 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Wang",
"Kangru",
""
],
[
"Qu",
"Lei",
""
],
[
"Chen",
"Lili",
""
],
[
"Gu",
"Yuzhang",
""
],
[
"zhu",
"DongChen",
""
],
[
"Zhang",
"Xiaolin",
""
]
] |
new_dataset
| 0.985237 |
1707.09683
|
Hanyu Jiang
|
Hanyu Jiang, Narayan Ganesan and Yu-Dong Yao
|
CUDAMPF++: A Proactive Resource Exhaustion Scheme for Accelerating
Homologous Sequence Search on CUDA-enabled GPU
|
15 pages, submitted to academic journal
|
IEEE.Trans.Parallel.Distibuted.Sys. (2018)
|
10.1109/TPDS.2018.2830393
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genomic sequence alignment is an important research topic in bioinformatics
and continues to attract significant efforts. As genomic data grow
exponentially, however, most alignment methods face challenges due to their
huge computational costs. HMMER, a suite of bioinformatics tools, is widely
used for the analysis of homologous protein and nucleotide sequences with high
sensitivity, based on profile hidden Markov models (HMMs). Its latest version,
HMMER3, introduces a heuristic pipeline to accelerate the alignment process,
which is carried out on central processing units (CPUs) with the support of
streaming SIMD extensions (SSE) instructions. Few acceleration results have
since been reported based on HMMER3. In this paper, we propose a five-tiered
parallel framework, CUDAMPF++, to accelerate the most computationally intensive
stages of HMMER3's pipeline, multiple/single segment Viterbi (MSV/SSV), on a
single graphics processing unit (GPU). As an architecture-aware design, the
proposed framework aims to fully utilize hardware resources via exploiting
finer-grained parallelism (multi-sequence alignment) compared with its
predecessor (CUDAMPF). In addition, we propose a novel method that proactively
sacrifices L1 Cache Hit Ratio (CHR) to get improved performance and scalability
in return. A comprehensive evaluation shows that the proposed framework
outperforms all existing work and exhibits good consistency in performance
regardless of the variation of query models or protein sequence datasets. For
MSV (SSV) kernels, the peak performance of the CUDAMPF++ is 283.9 (471.7) GCUPS
on a single K40 GPU, and impressive speedups ranging from 1.x (1.7x) to 168.3x
(160.7x) are achieved over the CPU-based implementation (16 cores, 32 threads).
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2017 23:58:45 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Jiang",
"Hanyu",
""
],
[
"Ganesan",
"Narayan",
""
],
[
"Yao",
"Yu-Dong",
""
]
] |
new_dataset
| 0.990734 |
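The MSV/SSV stages accelerated in this paper are specialized, vectorized variants of Viterbi decoding over profile HMMs. For orientation only, here is a generic log-space Viterbi sketch for a small HMM; it is not HMMER3's profile-HMM formulation nor the paper's GPU kernel, and all names and the toy model are illustrative assumptions.

```python
# Generic log-space Viterbi decoding for a small HMM: the dynamic program
# that HMMER3's MSV/SSV stages specialize and vectorize. Toy example only.
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    # dp[s] = best log-score of any state path ending in state s
    dp = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev, dp, ptr = dp, {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: prev[p] + log_trans[p][s])
            dp[s] = prev[best_prev] + log_trans[best_prev][s] + log_emit[s][o]
            ptr[s] = best_prev
        back.append(ptr)
    last = max(states, key=lambda s: dp[s])   # trace back the best path
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), dp[last]

if __name__ == "__main__":
    lg = math.log
    states = ["H", "L"]                       # toy high/low GC-content states
    log_start = {"H": lg(0.5), "L": lg(0.5)}
    log_trans = {"H": {"H": lg(0.6), "L": lg(0.4)},
                 "L": {"H": lg(0.4), "L": lg(0.6)}}
    log_emit = {"H": {"A": lg(0.2), "C": lg(0.3), "G": lg(0.3), "T": lg(0.2)},
                "L": {"A": lg(0.3), "C": lg(0.2), "G": lg(0.2), "T": lg(0.3)}}
    print(viterbi("GGCACTGAA", states, log_start, log_trans, log_emit))
```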
1805.10705
|
Magnus Oskarsson
|
Magnus Oskarsson
|
A fast minimal solver for absolute camera pose with unknown focal length
and radial distortion from four planar points
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a fast minimal solver for absolute camera pose
estimation from four known points that lie in a plane. We assume a perspective
camera model with unknown focal length and unknown radial distortion. The
radial distortion is modelled using the division model with one parameter. We
show that the solutions to this problem can be found from a univariate
degree-six polynomial. This results in a very fast and numerically stable
solver.
|
[
{
"version": "v1",
"created": "Sun, 27 May 2018 22:53:21 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jun 2018 20:18:09 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Oskarsson",
"Magnus",
""
]
] |
new_dataset
| 0.996519 |
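The one-parameter division model referred to in the abstract above is commonly written as below (the standard form usually attributed to Fitzgibbon; the symbols are assumed notation and may differ from the paper's).

```latex
% One-parameter division model for radial distortion: x_d is the measured
% (distorted) image point, x_u its undistorted counterpart, and lambda the
% single distortion coefficient.
x_u \;=\; \frac{x_d}{1 + \lambda\,\lVert x_d \rVert^{2}}
```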
1806.01844
|
Snehanshu Saha
|
Snehanshu Saha, Archana Mathur, Kakoli Bora, Surbhi Agrawal, Suryoday
Basak
|
SBAF: A New Activation Function for Artificial Neural Net based
Habitability Classification
|
arXiv admin note: substantial text overlap with arXiv:1805.08810
| null | null | null |
cs.LG astro-ph.IM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the efficacy of using a novel activation function in Artificial
Neural Networks (ANNs) for characterizing exoplanets into different classes. We
call this the Saha-Bora Activation Function (SBAF), as the motivation is derived
from a long-standing understanding of using advanced calculus in modeling the
habitability score of exoplanets. The function is demonstrated to possess nice
analytical properties and doesn't seem to suffer from local oscillation
problems. The manuscript presents the analytical properties of the activation
function and the architecture implemented on the function. Keywords:
Astroinformatics, Machine Learning, Exoplanets, ANN, Activation Function.
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 13:33:04 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Saha",
"Snehanshu",
""
],
[
"Mathur",
"Archana",
""
],
[
"Bora",
"Kakoli",
""
],
[
"Agrawal",
"Surbhi",
""
],
[
"Basak",
"Suryoday",
""
]
] |
new_dataset
| 0.984241 |
1806.01911
|
Rakshith Shetty
|
Rakshith Shetty, Mario Fritz, Bernt Schiele
|
Adversarial Scene Editing: Automatic Object Removal from Weak
Supervision
| null | null | null | null |
cs.CV cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While great progress has been made recently in automatic image manipulation,
it has been limited to object centric images like faces or structured scene
datasets. In this work, we take a step towards general scene-level image
editing by developing an automatic interaction-free object removal model. Our
model learns to find and remove objects from general scene images using
image-level labels and unpaired data in a generative adversarial network (GAN)
framework. We achieve this with two key contributions: a two-stage editor
architecture consisting of a mask generator and image in-painter that
co-operate to remove objects, and a novel GAN based prior for the mask
generator that allows us to flexibly incorporate knowledge about object shapes.
We experimentally show on two datasets that our method effectively removes a
wide variety of objects using weak supervision only.
|
[
{
"version": "v1",
"created": "Tue, 5 Jun 2018 19:45:20 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Shetty",
"Rakshith",
""
],
[
"Fritz",
"Mario",
""
],
[
"Schiele",
"Bernt",
""
]
] |
new_dataset
| 0.996538 |
1806.01976
|
YangQuan Chen Prof.
|
Sina Dehghan, Tiebiao Zhao, Yang Zhao, Jie Yuan, Abdullah Ates,
YangQuan Chen
|
PID2018 Benchmark Challenge: Model Predictive Control With Conditional
Integral Control Using A General Purpose Optimal Control Problem Solver -
RIOTS
|
6 pages, 7 figures, 3rd IFAC Conference on Advances in
Proportional-Integral-Derivative Control
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a multi-variable Model Predictive Control (MPC) based
controller for the one-staged refrigeration cycle model described in the
PID2018 Benchmark Challenge. This model represents a two-input, two-output
system with strong nonlinearities and high coupling between its variables. A
general purpose optimal control problem (OCP) solver Matlab toolbox called
RIOTS is used as the OCP solver for the proposed MPC scheme which allows for
straightforward implementation of the method and for solving a wide range of
constrained linear and nonlinear optimal control problems. A conditional
integral (CI) compensator is embedded in the controller to compensate for the
small steady state errors. This method shows significant improvements in
performance compared to both discrete decentralized control (C1) and
multi-variable PID controller (C2) originally given in PID2018 Benchmark
Challenge as a baseline. Our solution is introduced in detail in this paper and
our final results using the overall relative index, $J$, are 0.2 over C1 and
0.3 over C2, respectively. In other words, we achieved 80% improvement over C1
and 70% improvement over C2. We expect to achieve further improvements once a
more systematic search is applied to MPC and CI parameter tuning.
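For illustration, a minimal sketch of a conditional integrator wrapped around a PI law, the mechanism the abstract uses to remove small steady-state errors; the gains, band, and toy first-order plant below are assumptions, not the benchmark's refrigeration model or the authors' exact rules.

```python
def conditional_pi_step(error, integral, kp, ki, dt, band, u_min, u_max):
    """One step of a PI law whose integral term is only updated when the
    error is inside a small band around zero (a common 'conditional
    integral' construction; the benchmark entry's exact rules may differ)."""
    if abs(error) < band:                  # integrate only near the setpoint
        integral += error * dt
    u = kp * error + ki * integral
    u = min(max(u, u_min), u_max)          # respect actuator limits
    return u, integral

# Toy closed loop: first-order plant x' = -x + u driven toward a setpoint of 1.0
x, integral, dt = 0.0, 0.0, 0.1
for _ in range(600):
    e = 1.0 - x
    u, integral = conditional_pi_step(e, integral, kp=2.0, ki=0.5, dt=dt,
                                      band=0.5, u_min=-5.0, u_max=5.0)
    x += dt * (-x + u)
print(round(x, 3))                         # settles at the setpoint (prints ~1.0)
```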
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 01:55:02 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Dehghan",
"Sina",
""
],
[
"Zhao",
"Tiebiao",
""
],
[
"Zhao",
"Yang",
""
],
[
"Yuan",
"Jie",
""
],
[
"Ates",
"Abdullah",
""
],
[
"Chen",
"YangQuan",
""
]
] |
new_dataset
| 0.982494 |
1806.02053
|
Uday Tupakula
|
Vijay Varadharajan, Kallol Karmakar, Uday Tupakula, Michael Hitchens
|
A Policy based Security Architecture for Software Defined Networks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As networks expand in size and complexity, they pose greater administrative
and management challenges. Software Defined Networks (SDN) offer a promising
approach to meeting some of these challenges. In this paper, we propose a
policy driven security architecture for securing end to end services across
multiple SDN domains. We develop a language based approach to design security
policies that are relevant for securing SDN services and communications. We
describe the policy language and its use in specifying security policies to
control the flow of information in a multi-domain SDN. We demonstrate the
specification of fine grained security policies based on a variety of
attributes such as parameters associated with users and devices/switches,
context information such as location and routing information, and services
accessed in SDN as well as security attributes associated with the switches and
Controllers in different domains. An important feature of our architecture is
its ability to specify path and flow based security policies, which are
significant for securing end to end services in SDNs. We describe the design
and the implementation of our proposed policy based security architecture and
demonstrate its use in scenarios involving both intra and inter-domain
communications with multiple SDN Controllers. We analyse the performance
characteristics of our architecture as well as discuss how our architecture is
able to counteract various security attacks. The dynamic security-policy-based
approach, and the intelligent distribution of the corresponding security
capabilities as a service layer that enables flow-based security enforcement
and protects a multitude of network devices against attacks, are important
contributions of this paper.
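A minimal sketch of what an attribute-based flow policy and its evaluation could look like; the field names and matching rules are illustrative assumptions, not the authors' policy language.

```python
from dataclasses import dataclass, field

@dataclass
class FlowPolicy:
    """A minimal attribute-based policy rule in the spirit of the paper's
    description; field names are illustrative, not the authors' syntax."""
    src_domain: str
    dst_domain: str
    service: str
    allowed_roles: set = field(default_factory=set)
    allowed_locations: set = field(default_factory=set)
    required_path: tuple = ()          # ordered switch IDs the flow must traverse
    action: str = "allow"

def evaluate(policies, request):
    """Return the action of the first rule whose attributes all match."""
    for p in policies:
        if (p.src_domain == request["src_domain"]
                and p.dst_domain == request["dst_domain"]
                and p.service == request["service"]
                and request["role"] in p.allowed_roles
                and request["location"] in p.allowed_locations
                and (not p.required_path
                     or tuple(request["path"]) == p.required_path)):
            return p.action
    return "deny"                      # default-deny

rules = [FlowPolicy("domA", "domB", "video", {"staff"}, {"campus"}, ("s1", "s3", "s7"))]
req = {"src_domain": "domA", "dst_domain": "domB", "service": "video",
       "role": "staff", "location": "campus", "path": ["s1", "s3", "s7"]}
print(evaluate(rules, req))            # -> allow
```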
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 08:19:52 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Varadharajan",
"Vijay",
""
],
[
"Karmakar",
"Kallol",
""
],
[
"Tupakula",
"Uday",
""
],
[
"Hitchens",
"Michael",
""
]
] |
new_dataset
| 0.988253 |
1806.02055
|
Hazem Sallouha
|
Hazem Sallouha, Mohammad Mahdi Azari, and Sofie Pollin
|
Energy-Constrained UAV Trajectory Design for Ground Node Localization
|
This work has been submitted to IEEE Globecom 2018 for possible
publication
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of aerial anchors for localizing terrestrial nodes has recently been
recognized as a cost-effective, swift and flexible solution for better
localization accuracy, providing localization services when the GPS is jammed
or satellite reception is not possible. In this paper, the localization of
terrestrial nodes when using mobile unmanned aerial vehicles (UAVs) as aerial
anchors is presented. We propose a novel framework to derive localization error
in urban areas. In contrast to the existing works, our framework includes
height-dependent UAV to ground channel characteristics and a highly detailed
UAV energy consumption model. This enables us to explore different tradeoffs
and optimize UAV trajectory for minimum localization error. In particular, we
investigate the impact of UAV altitude, hovering time, number of waypoints and
path length through formulating an energy-constrained optimization problem. Our
results show that increasing the hovering time decreases the localization error
considerably at the cost of a higher energy consumption. To keep the
localization error below 100m, shorter hovering is only possible when the path
altitude and radius are optimized. For a constant hovering time of 5 seconds,
tuning both parameters to their optimal values brings the localization error
from 150m down to 65m, with a power saving of around 25%.
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 08:21:49 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Sallouha",
"Hazem",
""
],
[
"Azari",
"Mohammad Mahdi",
""
],
[
"Pollin",
"Sofie",
""
]
] |
new_dataset
| 0.98355 |
1806.02221
|
Zhaohui Yang
|
Zhaohui Yang, Cunhua Pan, Mohammad Shikh-Bahaei, Wei Xu, Ming Chen,
Maged Elkashlan, Arumugam Nallanathan
|
Joint Altitude, Beamwidth, Location and Bandwidth Optimization for
UAV-Enabled Communications
|
4 pages, 2 figures, IEEE Communications Letters
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This letter investigates an uplink power control problem for unmanned aerial
vehicles (UAVs) assisted wireless communications. We jointly optimize the UAV's
flying altitude, antenna beamwidth, UAV's location and ground terminals'
allocated bandwidth and transmit power to minimize the sum uplink power subject
to the minimal rate demand. An iterative algorithm is proposed with low
complexity to obtain a suboptimal solution. Numerical results show that the
proposed algorithm can achieve good performance in terms of uplink sum power
saving.
|
[
{
"version": "v1",
"created": "Wed, 6 Jun 2018 14:33:43 GMT"
}
] | 2018-06-07T00:00:00 |
[
[
"Yang",
"Zhaohui",
""
],
[
"Pan",
"Cunhua",
""
],
[
"Shikh-Bahaei",
"Mohammad",
""
],
[
"Xu",
"Wei",
""
],
[
"Chen",
"Ming",
""
],
[
"Elkashlan",
"Maged",
""
],
[
"Nallanathan",
"Arumugam",
""
]
] |
new_dataset
| 0.992488 |
1511.05646
|
Alon Eden
|
Vincent Cohen-Addad, Alon Eden, Michal Feldman, Amos Fiat
|
The Invisible Hand of Dynamic Market Pricing
| null | null | null | null |
cs.GT cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Walrasian prices, if they exist, have the property that one can assign every
buyer some bundle in her demand set, such that the resulting assignment will
maximize social welfare. Unfortunately, this assumes carefully breaking ties
amongst different bundles in the buyer demand set. Presumably, the shopkeeper
cleverly convinces the buyer to break ties in a manner consistent with
maximizing social welfare. Lacking such a shopkeeper, if buyers arrive
sequentially and simply choose some arbitrary bundle in their demand set, the
social welfare may be arbitrarily bad. In the context of matching markets, we
show how to compute dynamic prices, based upon the current inventory, that
guarantee that social welfare is maximized. Such prices are set without knowing
the identity of the next buyer to arrive. We also show that this is impossible
in general (e.g., for coverage valuations), but consider other scenarios where
this can be done. We further extend our results to Bayesian and bounded
rationality models.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 02:50:09 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jun 2018 11:55:23 GMT"
}
] | 2018-06-06T00:00:00 |
[
[
"Cohen-Addad",
"Vincent",
""
],
[
"Eden",
"Alon",
""
],
[
"Feldman",
"Michal",
""
],
[
"Fiat",
"Amos",
""
]
] |
new_dataset
| 0.981242 |
1705.02111
|
Pascal Giard
|
Pascal Giard, Alexios Balatsoukas-Stimming, and Andreas Burg
|
Blind Detection of Polar Codes
|
6 pages, 8 figures, to appear at the IEEE Int. Workshop on Signal
Process. Syst. (SiPS) 2017
| null |
10.1109/SiPS.2017.8109977
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes were recently chosen to protect the control channel information
in the next-generation mobile communication standard (5G) defined by the 3GPP.
As a result, receivers will have to implement blind detection of polar coded
frames in order to keep complexity, latency, and power consumption tractable.
Since polar codes are a newly proposed class of block codes, the problem of
polar-code blind detection has received very little attention. In this work, we propose a
low-complexity blind-detection algorithm for polar-encoded frames. We base this
algorithm on a novel detection metric with update rules that leverage the a
priori knowledge of the frozen-bit locations, exploiting the inherent
structures that these locations impose on a polar-encoded block of data. We
show that the proposed detection metric makes it possible to clearly distinguish
polar-encoded frames from other types of data by considering the cumulative
distribution functions of the detection metric, and the receiver operating
characteristic. The presented results are tailored to the 5G standardization
effort discussions, i.e., we consider a short low-rate polar code concatenated
with a CRC.
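To make the frozen-bit idea concrete, here is a toy hard-decision sketch: because the GF(2) polar transform is its own inverse, re-applying it to a clean codeword must return zeros at the frozen positions, and counting violations gives a crude detection statistic. This is only an illustration with an arbitrary frozen set; the paper's metric works on soft values with sequential update rules and is not reproduced here.

```python
import numpy as np

def polar_transform(bits):
    """GF(2) polar transform x = u F^{(x)n} in butterfly form (no bit-reversal);
    over GF(2) this transform is its own inverse."""
    bits = list(bits)
    n, step = len(bits), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                bits[j] ^= bits[j + step]
        step *= 2
    return bits

def frozen_violation_metric(hard_bits, frozen_set):
    """Count frozen positions that come back as 1; small values suggest the
    block is polar-encoded with this frozen pattern (a crude hard-decision
    stand-in for the paper's soft detection metric)."""
    u = polar_transform(hard_bits)
    return sum(u[i] for i in frozen_set)

rng = np.random.default_rng(1)
N, frozen = 16, {0, 1, 2, 3, 4, 5, 6, 8}          # toy frozen set, not a designed code
u = [0] * N
for i in set(range(N)) - frozen:                  # random information bits
    u[i] = int(rng.integers(2))
codeword = polar_transform(u)
noise_word = list(rng.integers(2, size=N))        # unrelated random block
print(frozen_violation_metric(codeword, frozen))    # 0 for a clean polar block
print(frozen_violation_metric(noise_word, frozen))  # typically > 0
```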
|
[
{
"version": "v1",
"created": "Fri, 5 May 2017 07:43:52 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2017 07:53:12 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jul 2017 12:36:34 GMT"
}
] | 2018-06-06T00:00:00 |
[
[
"Giard",
"Pascal",
""
],
[
"Balatsoukas-Stimming",
"Alexios",
""
],
[
"Burg",
"Andreas",
""
]
] |
new_dataset
| 0.999429 |
1806.01270
|
Kai Rothauge
|
Alex Gittens, Kai Rothauge, Shusen Wang, Michael W. Mahoney, Jey
Kottalam, Lisa Gerhardt, Prabhat, Michael Ringenburg, Kristyn Maschhoff
|
Alchemist: An Apache Spark <=> MPI Interface
|
Accepted for publication in Concurrency and Computation: Practice and
Experience, Special Issue on the Cray User Group 2018. arXiv admin note: text
overlap with arXiv:1805.11800
| null | null | null |
cs.DC cs.DB physics.data-an stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Apache Spark framework for distributed computation is popular in the data
analytics community due to its ease of use, but its MapReduce-style programming
model can incur significant overheads when performing computations that do not
map directly onto this model. One way to mitigate these costs is to off-load
computations onto MPI codes. In recent work, we introduced Alchemist, a system
for the analysis of large-scale data sets. Alchemist calls MPI-based libraries
from within Spark applications, and it has minimal coding, communication, and
memory overheads. In particular, Alchemist allows users to retain the
productivity benefits of working within the Spark software ecosystem without
sacrificing performance efficiency in linear algebra, machine learning, and
other related computations.
In this paper, we discuss the motivation behind the development of Alchemist,
and we provide a detailed overview of its design and usage. We also demonstrate
the efficiency of our approach on medium-to-large data sets, using some
standard linear algebra operations, namely matrix multiplication and the
truncated singular value decomposition of a dense matrix, and we compare the
performance of Spark with that of Spark+Alchemist. These computations are run
on the NERSC supercomputer Cori Phase 1, a Cray XC40.
|
[
{
"version": "v1",
"created": "Sun, 3 Jun 2018 23:25:29 GMT"
}
] | 2018-06-06T00:00:00 |
[
[
"Gittens",
"Alex",
""
],
[
"Rothauge",
"Kai",
""
],
[
"Wang",
"Shusen",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Kottalam",
"Jey",
""
],
[
"Gerhardt",
"Lisa",
""
],
[
"Prabhat",
"",
""
],
[
"Ringenburg",
"Michael",
""
],
[
"Maschhoff",
"Kristyn",
""
]
] |
new_dataset
| 0.978477 |
1806.01419
|
Pooya Monshizadeh
|
Romeo Ortega, Nima Monshizadeh, Pooya Monshizadeh, Dmitry Bazylev,
Anton Pyrkin
|
Permanent Magnet Synchronous Motors are Globally Asymptotically
Stabilizable with PI Current Control
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This note shows that the industry standard desired equilibrium for permanent
magnet synchronous motors (i.e., maximum torque per Ampere) can be globally
asymptotically stabilized with a PI control around the current errors, provided
some viscous friction (possibly small) is present in the rotor dynamics and the
proportional gain of the PI is suitably chosen. Instrumental to establish this
surprising result is the proof that the map from voltages to currents of the
incremental model of the motor satisfies some passivity properties. The
analysis relies on basic Lyapunov theory making the result available to a wide
audience.
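For reference, the PI current-control structure the note analyses can be sketched as below; the sign convention and dq-frame notation are assumptions here, and the precise condition on the proportional gain is in the paper, not reproduced.

```latex
\[
  v_{dq}(t) \;=\; -K_P\, e_{dq}(t) \;-\; K_I \int_0^{t} e_{dq}(\tau)\, d\tau,
  \qquad
  e_{dq}(t) \;=\; i_{dq}(t) - i_{dq}^{\star}(t),
\]
% with i_dq^* the maximum-torque-per-Ampere current reference.
```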
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 23:01:56 GMT"
}
] | 2018-06-06T00:00:00 |
[
[
"Ortega",
"Romeo",
""
],
[
"Monshizadeh",
"Nima",
""
],
[
"Monshizadeh",
"Pooya",
""
],
[
"Bazylev",
"Dmitry",
""
],
[
"Pyrkin",
"Anton",
""
]
] |
new_dataset
| 0.998889 |
1806.01526
|
Piek Vossen
|
Piek Vossen, Selene Baez, Lenka Baj\v{c}eti\'c, and Bram Kraaijeveld
|
Leolani: a reference machine with a theory of mind for social
communication
|
Invited keynote at 21st International Conference on Text, Speech and
Dialogue, https://www.tsdconference.org/tsd2018/
| null | null | null |
cs.AI cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our state of mind is based on experiences and what other people tell us. This
may result in conflicting information, uncertainty, and alternative facts. We
present a robot that models relativity of knowledge and perception within
social interaction following principles of the theory of mind. We utilized
vision and speech capabilities on a Pepper robot to build an interaction model
that stores the interpretations of perceptions and conversations in combination
with provenance on its sources. The robot learns directly from what people tell
it, possibly in relation to its perception. We demonstrate how the robot's
communication is driven by hunger to acquire more knowledge from and on people
and objects, to resolve uncertainties and conflicts, and to share awareness of
the perceived environment. Likewise, the robot can make reference to the
world and its knowledge about the world and the encounters with people that
yielded this knowledge.
|
[
{
"version": "v1",
"created": "Tue, 5 Jun 2018 07:36:36 GMT"
}
] | 2018-06-06T00:00:00 |
[
[
"Vossen",
"Piek",
""
],
[
"Baez",
"Selene",
""
],
[
"Bajčetić",
"Lenka",
""
],
[
"Kraaijeveld",
"Bram",
""
]
] |
new_dataset
| 0.999384 |
1806.01737
|
Adnan Aijaz
|
M. Omar Al-Kadri, Adnan Aijaz, Arumugam Nallanathan
|
X-FDR: A Cross-Layer Routing Protocol for Multi-hop Full-Duplex Wireless
Networks
|
IEEE Wireless Communications - Accepted for Publication
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent developments in self-interference (SI) cancellation techniques
have led to the practical realization of full-duplex (FD) radios that can
perform simultaneous transmission and reception. FD technology is attractive
for various legacy communications standards. In this paper, after discussing
the opportunities of FD technology at the network layer, we present a
cross-layer aided routing protocol, termed as X-FDR, for multi-hop FD wireless
networks. X-FDR exploits a Physical (PHY) layer model capturing imperfection of
SI cancellation. At the medium access control (MAC) layer, X-FDR adopts an
optimized MAC protocol which implements a power control mechanism without
creating the hidden terminal problem. X-FDR exploits the unique characteristics
of FD technology at the network layer to construct energy-efficient and low
end-to-end latency routes in the network. Performance evaluation demonstrates
the effectiveness of X-FDR in achieving the gains of FD at higher layers of the
protocol stack.
|
[
{
"version": "v1",
"created": "Tue, 5 Jun 2018 15:06:10 GMT"
}
] | 2018-06-06T00:00:00 |
[
[
"Al-Kadri",
"M. Omar",
""
],
[
"Aijaz",
"Adnan",
""
],
[
"Nallanathan",
"Arumugam",
""
]
] |
new_dataset
| 0.999668 |
1705.01085
|
Mohamed Grissa
|
Mohamed Grissa, Attila A. Yavuz, and Bechir Hamdaoui
|
When the Hammer Meets the Nail: Multi-Server PIR for Database-Driven CRN
with Location Privacy Assurance
|
10 pages, double column
|
IEEE Conference on Communications and Network Security (CNS), Oct
2017, pp. 1-9
|
10.1109/CNS.2017.8228646
| null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that it is possible to achieve information theoretic location privacy
for secondary users (SUs) in database-driven cognitive radio networks (CRNs)
with an end-to-end delay less than a second, which is significantly better than
that of the existing alternatives offering only a computational privacy. This
is achieved based on a keen observation that, by the requirement of Federal
Communications Commission (FCC), all certified spectrum databases synchronize
their records. Hence, the same copy of spectrum database is available through
multiple (distinct) providers. We harness the synergy between multi-server
private information retrieval (PIR) and database- driven CRN architecture to
offer an optimal level of privacy with high efficiency by exploiting this
observation. We demonstrate, analytically and experimentally with deployments
on actual cloud systems, that our adaptations of multi-server PIR outperform
the (currently) fastest single-server PIR many times over, with
information theoretic security, collusion resiliency, and fault-tolerance
features. Our analysis indicates that multi-server PIR is an ideal
cryptographic tool to provide location privacy in database-driven CRNs, in
which the requirement of replicated databases is a natural part of the system
architecture, and therefore SUs can enjoy all advantages of multi-server PIR
without any additional architectural and deployment costs.
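For intuition, here is the textbook two-server XOR PIR on a replicated database, which is the kind of building block the paper adapts; the record names and block layout are toy assumptions, and the paper's specific multi-server constructions, collusion resilience, and fault tolerance are not captured here.

```python
import secrets

def server_answer(db_blocks, index_set):
    """Each replicated spectrum-database server XORs the requested blocks."""
    ans = bytes(len(db_blocks[0]))
    for i in index_set:
        ans = bytes(a ^ b for a, b in zip(ans, db_blocks[i]))
    return ans

def pir_query(db_blocks, wanted):
    """Classic 2-server XOR PIR: each server alone sees a uniformly random
    index set, so neither learns which record (e.g., which location's
    spectrum entry) the SU wants."""
    n = len(db_blocks)
    s1 = {i for i in range(n) if secrets.randbits(1)}     # random subset
    s2 = s1 ^ {wanted}                                    # differs only at `wanted`
    a1 = server_answer(db_blocks, s1)
    a2 = server_answer(db_blocks, s2)
    return bytes(x ^ y for x, y in zip(a1, a2))           # XOR of answers = record

db = [f"channel-record-{i:02d}".encode() for i in range(8)]   # equal-length blocks
print(pir_query(db, wanted=5))    # b'channel-record-05'
```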
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 17:41:36 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Grissa",
"Mohamed",
""
],
[
"Yavuz",
"Attila A.",
""
],
[
"Hamdaoui",
"Bechir",
""
]
] |
new_dataset
| 0.983123 |
1706.03659
|
Joyson Sebastian
|
Joyson Sebastian, Can Karakus, Suhas Diggavi
|
Approximate Capacity of Fast Fading Interference Channels with No
Instantaneous CSIT
|
Minor typos corrected
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a characterization of fading models, which assigns a number called
logarithmic Jensen's gap to a given fading model. We show that as a consequence
of a finite logarithmic Jensen's gap, an approximate capacity region can be
obtained for fast fading interference channels (FF-IC) for several scenarios.
We illustrate three instances where a constant capacity gap can be obtained as
a function of the logarithmic Jensen's gap. Firstly for an FF-IC with neither
feedback nor instantaneous channel state information at transmitter (CSIT), if
the fading distribution has finite logarithmic Jensen's gap, we show that a
rate-splitting scheme based on average interference-to-noise ratio (inr) can
achieve its approximate capacity. Secondly we show that a similar scheme can
achieve the approximate capacity of FF-IC with feedback and delayed CSIT, if
the fading distribution has finite logarithmic Jensen's gap. Thirdly, when this
condition holds, we show that point-to-point codes can achieve approximate
capacity for a class of FF-IC with feedback. We prove that the logarithmic
Jensen's gap is finite for common fading models, including Rayleigh and
Nakagami fading, thereby obtaining the approximate capacity region of FF-IC
with these fading models. For Rayleigh fading the capacity gap is obtained as
1.83 bits per channel use for non-feedback case and 2.83 bits per channel use
for feedback case. Our analysis also yields approximate capacity results for
fading 2-tap ISI channel and fading interference multiple access channel as
corollaries.
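One natural reading of the quantity named in the abstract is the Jensen gap of the logarithm applied to the channel gain; this is only a plausible formalization consistent with the reported Rayleigh constants, not necessarily the paper's exact definition.

```latex
\[
  \mathcal{J}\bigl(|h|^2\bigr)
  \;=\; \log_2 \mathbb{E}\!\left[\,|h|^2\right]
        \;-\; \mathbb{E}\!\left[\log_2 |h|^2\right] \;\ge\; 0 ,
\]
% finite for Rayleigh and Nakagami fading (as the abstract states); for
% unit-mean Rayleigh it equals \gamma/\ln 2 \approx 0.83 bits, where \gamma
% is the Euler--Mascheroni constant.
```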
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2017 14:31:17 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Oct 2017 00:56:27 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Oct 2017 16:40:13 GMT"
},
{
"version": "v4",
"created": "Fri, 26 Jan 2018 21:06:50 GMT"
},
{
"version": "v5",
"created": "Wed, 16 May 2018 04:59:05 GMT"
},
{
"version": "v6",
"created": "Sun, 3 Jun 2018 20:03:03 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Sebastian",
"Joyson",
""
],
[
"Karakus",
"Can",
""
],
[
"Diggavi",
"Suhas",
""
]
] |
new_dataset
| 0.993909 |
1803.07130
|
Joachim Breitner
|
Joachim Breitner
|
A promise checked is a promise kept: Inspection Testing
|
15 pages. Submitted to Haskell'18. Includes an additional appendix
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Occasionally, developers need to ensure that the compiler treats their code
in a specific way that is only visible by inspecting intermediate or final
compilation artifacts. This is particularly common with carefully crafted
compositional libraries, where certain usage patterns are expected to trigger
an intricate sequence of compiler optimizations -- stream fusion is a
well-known example.
The developer of such a library has to manually inspect build artifacts and
check for the expected properties. Because this is too tedious to do often, it
will likely go unnoticed if the property is broken by a change to the library
code, its dependencies or the compiler. The lack of automation has led to
released versions of such libraries breaking their documented promises.
This indicates that there is an unrecognized need for a new testing paradigm,
inspection testing, where the programmer declaratively describes non-functional
properties of a compilation artifact and the compiler checks these properties.
We define inspection testing abstractly, implement it in the context of Haskell
and show that it increases the quality of such libraries.
|
[
{
"version": "v1",
"created": "Mon, 19 Mar 2018 19:29:47 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jun 2018 10:22:49 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Breitner",
"Joachim",
""
]
] |
new_dataset
| 0.998762 |
1806.00525
|
Huda Alamri
|
Huda Alamri, Vincent Cartillier, Raphael Gontijo Lopes, Abhishek Das,
Jue Wang, Irfan Essa, Dhruv Batra, Devi Parikh, Anoop Cherian, Tim K. Marks,
Chiori Hori
|
Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7
| null | null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene-aware dialog systems will be able to have conversations with users
about the objects and events around them. Progress on such systems can be made
by integrating state-of-the-art technologies from multiple research areas
including end-to-end dialog systems, visual dialog, and video description. We
introduce the Audio Visual Scene Aware Dialog (AVSD) challenge and dataset. In
this challenge, which is one track of the 7th Dialog System Technology
Challenges (DSTC7) workshop, the task is to build a system that generates
responses in a dialog about an input video.
|
[
{
"version": "v1",
"created": "Fri, 1 Jun 2018 19:51:58 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Alamri",
"Huda",
""
],
[
"Cartillier",
"Vincent",
""
],
[
"Lopes",
"Raphael Gontijo",
""
],
[
"Das",
"Abhishek",
""
],
[
"Wang",
"Jue",
""
],
[
"Essa",
"Irfan",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
],
[
"Cherian",
"Anoop",
""
],
[
"Marks",
"Tim K.",
""
],
[
"Hori",
"Chiori",
""
]
] |
new_dataset
| 0.999402 |
1806.00616
|
Zhiyuan Tang
|
Zhiyuan Tang, Dong Wang and Qing Chen
|
AP18-OLR Challenge: Three Tasks and Their Baselines
|
arXiv admin note: substantial text overlap with arXiv:1706.09742
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The third oriental language recognition (OLR) challenge AP18-OLR is
introduced in this paper, including the data profile, the tasks and the
evaluation principles. Following the events in the last two years, namely
AP16-OLR and AP17-OLR, the challenge this year focuses on more challenging
tasks, including (1) short-duration utterances, (2) confusing languages, and
(3) open-set recognition. The same as the previous events, the data of AP18-OLR
is also provided by SpeechOcean and the NSFC M2ASR project. Baselines based on
both the i-vector model and neural networks are constructed for the
participants' reference. We report the baseline results on the three tasks and
demonstrate that the three tasks are truly challenging. All the data is free
for participants, and the Kaldi recipes for the baselines have been published
online.
|
[
{
"version": "v1",
"created": "Sat, 2 Jun 2018 10:07:10 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Tang",
"Zhiyuan",
""
],
[
"Wang",
"Dong",
""
],
[
"Chen",
"Qing",
""
]
] |
new_dataset
| 0.965144 |
1806.00678
|
Brian Goldfain
|
Brian Goldfain, Paul Drews, Changxi You, Matthew Barulic, Orlin Velev,
Panagiotis Tsiotras, and James M. Rehg
|
AutoRally An open platform for aggressive autonomous driving
| null | null | null | null |
cs.RO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents AutoRally, a 1:5 scale robotics testbed for
autonomous vehicle research. AutoRally is designed for robustness, ease of use,
and reproducibility, so that a team of two people with limited knowledge of
mechanical engineering, electrical engineering, and computer science can
construct and then operate the testbed to collect real world autonomous driving
data in whatever domain they wish to study. Complete documentation to construct
and operate the platform is available online along with tutorials, example
controllers, and a driving dataset collected at the Georgia Tech Autonomous
Racing Facility. Offline estimation algorithms are used to determine parameters
for physics-based dynamics models using an adaptive limited memory joint state
unscented Kalman filter. Online vehicle state estimation using a factor graph
optimization scheme and a convolutional neural network for semantic
segmentation of drivable surface are presented. All algorithms are tested with
real world data from the fleet of six AutoRally robots at the Georgia Tech
Autonomous Racing Facility tracks, and serve as a demonstration of the
robot's capabilities.
|
[
{
"version": "v1",
"created": "Sat, 2 Jun 2018 17:46:33 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Goldfain",
"Brian",
""
],
[
"Drews",
"Paul",
""
],
[
"You",
"Changxi",
""
],
[
"Barulic",
"Matthew",
""
],
[
"Velev",
"Orlin",
""
],
[
"Tsiotras",
"Panagiotis",
""
],
[
"Rehg",
"James M.",
""
]
] |
new_dataset
| 0.997976 |
1806.00779
|
Ye Chi
|
Ye Chi
|
Gemini: Reducing DRAM Cache Hit Latency by Hybrid Mappings
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Die-stacked DRAM caches are increasingly advocated to bridge the performance
gap between on-chip Cache and main memory. It is essential to improve DRAM
cache hit rate and lower cache hit latency simultaneously. Prior DRAM cache
designs fall into two categories according to the data mapping polices:
set-associative and direct-mapped, achieving either one. In this paper, we
propose a partial direct-mapped die-stacked DRAM cache to achieve the both
objectives simultaneously, called Gemini, which is motivated by the following
observation: applying a unified mapping policy to all blocks cannot achieve
both a high cache hit rate and a low hit latency.
Gemini classifies data into leading blocks and following blocks, and
places them with static mapping and dynamic mapping, respectively, in a unified
set-associative structure. Gemini also introduces a replacement policy that
balances the miss penalties of the different block types against recency, and
provides strategies to mitigate cache thrashing due to block-type transitions. Experimental results
demonstrate that Gemini cache can narrow the hit latency gap with direct-mapped
cache significantly, from 1.75X to 1.22X on average, and can achieve comparable
hit rate with set-associative cache. Compared with the state-of-the-art
baseline, i.e., the enhanced Loh-Hill cache, Gemini improves the IPC by up to 20%.
|
[
{
"version": "v1",
"created": "Sun, 3 Jun 2018 12:29:26 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Chi",
"Ye",
""
]
] |
new_dataset
| 0.986523 |
1806.00814
|
Tsunehiko Kameda
|
Binay Bhattacharya, Yuya Higashikawa, Tsunehiko Kameda, Naoki Katoh
|
Minmax Regret 1-Sink for Aggregate Evacuation Time on Path Networks
|
21 pages, 7 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evacuation in emergency situations can be modeled by a dynamic flow network.
Two criteria have been used before: one is the evacuation completion time and
the other is the aggregate evacuation time of individual evacuees. The aim of
this paper is to optimize the aggregate evacuation time in the simplest case,
where the network is a path and only one evacuation center (called a sink) is
to be introduced. The evacuees are initially located at the vertices, but their
precise numbers are unknown, and are given by upper and lower bounds. Under
this assumption, we compute the sink location that minimizes the maximum
"regret." We present an $O(n^2\log n)$ time algorithm to solve this problem,
improving upon the previously fastest $O(n^3)$ time algorithm, where $n$ is the
number of vertices.
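For readers unfamiliar with the robust objective, the minmax-regret sink can be written as below, where $\mathcal{S}$ is the set of weight scenarios consistent with the given upper and lower bounds and $\Phi_s(x)$ is the aggregate evacuation time to a sink at $x$ on the path $P$ under scenario $s$ (the notation here is chosen for illustration, not taken from the paper).

```latex
\[
  x^{*} \;=\; \operatorname*{arg\,min}_{x \in P}\;
  \max_{s \in \mathcal{S}}
  \Bigl( \Phi_s(x) \;-\; \min_{y \in P} \Phi_s(y) \Bigr).
\]
```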
|
[
{
"version": "v1",
"created": "Sun, 3 Jun 2018 15:25:01 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Bhattacharya",
"Binay",
""
],
[
"Higashikawa",
"Yuya",
""
],
[
"Kameda",
"Tsunehiko",
""
],
[
"Katoh",
"Naoki",
""
]
] |
new_dataset
| 0.998223 |
1806.00874
|
Chieh-Chi Kao
|
Chieh-Chi Kao, Yuxiang Wang, Jonathan Waltman, Pradeep Sen
|
Patch-Based Image Hallucination for Super Resolution with Detail
Reconstruction from Similar Sample Images
|
13 pages, 8 figures, submitted to IEEE Transactions on Multimedia,
under revision
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image hallucination and super-resolution have been studied for decades, and
many approaches have been proposed to upsample low-resolution images using
information from the images themselves, multiple example images, or large image
databases. However, most of this work has focused exclusively on small
magnification levels because the algorithms simply sharpen the blurry edges in
the upsampled images - no actual new detail is typically reconstructed in the
final result. In this paper, we present a patch-based algorithm for image
hallucination which, for the first time, properly synthesizes novel high
frequency detail. To do this, we pose the synthesis problem as a patch-based
optimization which inserts coherent, high-frequency detail from
contextually-similar images of the same physical scene/subject provided from
either a personal image collection or a large online database. The resulting
image is visually plausible and contains coherent high frequency information.
We demonstrate the robustness of our algorithm by testing it on a large number
of images and show that its performance is considerably superior to all
state-of-the-art approaches, a result that is verified to be statistically
significant through a randomized user study.
|
[
{
"version": "v1",
"created": "Sun, 3 Jun 2018 20:59:43 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Kao",
"Chieh-Chi",
""
],
[
"Wang",
"Yuxiang",
""
],
[
"Waltman",
"Jonathan",
""
],
[
"Sen",
"Pradeep",
""
]
] |
new_dataset
| 0.984927 |
1806.00890
|
Konstantinos Rematas
|
Konstantinos Rematas, Ira Kemelmacher-Shlizerman, Brian Curless, Steve
Seitz
|
Soccer on Your Tabletop
|
CVPR'18. Project: http://grail.cs.washington.edu/projects/soccer/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a system that transforms a monocular video of a soccer game into a
moving 3D reconstruction, in which the players and field can be rendered
interactively with a 3D viewer or through an Augmented Reality device. At the
heart of our paper is an approach to estimate the depth map of each player,
using a CNN that is trained on 3D player data extracted from soccer video
games. We compare with state of the art body pose and depth estimation
techniques, and show results on both synthetic ground truth benchmarks, and
real YouTube soccer footage.
|
[
{
"version": "v1",
"created": "Sun, 3 Jun 2018 22:51:35 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Rematas",
"Konstantinos",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
],
[
"Curless",
"Brian",
""
],
[
"Seitz",
"Steve",
""
]
] |
new_dataset
| 0.997341 |
1806.00901
|
Gui-Song Xia
|
Xin-Yi Tong, Qikai Lu, Gui-Song Xia, Liangpei Zhang
|
Large-scale Land Cover Classification in GaoFen-2 Satellite Imagery
|
IGARSS'18 conference paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many significant applications need land cover information of remote sensing
images that are acquired from different areas and times, such as change
detection and disaster monitoring. However, it is difficult to find a generic
land cover classification scheme for different remote sensing images due to the
spectral shift caused by diverse acquisition condition. In this paper, we
develop a novel land cover classification method that can deal with large-scale
data captured from widely distributed areas and different times. Additionally,
we establish a large-scale land cover classification dataset consisting of 150
Gaofen-2 imageries as data support for model training and performance
evaluation. Our experiments achieve outstanding classification accuracy
compared with traditional methods.
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 00:12:00 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Tong",
"Xin-Yi",
""
],
[
"Lu",
"Qikai",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Zhang",
"Liangpei",
""
]
] |
new_dataset
| 0.999349 |
1806.00951
|
Xinxin Fan
|
Xinxin Fan
|
Faster Dual-Key Stealth Address for Blockchain-Based Internet of Things
Systems
|
to be published in ICBC 2018
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stealth address prevents public association of a blockchain transaction's
output with a recipient's wallet address and hides the actual destination
address of a transaction. While stealth address provides an effective
privacy-enhancing technology for a cryptocurrency network, it requires
blockchain nodes to actively monitor all the transactions and compute the
purported destination addresses, which restricts its application for
resource-constrained environments like Internet of Things (IoT). In this paper,
we propose DKSAP-IoT, a faster dual-key stealth address protocol for
blockchain-based IoT systems. DKSAP-IoT utilizes a technique similar to the TLS
session resumption to simultaneously improve the performance and reduce the
transaction size between two communication peers. Our theoretical analysis as
well as the extensive experiments on an embedded computing platform demonstrate
that DKSAP-IoT is able to reduce the computational overhead by at least 50%
when compared to the state-of-the-art scheme, thereby paving the way for its
application to blockchain-based IoT systems.
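As background, the commonly described dual-key stealth address flow (a scan key for detection, a spend key for control) can be sketched in a toy group as below; this is NOT the paper's DKSAP-IoT optimization (which reuses shared secrets in a TLS-session-resumption-like way), and the tiny multiplicative group stands in for the elliptic-curve group a real deployment would use.

```python
import hashlib, secrets

# Toy multiplicative group standing in for an elliptic-curve group; the tiny
# prime below is for illustration only and is NOT secure.
P_MOD = 0xFFFFFFFB          # a prime
G = 5
ORDER = P_MOD - 1           # exponents reduced modulo the group order (toy setting)

def H(x: int) -> int:
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % ORDER

# Recipient publishes a scan public key S and a spend public key B.
s, b = secrets.randbelow(ORDER), secrets.randbelow(ORDER)
S, B = pow(G, s, P_MOD), pow(G, b, P_MOD)

# Sender derives a fresh one-time address from (S, B) and an ephemeral key r.
r = secrets.randbelow(ORDER)
R = pow(G, r, P_MOD)                       # published with the transaction
c_sender = H(pow(S, r, P_MOD))             # shared secret via DH with the scan key
one_time_addr = (pow(G, c_sender, P_MOD) * B) % P_MOD

# Recipient scans the chain: only the scan key s is needed to detect payment.
c_recipient = H(pow(R, s, P_MOD))
detected = (pow(G, c_recipient, P_MOD) * B) % P_MOD == one_time_addr
print(detected)                            # True: the payment is recognized
```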
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 04:49:22 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Fan",
"Xinxin",
""
]
] |
new_dataset
| 0.997635 |
1806.00958
|
YangQuan Chen Prof.
|
Abdullah Ates, Jie Yuan, Sina Dehghan, Yang Zhao, Celaleddin Yeroglu,
YangQuan Chen
|
PID2018 Benchmark Challenge: Multi-Objective Stochastic Optimization
Algorithm
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a multi-objective stochastic optimization method for
tuning the controller parameters of refrigeration systems based on vapour
compression. The Stochastic Multi Parameter Divergence Optimization (SMDO)
algorithm is modified to minimize the multi-objective function in the
optimization process. System control performance is improved by tuning the
PI controller parameters on the discrete-time model of the refrigeration
system with the multi-objective function, and a conditional integral structure
is added to reduce the steady-state error of the system. Simulation results
are compared with existing results through several graphical and numerical evaluations.
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 05:24:43 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Ates",
"Abdullah",
""
],
[
"Yuan",
"Jie",
""
],
[
"Dehghan",
"Sina",
""
],
[
"Zhao",
"Yang",
""
],
[
"Yeroglu",
"Celaleddin",
""
],
[
"Chen",
"YangQuan",
""
]
] |
new_dataset
| 0.957948 |
1806.01041
|
Francisco Crespo Mr
|
Francisco Crespo, Estefan\'ia Mart\'in
|
Applications for mobile devices focused on support for autism spectrum
disorder population and/or people in their immediate environment in their
daily lives: a systematic and practical review from a Spanish-speaking
perspective
|
16 pages, 8 figures, 2 tables
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present study reviews scientific publications on applications focused on
autism, most of them developed for communication, social behavior and
learning, which coincides with what is observed in a digital market that
largely lacks scientific validation. The study also found only 135
applications of this type with a Spanish version available (in a practical
sense), developed mostly for the daily life of an autistic person and/or
people in their immediate environment. Using these applications yields
positive results in terms of learning and lasting adoption of behaviors and
skills, but it is necessary to deepen research and further develop
applications focused on leisure, on resources for parents and professionals,
and on support for the needs of autistic adults.
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 10:53:57 GMT"
}
] | 2018-06-05T00:00:00 |
[
  [
    "Crespo",
    "Francisco",
    ""
  ],
  [
    "Martín",
    "Estefanía",
    ""
  ]
] |
new_dataset
| 0.984532 |
1806.01065
|
Komei Sugiura
|
Komei Sugiura
|
SuMo-SS: Submodular Optimization Sensor Scattering for Deploying Sensor
Networks by Drones
|
Accepted to IEEE Robotics and Automation Letters
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To meet the immediate needs of environmental monitoring or hazardous event
detection, we consider the automatic deployment of a group of low-cost or
disposable sensors by a drone. Introducing sensors by drones to an environment
instead of humans has advantages in terms of worker safety and time
requirements. In this study, we define "sensor scattering (SS)" as the problem
of maximizing the information-theoretic gain from sensors scattered on the
ground by a drone. SS is challenging due to its combinatorial explosion nature,
because the number of possible combination of sensor positions increases
exponentially with the increase in the number of sensors. In this paper, we
propose an online planning method called SubModular Optimization Sensor
Scattering (SuMo-SS). Unlike existing methods, the proposed method can deal
with uncertainty in sensor positions. It does not suffer from combinatorial
explosion but obtains a (1-1/e)-approximation of the optimal solution. We built
a physical drone that can scatter sensors in an indoor environment as well as a
simulation environment based on the drone and the environment. In this paper,
we present the theoretical background of our proposed method and its
experimental validation.
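The (1 - 1/e) guarantee mentioned in the abstract comes from greedy maximization of a monotone submodular objective; below is a minimal coverage-style sketch of that greedy rule, with a toy objective rather than the paper's information-theoretic gain under position uncertainty.

```python
def greedy_sensor_drop(candidate_cells, coverage, budget):
    """Greedy maximization of a monotone submodular coverage objective.
    Picking the cell with the largest marginal gain at each step gives the
    classic (1 - 1/e) approximation guarantee (the actual SuMo-SS objective
    is information-theoretic and handles sensor-position uncertainty,
    which this toy coverage function does not)."""
    chosen, covered = [], set()
    for _ in range(budget):
        best_cell, best_gain = None, 0
        for cell in candidate_cells:
            gain = len(coverage[cell] - covered)      # marginal coverage gain
            if gain > best_gain:
                best_cell, best_gain = cell, gain
        if best_cell is None:                          # no further gain possible
            break
        chosen.append(best_cell)
        covered |= coverage[best_cell]
        candidate_cells = [c for c in candidate_cells if c != best_cell]
    return chosen, covered

# Each candidate drop cell covers a set of ground points.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
print(greedy_sensor_drop(list(coverage), coverage, budget=2))
# -> (['c', 'a'], {1, 2, 3, 4, 5, 6, 7})
```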
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 11:56:33 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Sugiura",
"Komei",
""
]
] |
new_dataset
| 0.992408 |
1806.01081
|
Nattachai Watcharapinchai
|
Nattachai Watcharapinchai, Sitapa Rujikietgumjorn, Sanparith Marukatat
|
Sloth Search System at the Video Browser Showdown 2018 - Final Notes
|
Final note paper about the Sloth Search System at the VBS 2018
competition
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This short paper provides further details of the Sloth Search System, which
was developed by the NECTEC team for the Video Browser Showdown (VBS) 2018.
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 12:49:03 GMT"
}
] | 2018-06-05T00:00:00 |
[
[
"Watcharapinchai",
"Nattachai",
""
],
[
"Rujikietgumjorn",
"Sitapa",
""
],
[
"Marukatat",
"Sanparith",
""
]
] |
new_dataset
| 0.990463 |
1708.05655
|
Cem Tekin
|
Cem Tekin and Eralp Turgay
|
Multi-objective Contextual Multi-armed Bandit with a Dominant Objective
|
To appear in IEEE Transactions on Signal Processing, link:
https://ieeexplore.ieee.org/document/8368272/
| null |
10.1109/TSP.2018.2841822
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new multi-objective contextual multi-armed bandit
(MAB) problem with two objectives, where one of the objectives dominates the
other objective. Unlike single-objective MAB problems in which the learner
obtains a random scalar reward for each arm it selects, in the proposed
problem, the learner obtains a random reward vector, where each component of
the reward vector corresponds to one of the objectives and the distribution of
the reward depends on the context that is provided to the learner at the
beginning of each round. We call this problem contextual multi-armed bandit
with a dominant objective (CMAB-DO). In CMAB-DO, the goal of the learner is to
maximize its total reward in the non-dominant objective while ensuring that it
maximizes its total reward in the dominant objective. In this case, the optimal
arm given a context is the one that maximizes the expected reward in the
non-dominant objective among all arms that maximize the expected reward in the
dominant objective. First, we show that the optimal arm lies in the Pareto
front. Then, we propose the multi-objective contextual multi-armed bandit
algorithm (MOC-MAB), and define two performance measures: the 2-dimensional
(2D) regret and the Pareto regret. We show that both the 2D regret and the
Pareto regret of MOC-MAB are sublinear in the number of rounds. We also compare
the performance of the proposed algorithm with other state-of-the-art methods
in synthetic and real-world datasets. The proposed model and the algorithm have
a wide range of real-world applications that involve multiple and possibly
conflicting objectives ranging from wireless communication to medical diagnosis
and recommender systems.
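A minimal sketch of the lexicographic arm choice implied by the optimal-arm definition in the abstract (maximize the non-dominant estimate among arms that are near-best in the dominant one); the slack parameter and plain empirical means are assumptions, and the actual MOC-MAB uses contexts and confidence bounds rather than raw means.

```python
import numpy as np

def lexicographic_choice(dominant_means, other_means, slack=0.05):
    """Pick the arm maximizing the non-dominant estimate among arms whose
    dominant estimate is within `slack` of the best dominant estimate."""
    dominant_means = np.asarray(dominant_means)
    other_means = np.asarray(other_means)
    near_best = dominant_means >= dominant_means.max() - slack
    candidates = np.where(near_best)[0]
    return candidates[np.argmax(other_means[candidates])]

# Empirical reward estimates for 4 arms: (dominant objective, other objective)
dom = [0.90, 0.88, 0.60, 0.91]
oth = [0.20, 0.70, 0.95, 0.10]
print(lexicographic_choice(dom, oth))   # -> 1: nearly best dominant, much better other
```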
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2017 15:41:58 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Mar 2018 18:29:43 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Jun 2018 17:29:37 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Tekin",
"Cem",
""
],
[
"Turgay",
"Eralp",
""
]
] |
new_dataset
| 0.997653 |
1712.01393
|
Yipin Zhou
|
Yipin Zhou, Zhaowen Wang, Chen Fang, Trung Bui and Tamara L. Berg
|
Visual to Sound: Generating Natural Sound for Videos in the Wild
|
Project page:
http://bvision11.cs.unc.edu/bigpen/yipin/visual2sound_webpage/visual2sound.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As two of the five traditional human senses (sight, hearing, taste, smell,
and touch), vision and sound are basic sources through which humans understand
the world. Often correlated during natural events, these two modalities combine
to jointly affect human perception. In this paper, we pose the task of
generating sound given visual input. Such capabilities could help enable
applications in virtual reality (generating sound for virtual scenes
automatically) or provide additional accessibility to images or videos for
people with visual impairments. As a first step in this direction, we apply
learning-based methods to generate raw waveform samples given input video
frames. We evaluate our models on a dataset of videos containing a variety of
sounds (such as ambient sounds and sounds from people/animals). Our experiments
show that the generated sounds are fairly realistic and have good temporal
synchronization with the visual inputs.
|
[
{
"version": "v1",
"created": "Mon, 4 Dec 2017 22:24:29 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jun 2018 06:40:49 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Zhou",
"Yipin",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"Fang",
"Chen",
""
],
[
"Bui",
"Trung",
""
],
[
"Berg",
"Tamara L.",
""
]
] |
new_dataset
| 0.997489 |
1802.00565
|
Abel Ag Rb Guimaraes M.Sc
|
Abel Ag Rb Guimaraes, Ghassem Tofighi
|
Detecting Zones and Threat on 3D Body for Security in Airports using
Deep Machine Learning
|
7 pages, 17 figures, This article was accepted from the Star
Conference, Data Science and Big Data Analyses MAY 24-25, 2018 | Toronto,
Canada
| null |
10.5281/zenodo.1189345
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this research, a segmentation and classification method was used to
recognize threats in human scanner images for airport security. The
Department of Homeland Security (DHS) in the USA has a high false-alarm rate,
produced by its algorithms using today's scanners at airports. To address this
problem, DHS started a new competition on the Kaggle site asking the science
community to improve threat detection with new algorithms. The dataset used in
this research comes from DHS at
https://www.kaggle.com/c/passenger-screening-algorithm-challenge/data. According
to DHS: "This dataset contains a large number of body scans acquired by a new
generation of millimeter wave scanner called the High Definition-Advanced
Imaging Technology (HD-AIT) system. They are comprised of volunteers wearing
different clothing types (from light summer clothes to heavy winter clothes),
different body mass indices, different genders, different numbers of threats,
and different types of threats". Using Python as the principal language, the
preprocessing of the dataset images extracted features from 200 bodies using
intensity, intensity differences and local neighbourhood, in order to produce
segmentation regions and label those regions for use as ground truth in
training and test datasets. The regions are subsequently given to a CNN deep
learning classifier to predict 17 classes (representing the body zones):
zone1, zone2, ... zone17, plus zones with a threat, for a total of 34 zones.
The analysis showed that the classifier achieved an accuracy of 98.2863% and a
loss of 0.091319, as well as an average of 100% for recall and precision.
|
[
{
"version": "v1",
"created": "Fri, 2 Feb 2018 05:45:21 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Feb 2018 17:14:09 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Guimaraes",
"Abel Ag Rb",
""
],
[
"Tofighi",
"Ghassem",
""
]
] |
new_dataset
| 0.999819 |
1802.06430
|
Arjun Nitin Bhagoji
|
Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang,
and Prateek Mittal
|
DARTS: Deceiving Autonomous Cars with Toxic Signs
|
Submitted to ACM CCS 2018; Extended version of [1801.02780] Rogue
Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
| null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sign recognition is an integral part of autonomous cars. Any
misclassification of traffic signs can potentially lead to a multitude of
disastrous consequences, ranging from a life-threatening accident to even a
large-scale interruption of transportation services relying on autonomous cars.
In this paper, we propose and examine security attacks against sign recognition
systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed
attacks DARTS). In particular, we introduce two novel methods to create these
toxic signs. First, we propose Out-of-Distribution attacks, which expand the
scope of adversarial examples by enabling the adversary to generate these
starting from an arbitrary point in the image space compared to prior attacks
which are restricted to existing training/test data (In-Distribution). Second,
we present the Lenticular Printing attack, which relies on an optical
phenomenon to deceive the traffic sign recognition system. We extensively
evaluate the effectiveness of the proposed attacks in both virtual and
real-world settings and consider both white-box and black-box threat models.
Our results demonstrate that the proposed attacks are successful under both
settings and threat models. We further show that Out-of-Distribution attacks
can outperform In-Distribution attacks on classifiers defended using the
adversarial training defense, exposing a new attack vector for these defenses.
|
[
{
"version": "v1",
"created": "Sun, 18 Feb 2018 19:39:28 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 02:50:49 GMT"
},
{
"version": "v3",
"created": "Thu, 31 May 2018 20:05:27 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Sitawarin",
"Chawin",
""
],
[
"Bhagoji",
"Arjun Nitin",
""
],
[
"Mosenia",
"Arsalan",
""
],
[
"Chiang",
"Mung",
""
],
[
"Mittal",
"Prateek",
""
]
] |
new_dataset
| 0.996805 |
1805.09919
|
Anastasia Mavridou
|
Anastasia Mavridou, Joseph Sifakis, Janos Sztipanovits
|
DesignBIP: A Design Studio for Modeling and Generating Systems with BIP
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Behavior-Interaction-Priority (BIP) framework, rooted in rigorous
semantics, allows the construction of systems that are correct-by-design. BIP
has been effectively used for the construction and analysis of large systems
such as robot controllers and satellite on-board software. Nevertheless, the
specification of BIP models is done in a purely textual manner without any code
editor support. To facilitate the specification of BIP models, we present
DesignBIP, a web-based, collaborative, version-controlled design studio. To
promote model scaling and reusability of BIP models, we use a graphical
language for modeling parameterized BIP models with rigorous semantics. We
present the various services provided by the design studio, including model
editors, code editors, consistency checking mechanisms, code generators, and
integration with the JavaBIP tool-set.
|
[
{
"version": "v1",
"created": "Thu, 24 May 2018 22:04:37 GMT"
},
{
"version": "v2",
"created": "Thu, 31 May 2018 22:31:39 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Mavridou",
"Anastasia",
""
],
[
"Sifakis",
"Joseph",
""
],
[
"Sztipanovits",
"Janos",
""
]
] |
new_dataset
| 0.997605 |
1805.11016
|
Vardaan Pahuja
|
Shagun Sodhani, Vardaan Pahuja
|
Memory Augmented Self-Play
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-play is an unsupervised training procedure which enables the
reinforcement learning agents to explore the environment without requiring any
external rewards. We augment the self-play setting by providing an external
memory where the agent can store experience from the previous tasks. This
enables the agent to come up with more diverse self-play tasks resulting in
faster exploration of the environment. The agent pretrained in the memory
augmented self-play setting easily outperforms the agent pretrained in
no-memory self-play setting.
|
[
{
"version": "v1",
"created": "Mon, 28 May 2018 16:22:02 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jun 2018 02:16:46 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Sodhani",
"Shagun",
""
],
[
"Pahuja",
"Vardaan",
""
]
] |
new_dataset
| 0.95745 |
1806.00051
|
Biao He
|
Biao He, Hamid Jafarkhani
|
Millimeter Wave Communications with Reconfigurable Antennas
|
presented at IEEE ICC 2018
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The highly sparse nature of propagation channels and the restricted use of
radio frequency (RF) chains at transceivers limit the performance of millimeter
wave (mmWave) multiple-input multiple-output (MIMO) systems. Introducing
reconfigurable antennas to mmWave can offer an additional degree of freedom in
designing mmWave MIMO systems. This paper provides a theoretical framework for
studying the mmWave MIMO with reconfigurable antennas. We present an
architecture of reconfigurable mmWave MIMO with beamspace hybrid analog-digital
beamformers and reconfigurable antennas at both the transmitter and the
receiver. We show that employing reconfigurable antennas can provide throughput
gain for the mmWave MIMO. We derive the expression for the average throughput
gain of using reconfigurable antennas, and further simplify the expression by
considering the case of large number of reconfiguration states. In addition, we
propose a low-complexity algorithm for the reconfiguration state and beam
selection, which achieves nearly the same throughput performance as the optimal
selection of reconfiguration state and beams by exhaustive search.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 19:01:39 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"He",
"Biao",
""
],
[
"Jafarkhani",
"Hamid",
""
]
] |
new_dataset
| 0.998291 |
1806.00137
|
YangQuan Chen Prof.
|
Jie Yuan, Abdullah Ates, Sina Dehghan, Yang Zhao, Shumin Fei, YangQuan
Chen
|
PID2018 Benchmark Challenge: Model-based Feedforward Compensator with A
Conditional Integrator
|
6 pages, 8 figures
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since proportional-integral-derivative (PID) controllers absolutely dominate
control engineering, a number of different control structures and theories
have been developed to enhance the efficiency of PID controllers. Thus, it is
essential and inspiring to apply different PID control strategies to the
PID2018 Benchmark Challenge. In this paper, a novel control strategy is
designed for this refrigeration system, where a feedforward compensator and a
conditional integrator are utilized to compensate the disturbances and remove
the steady-state error in the benchmark problem, respectively. The simulation
results given in the benchmark problem show the straightforward effectiveness
of the proposed control structure compared with the existing control methods.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 23:36:12 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Yuan",
"Jie",
""
],
[
"Ates",
"Abdullah",
""
],
[
"Dehghan",
"Sina",
""
],
[
"Zhao",
"Yang",
""
],
[
"Fei",
"Shumin",
""
],
[
"Chen",
"YangQuan",
""
]
] |
new_dataset
| 0.996489 |
1806.00193
|
Jatin Bedi
|
Jatin Bedi, Durga Toshniwal
|
SFA-GTM: Seismic Facies Analysis Based on Generative Topographic Map and
RBF
|
11 Pages, 4 figures, 2 Tables, Part of DMG2 2018 proceedings
(arXiv:1805.04541)
| null | null | null |
cs.CE physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Seismic facies identification plays a significant role in reservoir
characterization. It helps in identifying the various lithological and
stratigraphical changes in reservoir properties. With the increase in the size
of seismic data or number of attributes to be analyzed, the manual process for
facies identification becomes complicated and time-consuming. Even though
seismic attributes add multiple dimensions to the data, their role in reservoir
characterization is very crucial. There exist different linear transformation
methods that use seismic attributes for identification, characterization, and
visualization of seismic facies. These linear transformation methods have been
widely used for facies characterization. However, there are some limitations
associated with these methods such as deciding the width parameters, number of
clusters, convergence rate etc. Therefore, the present research work uses
non-linear transformation approach that overcomes some of the major limitations
of linear approaches. The proposed Seismic facies analysis approach based on
Generative Topographic Map \& Radial Basis Function (SFA-GTM) works by
calculating the set of four Gray Level Co-occurrence Matrix (GLCM) texture based
attributes viz. energy, homogeneity, contrast, and dissimilarity. The
Generative Topographic Map (GTM) is used for unsupervised classification of
seismic facies based on the set of calculated texture attributes. Further, the
present work uses Radial Basis Function (RBF) for interpolating the missing
values in the data.
|
[
{
"version": "v1",
"created": "Fri, 1 Jun 2018 04:50:10 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Bedi",
"Jatin",
""
],
[
"Toshniwal",
"Durga",
""
]
] |
new_dataset
| 0.997753 |
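The four GLCM texture attributes named in the abstract above (energy, homogeneity, contrast, dissimilarity) can be computed with scikit-image. The sketch below is a minimal example on a synthetic image patch; the patch, distances, and angles are assumptions for illustration, and older scikit-image releases expose the same functions under the spellings greycomatrix/greycoprops.

```python
# Minimal sketch: computing the four GLCM texture attributes used in SFA-GTM.
# The synthetic patch, distances, and angles are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

rng = np.random.default_rng(0)
patch = rng.integers(0, 8, size=(32, 32), dtype=np.uint8)  # toy 8-level image patch

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "homogeneity", "contrast", "dissimilarity")}
print(features)
```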
1806.00429
|
Claudia Flores-Saviaga
|
Claudia Flores-Saviaga (1), Brian C. Keegan (2), Saiph Savage (1 and
3) ((1) West Virginia University, (2) University of Colorado Boulder, (3)
Universidad Nacional Autonoma de Mexico (UNAM))
|
Mobilizing the Trump Train: Understanding Collective Action in a
Political Trolling Community
| null |
International 12th AAAI Conference on Web and Social Media (ICWSM
2018)
| null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Political trolls initiate online discord not only for the lulz (laughs) but
also for ideological reasons, such as promoting their desired political
candidates. Political troll groups recently gained spotlight because they were
considered central in helping Donald Trump win the 2016 US presidential
election, which involved difficult mass mobilizations. Political trolls face
unique challenges as they must build their own communities while simultaneously
disrupting others. However, little is known about how political trolls mobilize
sufficient participation to suddenly become problems for others. We performed a
quantitative longitudinal analysis of more than 16 million comments from one of
the most popular and disruptive political trolling communities, the subreddit
/r/The\_Donald (T_D). We use T_D as a lens to understand participation and
collective action within these deviant spaces. Specifically, we first study the
characteristics of the most active participants to uncover what might drive
their sustained participation. Next, we investigate how these active
individuals mobilize their community to action. Through our analysis, we
uncover that the most active participants employed distinct discursive strategies to
mobilize participation, and deployed technical tools like bots to create a
shared identity and sustain engagement. We conclude by providing data-backed
design implications for designers of civic media.
|
[
{
"version": "v1",
"created": "Fri, 1 Jun 2018 16:35:42 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Flores-Saviaga",
"Claudia",
"",
"1 and\n 3"
],
[
"Keegan",
"Brian C.",
"",
"1 and\n 3"
],
[
"Savage",
"Saiph",
"",
"1 and\n 3"
]
] |
new_dataset
| 0.959202 |
1806.00466
|
Aneeq Zia
|
Aneeq Zia, Andrew Hung, Irfan Essa, and Anthony Jarc
|
Surgical Activity Recognition in Robot-Assisted Radical Prostatectomy
using Deep Learning
|
Accepted at MICCAI 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adverse surgical outcomes are costly to patients and hospitals. Approaches to
benchmark surgical care are often limited to gross measures across the entire
procedure despite the performance of particular tasks being largely responsible
for undesirable outcomes. In order to produce metrics from tasks as opposed to
the whole procedure, methods to automatically recognize individual surgical
tasks are needed. In this paper, we propose several approaches to recognize
surgical activities in robot-assisted minimally invasive surgery using deep
learning. We collected a clinical dataset of 100 robot-assisted radical
prostatectomies (RARP) with 12 tasks each and propose `RP-Net', a modified
version of InceptionV3 model, for image based surgical activity recognition. We
achieve an average precision of 80.9% and average recall of 76.7% across all
tasks using RP-Net which out-performs all other RNN and CNN based models
explored in this paper. Our results suggest that automatic surgical activity
recognition during RARP is feasible and can be the foundation for advanced
analytics.
|
[
{
"version": "v1",
"created": "Fri, 1 Jun 2018 17:55:38 GMT"
}
] | 2018-06-04T00:00:00 |
[
[
"Zia",
"Aneeq",
""
],
[
"Hung",
"Andrew",
""
],
[
"Essa",
"Irfan",
""
],
[
"Jarc",
"Anthony",
""
]
] |
new_dataset
| 0.998219 |
1702.06322
|
Ranveer Singh
|
Ranveer Singh, Ravindra B. Bapat
|
Eigenvalues of weakly balanced signed graphs and graphs with negative
cliques
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a signed graph $G$, an induced subgraph is called a negative clique if it
is a complete graph and all of its edges are negative. In this paper, we give
the characteristic polynomials and the eigenvalues of some signed graphs having
negative cliques. This includes cycle graphs, path graphs, complete graphs with
vertex-disjoint negative cliques of different orders, and star block graphs
with negative cliques. Interestingly, if we reverse the signs of the edges of
these graphs, we get the families of weakly balanced signed graphs, and thus the
eigenvalues of wide classes of weakly balanced signed graphs are also
calculated. In social network theory, the eigenvalues of the signed graphs play
an important role in determining their stability and developing the measures
for the degree of balance.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2017 10:45:27 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jan 2018 16:28:03 GMT"
},
{
"version": "v3",
"created": "Tue, 1 May 2018 20:15:46 GMT"
},
{
"version": "v4",
"created": "Thu, 31 May 2018 04:35:43 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Singh",
"Ranveer",
""
],
[
"Bapat",
"Ravindra B.",
""
]
] |
new_dataset
| 0.984699 |
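As a small worked example related to the negative cliques discussed in the abstract above (an illustration added here, not a result quoted from the paper): for the complete graph $K_n$ with every edge negative, the signed adjacency matrix is $-(J-I)$, where $J$ is the all-ones matrix, and its spectrum follows at once from the spectrum of $J$.

```latex
% Spectrum of an all-negative clique on n vertices (illustrative computation).
A = -(J - I), \qquad
A\mathbf{1} = -(n-1)\,\mathbf{1}, \qquad
Av = v \ \text{ for every } v \perp \mathbf{1}
\;\Longrightarrow\;
\operatorname{spec}(A) = \bigl\{\, -(n-1) \ \text{(simple)},\ \ +1 \ \text{(multiplicity } n-1)\,\bigr\}.
```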
1704.08224
|
Arjun Chandrasekaran
|
Arjun Chandrasekaran and Devi Parikh and Mohit Bansal
|
Punny Captions: Witty Wordplay in Image Descriptions
|
NAACL 2018 (11 pages)
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wit is a form of rich interaction that is often grounded in a specific
situation (e.g., a comment in response to an event). In this work, we attempt
to build computational models that can produce witty descriptions for a given
image. Inspired by a cognitive account of humor appreciation, we employ
linguistic wordplay, specifically puns, in image descriptions. We develop two
approaches which involve retrieving witty descriptions for a given image from a
large corpus of sentences, or generating them via an encoder-decoder neural
network architecture. We compare our approach against meaningful baseline
approaches via human studies and show substantial improvements. We find that
when a human is subject to similar constraints as the model regarding word
usage and style, people vote the image descriptions generated by our model to
be slightly wittier than human-written witty descriptions. Unsurprisingly,
humans are almost always wittier than the model when they are free to choose
the vocabulary, style, etc.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2017 17:22:53 GMT"
},
{
"version": "v2",
"created": "Thu, 31 May 2018 17:45:50 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Chandrasekaran",
"Arjun",
""
],
[
"Parikh",
"Devi",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.998879 |
1705.01613
|
Cody Buntain
|
Cody Buntain and Jennifer Golbeck
|
Automatically Identifying Fake News in Popular Twitter Threads
| null |
2017 IEEE International Conference on Smart Cloud (SmartCloud)
|
10.1109/SmartCloud.2017.40
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information quality in social media is an increasingly important issue, but
web-scale data hinders experts' ability to assess and correct much of the
inaccurate content, or `fake news,' present in these platforms. This paper
develops a method for automating fake news detection on Twitter by learning to
predict accuracy assessments in two credibility-focused Twitter datasets:
CREDBANK, a crowdsourced dataset of accuracy assessments for events in Twitter,
and PHEME, a dataset of potential rumors in Twitter and journalistic
assessments of their accuracies. We apply this method to Twitter content
sourced from BuzzFeed's fake news dataset and show models trained against
crowdsourced workers outperform models based on journalists' assessment and
models trained on a pooled dataset of both crowdsourced workers and
journalists. All three datasets, aligned into a uniform format, are also
publicly available. A feature analysis then identifies features that are most
predictive for crowdsourced and journalistic accuracy assessments, results of
which are consistent with prior work. We close with a discussion contrasting
accuracy and credibility and why models of non-experts outperform models of
journalists for fake news detection in Twitter.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 20:34:19 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2018 21:08:44 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Buntain",
"Cody",
""
],
[
"Golbeck",
"Jennifer",
""
]
] |
new_dataset
| 0.999739 |
1803.07640
|
Matt Gardner
|
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi,
Nelson Liu, Matthew Peters, Michael Schmitz, Luke Zettlemoyer
|
AllenNLP: A Deep Semantic Natural Language Processing Platform
|
Describes the initial version of AllenNLP. Many features and models
have been added since the first release. This is the paper to cite if you use
AllenNLP in your research. Updated 5/31/2018 with version accepted to the NLP
OSS workshop help at ACL 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes AllenNLP, a platform for research on deep learning
methods in natural language understanding. AllenNLP is designed to support
researchers who want to build novel language understanding models quickly and
easily. It is built on top of PyTorch, allowing for dynamic computation graphs,
and provides (1) a flexible data API that handles intelligent batching and
padding, (2) high-level abstractions for common operations in working with
text, and (3) a modular and extensible experiment framework that makes doing
good science easy. It also includes reference implementations of high quality
approaches for both core semantic problems (e.g. semantic role labeling (Palmer
et al., 2005)) and language understanding applications (e.g. machine
comprehension (Rajpurkar et al., 2016)). AllenNLP is an ongoing open-source
effort maintained by engineers and researchers at the Allen Institute for
Artificial Intelligence.
|
[
{
"version": "v1",
"created": "Tue, 20 Mar 2018 20:32:07 GMT"
},
{
"version": "v2",
"created": "Thu, 31 May 2018 17:56:14 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Gardner",
"Matt",
""
],
[
"Grus",
"Joel",
""
],
[
"Neumann",
"Mark",
""
],
[
"Tafjord",
"Oyvind",
""
],
[
"Dasigi",
"Pradeep",
""
],
[
"Liu",
"Nelson",
""
],
[
"Peters",
"Matthew",
""
],
[
"Schmitz",
"Michael",
""
],
[
"Zettlemoyer",
"Luke",
""
]
] |
new_dataset
| 0.9969 |
1805.12268
|
Bassem Khalfi
|
Hassan Sinky, Bassem Khalfi, Bechir Hamdaoui, Ammar Rayes
|
Responsive Content-Centric Delivery in Large Urban Communication
Networks: A LinkNYC Use-Case
|
12 pages, 9 figures
|
IEEE Transactions on Wireless Communications, vol. 17, no. 3, pp.
1688-1699, March 2018
|
10.1109/TWC.2017.2784433
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large urban communication networks such as smart cities are an ecosystem of
devices and services cooperating to address multiple issues that greatly
benefit end users, cities and the environment. LinkNYC is a first-of-its-kind
urban communications network aiming to replace all payphones in the five
boroughs of New York City (NYC) with kiosk-like structures providing free
public Wi-Fi. We consolidate these networks with standalone edge cloud devices
known as cloudlets and introduce geographically distributed content delivery
cloudlets (CDCs) to store popular Internet content closer to end users;
essential in environments with diverse and dynamic content interests. A
content-centric and delivery framework is proposed leveraging NYC's population
densities and CDCs for interest-based in-network caching. Analysis shows that
although the adoption of multiple CDCs dramatically improves overall network
performance, advanced caching policies are needed when considering increased
content heterogeneity. Thus, we propose popularity-driven (pLFU) and
cooperation-based (sLFU) caching policies at individual CDCs to account for
user and content dynamics over time. The amalgamation of urban population
densities, multiple CDC placements and smarter caching techniques help exploit
the ultimate benefits of a content-centric urban communications network and
dramatically improves overall network performance and responsiveness. Our
proposed solutions are validated using LinkNYC as a use-case.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 00:27:14 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Sinky",
"Hassan",
""
],
[
"Khalfi",
"Bassem",
""
],
[
"Hamdaoui",
"Bechir",
""
],
[
"Rayes",
"Ammar",
""
]
] |
new_dataset
| 0.996534 |
1805.12371
|
Dharin Parekh Mr.
|
Dharin Parekh, Ankitesh Gupta, Shharrnam Chhatpar, Anmol Yash Kumar,
Manasi Kulkarni
|
Lip Reading Using Convolutional Auto Encoders as Feature Extractor
|
6 pages, 6 tables, 9 Figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual recognition of speech using the lip movement is called Lip-reading.
Recent developments in this nascent field use different neural networks as
feature extractors which serve as input to a model that can map the temporal
relationship and classify. Though end-to-end sentence-level lip-reading is the
current trend, we propose a new model which employs word-level classification
and breaks the set benchmarks for standard datasets. In our model we use
convolutional autoencoders as feature extractors which are then fed to a long
short-term memory model. We tested our proposed model on BBC's LRW dataset,
MIRACL-VC1 and the GRID dataset, achieving a classification accuracy of 98% on
MIRACL-VC1 compared to the set benchmark of 93.4% (Rekik et al., 2014). On
BBC's LRW the proposed model performed better than the baseline model of
convolutional neural networks and long short-term memory model (Garg et al.,
2016). By visualizing the features learned by the models, we clearly indicate how the
proposed model works better than the baseline model. The same model can also be
extended to end-to-end sentence-level classification.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 08:20:12 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Parekh",
"Dharin",
""
],
[
"Gupta",
"Ankitesh",
""
],
[
"Chhatpar",
"Shharrnam",
""
],
[
"Kumar",
"Anmol Yash",
""
],
[
"Kulkarni",
"Manasi",
""
]
] |
new_dataset
| 0.974575 |
1805.12387
|
Laurent Orseau
|
Laurent Orseau, Simon McGregor McGill, Shane Legg
|
Agents and Devices: A Relative Definition of Agency
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
According to Dennett, the same system may be described using a `physical'
(mechanical) explanatory stance, or using an `intentional' (belief- and
goal-based) explanatory stance. Humans tend to find the physical stance more
helpful for certain systems, such as planets orbiting a star, and the
intentional stance for others, such as living animals. We define a formal
counterpart of physical and intentional stances within computational theory: a
description of a system as either a device, or an agent, with the key
difference being that `devices' are directly described in terms of an
input-output mapping, while `agents' are described in terms of the function
they optimise. Bayes' rule can then be applied to calculate the subjective
probability of a system being a device or an agent, based only on its
behaviour. We illustrate this using the trajectories of an object in a toy
grid-world domain.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 09:12:14 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Orseau",
"Laurent",
""
],
[
"McGill",
"Simon McGregor",
""
],
[
"Legg",
"Shane",
""
]
] |
new_dataset
| 0.998017 |
1805.12393
|
Yuyu Zhang
|
Yuyu Zhang, Hanjun Dai, Kamil Toraman, Le Song
|
KG^2: Learning to Reason Science Exam Questions with Contextual
Knowledge Graph Embeddings
| null | null | null | null |
cs.LG cs.AI cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question
answering (QA), has recently been released. ARC only contains natural science
questions authored for human exams, which are hard to answer and require
advanced logical reasoning. On the ARC Challenge Set, existing state-of-the-art
QA systems fail to significantly outperform a random baseline, reflecting the
difficult nature of this task. In this paper, we propose a novel framework for
answering science exam questions, which mimics human solving process in an
open-book exam. To address the reasoning challenge, we construct contextual
knowledge graphs respectively for the question itself and supporting sentences.
Our model learns to reason with neural embeddings of both knowledge graphs.
Experiments on the ARC Challenge Set show that our model outperforms the
previous state-of-the-art QA systems.
|
[
{
"version": "v1",
"created": "Thu, 31 May 2018 09:39:14 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Zhang",
"Yuyu",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Toraman",
"Kamil",
""
],
[
"Song",
"Le",
""
]
] |
new_dataset
| 0.998741 |
1805.12473
|
Abd\"ulkadir \c{C}ak{\i}r
|
Abd\"ulkadir \c{C}akir and \"Umm\"u\c{s}an \c{C}itak
|
Simulation Of Logic Circuit Tests On Android-Based Mobile Devices
|
13 pages, 20 figures
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, an application that can run on Android and Windows-based
mobile devices was developed to allow students attending such classes as
Numerical/Digital Electronics, Logic Circuits, Basic Electronics Measurement
and Electronic Systems in Turkey's Vocation and Technical Education Schools to
easily carry out the simulation of logic gates, as well as logic circuit tests
performed using logic gates. A 2D-mobile application that runs on both
platforms was developed using the C# language on the Unity3D editor. To assess
the usability of the mobile application, a one-hour training session was
administered in March of the 2017-2018 academic year to two groups of students
from a single class in the sixth grade of an Imam Hatip Secondary School
affiliated to the Ministry of National Education. Each of the two groups
contained 12 students who were assumed to be equivalent, and who had no prior
knowledge of the subject. The training of the first group began with a lecture
on basic logic gates using a blackboard, and involved no simulations, while the
second group, in addition to being given the same lecture, received
additional training involving demonstrations of the developed mobile
application and its simulations. Following the lectures, a written exam was
administered to both groups. An evaluation of the exam results revealed that 83
percent of the students who had been given demonstrations of the mobile
application were able to perform the circuit task completely, whereas only 50
percent of the other group were able to complete the task. It was concluded that the
application was both useful and facilitating for the students, and it was
also noted that students who were supported by the mobile application had
gained a better grasp of the topic by being able to see and practice the
simulations first hand.
|
[
{
"version": "v1",
"created": "Thu, 24 May 2018 21:14:03 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Çakir",
"Abdülkadir",
""
],
[
"Çitak",
"Ümmüşan",
""
]
] |
new_dataset
| 0.992997 |
1805.12480
|
Li Yang
|
Hua Dong, Li Yang
|
A voting scheme with post-quantum security based on physical laws
|
23 pages, 1 figure, 5 tables
| null | null | null |
cs.CR quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional cryptography is under huge threat from the evolution of
quantum information and computing. In this paper, we propose a new post-quantum
voting scheme based on physical laws that uses an encrypted no-key protocol to
transmit messages in the channel, which ensures post-quantum security.
Unlike lattice-based and multivariate-based electronic voting schemes, whose
security is based on the assumption of computational problems that have not been
solved by effective quantum algorithms until now, the security of the voting
scheme based on physical laws depends on inherent limitations of
quantum computers and is not influenced by the evolution of new quantum
algorithms. We also rigorously demonstrate that the scheme achieves
post-quantum security and all properties necessary for a voting scheme, such
as completeness, robustness, privacy, eligibility, unreusability, fairness,
and verifiability.
|
[
{
"version": "v1",
"created": "Wed, 18 Apr 2018 12:04:14 GMT"
}
] | 2018-06-01T00:00:00 |
[
[
"Dong",
"Hua",
""
],
[
"Yang",
"Li",
""
]
] |
new_dataset
| 0.979699 |
1711.10911
|
Sascha Timme
|
Paul Breiding and Sascha Timme
|
HomotopyContinuation.jl: A package for homotopy continuation in Julia
|
8 pages
| null | null | null |
cs.MS math.AG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Julia package HomotopyContinuation.jl, which provides an
algorithmic framework for solving polynomial systems by numerical homotopy
continuation. We introduce the basic capabilities of the package and
demonstrate the software on an illustrative example. We motivate our choice of
Julia and how its features allow us to improve upon existing software packages
with respect to usability, modularity and performance. Furthermore, we compare
the performance of HomotopyContinuation.jl to the existing packages Bertini and
PHCpack.
|
[
{
"version": "v1",
"created": "Tue, 28 Nov 2017 15:53:22 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2018 09:14:53 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Breiding",
"Paul",
""
],
[
"Timme",
"Sascha",
""
]
] |
new_dataset
| 0.999635 |
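To make the idea of numerical homotopy continuation concrete (independently of the Julia package itself), here is a minimal Python sketch that tracks the roots of a univariate polynomial from the easy start system x^d - 1 = 0 to a target polynomial along the straight-line homotopy H(x, t) = (1 - t)(x^d - 1) + t f(x), using Euler predictor steps with Newton correction. The target polynomial, fixed step size, and the omission of the usual random "gamma trick", adaptive stepping, and endgames are illustrative simplifications, not a description of the package's tracker.

```python
# Minimal sketch of straight-line homotopy continuation for a univariate
# polynomial: H(x, t) = (1 - t) * g(x) + t * f(x), with start system g(x) = x^d - 1.
import numpy as np

f = np.array([1.0, -3.0, 2.0, 5.0])   # target f(x) = x^3 - 3x^2 + 2x + 5 (assumed example)
d = len(f) - 1
g = np.zeros(d + 1)
g[0], g[-1] = 1.0, -1.0               # start system g(x) = x^d - 1

def H(x, t):
    return (1 - t) * np.polyval(g, x) + t * np.polyval(f, x)

def Hx(x, t):
    return (1 - t) * np.polyval(np.polyder(g), x) + t * np.polyval(np.polyder(f), x)

def track(x, steps=200, corrections=5):
    """Follow one solution path from t = 0 to t = 1 (Euler predictor + Newton corrector)."""
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        Ht = np.polyval(f, x) - np.polyval(g, x)    # dH/dt of the straight-line homotopy
        x = x - (t1 - t0) * Ht / Hx(x, t0)          # predictor: dx/dt = -H_t / H_x
        for _ in range(corrections):                # corrector: Newton's method on H(., t1)
            x = x - H(x, t1) / Hx(x, t1)
    return x

start_points = np.exp(2j * np.pi * np.arange(d) / d)   # roots of g: the d-th roots of unity
tracked = np.array([track(x0) for x0 in start_points])
print(np.sort_complex(tracked))
print(np.sort_complex(np.roots(f)))                     # cross-check with numpy's root finder
```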
1712.08189
|
Xavier Ouvrard
|
Xavier Ouvrard, Jean-Marie Le Goff, St\'ephane Marchand-Maillet
|
Adjacency and Tensor Representation in General Hypergraphs Part 1:
e-adjacency Tensor Uniformisation Using Homogeneous Polynomials
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adjacency between two vertices in graphs or hypergraphs is a pairwise
relationship. It is redefined in this article as 2-adjacency. In general
hypergraphs, hyperedges hold for $n$-adic relationship. To keep the $n$-adic
relationship the concepts of $k$-adjacency and e-adjacency are defined. In
graphs 2-adjacency and e-adjacency concepts match, just as $k$-adjacency and
e-adjacency do for $k$-uniform hypergraphs. For general hypergraphs these
concepts are different. This paper also contributes a uniformization process
of a general hypergraph to allow the definition of an e-adjacency tensor,
viewed as a hypermatrix, reflecting the general hypergraph structure. This
symmetric e-adjacency hypermatrix allows to capture not only the degree of the
vertices and the cardinality of the hyperedges but also makes a full separation
of the different layers of a hypergraph.
|
[
{
"version": "v1",
"created": "Thu, 21 Dec 2017 19:49:50 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jan 2018 15:58:23 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Apr 2018 07:19:36 GMT"
},
{
"version": "v4",
"created": "Fri, 18 May 2018 09:28:12 GMT"
},
{
"version": "v5",
"created": "Wed, 30 May 2018 13:50:00 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Ouvrard",
"Xavier",
""
],
[
"Goff",
"Jean-Marie Le",
""
],
[
"Marchand-Maillet",
"Stéphane",
""
]
] |
new_dataset
| 0.998574 |
1801.01331
|
Dat Quoc Nguyen
|
Thanh Vu, Dat Quoc Nguyen, Dai Quoc Nguyen, Mark Dras, Mark Johnson
|
VnCoreNLP: A Vietnamese Natural Language Processing Toolkit
|
Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Demonstrations, NAACL 2018, to
appear
| null |
10.18653/v1/N18-5012
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an easy-to-use and fast toolkit, namely VnCoreNLP---a Java NLP
annotation pipeline for Vietnamese. Our VnCoreNLP supports key natural language
processing (NLP) tasks including word segmentation, part-of-speech (POS)
tagging, named entity recognition (NER) and dependency parsing, and obtains
state-of-the-art (SOTA) results for these tasks. We release VnCoreNLP to
provide rich linguistic annotations to facilitate research work on Vietnamese
NLP. Our VnCoreNLP is open-source and available at:
https://github.com/vncorenlp/VnCoreNLP
|
[
{
"version": "v1",
"created": "Thu, 4 Jan 2018 12:52:43 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Apr 2018 13:13:39 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Vu",
"Thanh",
""
],
[
"Nguyen",
"Dat Quoc",
""
],
[
"Nguyen",
"Dai Quoc",
""
],
[
"Dras",
"Mark",
""
],
[
"Johnson",
"Mark",
""
]
] |
new_dataset
| 0.996169 |
1802.08379
|
Chao-Chun Hsu
|
Sheng-Yeh Chen and Chao-Chun Hsu, Chuan-Chun Kuo, Ting-Hao (Kenneth)
Huang, Lun-Wei Ku
|
EmotionLines: An Emotion Corpus of Multi-Party Conversations
|
LREC2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feeling emotion is a critical characteristic to distinguish people from
machines. Among all the multi-modal resources for emotion detection, textual
datasets are those containing the least additional information in addition to
semantics, and hence are adopted widely for testing the developed systems.
However, most of the textual emotional datasets consist of emotion labels of
only individual words, sentences or documents, which makes it challenging to
discuss the contextual flow of emotions. In this paper, we introduce
EmotionLines, the first dataset with emotions labeling on all utterances in
each dialogue only based on their textual content. Dialogues in EmotionLines
are collected from Friends TV scripts and private Facebook messenger dialogues.
Then one of seven emotions, six of Ekman's basic emotions plus the neutral
emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245
utterances from 2,000 dialogues are labeled in EmotionLines. We also provide
several strong baselines for emotion detection models on EmotionLines in this
paper.
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 04:06:38 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2018 09:15:57 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Chen",
"Sheng-Yeh",
"",
"Kenneth"
],
[
"Hsu",
"Chao-Chun",
"",
"Kenneth"
],
[
"Kuo",
"Chuan-Chun",
"",
"Kenneth"
],
[
"Ting-Hao",
"",
"",
"Kenneth"
],
[
"Huang",
"",
""
],
[
"Ku",
"Lun-Wei",
""
]
] |
new_dataset
| 0.999524 |
1805.07662
|
Mohammad S Khan
|
Anirudh Paranjothi, Mohammad S. Khan, and Mohammed Atiquzzaman
|
DFCV: A Novel Approach for Message Dissemination in Connected Vehicles
using Dynamic Fog
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicular Ad-hoc Network (VANET) has emerged as a promising solution for
enhancing road safety. Routing of messages in VANET is challenging due to
packet delays arising from high mobility of vehicles, frequently changing
topology, and high density of vehicles, leading to frequent route breakages and
packet losses. Previous researchers have used either mobility in vehicular fog
computing or cloud computing to solve the routing issue, but they suffer from
large packet delays and frequent packet losses. We propose Dynamic Fog for
Connected Vehicles (DFCV), a fog computing based scheme which dynamically
creates, increments and destroys fog nodes depending on the communication
needs. The novelty of DFCV lies in providing lower delays and guaranteed
message delivery at high vehicular densities. Simulations were conducted using
hybrid simulation consisting of ns-2, SUMO, and Cloudsim. Results show that
DFCV ensures efficient resource utilization, lower packet delays and losses at
high vehicle densities.
|
[
{
"version": "v1",
"created": "Sat, 19 May 2018 21:59:51 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2018 15:37:01 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Paranjothi",
"Anirudh",
""
],
[
"Khan",
"Mohammad S.",
""
],
[
"Atiquzzaman",
"Mohammed",
""
]
] |
new_dataset
| 0.999409 |
1805.09601
|
Wei Lu
|
Wei Lu and Wei Yang and Tinghua Ai
|
Evaluating Non-Motorized Transport Popularity of Urban Roads by Sports
GPS Tracks
|
5 pages, 4 figures, CPGIS2018-The 26th International Conference on
Geoinformatics
|
The 26th International Conference on Geoinformatics, Kunming,
2018, Paper NO.357
| null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Non-motorized transport is becoming increasingly important in urban
development of cities in China. How to evaluate the non-motorized transport
popularity of urban roads is an interesting question to study. The great amount
of tracking data generated by smart mobile devices give us opportunities to
solve this problem. This study aims to provide a data driven method for
evaluating the popularity (walkability and bikeability) of urban non-motorized
transport system. This paper defines a p-index to evaluate the popularity
of road segments which is based on the cycling, running, and walking GPS track
data from outdoor activities logging applications. According to the p-index
definition, this paper evaluates the non-motorized transport popularity of
urban area in Wuhan city within different temporal periods.
|
[
{
"version": "v1",
"created": "Thu, 24 May 2018 11:02:42 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2018 11:03:30 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Lu",
"Wei",
""
],
[
"Yang",
"Wei",
""
],
[
"Ai",
"Tinghua",
""
]
] |
new_dataset
| 0.990703 |
1805.11729
|
Justus Thies
|
Justus Thies, Michael Zollh\"ofer, Christian Theobalt, Marc
Stamminger, Matthias Nie{\ss}ner
|
HeadOn: Real-time Reenactment of Human Portrait Videos
|
Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at
Siggraph'18
| null |
10.1145/3197517.3201350
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show significant improvements in enabling much
greater flexibility in creating realistic reenacted output videos.
|
[
{
"version": "v1",
"created": "Tue, 29 May 2018 22:24:13 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Thies",
"Justus",
""
],
[
"Zollhöfer",
"Michael",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Stamminger",
"Marc",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.998481 |
1805.11768
|
Michael Green
|
Michael Cerny Green, Ahmed Khalifa, Gabriella A. B. Barros, and Julian
Togelius
|
"Press Space to Fire": Automatic Video Game Tutorial Generation
|
6 pages, 4 figures, 1 table, Published at the EXAG workshop as a part
of AIIDE 2017
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose the problem of tutorial generation for games, i.e. to generate
tutorials which can teach players to play games, as an AI problem. This problem
can be approached in several ways, including generating natural language
descriptions of game rules, generating instructive game levels, and generating
demonstrations of how to play a game using agents that play in a human-like
manner. We further argue that the General Video Game AI framework provides a
useful testbed for addressing this problem.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 01:21:33 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Green",
"Michael Cerny",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Barros",
"Gabriella A. B.",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.999191 |
1805.11814
|
Phuong Anh Nguyen Mr
|
Phuong Anh Nguyen, Yi-Jie Lu, Hao Zhang, Chong-Wah Ngo
|
The VIREO KIS at VBS 2018
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This short paper presents the video browsing tool of VIREO team which has
been used in the Video Browser Showdown 2018. All added functions in the final
version are introduced and experiences gained from the benchmark are also
shared.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 05:32:02 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Nguyen",
"Phuong Anh",
""
],
[
"Lu",
"Yi-Jie",
""
],
[
"Zhang",
"Hao",
""
],
[
"Ngo",
"Chong-Wah",
""
]
] |
new_dataset
| 0.99695 |
1805.11820
|
Christian Blum
|
Christian Blum and Haroldo Gambini Santos
|
Generic CP-Supported CMSA for Binary Integer Linear Programs
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Construct, Merge, Solve and Adapt (CMSA) is a general hybrid metaheuristic
for solving combinatorial optimization problems. At each iteration, CMSA (1)
constructs feasible solutions to the tackled problem instance in a
probabilistic way and (2) solves a reduced problem instance (if possible) to
optimality. The construction of feasible solutions is hereby problem-specific,
usually involving a fast greedy heuristic. The goal of this paper is to design
a problem-agnostic CMSA variant whose exclusive input is an integer linear
program (ILP). In order to reduce the complexity of this task, the current
study is restricted to binary ILPs. In addition to a basic problem-agnostic
CMSA variant, we also present an extended version that makes use of a
constraint propagation engine for constructing solutions. The results show that
our technique is able to match the upper bounds of the standalone application
of CPLEX in the context of rather easy-to-solve instances, while it generally
outperforms the standalone application of CPLEX in the context of hard
instances. Moreover, the results indicate that the support of the constraint
propagation engine is useful in the context of problems for which finding
feasible solutions is rather difficult.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 06:22:34 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Blum",
"Christian",
""
],
[
"Santos",
"Haroldo Gambini",
""
]
] |
new_dataset
| 0.954917 |
1805.11821
|
Loet Leydesdorff
|
Loet Leydesdorff and Ivan Cucco
|
Regions, Innovation Systems, and the North-South Divide in Italy
| null | null | null | null |
cs.DL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Using firm-level data collected by Statistics Italy for 2008, 2011, and 2015,
we examine the Triple-Helix synergy among geographical and size distributions
of firms, and the NACE codes attributed to these firms, at the different levels
of regional and national government. At which levels is innovation-systemness
indicated? The contributions of regions to the Italian innovation system have
increased, but synergy generation between regions and supra-regionally has
remained at almost 45%. As against the statistical classification of Italy into
twenty regions or into Northern, Central, and Southern Italy, the greatest
synergy is retrieved by considering the country in terms of Northern and
Southern Italy as two sub-systems, with Tuscany included as part of Northern
Italy. We suggest that separate innovation strategies should be developed for
these two parts of the country. The current focus on regions for innovation
policies may to some extent be an artifact of the statistics and EU policies.
In terms of sectors, both medium- and high-tech manufacturing (MHTM) and
knowledge-intensive services (KIS) are proportionally integrated in the various
regions.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 06:33:30 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Leydesdorff",
"Loet",
""
],
[
"Cucco",
"Ivan",
""
]
] |
new_dataset
| 0.995834 |
1805.11824
|
Soujanya Poria
|
Rhea Sukthanker, Soujanya Poria, Erik Cambria, Ramkumar
Thirunavukarasu
|
Anaphora and Coreference Resolution: A Review
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Entity resolution aims at resolving repeated references to an entity in a
document and forms a core component of natural language processing (NLP)
research. This field possesses immense potential to improve the performance of
other NLP fields like machine translation, sentiment analysis, paraphrase
detection, summarization, etc. The area of entity resolution in NLP has seen
proliferation of research in two separate sub-areas namely: anaphora resolution
and coreference resolution. Through this review article, we aim at clarifying
the scope of these two tasks in entity resolution. We also carry out a detailed
analysis of the datasets, evaluation metrics and research methods that have
been adopted to tackle this NLP problem. This survey is motivated by the aim
of providing the reader with a clear understanding of what constitutes this NLP
problem and the issues that require attention.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 06:49:15 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Sukthanker",
"Rhea",
""
],
[
"Poria",
"Soujanya",
""
],
[
"Cambria",
"Erik",
""
],
[
"Thirunavukarasu",
"Ramkumar",
""
]
] |
new_dataset
| 0.999444 |
1805.11847
|
Igor Korkin
|
Igor Korkin
|
Hypervisor-Based Active Data Protection for Integrity and
Confidentiality of Dynamically Allocated Memory in Windows Kernel
|
Proceedings of the 13th annual Conference on Digital Forensics,
Security and Law (CDFSL), University of Texas at San Antonio (UTSA), San
Antonio, Texas. May 17-18 2018. 24 pages, 8 figures, 8 tables, 72 references
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the main issues in OS security is providing trusted code execution
in an untrusted environment. During execution, kernel-mode drivers dynamically
allocate memory to store and process their data: Windows core kernel
structures, users' private information, and sensitive data of third-party
drivers. All this data can be tampered with by kernel-mode malware. Attacks on
Windows-based computers can result not only in hiding a malware driver, escalating
process privileges, and stealing private data, but also in failures of industrial
CNC machines. Windows built-in security and existing approaches do not provide
integrity and confidentiality for the allocated memory of third-party
drivers. The proposed hypervisor-based system (AllMemPro) protects allocated
data from being modified or stolen. AllMemPro prevents access to even 1 byte of
allocated data, adapts to newly allocated memory in real time, and protects
the driver without its source code. AllMemPro works well on the newest Windows 10
1709 x64.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 08:05:20 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Korkin",
"Igor",
""
]
] |
new_dataset
| 0.996619 |
1805.11868
|
Sahil Swami
|
Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar,
Manish Shrivastava
|
An English-Hindi Code-Mixed Corpus: Stance Annotation and Baseline
System
|
9 pages, CICLing 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media has become one of the main channels for people to communicate
and share their views with society. We can often detect from these views
whether the person is in favor, against or neutral towards a given topic.
These opinions from social media are very useful for various companies. We
present a new dataset that consists of 3545 English-Hindi code-mixed tweets
with opinion towards Demonetisation, which was implemented in India in 2016
and was followed by a large countrywide debate. We present a baseline
supervised classification system for stance detection developed using the same
dataset that uses various machine learning techniques to achieve an accuracy of
58.7% on 10-fold cross validation.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 09:03:50 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Swami",
"Sahil",
""
],
[
"Khandelwal",
"Ankush",
""
],
[
"Singh",
"Vinay",
""
],
[
"Akhtar",
"Syed Sarfaraz",
""
],
[
"Shrivastava",
"Manish",
""
]
] |
new_dataset
| 0.999797 |
1805.11869
|
Sahil Swami
|
Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar,
Manish Shrivastava
|
A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection
|
9 pages, CICLing 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media platforms like Twitter and Facebook have become two of the
largest mediums used by people to express their views towards different
topics. The generation of such large user data has made NLP tasks like sentiment
analysis and opinion mining much more important. Using sarcasm in texts on
social media has become a popular trend lately. Sarcasm reverses the
meaning and polarity of what is implied by the text, which poses a challenge for
many NLP tasks. The task of sarcasm detection in text is gaining more and more
importance for both commercial and security services. We present the first
English-Hindi code-mixed dataset of tweets marked for the presence of sarcasm and
irony, where each token is also annotated with a language tag. We present a
baseline supervised classification system developed using the same dataset
which achieves an average F-score of 78.4 after using a random forest classifier
and performing 10-fold cross validation.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 09:08:54 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Swami",
"Sahil",
""
],
[
"Khandelwal",
"Ankush",
""
],
[
"Singh",
"Vinay",
""
],
[
"Akhtar",
"Syed Sarfaraz",
""
],
[
"Shrivastava",
"Manish",
""
]
] |
new_dataset
| 0.997978 |
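A baseline of the kind described in the abstract above (a random forest classifier evaluated with 10-fold cross-validation over tweet text) can be reproduced in outline with scikit-learn. The snippet below is a minimal, hypothetical sketch on placeholder tweets and labels; the actual English-Hindi code-mixed corpus, its features, and the reported scores are not reproduced here.

```python
# Minimal sketch of a random-forest sarcasm-detection baseline with
# 10-fold cross-validation. Tweets and labels below are placeholders,
# not the English-Hindi code-mixed corpus from the paper.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

tweets = ["yeh toh great hai ...", "wow so useful #not",
          "bahut accha movie", "nice try bro"] * 25
labels = [1, 1, 0, 1] * 25   # 1 = sarcastic/ironic, 0 = not (placeholder labels)

baseline = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams cope with code-mixing
    RandomForestClassifier(n_estimators=200, random_state=0),
)

scores = cross_val_score(baseline, tweets, labels, cv=10, scoring="f1_weighted")
print(f"10-fold weighted F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```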
1805.11873
|
Matthew Hague
|
Christopher Broadbent, Arnaud Carayol, Matthew Hague, and Olivier
Serre
|
Emptiness of Stack Automata is NEXPTIME-complete: A Correction
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A saturation algorithm for collapsible pushdown systems was published in
ICALP 2012. This work introduced a class of stack automata used to recognise
regular sets of collapsible pushdown configurations. It was shown that these
automata form an effective boolean algebra, have a linear time membership
problem, and are equivalent to an alternative automata representation appearing
in LICS 2010. It was also claimed that the emptiness problem for stack automata
is PSPACE-complete. Unfortunately, this claim is not true. We show that the
problem is in fact NEXPTIME-complete when the stacks being accepted are
collapsible pushdown stacks, rather than the annotated stacks used in ICALP
2012.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 09:15:03 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Broadbent",
"Christopher",
""
],
[
"Carayol",
"Arnaud",
""
],
[
"Hague",
"Matthew",
""
],
[
"Serre",
"Olivier",
""
]
] |
new_dataset
| 0.996309 |
1805.11912
|
Hojoon Lee
|
Hojoon Lee, Chihyun Song, Brent Byunghoon Kang
|
Lord of the x86 Rings: A Portable User Mode Privilege Separation
Architecture on x86
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern applications are increasingly advanced and complex, and inevitably
contain exploitable software bugs despite the ongoing efforts. The applications
today often involve processing of sensitive information. However, the lack of
privilege separation within the user space leaves sensitive application secrets
such as cryptographic keys just as unprotected as a "hello world" string.
Cutting-edge hardware-supported security features are being introduced.
However, the features are often vendor-specific or lack compatibility with
older generations of the processors. The situation leaves developers with no
portable solution to incorporate protection for the sensitive application
component.
We propose LOTRx86, a fundamental and portable approach for user space
privilege separation. Our approach creates a more privileged user execution
layer called PrivUser through harnessing the underused intermediate privilege
levels on the x86 architecture. The PrivUser memory space, a set of pages
within process address space that are inaccessible to user mode, is a safe
place for application secrets and routines that access them. We implement the
LOTRx86 ABI that exports privilege-based access: accessing the protected
application secret only requires a change in privilege level, eliminating the
need for costly remote procedure calls or a change in address space.
We evaluated our platform by developing a proof-of-concept LOTRx86-enabled
web server that employs our architecture to securely access its private key
during SSL connection and thereby mitigating the HeartBleed vulnerability by
design. We conducted a set of experiments including a performance measurement
on the PoC on both Intel and AMD PCs, and confirmed that LOTRx86 incurs only a
limited performance overhead.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 12:09:45 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Lee",
"Hojoon",
""
],
[
"Song",
"Chihyun",
""
],
[
"Kang",
"Brent Byunghoon",
""
]
] |
new_dataset
| 0.997824 |
1805.11934
|
YangQuan Chen Prof.
|
Yang Zhao, Sina Dehghan, Abdullah Ates, Jie Yuan, Fengyu Zhou, Yan Li,
and YangQuan Chen
|
PID2018 Benchmark Challenge: learning feedforward control
|
6 pages, 12 figures, 3rd IFAC Conference on Advances in
Proportional-Integral-Derivative Control
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The design and application of learning feedforward controllers (LFFC) for the
one-staged refrigeration cycle model described in the PID2018 Benchmark
Challenge is presented, and its effectiveness is evaluated. The control system
consists of two components: 1) a preset PID component and 2) a learning
feedforward component which is a function approximator that is adapted on the
basis of the feedback signal. A B-spline network based LFFC and a low-pass
filter based LFFC are designed to track the desired outlet temperature of
evaporator secondary flux and the superheating degree of refrigerant at
evaporator outlet. Encouraging simulation results are included. Qualitative and
quantitative comparison results evaluations show that, with little effort, a
high-performance control system can be obtained with this approach. Our initial
simple attempt of low-pass filter based LFFC and B-spline network based LFFC
give J=0.4902 and J=0.6536 relative to the decentralized PID controller,
respectively. Besides, the initial attempt of a combination controller of our
optimized PI controller and low-pass filter LFFC gives J=0.6947 relative to the
multi-variable PID controller.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 17:03:50 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Zhao",
"Yang",
""
],
[
"Dehghan",
"Sina",
""
],
[
"Ates",
"Abdullah",
""
],
[
"Yuan",
"Jie",
""
],
[
"Zhou",
"Fengyu",
""
],
[
"Li",
"Yan",
""
],
[
"Chen",
"YangQuan",
""
]
] |
new_dataset
| 0.998889 |
1805.12097
|
Shahar Somin
|
Shahar Somin, Goren Gordon and Yaniv Altshuler
|
Social Signals in the Ethereum Trading Network
| null | null | null | null |
cs.SI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain technology, which has been known by mostly small technological
circles up until recently, is bursting throughout the globe, with a potential
economic and social impact that could fundamentally alter traditional financial
and social structures. Issuing cryptocurrencies on top of the Blockchain system
by startups and private sector companies is becoming a ubiquitous phenomenon,
inducing the trading of these crypto-coins among their holders using dedicated
exchanges.
Apart from being a trading ledger for tokens, Blockchain can also be observed
as a social network. Analyzing and modeling the dynamics of the "social
signals" of this network can contribute to our understanding of this ecosystem
and the forces acting within it.
This work is the first analysis of the network properties of the ERC20
protocol compliant crypto-coins' trading data. Considering all trading wallets
as a network's nodes, and constructing its edges using buy--sell trades, we can
analyze the network properties of the ERC20 network. Examining several periods
of time, and several data aggregation variants, we demonstrate that the network
displays strong power-law properties. These results coincide with current
network theory expectations; nonetheless, they are the first scientific
validation of these expectations for the ERC20 trading data.
The data we examined is composed of over 30 million ERC20 token trades,
performed by over 6.8 million unique wallets, spanning a two-year period
between February 2016 and February 2018.
|
[
{
"version": "v1",
"created": "Wed, 30 May 2018 17:28:21 GMT"
}
] | 2018-05-31T00:00:00 |
[
[
"Somin",
"Shahar",
""
],
[
"Gordon",
"Goren",
""
],
[
"Altshuler",
"Yaniv",
""
]
] |
new_dataset
| 0.998004 |
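The kind of analysis described in the abstract above (treating wallets as nodes, trades as directed edges, and checking whether the degree distribution is power-law-like) can be outlined with networkx. The snippet below is a minimal sketch on synthetic trade records; real ERC20 transfer data and a rigorous power-law fit (for example maximum-likelihood estimation in the Clauset-Shalizi-Newman style) are outside its scope.

```python
# Minimal sketch: build a wallet-to-wallet trading graph and inspect the
# degree distribution on a log-log scale. Trades below are synthetic.
from collections import Counter
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
# Synthetic (seller, buyer) pairs standing in for token transfer records.
trades = [(f"w{rng.integers(0, 500)}", f"w{rng.integers(0, 500)}") for _ in range(5000)]

G = nx.DiGraph()
G.add_edges_from(trades)

degrees = [d for _, d in G.degree() if d > 0]
counts = Counter(degrees)
ks = np.array(sorted(counts))
pk = np.array([counts[k] for k in ks], dtype=float) / len(degrees)

# Crude log-log slope as a power-law indicator; a careful study would use
# maximum-likelihood fitting rather than a least-squares line on log-log axes.
slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)
print(f"nodes={G.number_of_nodes()}, edges={G.number_of_edges()}, log-log slope ~ {slope:.2f}")
```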