id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1804.04340
|
Ankan Bansal
|
Ankan Bansal and Karan Sikka and Gaurav Sharma and Rama Chellappa and
Ajay Divakaran
|
Zero-Shot Object Detection
|
17 pages. ECCV 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce and tackle the problem of zero-shot object detection (ZSD),
which aims to detect object classes which are not observed during training. We
work with a challenging set of object classes, not restricting ourselves to
similar and/or fine-grained categories as in prior works on zero-shot
classification. We present a principled approach by first adapting
visual-semantic embeddings for ZSD. We then discuss the problems associated
with selecting a background class and motivate two background-aware approaches
for learning robust detectors. One of these models uses a fixed background
class and the other is based on iterative latent assignments. We also outline
the challenge associated with using a limited number of training classes and
propose a solution based on dense sampling of the semantic label space using
auxiliary data with a large number of categories. We propose novel splits of
two standard detection datasets - MSCOCO and VisualGenome, and present
extensive empirical results in both the traditional and generalized zero-shot
settings to highlight the benefits of the proposed methods. We provide useful
insights into the algorithm and conclude by posing some open questions to
encourage further research.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 06:23:11 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jul 2018 06:07:37 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Bansal",
"Ankan",
""
],
[
"Sikka",
"Karan",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Chellappa",
"Rama",
""
],
[
"Divakaran",
"Ajay",
""
]
] |
new_dataset
| 0.965316 |
1805.01548
|
Rafael Pereira Pires
|
Rafael Pires, David Goltzsche, Sonia Ben Mokhtar, Sara Bouchenak,
Antoine Boutet, Pascal Felber, R\"udiger Kapitza, Marcelo Pasin and Valerio
Schiavoni
|
CYCLOSA: Decentralizing Private Web Search Through SGX-Based Browser
Extensions
| null |
38th IEEE International Conference on Distributed Computing
Systems (ICDCS 2018)
|
10.1109/ICDCS.2018.00053
| null |
cs.DC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By regularly querying Web search engines, users (unconsciously) disclose
large amounts of their personal data as part of their search queries, some of
which might reveal sensitive information (e.g. health issues, sexual,
political or religious preferences). Several solutions exist to allow users
to query search engines while improving privacy protection. However, these
solutions suffer from a number of limitations: some are subject to user
re-identification attacks, while others lack scalability or are unable to
provide accurate results. This paper presents CYCLOSA, a secure, scalable and
accurate private Web search solution. CYCLOSA improves security by relying on
trusted execution environments (TEEs) as provided by Intel SGX. Further,
CYCLOSA proposes a novel adaptive privacy protection solution that reduces the
risk of user re-identification. CYCLOSA sends fake queries to the search
engine and dynamically adapts their count according to the sensitivity of the
user query. In addition, CYCLOSA achieves scalability as it is fully
decentralized, spreading the load for distributing fake queries among other
nodes. Finally, CYCLOSA achieves accurate Web search as it handles the real
query and the fake queries separately, in contrast to other existing solutions
that mix fake and real query results.
|
[
{
"version": "v1",
"created": "Thu, 3 May 2018 21:34:07 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jul 2018 09:07:54 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Pires",
"Rafael",
""
],
[
"Goltzsche",
"David",
""
],
[
"Mokhtar",
"Sonia Ben",
""
],
[
"Bouchenak",
"Sara",
""
],
[
"Boutet",
"Antoine",
""
],
[
"Felber",
"Pascal",
""
],
[
"Kapitza",
"Rüdiger",
""
],
[
"Pasin",
"Marcelo",
""
],
[
"Schiavoni",
"Valerio",
""
]
] |
new_dataset
| 0.956993 |
1805.01563
|
Rafael Pereira Pires
|
Stefan Contiu, Rafael Pires, S\'ebastien Vaucher, Marcelo Pasin,
Pascal Felber and Laurent R\'eveill\`ere
|
IBBE-SGX: Cryptographic Group Access Control using Trusted Execution
Environments
| null |
48th IEEE/IFIP International Conference on Dependable Systems and
Networks (DSN 2018)
|
10.1109/DSN.2018.00032
| null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While many cloud storage systems allow users to protect their data by making
use of encryption, only few support collaborative editing on that data. A major
challenge for enabling such collaboration is the need to enforce cryptographic
access control policies in a secure and efficient manner. In this paper, we
introduce IBBE-SGX, a new cryptographic access control extension that is
efficient both in terms of computation and storage even when processing large
and dynamic workloads of membership operations, while at the same time offering
zero knowledge guarantees. IBBE-SGX builds upon Identity-Based Broadcasting
Encryption (IBBE). We address IBBE's impracticality for cloud deployments by
exploiting Intel Software Guard Extensions (SGX) to derive cuts in the
computational complexity. Moreover, we propose a group partitioning mechanism
such that the computational cost of a membership update is bounded by a fixed
partition size rather than by the size of the whole group. We have
implemented and evaluated our new access control extension. Results highlight
that IBBE-SGX performs membership changes 1.2 orders of magnitude faster than
the traditional approach of Hybrid Encryption (HE), producing group metadata
that are 6 orders of magnitude smaller than HE, while at the same time offering
zero knowledge guarantees.
|
[
{
"version": "v1",
"created": "Thu, 3 May 2018 22:41:30 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jul 2018 09:15:56 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Contiu",
"Stefan",
""
],
[
"Pires",
"Rafael",
""
],
[
"Vaucher",
"Sébastien",
""
],
[
"Pasin",
"Marcelo",
""
],
[
"Felber",
"Pascal",
""
],
[
"Réveillère",
"Laurent",
""
]
] |
new_dataset
| 0.995952 |
1807.04058
|
Mateusz Trokielewicz
|
Mateusz Trokielewicz and Adam Czajka and Piotr Maciejewicz
|
Presentation Attack Detection for Cadaver Iris
|
Accepted for publication at the 9th IEEE International Conference on
Biometrics: Theory, Applications, and Systems (BTAS 2018), Los Angeles, USA,
October 22-25, 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a deep-learning-based method for iris presentation attack
detection (PAD) when iris images are obtained from deceased people. Our
approach is based on the VGG-16 architecture fine-tuned with a database of 574
post-mortem, near-infrared iris images from the
Warsaw-BioBase-PostMortem-Iris-v1 database, complemented by a dataset of 256
images of live irises, collected within the scope of this study. Experiments
described in this paper show that our approach is able to correctly classify
iris images as either representing a live or a dead eye in almost 99% of the
trials, averaged over 20 subject-disjoint, train/test splits. We also show that
the post-mortem iris detection accuracy increases as time since death elapses,
and that we are able to construct a classification system with
APCER=0%@BPCER=1% (Attack Presentation and Bona Fide Presentation
Classification Error Rates, respectively) when only post-mortem samples
collected at least 16 hours post-mortem are considered. Since acquisitions of
ante- and post-mortem samples differ significantly, we applied countermeasures
to minimize bias in our classification methodology caused by image properties
that are not related to the PAD. This included using the same iris sensor in
collection of ante- and post-mortem samples, and analysis of class activation
maps to ensure that discriminant iris regions utilized by our classifier are
related to properties of the eye, and not to those of the acquisition protocol.
To our knowledge, this paper offers the first PAD method in a post-mortem
setting, together with an explanation of the decisions made by the
convolutional neural network. Along with the paper we offer the source code,
weights of the trained network, and a dataset of live iris images to
facilitate reproducibility and further research.
|
[
{
"version": "v1",
"created": "Wed, 11 Jul 2018 10:35:22 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jul 2018 07:46:59 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Trokielewicz",
"Mateusz",
""
],
[
"Czajka",
"Adam",
""
],
[
"Maciejewicz",
"Piotr",
""
]
] |
new_dataset
| 0.996872 |
1807.10425
|
Mustafa Mukadam
|
Mustafa Mukadam and Jing Dong and Frank Dellaert and Byron Boots
|
STEAP: simultaneous trajectory estimation and planning
|
Published in Autonomous Robots
| null |
10.1007/s10514-018-9770-1
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a unified probabilistic framework for simultaneous trajectory
estimation and planning (STEAP). Estimation and planning problems are usually
considered separately; however, within our framework we show that solving them
simultaneously can be more accurate and efficient. The key idea is to compute
the full continuous-time trajectory from start to goal at each time-step. While
the robot traverses the trajectory, the history portion of the trajectory
signifies the solution to the estimation problem, and the future portion of the
trajectory signifies a solution to the planning problem. Building on recent
probabilistic inference approaches to continuous-time localization and mapping
and continuous-time motion planning, we solve the joint problem by iteratively
recomputing the maximum a posteriori trajectory conditioned on all available
sensor data and cost information. Our approach can contend with
high-degree-of-freedom (DOF) trajectory spaces, uncertainty due to limited
sensing capabilities, model inaccuracy, the stochastic effect of executing
actions, and can find a solution in real-time. We evaluate our framework
empirically in both simulation and on a mobile manipulator.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 03:49:45 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Mukadam",
"Mustafa",
""
],
[
"Dong",
"Jing",
""
],
[
"Dellaert",
"Frank",
""
],
[
"Boots",
"Byron",
""
]
] |
new_dataset
| 0.999037 |
1807.10470
|
Jiangyu Wang
|
Jiangyu Wang and Huanxin Chen
|
BSAS: Beetle Swarm Antennae Search Algorithm for Optimization Problems
|
4 pages, 4 figures
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Beetle antennae search (BAS) is an efficient meta-heuristic algorithm.
However, the convergence of BAS relies heavily on the random beetle direction
in every iteration: different random seeds may lead to different optimized
results. Besides, the step-size update rule of BAS cannot guarantee that the
objective decreases over the iterations. To solve these problems, this paper
proposes the Beetle Swarm Antennae Search algorithm (BSAS), which combines a
swarm intelligence algorithm with a feedback-based step-size update strategy.
BSAS employs k beetles rather than one to search for a better position in each
move. The step size is updated only when all k beetles return without a better
choice. Experiments are carried out on building system identification. The
results reveal the efficacy of the BSAS algorithm in avoiding the influence of
the beetle's random direction. In addition, the estimation errors decrease as
the number of beetles grows.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 07:49:10 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Wang",
"Jiangyu",
""
],
[
"Chen",
"Huanxin",
""
]
] |
new_dataset
| 0.995588 |
1807.10507
|
Adam Barker
|
Nnamdi Ekwe-Ekwe and Adam Barker
|
Location, Location, Location: Exploring Amazon EC2 Spot Instance Pricing
Across Geographical Regions - Extended Version
|
Extended version of CCGrid 2018 paper entitled "Location, Location,
Location: Exploring Amazon EC2 Spot Instance Pricing Across Geographical
Regions"
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud computing is becoming an almost ubiquitous part of the computing
landscape. For many companies today, moving their entire infrastructure and
workloads to the cloud reduces complexity, time to deployment, and saves money.
Spot Instances, a subset of Amazon's cloud computing infrastructure (EC2),
expands on this. They allow a user to bid on spare compute capacity in Amazon's
data centres at heavily discounted prices. If demand were ever to increase
such that the user's maximum bid is exceeded, their instance is terminated.
In this paper, we conduct one of the first detailed analyses of how location
affects the overall cost of deployment of a spot instance. We analyse pricing
data across all available Amazon Web Services regions for 60 days for a variety
of spot instance types. We relate the data we find to the overall AWS region as
well as to the Availability Zone within that region.
We conclude that location does play a critical role in spot instance pricing
and also that pricing differs depending on the granularity of that location -
from a more coarse-grained AWS region to a more fine-grained Availability Zone
within a region. We relate the pricing differences we find to the price's
reliability, confirming whether one can be confident in the prices reported and
subsequently, in the ensuing bids one makes.
We conclude by showing that it is possible to run workloads on Spot Instances
achieving both a very low risk of termination as well as paying very low
amounts per hour.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 09:35:15 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Ekwe-Ekwe",
"Nnamdi",
""
],
[
"Barker",
"Adam",
""
]
] |
new_dataset
| 0.999366 |
1807.10535
|
Michael Schwarz
|
Michael Schwarz and Martin Schwarzl and Moritz Lipp and Daniel Gruss
|
NetSpectre: Read Arbitrary Memory over Network
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present NetSpectre, a generic remote Spectre variant 1
attack. For this purpose, we demonstrate the first access-driven remote
Evict+Reload cache attack over network, leaking 15 bits per hour. Beyond
retrofitting existing attacks to a network scenario, we also demonstrate the
first Spectre attack which does not use a cache covert channel. Instead, we
present a novel high-performance AVX-based covert channel that we use in our
cache-free Spectre attack. We show that in particular remote Spectre attacks
perform significantly better with the AVX-based covert channel, leaking 60 bits
per hour from the target system. We verified that our NetSpectre attacks work
in local-area networks as well as between virtual machines in the Google cloud.
NetSpectre marks a paradigm shift from local attacks, to remote attacks,
exposing a much wider range and larger number of devices to Spectre attacks.
Spectre attacks now must also be considered on devices which do not run any
potentially attacker-controlled code at all. We show that, especially in this
remote scenario, attacks based on weaker gadgets, which do not leak actual
data, are still powerful enough to break address-space layout randomization
remotely.
Several of the Spectre gadgets we discuss are more versatile than anticipated.
In particular, value-thresholding is a technique we devise, which leaks a
secret value without the typical bit selection mechanisms. We outline
challenges for future research on Spectre attacks and Spectre mitigations.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 11:13:18 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Schwarz",
"Michael",
""
],
[
"Schwarzl",
"Martin",
""
],
[
"Lipp",
"Moritz",
""
],
[
"Gruss",
"Daniel",
""
]
] |
new_dataset
| 0.990159 |
1807.10547
|
Haitian Zheng
|
Haitian Zheng, Mengqi Ji, Haoqian Wang, Yebin Liu, Lu Fang
|
CrossNet: An End-to-end Reference-based Super Resolution Network using
Cross-scale Warping
|
To appear in ECCV 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reference-based super-resolution (RefSR) super-resolves a low-resolution
(LR) image given an external high-resolution (HR) reference image, where the
reference image and the LR image share a similar viewpoint but with a
significant (8x) resolution gap. Existing RefSR methods work in a cascaded
way, such as patch matching followed by a synthesis pipeline, with two
independently defined objective functions, leading to inter-patch
misalignment, grid effects and inefficient optimization. To resolve these
issues, we present CrossNet, an end-to-end and fully-convolutional deep
neural network using cross-scale warping. Our network contains image
encoders, cross-scale warping layers, and a fusion decoder: the encoders
extract multi-scale features from both the LR and the reference images; the
cross-scale warping layers spatially align the reference feature map with the
LR feature map; the decoder finally aggregates feature maps from both domains
to synthesize the HR output. Using cross-scale warping, our network is able
to perform spatial alignment at pixel-level in an end-to-end fashion, which
improves on existing schemes both in precision (around 2dB-4dB) and
efficiency (more than 100 times faster).
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 12:15:40 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Zheng",
"Haitian",
""
],
[
"Ji",
"Mengqi",
""
],
[
"Wang",
"Haoqian",
""
],
[
"Liu",
"Yebin",
""
],
[
"Fang",
"Lu",
""
]
] |
new_dataset
| 0.955749 |
1807.10548
|
Olyvia Kundu
|
Olyvia Kundu, Swagat Kumar
|
A Novel Geometry-based Algorithm for Robust Grasping in Extreme Clutter
Environment
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper looks into the problem of grasping unknown objects in a cluttered
environment using 3D point cloud data obtained from a range or an RGBD sensor.
The objective is to identify graspable regions and detect suitable grasp poses
from a single-view, possibly partial, 3D point cloud without any a priori
knowledge of the object geometry. The problem is solved in two steps: (1)
identifying and segmenting various object surfaces and, (2) searching for
suitable grasping handles on these surfaces by applying geometric constraints
of the physical gripper. The first step is solved by using a modified version
of region growing algorithm that uses a pair of thresholds for smoothness
constraint on local surface normals to find natural boundaries of object
surfaces. In this process, a novel concept of edge point is introduced that
allows us to segment between different surfaces of the same object. The second
step is solved by converting a 6D pose detection problem into a 1D linear
search problem by projecting 3D cloud points onto the principal axes of the
object surface. The graspable handles are then localized by applying physical
constraints of the gripper. The resulting method allows us to grasp all kinds
of objects, including rectangular or box-type objects with flat surfaces,
which have so far been difficult to handle in the grasping literature. The
proposed method is simple, can be implemented in real-time, and does not
require any off-line training phase for finding these affordances. The
improvements achieved are demonstrated through comparison with another
state-of-the-art grasping algorithm on various publicly available and
self-created datasets.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 12:18:20 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Kundu",
"Olyvia",
""
],
[
"Kumar",
"Swagat",
""
]
] |
new_dataset
| 0.982442 |
1807.10573
|
Pan Wei
|
Pan Wei, Lucas Cagle, Tasmia Reza, John Ball and James Gafford
|
LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor
Collision Avoidance System
|
34 pages
|
MDPI journal Electronics, 7(6), 84, May, 2018
| null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off limits to an
automated vehicle for protection of people and high-valued assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in retaining
true detections while mitigating false positives.
|
[
{
"version": "v1",
"created": "Wed, 11 Jul 2018 16:55:09 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Wei",
"Pan",
""
],
[
"Cagle",
"Lucas",
""
],
[
"Reza",
"Tasmia",
""
],
[
"Ball",
"John",
""
],
[
"Gafford",
"James",
""
]
] |
new_dataset
| 0.994844 |
1807.10580
|
Zhijie Fang
|
Zhijie Fang and Antonio M. L\'opez
|
Is the Pedestrian going to Cross? Answering by 2D Pose Estimation
|
Paper presented at the IEEE Intelligent Vehicles Symposium
(IEEE IV 2018)
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our recent work suggests that, thanks to nowadays powerful CNNs, image-based
2D pose estimation is a promising cue for determining pedestrian intentions
such as crossing the road in the path of the ego-vehicle, stopping before
entering the road, and starting to walk or bending towards the road. This
statement is based on the results obtained on non-naturalistic sequences
(Daimler dataset), i.e. in sequences choreographed specifically for performing
the study. Fortunately, a new publicly available dataset (JAAD) has appeared
recently to allow developing methods for detecting pedestrian intentions in
naturalistic driving conditions; more specifically, for addressing the
relevant question: is the pedestrian going to cross? Accordingly, in this
paper we use
JAAD to assess the usefulness of 2D pose estimation for answering such a
question. We combine CNN-based pedestrian detection, tracking and pose
estimation to predict the crossing action from monocular images. Overall, the
proposed pipeline provides new state-of-the-art results.
|
[
{
"version": "v1",
"created": "Sun, 15 Jul 2018 17:57:54 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Fang",
"Zhijie",
""
],
[
"López",
"Antonio M.",
""
]
] |
new_dataset
| 0.999693 |
1807.10609
|
Ariel Ruiz-Garcia
|
Yahaya Isah Shehu, Ariel Ruiz-Garcia, Vasile Palade, Anne James
|
Sokoto Coventry Fingerprint Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents the Sokoto Coventry Fingerprint Dataset (SOCOFing), a
biometric fingerprint database designed for academic research purposes.
SOCOFing is made up of 6,000 fingerprint images from 600 African subjects.
SOCOFing contains unique attributes such as labels for gender, hand and finger
name as well as synthetically altered versions with three different levels of
alteration for obliteration, central rotation, and z-cut. The dataset is freely
available for noncommercial research purposes at:
https://www.kaggle.com/ruizgara/socofing
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 13:14:11 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Shehu",
"Yahaya Isah",
""
],
[
"Ruiz-Garcia",
"Ariel",
""
],
[
"Palade",
"Vasile",
""
],
[
"James",
"Anne",
""
]
] |
new_dataset
| 0.99987 |
1807.10695
|
Ruolong Lian
|
Jin Hee Kim, Brett Grady, Ruolong Lian, John Brothers, Jason H.
Anderson
|
FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C
Software
| null |
J. H. Kim, B. Grady, R. Lian, J. Brothers and J. H. Anderson,
"FPGA-based CNN inference accelerator synthesized from multi-threaded C
software," 2017 30th IEEE International System-on-Chip Conference (SOCC),
Munich, 2017, pp. 268-273
|
10.1109/SOCC.2017.8226056
| null |
cs.LG cs.AR cs.PF cs.PL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A deep-learning inference accelerator is synthesized from a C-language
software program parallelized with Pthreads. The software implementation uses
the well-known producer/consumer model with parallel threads interconnected by
FIFO queues. The LegUp high-level synthesis (HLS) tool synthesizes threads into
parallel FPGA hardware, translating software parallelism into spatial
parallelism. A complete system is generated where convolution, pooling and
padding are realized in the synthesized accelerator, with remaining tasks
executing on an embedded ARM processor. The accelerator incorporates reduced
precision, and a novel approach for zero-weight-skipping in convolution. On a
mid-sized Intel Arria 10 SoC FPGA, peak performance on VGG-16 is 138 effective
GOPS.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 15:46:16 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Kim",
"Jin Hee",
""
],
[
"Grady",
"Brett",
""
],
[
"Lian",
"Ruolong",
""
],
[
"Brothers",
"John",
""
],
[
"Anderson",
"Jason H.",
""
]
] |
new_dataset
| 0.998608 |
1807.10740
|
Marcely Zanon Boito
|
Marcely Zanon Boito, Antonios Anastasopoulos, Marika Lekakou, Aline
Villavicencio, Laurent Besacier
|
A small Griko-Italian speech translation corpus
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an extension to a very low-resource parallel corpus
collected in an endangered language, Griko, making it useful for computational
research. The corpus consists of 330 utterances (about 20 minutes of speech)
which have been transcribed and translated into Italian, with annotations for
word-level speech-to-transcription and speech-to-translation alignments. The
corpus also includes morphosyntactic tags and word-level glosses. Applying an
automatic unit discovery method, pseudo-phones were also generated. We detail
how the corpus was collected, cleaned and processed, and we illustrate its use
on zero-resource tasks by presenting some baseline results for the task of
speech-to-translation alignment and unsupervised word discovery. The dataset is
available online, aiming to encourage replicability and diversity in
computational language documentation experiments.
|
[
{
"version": "v1",
"created": "Fri, 27 Jul 2018 17:29:20 GMT"
}
] | 2018-07-30T00:00:00 |
[
[
"Boito",
"Marcely Zanon",
""
],
[
"Anastasopoulos",
"Antonios",
""
],
[
"Lekakou",
"Marika",
""
],
[
"Villavicencio",
"Aline",
""
],
[
"Besacier",
"Laurent",
""
]
] |
new_dataset
| 0.991137 |
1704.07293
|
Tobias B\"ottger
|
Tobias Bottger, Patrick Follmann, Michael Fauser
|
Measuring the Accuracy of Object Detectors and Trackers
|
10 pages, 7 Figures
| null |
10.1007/978-3-319-66709-6
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The accuracy of object detectors and trackers is most commonly evaluated by
the Intersection over Union (IoU) criterion. To date, most approaches are
restricted to axis-aligned or oriented boxes and, as a consequence, many
datasets are only labeled with boxes. Nevertheless, axis-aligned or oriented
boxes cannot accurately capture an object's shape. To address this, a number
of densely segmented datasets have started to emerge in both the object
detection
and the object tracking communities. However, evaluating the accuracy of object
detectors and trackers that are restricted to boxes on densely segmented data
is not straightforward. To close this gap, we introduce the relative
Intersection over Union (rIoU) accuracy measure. The measure normalizes the IoU
with the optimal box for the segmentation to generate an accuracy measure that
ranges between 0 and 1 and allows a more precise measurement of accuracies.
Furthermore, it enables an efficient and easy way to understand scenes and the
strengths and weaknesses of an object detection or tracking approach. We
display how the new measure can be efficiently calculated and present an
easy-to-use evaluation framework. The framework is tested on the DAVIS and the
VOT2016 segmentations and has been made available to the community.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2017 15:41:35 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Bottger",
"Tobias",
""
],
[
"Follmann",
"Patrick",
""
],
[
"Fauser",
"Michael",
""
]
] |
new_dataset
| 0.969404 |
1803.00944
|
Eduardo R. B. Marques
|
Keila Lima, Eduardo R. B. Marques, Jos\'e Pinto, and Jo\~ao B. Sousa
|
Dolphin: a task orchestration language for autonomous vehicle networks
|
IEEE/RSJ IROS'18 - http://iros2018.org
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Dolphin, an extensible programming language for autonomous vehicle
networks. A Dolphin program expresses an orchestrated execution of tasks
defined compositionally for multiple vehicles. Building upon the base case of
elementary one-vehicle tasks, the built-in operators include support for
composing tasks in several forms, for instance according to concurrent,
sequential, or event-based task flow. The language is implemented as a Groovy
DSL, facilitating extension and integration with external software packages, in
particular robotic toolkits. The paper describes the Dolphin language, its
integration with an open-source toolchain for autonomous vehicles, and results
from field tests using unmanned underwater vehicles (UUVs) and unmanned aerial
vehicles (UAVs).
|
[
{
"version": "v1",
"created": "Fri, 2 Mar 2018 16:44:53 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jul 2018 12:35:13 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Lima",
"Keila",
""
],
[
"Marques",
"Eduardo R. B.",
""
],
[
"Pinto",
"José",
""
],
[
"Sousa",
"João B.",
""
]
] |
new_dataset
| 0.999083 |
1803.09331
|
Xingyi Zhou
|
Xingyi Zhou, Arjun Karpur, Linjie Luo, Qixing Huang
|
StarMap for Category-Agnostic Keypoint and Viewpoint Estimation
|
ECCV 2018. Supplementary material with more qualitative results and
higher resolution is available on the code page
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic keypoints provide concise abstractions for a variety of visual
understanding tasks. Existing methods define semantic keypoints separately for
each category with a fixed number of semantic labels in fixed indices. As a
result, this keypoint representation is infeasible when objects have a
varying number of parts, e.g. chairs with a varying number of legs. We
propose a
category-agnostic keypoint representation, which combines a multi-peak heatmap
(StarMap) for all the keypoints and their corresponding features as 3D
locations in the canonical viewpoint (CanViewFeature) defined for each
instance. Our intuition is that the 3D locations of the keypoints in canonical
object views contain rich semantic and compositional information. Using our
flexible representation, we demonstrate competitive performance in keypoint
detection and localization compared to category-specific state-of-the-art
methods. Moreover, we show that when augmented with an additional depth channel
(DepthMap) to lift the 2D keypoints to 3D, our representation can achieve
state-of-the-art results in viewpoint estimation. Finally, we show that our
category-agnostic keypoint representation can be generalized to novel
categories.
|
[
{
"version": "v1",
"created": "Sun, 25 Mar 2018 20:28:53 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jul 2018 04:31:28 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Zhou",
"Xingyi",
""
],
[
"Karpur",
"Arjun",
""
],
[
"Luo",
"Linjie",
""
],
[
"Huang",
"Qixing",
""
]
] |
new_dataset
| 0.998029 |
1804.09542
|
Garegin Grigoryan
|
Garegin Grigoryan, Keivan Bahmani, Grayson Schermerhorn, Yaoqing Liu
|
GRASP: a GReen energy Aware SDN Platform
|
INFOCOM18 WKSHPS CNERT '18
| null |
10.1109/INFCOMW.2018.8407012
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transition to renewable energy sources for data centers has become a
popular trend in the IT industry. However, the volatility of renewable energy,
such as solar and wind power, impedes the operation of green data centers. In
this work, we leverage Software Defined Networking (SDN) to build GRASP, a
platform that schedules job requests to distributed data centers according to
the amount of green energy available at each site. GRASP can be re-configured
with different scheduling algorithms to address diverse factors such as amounts
of instantly available solar power, wind power and CPU load of data centers. We
utilize realistic green energy datasets from the National Solar Radiation Database
and evaluate GRASP in the GENI testbed; in addition, we create necessary GENI
artifacts to repeat our experiment. GRASP can serve as a practical platform to
test various job scheduling mechanisms for distributed green data centers.
|
[
{
"version": "v1",
"created": "Wed, 25 Apr 2018 13:26:13 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Grigoryan",
"Garegin",
""
],
[
"Bahmani",
"Keivan",
""
],
[
"Schermerhorn",
"Grayson",
""
],
[
"Liu",
"Yaoqing",
""
]
] |
new_dataset
| 0.992958 |
1807.09828
|
Arno Solin
|
Santiago Cort\'es, Arno Solin, Esa Rahtu, Juho Kannala
|
ADVIO: An authentic dataset for visual-inertial odometry
|
To appear in European Conference on Computer Vision (ECCV)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The lack of realistic and open benchmarking datasets for pedestrian
visual-inertial odometry has made it hard to pinpoint differences in published
methods. Existing datasets either lack a full six degree-of-freedom
ground-truth or are limited to small spaces with optical tracking systems. We
take advantage of advances in pure inertial navigation, and develop a set of
versatile and challenging real-world computer vision benchmark sets for
visual-inertial odometry. For this purpose, we have built a test rig equipped
with an iPhone, a Google Pixel Android phone, and a Google Tango device. We
provide a wide range of raw sensor data that is accessible on almost any
modern-day smartphone together with a high-quality ground-truth track. We also
compare resulting visual-inertial tracks from Google Tango, ARCore, and Apple
ARKit with two recent methods published in academic forums. The data sets cover
both indoor and outdoor cases, with stairs, escalators, elevators, office
environments, a shopping mall, and a metro station.
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 19:13:58 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Cortés",
"Santiago",
""
],
[
"Solin",
"Arno",
""
],
[
"Rahtu",
"Esa",
""
],
[
"Kannala",
"Juho",
""
]
] |
new_dataset
| 0.99979 |
1807.09882
|
Chris Thomas
|
Christopher Thomas and Adriana Kovashka
|
Persuasive Faces: Generating Faces in Advertisements
| null |
In British Machine Vision Conference (BMVC), Newcastle upon Tyne,
UK, September 2018
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we examine the visual variability of objects across different
ad categories, i.e. what causes an advertisement to be visually persuasive. We
focus on modeling and generating faces which appear to come from different
types of ads. For example, if faces in beauty ads tend to be women wearing
lipstick, a generative model should portray this distinct visual appearance.
Training generative models which capture such category-specific differences is
challenging because of the highly diverse appearance of faces in ads and the
relatively limited amount of available training data. To address these
problems, we propose a conditional variational autoencoder which makes use of
predicted semantic attributes and facial expressions as a supervisory signal
when training. We show how our model can be used to produce visually distinct
faces which appear to be from a fixed ad topic category. Our human studies and
quantitative and qualitative experiments confirm that our method greatly
outperforms a variety of baselines, including two variations of a
state-of-the-art generative adversarial network, for transforming faces to be
more ad-category appropriate. Finally, we show preliminary generation results
for other types of objects, conditioned on an ad topic.
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 22:21:53 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Thomas",
"Christopher",
""
],
[
"Kovashka",
"Adriana",
""
]
] |
new_dataset
| 0.994415 |
1807.09977
|
Jie Xue
|
Jie Xue
|
Colored range closest-pair problem under general distance functions
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The range closest-pair (RCP) problem is the range-search version of the
classical closest-pair problem, which aims to store a given dataset of points
in some data structure such that when a query range $X$ is specified, the
closest pair of points contained in $X$ can be reported efficiently. A natural
generalization of the RCP problem is the colored range closest-pair (CRCP)
problem in which the given data points are colored and the goal is to find the
closest bichromatic pair contained in the query range. All the previous work
on the RCP problem was restricted to the uncolored version and the Euclidean
distance function. In this paper, we make the first progress on the CRCP
problem. We investigate the problem under a general distance function induced
by a monotone norm; in particular, this covers all the $L_p$-metrics for $p >
0$ and the $L_\infty$-metric. We design efficient $(1+\varepsilon)$-approximate
CRCP data structures for orthogonal queries in $\mathbb{R}^2$, where
$\varepsilon>0$ is a pre-specified parameter. The highlights are two data
structures for answering rectangle queries, one of which uses
$O(\varepsilon^{-1} n \log^4 n)$ space and $O(\log^4 n + \varepsilon^{-1}
\log^3 n + \varepsilon^{-2} \log n)$ query time while the other uses
$O(\varepsilon^{-1} n \log^3 n)$ space and $O(\log^5 n + \varepsilon^{-1}
\log^4 n + \varepsilon^{-2} \log^2 n)$ query time. In addition, we also apply
our techniques to the CRCP problem in higher dimensions, obtaining efficient
data structures for slab, 2-box, and 3D dominance queries. Before this paper,
almost all the existing results for the RCP problem were achieved in
$\mathbb{R}^2$.
|
[
{
"version": "v1",
"created": "Thu, 26 Jul 2018 07:01:13 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Xue",
"Jie",
""
]
] |
new_dataset
| 0.9996 |
1807.10051
|
Katia Jaffres-Runser
|
Katia Jaffr\`es-Runser and Gentian Jakllari
|
PCach: The Case for Pre-Caching your Mobile Data
|
To appear as a 4p paper in the proceedings of the 43rd IEEE
Conference on Local Computer Networks (LCN), Chicago, USA, October 1-4, 2018
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
We present PCach, a smartphone-based approach for relieving the congestion in
cellular network resulting from the exponential growth in mobile data traffic.
The basic idea underlying PCach is simple: use WiFi to proactively cache
content on the smartphone's memory, which otherwise would have been delivered
through the cellular network. However, it leads to several challenging
questions, including how much mobile data actually flows through cellular
networks, how much data can be pre-cached, and when and what to pre-cache. We
address these questions progressively using a thorough analysis of user data
collected from our purpose-built crowdsensing Android application, actively
utilized by 45 users for periods dating back to July 2014. Our analysis shows
that the median smartphone user transfers 15% of their data via the cellular
network and that 80% of it can be pre-cached via WiFi. To capitalize on these
results, we draw on a careful analysis of the measurement data to introduce an
algorithm that can run stand-alone on off-the-shelf smartphones and predict
with good accuracy when and what to pre-cache.
|
[
{
"version": "v1",
"created": "Thu, 26 Jul 2018 10:19:04 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Jaffrès-Runser",
"Katia",
""
],
[
"Jakllari",
"Gentian",
""
]
] |
new_dataset
| 0.992313 |
1807.10129
|
Andrew Fitzgibbon
|
Filip \v{S}rajer, Zuzana Kukelova, Andrew Fitzgibbon
|
A Benchmark of Selected Algorithmic Differentiation Tools on Some
Problems in Computer Vision and Machine Learning
|
Previous versions of this article appeared at AD2016---7th
International Conference on Algorithmic Differentiation, and in Optimization
Methods and Software, Taylor and Francis, Feb 2018 (online)
| null | null | null |
cs.MS cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Algorithmic differentiation (AD) allows exact computation of derivatives
given only an implementation of an objective function. Although many AD tools
are available, a proper and efficient implementation of AD methods is not
straightforward. The existing tools are often too different to allow for a
general test suite. In this paper, we compare fifteen ways of computing
derivatives including eleven automatic differentiation tools implementing
various methods and written in various languages (C++, F#, MATLAB, Julia and
Python), two symbolic differentiation tools, finite differences, and
hand-derived computation.
We look at three objective functions from computer vision and machine
learning. These objectives are for the most part simple, in the sense that no
iterative loops are involved, and conditional statements are encapsulated in
functions such as {\tt abs} or {\tt logsumexp}. However, it is important for
the success of algorithmic differentiation that such `simple' objective
functions are handled efficiently, as so many problems in computer vision and
machine learning are of this form.
Of course, our results depend on programmer skill, and familiarity with the
tools. However, we contend that this paper presents an important datapoint: a
skilled programmer devoting roughly a week to each tool produced the timings we
present. We have made our implementations available as open source to allow the
community to replicate and update these benchmarks.
|
[
{
"version": "v1",
"created": "Thu, 26 Jul 2018 13:42:30 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Šrajer",
"Filip",
""
],
[
"Kukelova",
"Zuzana",
""
],
[
"Fitzgibbon",
"Andrew",
""
]
] |
new_dataset
| 0.998675 |
1807.10154
|
Felix Ingrand F
|
Mohammed Foughali and F\'elix Ingrand and Anthony Mallet
|
GenoM3 Templates: from Middleware Independence to Formal Models
Synthesis
| null | null | null |
LAAS report N{\deg} 17022. 2017
|
cs.RO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GenoM is an approach to develop robotic software components, which can be
controlled, and assembled to build complex applications. Its latest version
GenoM3, provides a template mechanism which is versatile enough to deploy
components for different middleware without any change in the specification and
user code. But this same template mechanism also enables us to automatically
synthesize formal models (for two Validation and Verification frameworks) of
the final components. We illustrate our approach on a real deployed example of
a drone flight controller for which we prove offline real-time properties, and
an outdoor robot for which we synthesize a controller to perform runtime
verification.
|
[
{
"version": "v1",
"created": "Thu, 26 Jul 2018 14:05:59 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Foughali",
"Mohammed",
""
],
[
"Ingrand",
"Félix",
""
],
[
"Mallet",
"Anthony",
""
]
] |
new_dataset
| 0.997168 |
1807.10215
|
Jen-Tang Lu
|
Jen-Tang Lu, Stefano Pedemonte, Bernardo Bizzo, Sean Doyle, Katherine
P. Andriole, Mark H. Michalski, R. Gilberto Gonzalez, Stuart R. Pomerantz
|
DeepSPINE: Automated Lumbar Vertebral Segmentation, Disc-level
Designation, and Spinal Stenosis Grading Using Deep Learning
|
Accepted as spotlight talk at Machine Learning for Healthcare (MLHC)
2018. Supplementary Video: https://bit.ly/DeepSPINE
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The high prevalence of spinal stenosis results in a large volume of MRI
imaging, yet interpretation can be time-consuming with high inter-reader
variability even among the most specialized radiologists. In this paper, we
develop an efficient methodology to leverage the subject-matter-expertise
stored in large-scale archival reporting and image data for a deep-learning
approach to fully-automated lumbar spinal stenosis grading. Specifically, we
introduce three major contributions: (1) a natural-language-processing scheme
to extract level-by-level ground-truth labels from free-text radiology reports
for the various types and grades of spinal stenosis (2) accurate vertebral
segmentation and disc-level localization using a U-Net architecture combined
with a spine-curve fitting method, and (3) a multi-input, multi-task, and
multi-class convolutional neural network to perform central canal and foraminal
stenosis grading on both axial and sagittal imaging series inputs with the
extracted report-derived labels applied to corresponding imaging level
segments. This study uses a large dataset of 22796 disc-levels extracted from
4075 patients. We achieve state-of-the-art performance on lumbar spinal
stenosis classification and expect the technique will increase both radiology
workflow efficiency and the perceived value of radiology reports for referring
clinicians and patients.
|
[
{
"version": "v1",
"created": "Thu, 26 Jul 2018 15:59:49 GMT"
}
] | 2018-07-27T00:00:00 |
[
[
"Lu",
"Jen-Tang",
""
],
[
"Pedemonte",
"Stefano",
""
],
[
"Bizzo",
"Bernardo",
""
],
[
"Doyle",
"Sean",
""
],
[
"Andriole",
"Katherine P.",
""
],
[
"Michalski",
"Mark H.",
""
],
[
"Gonzalez",
"R. Gilberto",
""
],
[
"Pomerantz",
"Stuart R.",
""
]
] |
new_dataset
| 0.996929 |
1702.06111
|
Hien Ngo Quoc
|
Erik G. Larsson, Thomas L. Marzetta, Hien Quoc Ngo, and Hong Yang
|
Antenna Count for Massive MIMO: 1.9 GHz versus 60 GHz
|
IEEE Communications Magazine, accepted
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
If we assume line-of-sight propagation and perfect channel state information
at the base station -- consistent with slow moving terminals -- then a direct
performance comparison between single-cell Massive MIMO at PCS and mmWave
frequency bands is straightforward and highly illuminating. Line-of-sight
propagation is considered favorable for mmWave because of minimal attenuation,
and its facilitation of hybrid beamforming to reduce the required number of
active transceivers. We quantify the number of mmWave (60 GHz) service antennas
that are needed to duplicate the performance of a specified number of PCS (1.9
GHz) service antennas. As a baseline we consider a modest PCS deployment of 128
antennas serving 18 terminals. We find that, to achieve the same per-terminal
max-min 95%-likely downlink throughput, 10000 mmWave antennas are needed. To
match the total antenna area of the PCS array would require 128000
half-wavelength mmWave antennas, but a much reduced number is adequate because
the large number of antennas also confers greater channel orthogonality. The
principal alleged benefit of mmWave technology--vast amounts of inexpensive
spectrum--is at least partially offset by the complexity of possibly unwieldy
amounts of hardware.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2017 18:51:06 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jul 2018 22:36:14 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jul 2018 16:42:34 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Larsson",
"Erik G.",
""
],
[
"Marzetta",
"Thomas L.",
""
],
[
"Ngo",
"Hien Quoc",
""
],
[
"Yang",
"Hong",
""
]
] |
new_dataset
| 0.95895 |
1704.08615
|
Matthias K\"ummerer
|
Matthias K\"ummerer, Thomas S. A. Wallis, Matthias Bethge
|
Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics
|
published at ECCV 2018
| null | null | null |
cs.CV stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dozens of new models on fixation prediction are published every year and
compared on open benchmarks such as MIT300 and LSUN. However, progress in the
field can be difficult to judge because models are compared using a variety of
inconsistent metrics. Here we show that no single saliency map can perform well
under all metrics. Instead, we propose a principled approach to solve the
benchmarking problem by separating the notions of saliency models, maps and
metrics. Inspired by Bayesian decision theory, we define a saliency model to be
a probabilistic model of fixation density prediction and a saliency map to be a
metric-specific prediction derived from the model density which maximizes the
expected performance on that metric given the model density. We derive these
optimal saliency maps for the most commonly used saliency metrics (AUC, sAUC,
NSS, CC, SIM, KL-Div) and show that they can be computed analytically or
approximated with high precision. We show that this leads to consistent
rankings in all metrics and avoids the penalties of using one saliency map for
all metrics. Our method allows researchers to have their model compete on many
different metrics with state-of-the-art in those metrics: "good" models will
perform well in all metrics.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2017 15:07:42 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2018 13:31:14 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Kümmerer",
"Matthias",
""
],
[
"Wallis",
"Thomas S. A.",
""
],
[
"Bethge",
"Matthias",
""
]
] |
new_dataset
| 0.954566 |
1707.06628
|
Louay Bazzi
|
Louay Bazzi
|
On the covering radius of small codes versus dual distance
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tiet\"{a}v\"{a}inen's upper and lower bounds assert that for block-length-$n$
linear codes with dual distance $d$, the covering radius $R$ is at most
$\frac{n}{2}-(\frac{1}{2}-o(1))\sqrt{dn}$ and typically at least
$\frac{n}{2}-\Theta(\sqrt{dn\log{\frac{n}{d}}})$. The gap between those bounds
on $R -\frac{n}{2}$ is an $\Theta(\sqrt{\log{\frac{n}{d}}})$ factor related to
the gap between the worst covering radius given $d$ and the sphere-covering
bound. Our focus in this paper is on the case when $d = o(n)$, i.e., when the
code size is subexponential and the gap is $w(1)$. We show that up to a
constant, the gap can be eliminated by relaxing the covering requirement to
allow for missing $o(1)$ fraction of points. Namely, if the dual distance $d =
o(n)$, then for sufficiently large $d$, almost all points can be covered with
radius $R\leq\frac{n}{2}-\Theta(\sqrt{dn\log{\frac{n}{d}}})$. Compared to
random linear codes, our bound on $R-\frac{n}{2}$ is asymptotically tight up to
a factor less than $3$. We give applications to dual BCH codes. The proof
builds on the author's previous work on the weight distribution of cosets of
linear codes, which we simplify in this paper and extend from codes to
probability distributions on $\{0,1\}^n$, thus enabling the extension of the
above result to $(d-1)$-wise independent distributions.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2017 17:38:58 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2018 13:51:17 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Bazzi",
"Louay",
""
]
] |
new_dataset
| 0.993913 |
1708.09653
|
Antonios Symvonis
|
Anargyros Oikonomou, Antonios Symvonis
|
Simple Compact Monotone Tree Drawings
|
A preliminary version of this paper which included the one-quadrant
algorithm for monotone tree drawings was presented in the 25th International
Symposium on Graph Drawing and Network Visualization, GD 2017
| null | null | null |
cs.DS cs.CG cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A monotone drawing of a graph G is a straight-line drawing of G such that
every pair of vertices is connected by a path that is monotone with respect to
some direction.
Trees, as a special class of graphs, have been the focus of several papers
and, recently, He and He~\cite{mt:4} showed how to produce a monotone drawing
of an arbitrary $n$-vertex tree that is contained in a $12n \times 12n$ grid.
All monotone tree drawing algorithms that have appeared in the literature
consider rooted ordered trees and they draw them so that (i) the root of the
tree is drawn at the origin of the drawing, (ii) the drawing is confined in the
first quadrant, and (iii) the ordering/embedding of the tree is respected. In
this paper, we provide a simple algorithm that has the exact same
characteristics and, given an $n$-vertex rooted tree $T$, it outputs a monotone
drawing of $T$ that fits on an $n \times n$ grid.
For unrooted ordered trees, we present an algorithm that produces monotone
drawings that respect the ordering and fit in an $(n+1) \times (\frac{n}{2}
+1)$ grid, while, for unrooted non-ordered trees we produce monotone drawings
of good aspect ratio which fit on a grid of size at most $\left\lfloor
\frac{3}{4} \left(n+2\right)\right\rfloor \times \left\lfloor \frac{3}{4}
\left(n+2\right)\right\rfloor$.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2017 10:23:36 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2017 21:31:07 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Jul 2018 21:44:44 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Oikonomou",
"Anargyros",
""
],
[
"Symvonis",
"Antonios",
""
]
] |
new_dataset
| 0.999435 |
1801.02854
|
Nathan Ratliff
|
Nathan D. Ratliff and Jan Issac and Daniel Kappler and Stan Birchfield
and Dieter Fox
|
Riemannian Motion Policies
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Riemannian Motion Policy (RMP), a new mathematical object
for modular motion generation. An RMP is a second-order dynamical system
(acceleration field or motion policy) coupled with a corresponding Riemannian
metric. The motion policy maps positions and velocities to accelerations, while
the metric captures the directions in the space important to the policy. We
show that RMPs provide a straightforward and convenient method for combining
multiple motion policies and transforming such policies from one space (such as
the task space) to another (such as the configuration space) in geometrically
consistent ways. The operators we derive for these combinations and
transformations are provably optimal, have linearity properties making them
agnostic to the order of application, and are strongly analogous to the
covariant transformations of natural gradients popular in the machine learning
literature. The RMP framework enables the fusion of motion policies from
different motion generation paradigms, such as dynamical systems, dynamic
movement primitives (DMPs), optimal control, operational space control,
nonlinear reactive controllers, motion optimization, and model predictive
control (MPC), thus unifying these disparate techniques from the literature.
RMPs are easy to implement and manipulate, facilitate controller design,
simplify handling of joint limits, and clarify a number of open questions
regarding the proper fusion of motion generation methods (such as incorporating
local reactive policies into long-horizon optimizers). We demonstrate the
effectiveness of RMPs on both simulation and real robots, including their
ability to naturally and efficiently solve complicated collision avoidance
problems previously handled by more complex planners.
|
[
{
"version": "v1",
"created": "Tue, 9 Jan 2018 09:44:21 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Mar 2018 21:55:19 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jul 2018 07:54:52 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Ratliff",
"Nathan D.",
""
],
[
"Issac",
"Jan",
""
],
[
"Kappler",
"Daniel",
""
],
[
"Birchfield",
"Stan",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.99918 |
1801.06011
|
Julian Steil
|
Julian Steil, Philipp M\"uller, Yusuke Sugano, Andreas Bulling
|
Forecasting User Attention During Everyday Mobile Interactions Using
Device-Integrated and Wearable Sensors
|
13 pages, 9 figures
| null |
10.1145/3229434.3229439
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual attention is highly fragmented during mobile interactions, but the
erratic nature of attention shifts currently limits attentive user interfaces
to adapting after the fact, i.e. after shifts have already happened. We instead
study attention forecasting -- the challenging task of predicting users' gaze
behaviour (overt visual attention) in the near future. We present a novel
long-term dataset of everyday mobile phone interactions, continuously recorded
from 20 participants engaged in common activities on a university campus over
4.5 hours each (more than 90 hours in total). We propose a proof-of-concept
method that uses device-integrated sensors and body-worn cameras to encode rich
information on device usage and users' visual scene. We demonstrate that our
method can forecast bidirectional attention shifts and predict whether the
primary attentional focus is on the handheld mobile device. We study the impact
of different feature sets on performance and discuss the significant potential
but also remaining challenges of forecasting user attention during mobile
interactions.
|
[
{
"version": "v1",
"created": "Thu, 18 Jan 2018 13:47:11 GMT"
},
{
"version": "v2",
"created": "Tue, 8 May 2018 17:03:25 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jul 2018 07:24:28 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Steil",
"Julian",
""
],
[
"Müller",
"Philipp",
""
],
[
"Sugano",
"Yusuke",
""
],
[
"Bulling",
"Andreas",
""
]
] |
new_dataset
| 0.974554 |
1803.07635
|
Garrett Thomas
|
Garrett Thomas, Melissa Chien, Aviv Tamar, Juan Aparicio Ojea, Pieter
Abbeel
|
Learning Robotic Assembly from CAD
|
In the proceedings of the IEEE International Conference on Robotics
and Automation (ICRA), Brisbane, Australia, May 2018
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, motivated by recent manufacturing trends, we investigate
autonomous robotic assembly. Industrial assembly tasks require contact-rich
manipulation skills, which are challenging to acquire using classical control
and motion planning approaches. Consequently, robot controllers for assembly
domains are presently engineered to solve a particular task, and cannot easily
handle variations in the product or environment. Reinforcement learning (RL) is
a promising approach for autonomously acquiring robot skills that involve
contact-rich dynamics. However, RL relies on random exploration for learning a
control policy, which requires many robot executions, and often gets trapped in
locally suboptimal solutions. Instead, we posit that prior knowledge, when
available, can improve RL performance. We exploit the fact that in modern
assembly domains, geometric information about the task is readily available via
the CAD design files. We propose to leverage this prior knowledge by guiding RL
along a geometric motion plan, calculated using the CAD data. We show that our
approach effectively improves over traditional control approaches for tracking
the motion plan, and can solve assembly tasks that require high precision, even
without accurate state estimation. In addition, we propose a neural network
architecture that can learn to track the motion plan, and generalize the
assembly controller to changes in the object positions.
|
[
{
"version": "v1",
"created": "Tue, 20 Mar 2018 20:16:18 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jul 2018 21:22:57 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Thomas",
"Garrett",
""
],
[
"Chien",
"Melissa",
""
],
[
"Tamar",
"Aviv",
""
],
[
"Ojea",
"Juan Aparicio",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.961064 |
1803.08395
|
Philipp Jordan
|
Philipp Jordan, Omar Mubin, Mohammad Obaid, Paula Alexandra Silva
|
Exploring the Referral and Usage of Science Fiction in HCI Literature
|
v1: 20 pages, 4 figures, 3 tables, HCI International 2018 accepted
submission v2: 20 pages, 4 figures, 3 tables, added link/doi for Springer
proceeding
| null |
10.1007/978-3-319-91803-7_2
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on science fiction (sci-fi) in scientific publications has indicated
the usage of sci-fi stories, movies or shows to inspire novel Human-Computer
Interaction (HCI) research. Yet no studies have analysed sci-fi in a top-ranked
computer science conference at present. For that reason, we examine the CHI
main track for the presence and nature of sci-fi referrals in relationship to
HCI research. We search for six sci-fi terms in a dataset of 5812 CHI main
proceedings and code the context of 175 sci-fi referrals in 83 papers indexed
in the CHI main track. In our results, we categorize these papers into five
contemporary HCI research themes wherein sci-fi and HCI interconnect: 1)
Theoretical Design Research; 2) New Interactions; 3) Human-Body Modification or
Extension; 4) Human-Robot Interaction and Artificial Intelligence; and 5)
Visions of Computing and HCI. In conclusion, we discuss results and
implications located in the promising arena of sci-fi and HCI research.
|
[
{
"version": "v1",
"created": "Thu, 22 Mar 2018 15:08:09 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2018 08:58:06 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Jordan",
"Philipp",
""
],
[
"Mubin",
"Omar",
""
],
[
"Obaid",
"Mohammad",
""
],
[
"Silva",
"Paula Alexandra",
""
]
] |
new_dataset
| 0.99822 |
1804.08292
|
Patrick Follmann
|
Patrick Follmann, Tobias B\"ottger, Philipp H\"artinger, Rebecca
K\"onig, Markus Ulrich
|
MVTec D2S: Densely Segmented Supermarket Dataset
|
accepted to ECCV 2018
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce the Densely Segmented Supermarket (D2S) dataset, a novel
benchmark for instance-aware semantic segmentation in an industrial domain. It
contains 21,000 high-resolution images with pixel-wise labels of all object
instances. The objects comprise groceries and everyday products from 60
categories. The benchmark is designed such that it resembles the real-world
setting of an automatic checkout, inventory, or warehouse system. The training
images only contain objects of a single class on a homogeneous background,
while the validation and test sets are much more complex and diverse. To
further benchmark the robustness of instance segmentation methods, the scenes
are acquired with different lightings, rotations, and backgrounds. We ensure
that there are no ambiguities in the labels and that every instance is labeled
comprehensively. The annotations are pixel-precise and allow using crops of
single instances for artificial data augmentation. The dataset covers several
challenges highly relevant in the field, such as a limited amount of training
data and a high diversity in the test and validation sets. The evaluation of
state-of-the-art object detection and instance segmentation methods on D2S
reveals significant room for improvement.
|
[
{
"version": "v1",
"created": "Mon, 23 Apr 2018 09:01:26 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2018 15:50:26 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Follmann",
"Patrick",
""
],
[
"Böttger",
"Tobias",
""
],
[
"Härtinger",
"Philipp",
""
],
[
"König",
"Rebecca",
""
],
[
"Ulrich",
"Markus",
""
]
] |
new_dataset
| 0.999823 |
1807.06749
|
Giovanni De Magistris
|
Giovanni De Magistris, Asim Munawar, Tu-Hoa Pham, Tadanobu Inoue,
Phongtharin Vinayavekhin, Ryuki Tachibana
|
Experimental Force-Torque Dataset for Robot Learning of Multi-Shape
Insertion
|
video at: https://youtu.be/6rLc9fAtzAQ 36th Annual Conference of the
Robotics Society of Japan (RSJ 2018), Kasugai, Japan, 2018
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The accurate modeling of real-world systems and physical interactions is a
common challenge towards the resolution of robotics tasks. Machine learning
approaches have demonstrated significant results in the modeling of complex
systems (e.g., articulated robot structures, cable stretch, fluid dynamics), or
to learn robotics tasks (e.g., grasping, reaching) from raw sensor measurements
without explicit programming, using reinforcement learning. However, a common
bottleneck in machine learning techniques resides in the availability of
suitable data. While many vision-based datasets have been released in recent
years, those involving physical interactions, of particular interest to the
robotics community, have been scarcer. In this paper, we present a public
dataset on peg-in-hole insertion tasks containing force-torque and pose
information for multiple variations of convex-shaped pegs. We demonstrate how
this dataset can be used to train a robot to insert polyhedral pegs into holes
using only 6-axis force/torque sensor measurements as inputs, as well as other
tasks involving contact such as shape recognition.
|
[
{
"version": "v1",
"created": "Wed, 18 Jul 2018 02:45:01 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2018 04:30:32 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"De Magistris",
"Giovanni",
""
],
[
"Munawar",
"Asim",
""
],
[
"Pham",
"Tu-Hoa",
""
],
[
"Inoue",
"Tadanobu",
""
],
[
"Vinayavekhin",
"Phongtharin",
""
],
[
"Tachibana",
"Ryuki",
""
]
] |
new_dataset
| 0.999508 |
1807.07247
|
Dwarikanath Mahapatra
|
Zongyuan Ge, Dwarikanath Mahapatra, Suman Sedai, Rahil Garnavi, Rajib
Chakravorty
|
Chest X-rays Classification: A Multi-Label and Fine-Grained Problem
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widely used ChestX-ray14 dataset addresses an important medical image
classification problem and has the following caveats: 1) many lung pathologies
are visually similar, 2) a variety of diseases, including lung cancer,
tuberculosis, and pneumonia, may be present in a single scan, i.e. multiple
labels, and 3) the incidence of healthy images is much larger than that of
diseased samples, creating imbalanced data. These properties are common in the
medical domain. Existing literature uses state-of-the-art DenseNet/ResNet
models with transfer learning, where the output neurons of the networks are
trained for individual diseases to cater for multiple disease labels in each
image. However, most of them do not consider the relationships between multiple
classes. In
this work we have proposed a novel error function, Multi-label Softmax Loss
(MSML), to specifically address the properties of multiple labels and
imbalanced data. Moreover, we have designed a deep network architecture based on
the fine-grained classification concept that incorporates MSML. We have evaluated
our proposed method on various network backbones and showed consistent
performance improvements of AUC-ROC scores on the ChestX-ray14 dataset. The
proposed error function provides a new method to gain improved performance
across wider medical datasets.
|
[
{
"version": "v1",
"created": "Thu, 19 Jul 2018 06:02:54 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Jul 2018 12:47:43 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Jul 2018 22:15:49 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Ge",
"Zongyuan",
""
],
[
"Mahapatra",
"Dwarikanath",
""
],
[
"Sedai",
"Suman",
""
],
[
"Garnavi",
"Rahil",
""
],
[
"Chakravorty",
"Rajib",
""
]
] |
new_dataset
| 0.980696 |
1807.09332
|
Xianfu Chen
|
Xianfu Chen and Pei Liu and Hang Liu and Celimuge Wu and Yusheng Ji
|
Multipath Transmission Scheduling in Millimeter Wave Cloud Radio Access
Networks
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Millimeter wave (mmWave) communications provide great potential for
next-generation cellular networks to meet the demands of fast-growing mobile
data traffic with plentiful spectrum available. However, in a mmWave cellular
system, the shadowing and blockage effects lead to the intermittent
connectivity, and the handovers are more frequent. This paper investigates an
``all-mmWave'' cloud radio access network (cloud-RAN), in which both the
fronthaul and the radio access links operate at mmWave. To address the
intermittent transmissions, we allow the mobile users (MUs) to establish
multiple connections to the central unit over the remote radio heads (RRHs).
Specifically, we propose a multipath transmission framework by leveraging the
``all-mmWave'' cloud-RAN architecture, which makes decisions of the RRH
association and the packet transmission scheduling according to the
time-varying network statistics, such that a MU experiences the minimum
queueing delay and packet drops. The joint RRH association and transmission
scheduling problem is formulated as a Markov decision process (MDP). Due to the
problem size, a low-complexity online learning scheme is put forward, which
requires no a priori statistical information about the network dynamics.
Simulations show that our proposed scheme outperforms the state-of-the-art
baselines, in terms of average queue length and average packet dropping rate.
|
[
{
"version": "v1",
"created": "Tue, 17 Jul 2018 06:53:49 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Chen",
"Xianfu",
""
],
[
"Liu",
"Pei",
""
],
[
"Liu",
"Hang",
""
],
[
"Wu",
"Celimuge",
""
],
[
"Ji",
"Yusheng",
""
]
] |
new_dataset
| 0.984812 |
1807.09343
|
Jin-Hee Cho Dr.
|
Dilli P. Sharma, Dong Seong Kim, Seunghyun Yoon, Hyuk Lim, Jin-Hee
Cho, Terrence J. Moore
|
FRVM: Flexible Random Virtual IP Multiplexing in Software-Defined
Networks
| null |
IEEE TrustCom 2018
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network address shuffling is one of the moving target defense (MTD) techniques
that can invalidate the address information attackers have collected based on
the current network IP configuration. We propose a software-defined
networking-based MTD technique called Flexible Random Virtual IP Multiplexing,
namely FRVM, which aims to defend against network reconnaissance and scanning
attacks. FRVM enables a host machine to have multiple, random, time-varying
virtual IP addresses, which are multiplexed to a real IP address of the host.
A multiplexing or de-multiplexing event dynamically remaps all the virtual
network addresses of the hosts. Therefore, at the end of a multiplexing event,
FRVM aims to make the attackers lose any knowledge gained through the
reconnaissance and to disturb their scanning strategy. In this work, we analyze
and evaluate our proposed FRVM in terms of the attack success probability under
scanning attacks and target host discovery attacks.
|
[
{
"version": "v1",
"created": "Wed, 18 Jul 2018 20:14:24 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Sharma",
"Dilli P.",
""
],
[
"Kim",
"Dong Seong",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Lim",
"Hyuk",
""
],
[
"Cho",
"Jin-Hee",
""
],
[
"Moore",
"Terrence J.",
""
]
] |
new_dataset
| 0.997234 |
1807.09368
|
Jans Glagolevs
|
Karlis Freivalds and Jans Glagolevs
|
Graph Compact Orthogonal Layout Algorithm
| null |
Freivalds K., Glagolevs J. (2014) Graph Compact Orthogonal Layout
Algorithm. In: Fouilhoux P., Gouveia L., Mahjoub A., Paschos V. (eds)
Combinatorial Optimization. ISCO 2014. Lecture Notes in Computer Science, vol
8596. Springer, Cham
|
10.1007/978-3-319-09174-7_22
| null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There exist many orthogonal graph drawing algorithms that minimize edge
crossings or edge bends, however they produce unsatisfactory drawings in many
practical cases. In this paper we present a grid-based algorithm for drawing
orthogonal graphs with nodes of prescribed size. It distinguishes itself by
creating pleasant and compact drawings in relatively short running time. The
main idea is to minimize the total edge length, which implicitly minimizes
crossings and makes the drawing easy to comprehend. The algorithm is based on
combining local and global improvements. Local improvements consist of moving
each node to a new place and swapping nodes. The global improvement is based on
a constrained quadratic programming approach that minimizes the total edge
length while keeping the relative positions of the nodes.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 21:42:29 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Freivalds",
"Karlis",
""
],
[
"Glagolevs",
"Jans",
""
]
] |
new_dataset
| 0.998837 |
1807.09377
|
Kristopher Micinski
|
Kristopher Micinski and Zhanpeng Wang and Thomas Gilray
|
Racets: Faceted Execution in Racket
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Faceted Execution is a linguistic paradigm for dynamic information-flow
control. Under faceted execution, secure program data is represented by faceted
values: decision trees that encode how the data should appear to its owner
(represented by a label) versus everyone else. When labels are allowed to be
first-class (i.e., predicates that decide at runtime which data to reveal),
faceted execution enables policy-agnostic programming: a programming style that
allows privacy policies for data to be enforced independently of code that
computes on that data.
To date, implementations of faceted execution are relatively heavyweight,
requiring changes either to the language runtime or to the application code (e.g.,
by using monads). Following Racket's languages-as-libraries approach, we
present Racets: an implementation of faceted execution as a library of macros.
Given Racket's highly-expressive macro system, our implementation follows
relatively directly from the semantics of faceted execution. To demonstrate how
Racets can be used for policy-agnostic programming, we use it to build a
web-based game of Battleship. Our implementation sheds light on several
interesting issues in interacting with code written without faceted execution.
Our Racets implementation is open source, under development, and available
online.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 22:27:14 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Micinski",
"Kristopher",
""
],
[
"Wang",
"Zhanpeng",
""
],
[
"Gilray",
"Thomas",
""
]
] |
new_dataset
| 0.987281 |
1807.09392
|
Hemant Malik
|
Ovidiu Daescu and Hemant Malik
|
Does a robot path have clearance c?
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Most path planning problems among polygonal obstacles ask to find a path that
avoids the obstacles and is optimal with respect to some measure or a
combination of measures, for example an $u$-to-$v$ shortest path of clearance
at least $c$, where $u$ and $v$ are points in the free space and $c$ is a
positive constant. In practical applications, such as emergency
interventions/evacuations and medical treatment planning, a number of
$u$-to-$v$ paths are suggested by experts and the question is whether such
paths satisfy specific requirements, such as a given clearance from the
obstacles. We address the following path query problem: Given a set $S$ of $m$
disjoint simple polygons in the plane, with a total of $n$ vertices, preprocess
them so that for a query consisting of a positive constant $c$ and a simple
polygonal path $\pi$ with $k$ vertices, from a point $u$ to a point $v$ in free
space, where $k$ is much smaller than $n$, one can quickly decide whether $\pi$
has clearance at least $c$ (that is, there is no polygonal obstacle within
distance $c$ of $\pi$). To do so, we show how to solve the following related
problem: Given a set $S$ of $m$ simple polygons in $\Re^{2}$, preprocess $S$
into a data structure so that the polygon in $S$ closest to a query line
segment $s$ can be reported quickly. We present an $O(t \log n)$ time, $O(t)$
space preprocessing, $O((n / \sqrt{t}) \log ^{7/2} n)$ query time solution for
this problem, for any $n ^{1 + \epsilon} \leq t \leq n^{2}$. For a path with
$k$ segments, this results in $O((n k / \sqrt{t}) \log ^{7/2} n)$ query time,
which is a significant improvement over algorithms that can be derived from
existing computational geometry methods when $k$ is small.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 23:41:58 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Daescu",
"Ovidiu",
""
],
[
"Malik",
"Hemant",
""
]
] |
new_dataset
| 0.971551 |
1807.09472
|
Sergi Abadal
|
X. Timoneda (1), S. Abadal (1), A. Cabellos-Aparicio (1), D. Manessis
(2), J. Zhou (3), A. Franques (3), J. Torrellas (3), E. Alarc\'on (1) ((1)
Universitat Polit\`ecnica de Catalunya, (2) Fraunhofer IZM, (3) University of
Illinois at Urbana-Champaign)
|
Millimeter-Wave Propagation within a Computer Chip Package
|
Presented at the 2018 International Symposium on Circuits & Systems
(ISCAS)
| null |
10.1109/ISCAS.2018.8351875
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless Network-on-Chip (WNoC) appears as a promising alternative to
conventional interconnect fabrics for chip-scale communications. The WNoC
paradigm has been extensively analyzed from the physical, network and
architecture perspectives assuming mmWave band operation. However, there has
not been a comprehensive study at this band for realistic chip packages and,
thus, the characteristics of such wireless channel remain not fully understood.
This work addresses this issue by accurately modeling a flip-chip package and
investigating the wave propagation inside it. Through parametric studies, a
locally optimal configuration for 60 GHz WNoC is obtained, showing that
chip-wide attenuation below 32.6 dB could be achieved with standard processes.
Finally, the applicability of the methodology is discussed for higher bands and
other integrated environments such as a Software-Defined Metamaterial (SDM).
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 08:14:27 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Timoneda",
"X.",
""
],
[
"Abadal",
"S.",
""
],
[
"Cabellos-Aparicio",
"A.",
""
],
[
"Manessis",
"D.",
""
],
[
"Zhou",
"J.",
""
],
[
"Franques",
"A.",
""
],
[
"Torrellas",
"J.",
""
],
[
"Alarcón",
"E.",
""
]
] |
new_dataset
| 0.996355 |
1807.09510
|
Luca Carcano
|
Luca Carcano, Emanuele Plebani, Danilo Pietro Pau, Marco Piastra
|
Pre-trainable Reservoir Computing with Recursive Neural Gas
|
8 pages, 6 figures
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Echo State Networks (ESN) are a class of Recurrent Neural Networks (RNN) that
has gained substantial popularity due to their effectiveness, ease of use and
potential for compact hardware implementation. An ESN contains three network
layers: input, reservoir, and readout, where the reservoir is the truly
recurrent network. The input and reservoir layers of an ESN are initialized at
random and never trained afterwards; the training of the ESN is applied to
the readout layer only. Recursive Neural Gas (RNG) is one of the many proposals
of fully-trainable reservoirs that can be found in the literature. Although
some improvements in performance have been reported with RNG, to the best of
the authors' knowledge, no comparative experimental results are known on
benchmarks for which ESN is known to yield excellent results. This
work describes an accurate model of RNG together with some extensions to the
models presented in the literature and shows comparative results on three
well-known and accepted datasets. The experimental results obtained show that,
under specific circumstances, RNG-based reservoirs can achieve better
performance.
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 10:05:46 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Carcano",
"Luca",
""
],
[
"Plebani",
"Emanuele",
""
],
[
"Pau",
"Danilo Pietro",
""
],
[
"Piastra",
"Marco",
""
]
] |
new_dataset
| 0.971527 |
1807.09607
|
Feng Gu
|
Feng Gu, Nikolay Burlutskiy, Mats Andersson and Lena Kajland Wilen
|
Multi-Resolution Networks for Semantic Segmentation in Whole Slide
Images
|
Accepted by MICCAI COMPAY 2018 Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Digital pathology provides an excellent opportunity for applying fully
convolutional networks (FCNs) to tasks, such as semantic segmentation of whole
slide images (WSIs). However, standard FCNs face challenges with respect to
multi-resolution, inherited from the pyramid arrangement of WSIs. As a result,
networks specifically designed to learn and aggregate information at different
levels are desired. In this paper, we propose two novel multi-resolution
networks based on the popular `U-Net' architecture, which are evaluated on a
benchmark dataset for binary semantic segmentation in WSIs. The proposed
methods outperform the U-Net, demonstrating superior learning and
generalization capabilities.
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 13:54:11 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Gu",
"Feng",
""
],
[
"Burlutskiy",
"Nikolay",
""
],
[
"Andersson",
"Mats",
""
],
[
"Wilen",
"Lena Kajland",
""
]
] |
new_dataset
| 0.952378 |
1807.09627
|
Carolina Raposo
|
Carolina Raposo, Cristovao Sousa, Luis Ribeiro, Rui Melo, Joao P.
Barreto, Joao Oliveira, Pedro Marques and Fernando Fonseca
|
Video-based computer aided arthroscopy for patient specific
reconstruction of the Anterior Cruciate Ligament
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Anterior Cruciate Ligament (ACL) tear is a common medical condition that
is treated using arthroscopy by pulling a tissue graft through a tunnel opened
with a drill. The correct anatomical position and orientation of this tunnel is
crucial for knee stability, and drilling an adequate bone tunnel is the most
technically challenging part of the procedure. This paper presents, for the
first time, a guidance system based solely on intra-operative video for guiding
the drilling of the tunnel. Our solution uses small, easily recognizable visual
markers that are attached to the bone and tools for estimating their relative
pose. A recent registration algorithm is employed for aligning a pre-operative
image of the patient's anatomy with a set of contours reconstructed by touching
the bone surface with an instrumented tool. Experimental validation using
ex-vivo data shows that the method enables the accurate registration of the
pre-operative model with the bone, providing useful information for guiding the
surgeon during the medical procedure.
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 14:22:38 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Raposo",
"Carolina",
""
],
[
"Sousa",
"Cristovao",
""
],
[
"Ribeiro",
"Luis",
""
],
[
"Melo",
"Rui",
""
],
[
"Barreto",
"Joao P.",
""
],
[
"Oliveira",
"Joao",
""
],
[
"Marques",
"Pedro",
""
],
[
"Fonseca",
"Fernando",
""
]
] |
new_dataset
| 0.998449 |
1807.09679
|
Mat\'u\v{s} Sul\'ir
|
Mat\'u\v{s} Sul\'ir and Jaroslav Porub\"an
|
RuntimeSearch: Ctrl+F for a Running Program
| null |
Proceedings of the 32nd IEEE/ACM International Conference on
Automated Software Engineering (ASE), IEEE, 2017, pp. 388-393
|
10.1109/ASE.2017.8115651
| null |
cs.SE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developers often try to find occurrences of a certain term in a software
system. Traditionally, a text search is limited to static source code files. In
this paper, we introduce a simple approach, RuntimeSearch, where the given term
is searched in the values of all string expressions in a running program. When
a match is found, the program is paused and its runtime properties can be
explored with a traditional debugger. The feasibility and usefulness of
RuntimeSearch are demonstrated on a medium-sized Java project.
|
[
{
"version": "v1",
"created": "Wed, 25 Jul 2018 15:57:00 GMT"
}
] | 2018-07-26T00:00:00 |
[
[
"Sulír",
"Matúš",
""
],
[
"Porubän",
"Jaroslav",
""
]
] |
new_dataset
| 0.998896 |
1803.01094
|
Amirsina Torfi
|
Amirsina Torfi
|
SpeechPy - A Library for Speech Processing and Recognition
| null |
Journal of Open Source Software, 3(27), 749, 2018
|
10.21105/joss.00749
| null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SpeechPy is an open source Python package that contains speech preprocessing
techniques, speech features, and important post-processing operations. It
provides the most frequently used speech features, including MFCCs and filterbank
energies, along with the log-energy of filter-banks. The aim of the package
is to provide researchers with a simple tool for speech feature extraction and
processing purposes in applications such as Automatic Speech Recognition and
Speaker Verification.
|
[
{
"version": "v1",
"created": "Sat, 3 Mar 2018 02:30:55 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Mar 2018 01:08:08 GMT"
},
{
"version": "v3",
"created": "Fri, 25 May 2018 21:22:19 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Torfi",
"Amirsina",
""
]
] |
new_dataset
| 0.995428 |
1807.08217
|
Keerthana P G
|
Basel Alghanem, Keerthana P G
|
Asynchronous Advantage Actor-Critic Agent for Starcraft II
|
arXiv admin note: text overlap with arXiv:1708.04782 by other authors
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Deep reinforcement learning, and especially the Asynchronous Advantage
Actor-Critic algorithm, has been successfully used to achieve super-human
performance in a variety of video games. Starcraft II is a new challenge for
the reinforcement learning community with the release of pysc2 learning
environment proposed by Google DeepMind and Blizzard Entertainment. Although
it has been a target for several AI developers, few have achieved human-level
performance. In this project we explain the complexities of this environment
and discuss the results from our experiments on the environment. We have
compared various architectures and have proved that transfer learning can be an
effective paradigm in reinforcement learning research for complex scenarios
requiring skill transfer.
|
[
{
"version": "v1",
"created": "Sun, 22 Jul 2018 01:07:43 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Alghanem",
"Basel",
""
],
[
"G",
"Keerthana P",
""
]
] |
new_dataset
| 0.955983 |
1807.09023
|
Andrew Adamatzky
|
Andrew Adamatzky and Mohammad Mahdi Dehshibi
|
Exploring Tehran with excitable medium
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An excitable chemical medium --- Belousov-Zhabotinsky (BZ) reaction --- is
proven to be a fruitful substrate for prototyping unconventional computing
devices. These include image processors, logical circuits, and robot
controllers. We study a BZ potential for characterising a geometry of street
networks on a fragment of the Tehran street map. The city was chosen because
it is one of the most populated cities in the world, with nearly uncontrollable
urban growth. In numerical experiments with the Oregonator model of the BZ
reaction, we demonstrate that the excitability of the medium acts as a selector
between omnidirectional waves and soliton-like localised excitations. We
uncover phase-transition-like dynamics, controlled by the excitability, of the
coverage of the street network by excitation wave-fronts. In the cluster
analysis, we show how the network geometry, when it meets a propagating BZ
wave-front, relates to the traffic flow of Tehran.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 10:35:23 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Adamatzky",
"Andrew",
""
],
[
"Dehshibi",
"Mohammad Mahdi",
""
]
] |
new_dataset
| 0.999071 |
1807.09040
|
Anshoo Tandon
|
Anshoo Tandon, Mehul Motani, Lav R. Varshney
|
Are RLL Codes Suitable for Simultaneous Energy and Information Transfer?
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Run-length limited (RLL) codes are a well-studied class of constrained codes
having application in diverse areas such as optical and magnetic data recording
systems, DNA-based storage, and visible light communication. RLL codes have
also been proposed for the emerging area of simultaneous energy and information
transfer, where the receiver uses the received signal for decoding information
as well as for harvesting energy to run its circuitry. In this paper, we show
that RLL codes are not the best codes for simultaneous energy and information
transfer, in terms of the maximum number of codewords which avoid energy
outage, i.e., outage-constrained capacity. Specifically, we show that sliding
window constrained (SWC) codes and subblock energy constrained (SEC) codes have
significantly higher outage-constrained capacities than RLL codes.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 11:26:06 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Tandon",
"Anshoo",
""
],
[
"Motani",
"Mehul",
""
],
[
"Varshney",
"Lav R.",
""
]
] |
new_dataset
| 0.99935 |
1807.09064
|
Xiaoguang Han
|
Xiaoguang Han, Kangcheng Hou, Dong Du, Yuda Qiu, Yizhou Yu, Kun Zhou,
Shuguang Cui
|
CaricatureShop: Personalized and Photorealistic Caricature Sketching
|
12 pages, 16 figures, submitted to IEEE TVCG
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose the first sketching system for interactive,
personalized, and photorealistic face caricaturing. Given an image of a human
face, users can create caricature photos by manipulating its facial feature
curves. Our system first performs exaggeration on the recovered 3D face model
according to the edited sketches, which is conducted by assigning a scaling
factor to the Laplacian of each vertex. To construct the mapping between 2D
sketches and a vertex-wise scaling field, a novel deep learning architecture is
developed. With the obtained 3D caricature model, two images are generated: one
obtained by applying 2D warping guided by the underlying 3D mesh deformation,
and the other obtained by re-rendering the deformed 3D textured model. These
two images are then seamlessly integrated to produce our final output. Due to
the severe stretching of the meshes, the rendered texture has a blurry
appearance. A deep learning approach is exploited to infer the missing details
and enhance these blurry regions. Moreover, a relighting operation is
introduced to further improve the photorealism of the result. Both quantitative
and qualitative experimental results validate the efficiency of our sketching
system and the superiority of our proposed techniques over existing methods.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 12:26:57 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Han",
"Xiaoguang",
""
],
[
"Hou",
"Kangcheng",
""
],
[
"Du",
"Dong",
""
],
[
"Qiu",
"Yuda",
""
],
[
"Yu",
"Yizhou",
""
],
[
"Zhou",
"Kun",
""
],
[
"Cui",
"Shuguang",
""
]
] |
new_dataset
| 0.973478 |
1807.09069
|
Cun Li
|
Cun Li, Jun Hu, Bart Hengeveld, Caroline Hummels
|
Slots-Memento : A System Facilitating Intergenerational Story Sharing
and Preservation of Family Mementos
|
Slots-Memento : A System Facilitating Intergenerational Story Sharing
and Preservation of Family Mementos
|
The International Journal of Multimedia & Its Applications (IJMA)
Vol.10, No.1/2/3, June 2018
| null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Family mementos document events shaping family life, telling a story within
and between family members. The elderly have collected some mementos for their
children, but never recorded the stories related to those objects. In this
paper, in order to understand the status quo of memento storytelling and
sharing among elderly people, a contextual inquiry was conducted, which further
helped us to identify design opportunities and requirements. The resulting
design, defined after brainstorming and user consultation, was Slots-Memento, a
system consisting of a slot machine-like device used by the elderly and a flash
drive used by the young. The slot machine-like device builds on the metaphor of
a slot machine and integrates functions for displaying memento photos, and for
recording and preserving stories. The young can copy memento photos to the
flash drive. The system aims to facilitate memento story sharing and
preservation among family members. A preliminary evaluation and user test were
conducted; the results showed that Slots-Memento was understood and accepted by
the elderly users. Photos of mementos made it easy to recall memories. The
system made the elderly aware of the stories behind the family mementos and
aroused their desire to share them with family members. The research
methodology includes contextual inquiry, brainstorming, prototyping, scenario
creation, and user testing.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 12:39:15 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Li",
"Cun",
""
],
[
"Hu",
"Jun",
""
],
[
"Hengeveld",
"Bart",
""
],
[
"Hummels",
"Caroline",
""
]
] |
new_dataset
| 0.998902 |
1807.09074
|
Donlaporn Srifar
|
Donlaporn Srifar
|
360 virtual reality travel media for elderly
| null | null | null | null |
cs.HC cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The objectives of this qualitative research were to study the model of
360-degree virtual reality travel media, to compare the appropriateness of
360-degree virtual reality travel media for the elderly with both still and
moving cameras, and to study the satisfaction of the elderly with 360-degree
virtual reality travel media. The informants were 10 elders aged 60 years or
older living in Bangkok, regardless of gender. Data were collected, with data
triangulation, through documents, in-depth interviews, and non-participant
observation of the elders using 360-degree virtual reality travel media. 1.
From the literature review: 1) the creation must primarily consider the
physical condition of the target consumers; 2) it must allow the camera view to
be changed fluidly, calibrated with the target consumers; 3) the displayed
image must not move too fast, to prevent dizziness and improve the comfort of
the target consumers; it is also highly recommended to implement a function to
customize the movement rate for the customer. 2. From the in-depth interviews
with the target consumers: 1) they are worried about and not used to the
equipment; 2) they have no idea where to look; 3) they feel excited; 5) they
are interested in what more there is to see; 6) they feel like they actually
traveled there; 7) they can hear the sound clearly; 8) they do not like when
the camera is moving and find a still camera more comfortable. 3. From the
non-participant observation, we found that they were always excited, laughed,
and smiled when watching the media. They always asked where this was and why
they could not see anything when turning around.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 12:51:25 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Srifar",
"Donlaporn",
""
]
] |
new_dataset
| 0.967412 |
1807.09154
|
Santosh Vipparthi Kumar
|
Monu Verma, Prafulla Saxena, Santosh. K. Vipparthi, Gridhari Singh
|
QUEST: Quadriletral Senary bit Pattern for Facial Expression Recognition
|
7 pages, 7 tables, 6 Figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial expression plays a significant role in analyzing the human cognitive
state. Deriving an accurate facial appearance representation is a critical task
for an automatic facial expression recognition (FER) application. This paper
provides a new feature descriptor, the Quadrilateral Senary bit Pattern
(QUEST), for facial expression recognition. The QUEST pattern encodes the
intensity changes by emphasizing the relationship between neighboring and
reference pixels, dividing them into two quadrilaterals in a local
neighborhood. Thus, the resultant gradient edges reveal the transitional
variation information, which improves the classification rate by discriminating
between expression classes. Moreover, it also enhances the capability of the
descriptor to deal with viewpoint variations and illumination changes. The
trine relationship in a quadrilateral structure helps to extract the expressive
edges and suppress noise elements, enhancing the robustness to noisy
conditions. The QUEST pattern generates a compact six-bit code, which improves
the efficiency of the FER system with more discriminability. The effectiveness
of the proposed method is evaluated by conducting several experiments on four
benchmark datasets: MMI, GEMEP-FERA, OULU-CASIA, and ISED. The experimental
results show better performance of the proposed method compared to existing
state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 14:39:48 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Verma",
"Monu",
""
],
[
"Saxena",
"Prafulla",
""
],
[
"Vipparthi",
"Santosh. K.",
""
],
[
"Singh",
"Gridhari",
""
]
] |
new_dataset
| 0.964681 |
1807.09175
|
Antonina Nepeivoda
|
Antonina Nepeivoda
|
Supercompiling String Programs Using Word Equations as Constraints
| null | null | null | null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a general parameterized scheme of program and constraint analyses
that allows us to specify both the program specialization method known as
Turchin's supercompilation and Hmelevskii's algorithm for solving quadratic
word equations. The scheme is instantiated for both sorts of analysis, which
are then used together in a joint algorithm. Word equations and inequalities on
regular patterns serve as the string constraint language of the algorithm.
|
[
{
"version": "v1",
"created": "Fri, 29 Jun 2018 13:50:53 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Nepeivoda",
"Antonina",
""
]
] |
new_dataset
| 0.996886 |
1807.09192
|
Weidi Xie
|
Weidi Xie and Andrew Zisserman
|
Multicolumn Networks for Face Recognition
|
To appear in BMVC2018
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The objective of this work is set-based face recognition, i.e. to decide if
two sets of images of a face are of the same person or not. Conventionally, the
set-wise feature descriptor is computed as an average of the descriptors from
individual face images within the set. In this paper, we design a neural
network architecture that learns to aggregate based on both "visual" quality
(resolution, illumination), and "content" quality (relative importance for
discriminative classification). To this end, we propose a Multicolumn Network
(MN) that takes a set of images (the number in the set can vary) as input, and
learns to compute a fixed-size feature descriptor for the entire set. To
encourage high-quality representations, each individual input image is first
weighted by its "visual" quality, determined by a self-quality assessment
module, and followed by a dynamic recalibration based on "content" qualities
relative to the other images within the set. Both of these qualities are learnt
implicitly during training for set-wise classification. Comparing with the
previous state-of-the-art architectures trained with the same dataset
(VGGFace2), our Multicolumn Networks show an improvement of 2-6% on the
IARPA IJB face recognition benchmarks, and exceed the state of the art for all
methods on these benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 24 Jul 2018 15:45:58 GMT"
}
] | 2018-07-25T00:00:00 |
[
[
"Xie",
"Weidi",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.985278 |
1505.00947
|
Haisheng Xu Dr.
|
Haisheng Xu, Rick S. Blum, Jian Wang and Jian Yuan
|
Colocated MIMO Radar Waveform Design for Transmit Beampattern Formation
|
22 pages, 6 figures, Accepted by IEEE Transactions on Aerospace and
Electronic Systems
|
IEEE Transactions on Aerospace and Electronic Systems 51(2015)
1558 - 1568
|
10.1109/TAES.2014.140249
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, colocated MIMO radar waveform design is considered by
minimizing the integrated side-lobe level to obtain beam patterns with lower
side-lobe levels than competing methods. First, a quadratic programming problem
is formulated to design beam patterns using the criterion of minimal
integrated side-lobe level. A theorem is derived that provides a closed-form
analytical optimal solution that appears to be an extension of the Rayleigh
quotient minimization for a possibly singular matrix in quadratic form. Such
singularities are shown to occur in the problem of interest, but proofs for the
optimum solution in these singular matrix cases could not be found in the
literature. Next, an additional constraint is added to obtain beam patterns
with desired 3 dB beamwidths, resulting in a nonconvex quadratically
constrained quadratic program which is NP-hard. A semidefinite program and a
Gaussian randomized semidefinite relaxation are used to determine feasible
solutions arbitrarily close to the solution to the original problem.
Theoretical and numerical analyses illustrate the impacts of changing the
number of transmitters and orthogonal waveforms employed in the designs.
Numerical comparisons are conducted to evaluate the proposed design approaches.
|
[
{
"version": "v1",
"created": "Tue, 5 May 2015 10:39:06 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Xu",
"Haisheng",
""
],
[
"Blum",
"Rick S.",
""
],
[
"Wang",
"Jian",
""
],
[
"Yuan",
"Jian",
""
]
] |
new_dataset
| 0.957331 |
1509.08346
|
Jalil Modares
|
Jalil Modares, Nicholas Mastronarde
|
UB-ANC Drone: A Flexible Airborne Networking and Communications Testbed
| null | null | null | null |
cs.NI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the University at Buffalo's Airborne Networking and Communications
Testbed (UB-ANC Drone). UB-ANC Drone is an open software/hardware platform that
aims to facilitate rapid testing and repeatable comparative evaluation of
airborne networking and communications protocols at different layers of the
protocol stack. It combines quadcopters capable of autonomous flight with
sophisticated command and control capabilities and embedded software-defined
radios (SDRs), which enable flexible deployment of novel communications and
networking protocols. This is in contrast to existing airborne network
testbeds, which rely on standard inflexible wireless technologies, e.g., Wi-Fi
or Zigbee. UB-ANC Drone is designed with emphasis on modularity and
extensibility, and is built around popular open-source projects and standards
developed by the research and hobby communities. This makes UB-ANC Drone highly
customizable, while also simplifying its adoption. In this paper, we describe
UB-ANC Drone's hardware and software architecture.
|
[
{
"version": "v1",
"created": "Mon, 28 Sep 2015 15:05:16 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Oct 2015 22:34:45 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Jul 2018 15:20:45 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Modares",
"Jalil",
""
],
[
"Mastronarde",
"Nicholas",
""
]
] |
new_dataset
| 0.999806 |
1601.01736
|
Elod Pal Csirmaz
|
Elod Pal Csirmaz
|
Algebraic File Synchronization: Adequacy and Completeness
| null | null | null | null |
cs.DC cs.DM cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With distributed computing and mobile applications, synchronizing diverging
replicas of data structures is a more and more common problem. We use algebraic
methods to reason about filesystem operations, and introduce a simplified
definition of conflicting updates to filesystems. We also define algorithms for
update detection and reconciliation and present rigorous proofs that they not
only work as intended, but also cannot be improved on.
To achieve this, we introduce a novel, symmetric set of filesystem commands
with higher information content, which removes edge cases and increases the
predictive powers of our algebraic model. We also present a number of generally
useful classes and properties of sequences of commands.
While these results are often intuitive, providing exact proofs for them is
far from trivial. They contribute to our understanding of this special type of
algebraic model, and toward building more complete algebras of filesystem trees
and extending algebraic approaches to other data storage protocols. They also
form a theoretical basis for specifying and guaranteeing the error-free
operation of applications that implement an algebraic approach to
synchronization.
|
[
{
"version": "v1",
"created": "Fri, 8 Jan 2016 01:01:55 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Jul 2018 23:44:04 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Csirmaz",
"Elod Pal",
""
]
] |
new_dataset
| 0.996721 |
1706.03424
|
Weixun Zhou
|
Weixun Zhou, Shawn Newsam, Congmin Li, Zhenfeng Shao
|
PatternNet: A Benchmark Dataset for Performance Evaluation of Remote
Sensing Image Retrieval
|
49 pages
| null |
10.1016/j.isprsjprs.2018.01.004
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote sensing image retrieval (RSIR), which aims to efficiently retrieve data
of interest from large collections of remote sensing data, is a fundamental
task in remote sensing. Over the past several decades, there has been
significant effort to extract powerful feature representations for this task
since the retrieval performance depends on the representative strength of the
features. Benchmark datasets are also critical for developing, evaluating, and
comparing RSIR approaches. Current benchmark datasets are deficient in that 1)
they were originally collected for land use/land cover classification and not
image retrieval, 2) they are relatively small in terms of the number of classes
as well as the number of sample images per class, and 3) the retrieval
performance
has saturated. These limitations have severely restricted the development of
novel feature representations for RSIR, particularly the recent deep-learning
based features which require large amounts of training data. We therefore
present in this paper a new large-scale remote sensing dataset termed
"PatternNet" that was collected specifically for RSIR. PatternNet was collected
from high-resolution imagery and contains 38 classes with 800 images per class.
We also provide a thorough review of RSIR approaches ranging from traditional
handcrafted feature based methods to recent deep learning based ones. We
evaluate over 35 methods to establish extensive baseline results for future
RSIR research using the PatternNet benchmark.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2017 23:45:07 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2017 04:37:30 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Zhou",
"Weixun",
""
],
[
"Newsam",
"Shawn",
""
],
[
"Li",
"Congmin",
""
],
[
"Shao",
"Zhenfeng",
""
]
] |
new_dataset
| 0.999733 |
1707.09585
|
Avi Ben-Cohen
|
Avi Ben-Cohen, Eyal Klang, Stephen P. Raskin, Michal Marianne Amitai,
and Hayit Greenspan
|
Virtual PET Images from CT Data Using Deep Convolutional Networks:
Initial Results
|
To be presented at SASHIMI2017: Simulation and Synthesis in Medical
Imaging, MICCAI 2017
| null |
10.1007/978-3-319-68127-6_6
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we present a novel system for PET estimation using CT scans. We
explore the use of fully convolutional networks (FCN) and conditional
generative adversarial networks (GAN) to export PET data from CT data. Our
dataset includes 25 pairs of PET and CT scans where 17 were used for training
and 8 for testing. The system was tested for detection of malignant tumors in
the liver region. Initial results look promising, showing high detection
performance with a TPR of 92.3% and an FPR of 0.25 per case. Future work
entails expansion of the current system to the entire body using a much larger
dataset. Such a system can be used for tumor detection and drug treatment
evaluation in a CT-only environment instead of the expensive and radioactive
PET-CT scan.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2017 06:43:42 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Ben-Cohen",
"Avi",
""
],
[
"Klang",
"Eyal",
""
],
[
"Raskin",
"Stephen P.",
""
],
[
"Amitai",
"Michal Marianne",
""
],
[
"Greenspan",
"Hayit",
""
]
] |
new_dataset
| 0.997534 |
1710.09494
|
James Lathrop
|
Samuel J. Ellis, Titus H. Klinge, James I. Lathrop, Jack H. Lutz,
Robyn R. Lutz, Andrew S. Miner, and Hugh D. Potter
|
Runtime Fault Detection in Programmed Molecular Systems
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Watchdog timers are devices that are commonly used to monitor the health of
safety-critical hardware and software systems. Their primary function is to
raise an alarm if the monitored systems fail to emit periodic "heartbeats" that
signal their well-being. In this paper we design and verify a molecular
watchdog timer for monitoring the health of programmed molecular nanosystems.
This raises new challenges because our molecular watchdog timer and the system
that it monitors both operate in the probabilistic environment of chemical
kinetics, where many failures are certain to occur and it is especially hard to
detect the absence of a signal.
Our molecular watchdog timer is the result of an incremental design process
that uses goal-oriented requirements engineering, simulation, stochastic
analysis, and software verification tools. We demonstrate the molecular
watchdog's functionality by having it monitor a molecular oscillator. Both the
molecular watchdog timer and the oscillator are implemented as chemical
reaction networks, which are the current programming language of choice for
many molecular programming applications.
|
[
{
"version": "v1",
"created": "Wed, 25 Oct 2017 23:41:30 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jul 2018 17:23:40 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Ellis",
"Samuel J.",
""
],
[
"Klinge",
"Titus H.",
""
],
[
"Lathrop",
"James I.",
""
],
[
"Lutz",
"Jack H.",
""
],
[
"Lutz",
"Robyn R.",
""
],
[
"Miner",
"Andrew S.",
""
],
[
"Potter",
"Hugh D.",
""
]
] |
new_dataset
| 0.99926 |
1711.07426
|
Siddharth Mahendran
|
Siddharth Mahendran, Haider Ali and Rene Vidal
|
Convolutional Networks for Object Category and 3D Pose Estimation from
2D Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current CNN-based algorithms for recovering the 3D pose of an object in an
image assume knowledge about both the object category and its 2D localization
in the image. In this paper, we relax one of these constraints and propose to
solve the task of joint object category and 3D pose estimation from an image
assuming known 2D localization. We design a new architecture for this task
composed of a feature network that is shared between subtasks, an object
categorization network built on top of the feature network, and a collection of
category dependent pose regression networks. We also introduce suitable loss
functions and a training method for the new architecture. Experiments on the
challenging PASCAL3D+ dataset show state-of-the-art performance in the joint
categorization and pose estimation task. Moreover, our performance on the joint
task is comparable to the performance of state-of-the-art methods on the
simpler 3D pose estimation with known object category task.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 17:31:27 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Mar 2018 19:15:52 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Jul 2018 19:21:36 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Mahendran",
"Siddharth",
""
],
[
"Ali",
"Haider",
""
],
[
"Vidal",
"Rene",
""
]
] |
new_dataset
| 0.992284 |
1801.08624
|
Bo Chang
|
Bo Chang, Qiong Zhang, Shenyi Pan, Lili Meng
|
Generating Handwritten Chinese Characters using CycleGAN
|
Accepted at WACV 2018
| null |
10.1109/WACV.2018.00028
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Handwriting of Chinese has long been an important skill in East Asia.
However, automatic generation of handwritten Chinese characters poses a great
challenge due to the large number of characters. Various machine learning
techniques have been used to recognize Chinese characters, but few works have
studied the handwritten Chinese character generation problem, especially with
unpaired training data. In this work, we formulate the Chinese handwritten
character generation as a problem that learns a mapping from an existing
printed font to a personalized handwritten style. We further propose DenseNet
CycleGAN to generate Chinese handwritten characters. Our method is applied not
only to commonly used Chinese characters but also to calligraphy work with
aesthetic values. Furthermore, we propose content accuracy and style
discrepancy as the evaluation metrics to assess the quality of the handwritten
characters generated. We then use our proposed metrics to evaluate the
generated characters from CASIA dataset as well as our newly introduced Lanting
calligraphy dataset.
|
[
{
"version": "v1",
"created": "Thu, 25 Jan 2018 22:36:05 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Chang",
"Bo",
""
],
[
"Zhang",
"Qiong",
""
],
[
"Pan",
"Shenyi",
""
],
[
"Meng",
"Lili",
""
]
] |
new_dataset
| 0.993453 |
1805.09772
|
Hamid Tizhoosh
|
Graham Bleaney, Matthew Kuzyk, Julian Man, Hossein Mayanloo,
H.R.Tizhoosh
|
Auto-Detection of Safety Issues in Baby Products
|
To appear in proceedings of The 31st IEA-AIE 2018, June 25-28, 2018,
Montreal, Canada
| null | null | null |
cs.LG cs.CL cs.IR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Every year, thousands of people receive consumer product related injuries.
Research indicates that online customer reviews can be processed to
autonomously identify product safety issues. Early identification of safety
issues can lead to earlier recalls, and thus fewer injuries and deaths. A
dataset of product reviews from Amazon.com was compiled, along with
\emph{SaferProducts.gov} complaints and recall descriptions from the Consumer
Product Safety Commission (CPSC) and European Commission Rapid Alert system. A
system was built to clean the collected text and to extract relevant features.
Dimensionality reduction was performed by computing feature relevance through a
Random Forest and discarding features with low information gain. Various
classifiers were analyzed, including Logistic Regression, SVMs,
Na{\"i}ve-Bayes, Random Forests, and an Ensemble classifier. Experimentation
with various features and classifier combinations resulted in a logistic
regression model with 66\% precision in the top 50 reviews surfaced. This
classifier outperforms all benchmarks set by related literature and consumer
product safety professionals.
|
[
{
"version": "v1",
"created": "Fri, 27 Apr 2018 15:33:50 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Jul 2018 23:43:59 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Bleaney",
"Graham",
""
],
[
"Kuzyk",
"Matthew",
""
],
[
"Man",
"Julian",
""
],
[
"Mayanloo",
"Hossein",
""
],
[
"Tizhoosh",
"H. R.",
""
]
] |
new_dataset
| 0.999756 |
1807.08015
|
Ant\'onio Ravara
|
Patr\'icia Monteiro, Jo\~ao Louren\c{c}o, and Ant\'onio Ravara
|
Uma an\'alise comparativa de ferramentas de an\'alise est\'atica para
dete\c{c}\~ao de erros de mem\'oria
|
Article in Portuguese, accepted in the national informatics
conference INForum (http://inforum.org.pt/INForum2018)
| null | null | null |
cs.SE cs.PL cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
--- Portuguese version
As falhas de software est\~ao com frequ\^encia associadas a acidentes com
graves consequ\^encias econ\'omicas e/ou humanas, pelo que se torna imperioso
investir na valida\c{c}\~ao do software, nomeadamente daquele que \'e
cr\'itico. Este artigo endere\c{c}a a tem\'atica da qualidade do software
atrav\'es de uma an\'alise comparativa da usabilidade e efic\'acia de quatro
ferramentas de an\'alise est\'atica de programas em C/C++. Este estudo permitiu
compreender o grande potencial e o elevado impacto que as ferramentas de
an\'alise est\'atica podem ter na valida\c{c}\~ao e verifica\c{c}\~ao de
software. Como resultado complementar, foram identificados novos erros em
programas de c\'odigo aberto e com elevada popularidade, que foram reportados.
  --- English version
  Software bugs are frequently associated with accidents that have serious
economic and/or human consequences, making investment in software validation,
especially of critical software, imperative. This article addresses the topic
of software quality through a comparative analysis of the usability and
efficiency of four static analysis tools for C/C++ programs. This study allows
us to understand the great potential and high impact that these tools may have
in the validation and verification of software. As a complementary result, we
identified and reported new errors in very popular open-source projects.
|
[
{
"version": "v1",
"created": "Fri, 20 Jul 2018 20:12:24 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Monteiro",
"Patrícia",
""
],
[
"Lourenço",
"João",
""
],
[
"Ravara",
"António",
""
]
] |
new_dataset
| 0.985248 |
1807.08026
|
Dakshil Shah
|
Dakshil Shah, Varshali Kumar
|
TCP SYN Cookie Vulnerability
|
3 pages, 5 figures
| null | null | null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
TCP SYN cookies were introduced to mitigate DoS attacks. They ensure that the
server does not have to store any information for half-open connections: a SYN
cookie contains all the information the server needs to know that the request
is valid. However, the use of these cookies introduces a vulnerability that
allows an attacker to guess the initial sequence number and use it to spoof a
connection or plant false logs.
|
[
{
"version": "v1",
"created": "Fri, 20 Jul 2018 20:51:32 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Shah",
"Dakshil",
""
],
[
"Kumar",
"Varshali",
""
]
] |
new_dataset
| 0.999273 |
1807.08048
|
Haoyang Fan
|
Haoyang Fan, Fan Zhu, Changchun Liu, Liangliang Zhang, Li Zhuang, Dong
Li, Weicheng Zhu, Jiangtao Hu, Hongye Li, Qi Kong
|
Baidu Apollo EM Motion Planner
| null | null | null | null |
cs.RO cs.AI cs.LG cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this manuscript, we introduce a real-time motion planning system based on
the Baidu Apollo (open source) autonomous driving platform. The developed
system aims to address the industrial level-4 motion planning problem while
considering safety, comfort and scalability. The system covers multilane and
single-lane autonomous driving in a hierarchical manner: (1) The top layer of
the system is a multilane strategy that handles lane-change scenarios by
comparing lane-level trajectories computed in parallel. (2) Inside the
lane-level trajectory generator, it iteratively solves path and speed
optimization based on a Frenet frame. (3) For path and speed optimization, a
combination of dynamic programming and spline-based quadratic programming is
proposed to construct a scalable and easy-to-tune framework to handle traffic
rules, obstacle decisions and smoothness simultaneously. The planner is
scalable to both highway and lower-speed city driving scenarios. We also
demonstrate the algorithm through scenario illustrations and on-road test
results.
The system described in this manuscript has been deployed to dozens of Baidu
Apollo autonomous driving vehicles since Apollo v1.5 was announced in September
2017. As of May 16th, 2018, the system had been tested for 3,380 hours and
approximately 68,000 kilometers (42,253 miles) of closed-loop autonomous
driving under various urban scenarios.
The algorithm described in this manuscript is available at
https://github.com/ApolloAuto/apollo/tree/master/modules/planning.
|
[
{
"version": "v1",
"created": "Fri, 20 Jul 2018 22:34:17 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Fan",
"Haoyang",
""
],
[
"Zhu",
"Fan",
""
],
[
"Liu",
"Changchun",
""
],
[
"Zhang",
"Liangliang",
""
],
[
"Zhuang",
"Li",
""
],
[
"Li",
"Dong",
""
],
[
"Zhu",
"Weicheng",
""
],
[
"Hu",
"Jiangtao",
""
],
[
"Li",
"Hongye",
""
],
[
"Kong",
"Qi",
""
]
] |
new_dataset
| 0.993716 |
1807.08117
|
L\'eo Stefanesco
|
Paul-Andr\'e Melli\`es and L\'eo Stefanesco
|
An Asynchronous soundness theorem for concurrent separation logic
|
Full version of an extended abstract published at LICS 2018
| null | null | null |
cs.PL cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Concurrent separation logic (CSL) is a specification logic for concurrent
imperative programs with shared memory and locks. In this paper, we develop a
concurrent and interactive account of the logic inspired by asynchronous game
semantics. To every program $C$, we associate a pair of asynchronous transition
systems $[C]_S$ and $[C]_L$ which describe the operational behavior of the Code
when confronted with its Environment or Frame --- both at the level of machine
states ($S$) and of machine instructions and locks ($L$). We then establish
that every derivation tree $\pi$ of a judgment $\Gamma\vdash\{P\}C\{Q\}$
defines a winning and asynchronous strategy $[\pi]_{Sep}$ with respect to both
asynchronous semantics $[C]_S$ and $[C]_L$. From this, we deduce an
asynchronous soundness theorem for CSL, which states that the canonical map
$\mathcal{L}:[C]_S\to[C]_L$ from the stateful semantics $[C]_S$ to the
stateless semantics $[C]_L$ satisfies a basic fibrational property. We advocate
that this provides a clean and conceptual explanation for the usual soundness
theorem of CSL, including the absence of data races.
|
[
{
"version": "v1",
"created": "Sat, 21 Jul 2018 10:01:36 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Melliès",
"Paul-André",
""
],
[
"Stefanesco",
"Léo",
""
]
] |
new_dataset
| 0.998932 |
1807.08142
|
Guy Barshap Gb
|
Guy Barshap
|
{\em Crypto-Battleships} or How to play Battleships game over the
Blockchain?
|
16 pg, a draft version
| null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Battleships is a well-known traditional board game for two players that dates
from World War I. Although the game has several digital implementations, they
share major drawbacks regarding fairness and a trust model that relies on a
third party. In this paper, we demonstrate how to implement a fair,
denial-of-service-resistant version of the game in which the honest winner
earns the deposited money {\em immediately}. The game is built on a
permissionless Blockchain that supports Turing-complete smart-contract
computation.
|
[
{
"version": "v1",
"created": "Sat, 21 Jul 2018 12:57:14 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Barshap",
"Guy",
""
]
] |
new_dataset
| 0.998443 |
1807.08205
|
Mingda Zhang
|
Mingda Zhang, Rebecca Hwa and Adriana Kovashka
|
Equal But Not The Same: Understanding the Implicit Relationship Between
Persuasive Images and Text
|
To appear in BMVC2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Images and text in advertisements interact in complex, non-literal ways. The
two channels are usually complementary, with each channel telling a different
part of the story. Current approaches, such as image captioning methods, only
examine literal, redundant relationships, where image and text show exactly the
same content. To understand more complex relationships, we first collect a
dataset of advertisement interpretations for whether the image and slogan in
the same visual advertisement form a parallel (conveying the same message
without literally saying the same thing) or non-parallel relationship, with the
help of workers recruited on Amazon Mechanical Turk. We develop a variety of
features that capture the creativity of images and the specificity or ambiguity
of text, as well as methods that analyze the semantics within and across
channels. We show that our method outperforms standard image-text alignment
approaches on predicting the parallel/non-parallel relationship between image
and text.
|
[
{
"version": "v1",
"created": "Sat, 21 Jul 2018 20:53:39 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Zhang",
"Mingda",
""
],
[
"Hwa",
"Rebecca",
""
],
[
"Kovashka",
"Adriana",
""
]
] |
new_dataset
| 0.9938 |
1807.08241
|
Malik Aqeel Anwar
|
Malik Aqeel Anwar, Arijit Raychowdhury
|
NAVREN-RL: Learning to fly in real environment via end-to-end deep
reinforcement learning using monocular images
| null | null | null | null |
cs.LG cs.CV cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present NAVREN-RL, an approach to NAVigate an unmanned aerial vehicle in
an indoor Real ENvironment via end-to-end reinforcement learning (RL). A
suitable reward function is designed keeping in mind the cost and weight
constraints of a micro drone with a minimum number of sensing modalities. The
collection of a small amount of expert data and knowledge-based data
aggregation are integrated into the RL process to aid convergence.
Experimentation is carried out on a Parrot AR drone in different indoor arenas
and the results are compared with other baseline technologies. We demonstrate
how the drone successfully avoids obstacles and navigates across different
arenas.
|
[
{
"version": "v1",
"created": "Sun, 22 Jul 2018 06:10:04 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Anwar",
"Malik Aqeel",
""
],
[
"Raychowdhury",
"Arijit",
""
]
] |
new_dataset
| 0.99863 |
1807.08280
|
Andros Tjandra
|
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
|
Multi-scale Alignment and Contextual History for Attention Mechanism in
Sequence-to-sequence Model
| null | null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A sequence-to-sequence model is a neural network module for mapping two
sequences of different lengths. The sequence-to-sequence model has three core
modules: encoder, decoder, and attention. Attention is the bridge that connects
the encoder and decoder modules and improves model performance in many tasks.
In this paper, we propose two ideas to improve sequence-to-sequence model
performance by enhancing the attention module. First, we maintain the history
of the location and the expected context from several previous time-steps.
Second, we apply multiscale convolution from several previous attention vectors
to the current decoder state. We utilized our proposed framework for
sequence-to-sequence speech recognition and text-to-speech systems. The results
reveal that our proposed extension could improve performance significantly
compared to a standard attention baseline.
|
[
{
"version": "v1",
"created": "Sun, 22 Jul 2018 13:10:30 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Tjandra",
"Andros",
""
],
[
"Sakti",
"Sakriani",
""
],
[
"Nakamura",
"Satoshi",
""
]
] |
new_dataset
| 0.996894 |
1807.08295
|
Julliano Nascimento
|
Erika M. M. Coelho, Hebert Coelho, Julliano R. Nascimento, Jayme L.
Szwarcfiter
|
On the Geodetic Hull Number of Complementary Prisms
|
12 pages, 5 figures
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $G$ be a finite, simple, and undirected graph and let $S$ be a set of
vertices of $G$. In the geodetic convexity, a set of vertices $S$ of a graph
$G$ is convex if all vertices belonging to any shortest path between two
vertices of $S$ lie in $S$. The convex hull $H(S)$ of $S$ is the smallest
convex set containing $S$. If $H(S) = V(G)$, then $S$ is a hull set. The
cardinality $h(G)$ of a minimum hull set of $G$ is the hull number of $G$. The
complementary prism $G\overline{G}$ of a graph $G$ arises from the disjoint
union of the graph $G$ and $\overline{G}$ by adding the edges of a perfect
matching between the corresponding vertices of $G$ and $\overline{G}$.
Motivated by previous work, we determine and present lower and upper bounds on
the hull number of complementary prisms of trees, disconnected graphs and
cographs. We also show that the hull number on complementary prisms cannot be
limited in the geodetic convexity, unlike the $P_3$-convexity.
|
[
{
"version": "v1",
"created": "Sun, 22 Jul 2018 15:06:18 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Coelho",
"Erika M. M.",
""
],
[
"Coelho",
"Hebert",
""
],
[
"Nascimento",
"Julliano R.",
""
],
[
"Szwarcfiter",
"Jayme L.",
""
]
] |
new_dataset
| 0.950575 |
1807.08350
|
Guillermo Laguna
|
Guillermo J. Laguna and Sourabh Bhattacharya
|
Tracking Mobile Intruders in an Art Gallery: Guard Deployment
Strategies, Fundamental Limitations, and Performance Guarantees
|
21 pages, submitted to Discrete & Computational Geometry journal
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the problem of tracking mobile intruders in a polygonal
environment. We assume that a team of diagonal guards is deployed inside the
polygon to provide mobile coverage. First, we formulate the problem of tracking
a mobile intruder inside a polygonal environment as a multi-robot task
allocation (MRTA) problem. Leveraging guard deployment strategies in art
gallery problems for mobile coverage, we show that the problem of finding the
minimum speed of guards to persistently track a single mobile intruder is
NP-hard. Next, for a given maximum speed of the intruder and the guards, we
propose a technique to partition a polygon, and compute a feasible allocation
of guards to the partitions. We prove the correctness of the proposed
algorithm, and show its completeness for a specific class of inputs. We
classify the guards based on the structural properties of the partitions
allocated to them. Based on the classification, we propose a motion strategy for
the guards to track the mobile intruder when it is located in the partition
allocated to the guard. Finally, we extend the proposed technique to address
guard deployment and allocation strategies for non-simple polygons and multiple
intruders.
|
[
{
"version": "v1",
"created": "Sun, 22 Jul 2018 19:18:28 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Laguna",
"Guillermo J.",
""
],
[
"Bhattacharya",
"Sourabh",
""
]
] |
new_dataset
| 0.95288 |
1807.08465
|
Philipp Blandfort
|
Philipp Blandfort, Desmond Patton, William R. Frey, Svebor Karaman,
Surabhi Bhargava, Fei-Tzin Lee, Siddharth Varia, Chris Kedzie, Michael B.
Gaskell, Rossano Schifanella, Kathleen McKeown, Shih-Fu Chang
|
Multimodal Social Media Analysis for Gang Violence Prevention
| null | null | null | null |
cs.LG cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gang violence is a severe issue in major cities across the U.S. and recent
studies [Patton et al. 2017] have found evidence of social media communications
that can be linked to such violence in communities with high rates of exposure
to gang activity. In this paper we partnered computer scientists with social
work researchers, who have domain expertise in gang violence, to analyze how
public tweets with images posted by youth who mention gang associations on
Twitter can be leveraged to automatically detect psychosocial factors and
conditions that could potentially assist social workers and violence outreach
workers in prevention and early intervention programs. To this end, we
developed a rigorous methodology for collecting and annotating tweets. We
gathered 1,851 tweets and accompanying annotations related to visual concepts
and the psychosocial codes: aggression, loss, and substance use. These codes
are relevant to social work interventions, as they represent possible pathways
to violence on social media. We compare various methods for classifying tweets
into these three classes, using only the text of the tweet, only the image of
the tweet, or both modalities as input to the classifier. In particular, we
analyze the usefulness of mid-level visual concepts and the role of different
modalities for this tweet classification task. Our experiments show that
individually, text information dominates classification performance of the loss
class, while image information dominates the aggression and substance use
classes. Our multimodal approach provides a very promising improvement (18%
relative in mean average precision) over the best single modality approach.
Finally, we also illustrate the complexity of understanding social media data
and elaborate on open challenges.
|
[
{
"version": "v1",
"created": "Mon, 23 Jul 2018 07:52:52 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Blandfort",
"Philipp",
""
],
[
"Patton",
"Desmond",
""
],
[
"Frey",
"William R.",
""
],
[
"Karaman",
"Svebor",
""
],
[
"Bhargava",
"Surabhi",
""
],
[
"Lee",
"Fei-Tzin",
""
],
[
"Varia",
"Siddharth",
""
],
[
"Kedzie",
"Chris",
""
],
[
"Gaskell",
"Michael B.",
""
],
[
"Schifanella",
"Rossano",
""
],
[
"McKeown",
"Kathleen",
""
],
[
"Chang",
"Shih-Fu",
""
]
] |
new_dataset
| 0.993663 |
1807.08500
|
Athanasios Kehagias
|
Athanasios Kehagias
|
Generalized Cops and Robbers: A Multi-Player Pursuit Game on Graphs
| null | null | null | null |
cs.DM cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce and study the Generalized Cops and Robbers game (GCR), an
N-player pursuit game in graphs. The two-player version is essentially
equivalent to the classic Cops and Robbers (CR) game. The three-player version
can be understood as two CR games played simultaneously on the same graph; a
player can be at the same time both pursuer and evader. The same is true for
four or more players. We formulate GCR as a discounted stochastic game of
perfect information and prove that, for three or more players, it has at least
two Nash Equilibria: one in positional deterministic strategies and another in
non-positional ones. We also study the capturing properties of GCR Nash
Equilibria in connection to the cop-number of a graph. Finally, we briefly
discuss GCR as a member of a wider family of multi-player graph pursuit games
with rather interesting properties.
|
[
{
"version": "v1",
"created": "Mon, 23 Jul 2018 09:31:26 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Kehagias",
"Athanasios",
""
]
] |
new_dataset
| 0.991614 |
1807.08563
|
Kaixuan Wang
|
Kaixuan Wang, Shaojie Shen
|
MVDepthNet: Real-time Multiview Depth Estimation Neural Network
|
This paper is accepted by 3DV 2018
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although deep neural networks have been widely applied to computer vision
problems, extending them into multiview depth estimation is non-trivial. In
this paper, we present MVDepthNet, a convolutional network to solve the depth
estimation problem given several image-pose pairs from a localized monocular
camera in neighbor viewpoints. Multiview observations are encoded in a cost
volume and then combined with the reference image to estimate the depth map
using an encoder-decoder network. By encoding the information from multiview
observations into the cost volume, our method achieves real-time performance
and the flexibility of traditional methods that can be applied regardless of
the camera intrinsic parameters and the number of images. Geometric data
augmentation is used to train MVDepthNet. We further apply MVDepthNet in a
monocular dense mapping system that continuously estimates depth maps using a
single localized moving camera. Experiments show that our method can generate
depth maps efficiently and precisely.
|
[
{
"version": "v1",
"created": "Mon, 23 Jul 2018 12:37:13 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Wang",
"Kaixuan",
""
],
[
"Shen",
"Shaojie",
""
]
] |
new_dataset
| 0.998939 |
1807.08699
|
Tim Ophelders
|
Kevin Buchin, Tim Ophelders, Bettina Speckmann
|
SETH Says: Weak Fr\'echet Distance is Faster, but only if it is
Continuous and in One Dimension
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show by reduction from the Orthogonal Vectors problem that algorithms with
strongly subquadratic running time cannot approximate the Fr\'echet distance
between curves better than a factor $3$ unless SETH fails. We show that similar
reductions cannot achieve a lower bound with a factor better than $3$. Our
lower bound holds for the continuous, the discrete, and the weak discrete
Fr\'echet distance even for curves in one dimension. Interestingly, the
continuous weak Fr\'echet distance behaves differently. Our lower bound still
holds for curves in two dimensions and higher. However, for curves in one
dimension, we provide an exact algorithm to compute the weak Fr\'echet distance
in linear time.
|
[
{
"version": "v1",
"created": "Mon, 23 Jul 2018 16:23:20 GMT"
}
] | 2018-07-24T00:00:00 |
[
[
"Buchin",
"Kevin",
""
],
[
"Ophelders",
"Tim",
""
],
[
"Speckmann",
"Bettina",
""
]
] |
new_dataset
| 0.997926 |
1707.08323
|
Jianchao Tan
|
Jianchao Tan, Stephen DiVerdi, Jingwan Lu, Yotam Gingold
|
Pigmento: Pigment-Based Image Analysis and Editing
|
add copyright to images; add acknowledgements, is accepted by IEEE
Transactions on Visualization and Computer Graphics (IEEE TVCG)
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The colorful appearance of a physical painting is determined by the
distribution of paint pigments across the canvas, which we model as a per-pixel
mixture of a small number of pigments with multispectral absorption and
scattering coefficients. We present an algorithm to efficiently recover this
structure from an RGB image, yielding a plausible set of pigments and a low RGB
reconstruction error. We show that under certain circumstances we are able to
recover pigments that are close to ground truth, while in all cases our results
are always plausible. Using our decomposition, we repose standard digital image
editing operations as operations in pigment space rather than RGB, with
interestingly novel results. We demonstrate tonal adjustments, selection
masking, cut-copy-paste, recoloring, palette summarization, and edge
enhancement.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2017 08:50:14 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jul 2018 21:11:36 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Jul 2018 22:42:54 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Tan",
"Jianchao",
""
],
[
"DiVerdi",
"Stephen",
""
],
[
"Lu",
"Jingwan",
""
],
[
"Gingold",
"Yotam",
""
]
] |
new_dataset
| 0.998307 |
1802.05384
|
Thibault Groueix M.
|
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell,
Mathieu Aubry
|
AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a method for learning to generate the surface of 3D shapes. Our
approach represents a 3D shape as a collection of parametric surface elements
and, in contrast to methods generating voxel grids or point clouds, naturally
infers a surface representation of the shape. Beyond its novelty, our new shape
generation framework, AtlasNet, comes with significant advantages, such as
improved precision and generalization capabilities, and the possibility to
generate a shape of arbitrary resolution without memory issues. We demonstrate
these benefits and compare to strong baselines on the ShapeNet benchmark for
two applications: (i) auto-encoding shapes, and (ii) single-view reconstruction
from a still image. We also provide results showing its potential for other
applications, such as morphing, parametrization, super-resolution, matching,
and co-segmentation.
|
[
{
"version": "v1",
"created": "Thu, 15 Feb 2018 02:07:30 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Apr 2018 10:42:48 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Jul 2018 16:00:34 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Groueix",
"Thibault",
""
],
[
"Fisher",
"Matthew",
""
],
[
"Kim",
"Vladimir G.",
""
],
[
"Russell",
"Bryan C.",
""
],
[
"Aubry",
"Mathieu",
""
]
] |
new_dataset
| 0.991078 |
1803.06092
|
Guangyu Robert Yang
|
Guangyu Robert Yang, Igor Ganichev, Xiao-Jing Wang, Jonathon Shlens,
David Sussillo
|
A Dataset and Architecture for Visual Reasoning with a Working Memory
| null | null | null | null |
cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A vexing problem in artificial intelligence is reasoning about events that
occur in complex, changing visual stimuli such as in video analysis or game
play. Inspired by a rich tradition of visual reasoning and memory in cognitive
psychology and neuroscience, we developed an artificial, configurable visual
question and answer dataset (COG) to parallel experiments in humans and
animals. COG is much simpler than the general problem of video analysis, yet it
addresses many of the problems relating to visual and logical reasoning and
memory -- problems that remain challenging for modern deep learning
architectures. We additionally propose a deep learning architecture that
performs competitively on other diagnostic VQA datasets (i.e. CLEVR) as well as
easy settings of the COG dataset. However, several settings of COG result in
datasets that are progressively more challenging to learn. After training, the
network can zero-shot generalize to many new tasks. Preliminary analyses of the
network architectures trained on COG demonstrate that the network accomplishes
the task in a manner interpretable to humans.
|
[
{
"version": "v1",
"created": "Fri, 16 Mar 2018 06:53:45 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Jul 2018 14:12:49 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Yang",
"Guangyu Robert",
""
],
[
"Ganichev",
"Igor",
""
],
[
"Wang",
"Xiao-Jing",
""
],
[
"Shlens",
"Jonathon",
""
],
[
"Sussillo",
"David",
""
]
] |
new_dataset
| 0.999634 |
1804.02233
|
Igor Mozeti\v{c}
|
Igor Mozeti\v{c}, Peter Gabrov\v{s}ek, Petra Kralj Novak
|
Forex trading and Twitter: Spam, bots, and reputation manipulation
|
MIS2: Misinformation and Misbehavior Mining on the Web, Workshop at
WSDM-18, Marina Del Rey, CA, USA, Feb. 9, 2018
| null | null | null |
cs.SI cs.CL cs.CY econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currency trading (Forex) is the largest world market in terms of volume. We
analyze trading and tweeting about the EUR-USD currency pair over a period of
three years. First, a large number of tweets were manually labeled, and a
Twitter stance classification model is constructed. The model then classifies
all the tweets by the trading stance signal: buy, hold, or sell (EUR vs. USD).
The Twitter stance is compared to the actual currency rates by applying the
event study methodology, well-known in financial economics. It turns out that
there are large differences in Twitter stance distribution and potential
trading returns between the four groups of Twitter users: trading robots,
spammers, trading companies, and individual traders. Additionally, we observe
attempts of reputation manipulation by post festum removal of tweets with poor
predictions, and deleting/reposting of identical tweets to increase the
visibility without tainting one's Twitter timeline.
|
[
{
"version": "v1",
"created": "Fri, 6 Apr 2018 12:36:28 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Apr 2018 11:53:56 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Mozetič",
"Igor",
""
],
[
"Gabrovšek",
"Peter",
""
],
[
"Novak",
"Petra Kralj",
""
]
] |
new_dataset
| 0.980595 |
1807.04701
|
Sudipta Chattopadhyay
|
Sudipta Chattopadhyay, Abhik Roychoudhury
|
Symbolic Verification of Cache Side-channel Freedom
| null |
IEEE Transactions on Computer-Aided Design of Integrated Circuits
and Systems, 2018
| null | null |
cs.SE cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cache timing attacks allow third-party observers to retrieve sensitive
information from program executions. But, is it possible to automatically check
the vulnerability of a program against cache timing attacks and then,
automatically shield program executions against these attacks? For a given
program, a cache configuration and an attack model, our CACHEFIX framework
either verifies the cache side-channel freedom of the program or synthesizes a
series of patches to ensure cache side-channel freedom during program
execution. At the core of our framework is a novel symbolic verification
technique based on automated abstraction refinement of cache semantics. The
power of such a framework is to allow symbolic reasoning over counterexample
traces and to combine it with runtime monitoring for eliminating cache side
channels during program execution. Our evaluation with routines from OpenSSL,
libfixedtimefixedpoint, GDK and FourQlib libraries reveals that our CACHEFIX
approach (dis)proves cache side-channel freedom within an average of 75 seconds.
Besides, in all except one case, CACHEFIX synthesizes all patches within 20
minutes to ensure cache side-channel freedom of the respective routines during
execution.
|
[
{
"version": "v1",
"created": "Thu, 12 Jul 2018 16:14:24 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Chattopadhyay",
"Sudipta",
""
],
[
"Roychoudhury",
"Abhik",
""
]
] |
new_dataset
| 0.96579 |
1807.06822
|
Dong Hao
|
Bin Li, Dong Hao, Dengji Zhao, Tao Zhou
|
Customer Sharing in Economic Networks with Costs
|
Proceedings of the Twenty-Seventh International Joint Conference on
Artificial Intelligence. Main track. Pages 368-374. 2018
| null | null | null |
cs.GT cs.AI cs.MA cs.SI econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an economic market, sellers, infomediaries and customers constitute an
economic network. Each seller has her own customer group and the seller's
private customers are unobservable to other sellers. Therefore, a seller can
only sell commodities among her own customers unless other sellers or
infomediaries share her sale information to their customer groups. However, a
seller is not incentivized to share others' sale information by default, which
leads to inefficient resource allocation and limited revenue for the sale. To
tackle this problem, we develop a novel mechanism called customer sharing
mechanism (CSM) which incentivizes all sellers to share each other's sale
information to their private customer groups. Furthermore, CSM also
incentivizes all customers to truthfully participate in the sale. In the end,
CSM not only allocates the commodities efficiently but also optimizes the
seller's revenue.
|
[
{
"version": "v1",
"created": "Wed, 18 Jul 2018 08:55:27 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Li",
"Bin",
""
],
[
"Hao",
"Dong",
""
],
[
"Zhao",
"Dengji",
""
],
[
"Zhou",
"Tao",
""
]
] |
new_dataset
| 0.997571 |
1807.06850
|
Adrian Santos
|
Adrian Santos and Janne Jarvinen and Jari Partanen and Markku Oivo and
Natalia Juristo
|
Does the performance of TDD hold across software companies and premises?
A group of industrial experiments on TDD
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Test-Driven Development (TDD) has been claimed to increase external software
quality. However, the extent to which TDD increases external quality has been
seldom studied in industrial experiments. We conduct four industrial
experiments in two different companies to evaluate the performance of TDD on
external quality. We study whether the performance of TDD holds across premises
within the same company and across companies. We identify participant-level
characteristics impacting results. Iterative-Test Last (ITL), the reverse
approach of TDD, outperforms TDD in three out of four premises. ITL outperforms
TDD in both companies. The larger the experience with unit testing and testing
tools, the larger the difference in performance between ITL and TDD (in favour
of ITL). Technological environment (i.e., programming language and testing
tool) seems not to impact results. Evaluating participant-level characteristics
impacting results in industrial experiments may ease the understanding of the
performance of TDD in realistic settings.
|
[
{
"version": "v1",
"created": "Wed, 18 Jul 2018 10:34:49 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Jul 2018 10:47:02 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Santos",
"Adrian",
""
],
[
"Jarvinen",
"Janne",
""
],
[
"Partanen",
"Jari",
""
],
[
"Oivo",
"Markku",
""
],
[
"Juristo",
"Natalia",
""
]
] |
new_dataset
| 0.999358 |
1807.07596
|
Marinella Sciortino
|
F. Garofalo, G. Rosone, M. Sciortino, D. Verzotto
|
The colored longest common prefix array computed via sequential scans
|
Preliminary version of the paper that will be included in the SPIRE
2018 proceedings
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the increased availability of large datasets of biological sequences,
the tools for sequence comparison are now relying on efficient alignment-free
approaches to a greater extent. Most of the alignment-free approaches require
the computation of statistics of the sequences in the dataset. Such
computations become impractical in internal memory when very large collections
of long sequences are considered. In this paper, we present a new conceptual
data structure, the colored longest common prefix array (cLCP), that allows us to
efficiently tackle several problems with an alignment-free approach. In fact,
we show that such a data structure can be computed via sequential scans in
semi-external memory. By using cLCP, we propose an efficient lightweight
strategy to solve the multi-string Average Common Substring (ACS) problem, that
consists in the pairwise comparison of a single string against a collection of
$m$ strings simultaneously, in order to obtain $m$ ACS induced distances.
Experimental results confirm the effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Thu, 19 Jul 2018 18:33:20 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Garofalo",
"F.",
""
],
[
"Rosone",
"G.",
""
],
[
"Sciortino",
"M.",
""
],
[
"Verzotto",
"D.",
""
]
] |
new_dataset
| 0.999373 |
1807.07617
|
Matthias Zeppelzauer
|
Matthias Zeppelzauer and Alexis Ringot and Florian Taurer
|
SoniControl - A Mobile Ultrasonic Firewall
|
To appear in proceedings of 2018 ACM Multimedia Conference October
22--26, 2018, Seoul, Republic of Korea
| null |
10.1145/3240508.3241393
| null |
cs.MM cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exchange of data between mobile devices in the near-ultrasonic frequency
band is a new promising technology for near field communication (NFC) but also
raises a number of privacy concerns. We present the first ultrasonic firewall
that reliably detects ultrasonic communication and provides the user with
effective means to prevent hidden data exchange. This demonstration showcases a
new media-based communication technology ("data over audio") together with its
related privacy concerns. It enables users to (i) interactively test out and
experience ultrasonic information exchange and (ii) shows how to protect
oneself against unwanted tracking.
|
[
{
"version": "v1",
"created": "Thu, 19 Jul 2018 19:18:51 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Zeppelzauer",
"Matthias",
""
],
[
"Ringot",
"Alexis",
""
],
[
"Taurer",
"Florian",
""
]
] |
new_dataset
| 0.999646 |
1807.07752
|
Shaunak Joshi
|
Shaunak Joshi and Deepali Deshpande
|
Twitter Sentiment Analysis System
|
5 pages
|
International Journal of Computer Applications (2018)
|
10.5120/ijca2018917319
| null |
cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media is increasingly used by humans to express their feelings and
opinions in the form of short text messages. Detecting sentiments in the text
has a wide range of applications including identifying anxiety or depression of
individuals and measuring well-being or mood of a community. Sentiments can be
expressed in many ways that can be seen such as facial expression and gestures,
speech and by written text. Sentiment Analysis in text documents is essentially
a content-based classification problem involving concepts from the domains of
Natural Language Processing as well as Machine Learning. In this paper,
sentiment recognition based on textual data and the techniques used in
sentiment analysis are discussed.
|
[
{
"version": "v1",
"created": "Fri, 20 Jul 2018 09:19:08 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Joshi",
"Shaunak",
""
],
[
"Deshpande",
"Deepali",
""
]
] |
new_dataset
| 0.963237 |
1807.07770
|
Iosif Szeidert PhD
|
Cristian Vasar, Octavian Prostean, Ioan Filip, Iosif Szeidert
|
Wind Energy Conversion System - a Laboratory Setup
|
5 pages, 6 figures, SACI 2018, IEEE 12th International Symposium on
Applied Computational Intelligence and Informatics, May 17-19, Timi\c{s}oara,
Romania, pp. 313-317, ISBN: 978-1-5386-4639-7
|
SACI 2018, IEEE 12th International Symposium on Applied
Computational Intelligence and Informatics, May 17-19, Timi\c{s}oara,
Romania, pp. 313-317, ISBN: 978-1-5386-4639-7, IEEE
| null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a laboratory setup usable for the design and testing of a
Wind Energy Conversion System and its control solutions. The stand
can be used for research or in the engineering educational system offering the
possibility of studying the behavior of wind energy conversion systems,
including testing of some adequate control techniques, allowing the transition
from simple simulations on the computer to practical functional tests, much
closer to the reality of the site. The stand architecture is based on a
hardware platform integrating electrical machines, control equipment, power
devices, sensors, computing systems and appropriate software, all allowing a
flexible configuration to test a multitude of scenarios specific to the wind
energy domain. The wind turbine is emulated using an asynchronous motor with
direct torque control based on rotating speed measurement. The controlled
torque is applied to a synchronous generator and the output power is injected
into the grid.
|
[
{
"version": "v1",
"created": "Fri, 20 Jul 2018 10:15:06 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Vasar",
"Cristian",
""
],
[
"Prostean",
"Octavian",
""
],
[
"Filip",
"Ioan",
""
],
[
"Szeidert",
"Iosif",
""
]
] |
new_dataset
| 0.991807 |
1807.07818
|
Bal\'azs Csan\'ad Cs\'aji
|
Bal\'azs Csan\'ad Cs\'aji, Zsolt Kem\'eny, Gianfranco Pedone, Andr\'as
Kuti, J\'ozsef V\'ancza
|
Wireless Multi-Sensor Networks for Smart Cities: A Prototype System with
Statistical Data Analysis
|
9 pages, 8 figures, 3 tables, 27 references
|
IEEE Sensors Journal, Volume 17, Issue 23, 2017, pp. 7667-7676
|
10.1109/JSEN.2017.2736785
| null |
cs.CY cs.LG cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As urbanization proceeds at an astonishing rate, cities have to continuously
improve their solutions that affect the safety, health and overall wellbeing of
their residents. Smart city projects worldwide build on advanced sensor,
information and communication technologies to help deal with issues like air
pollution, waste management, traffic optimization, and energy efficiency. The
paper reports about the prototype of a smart city initiative in Budapest which
applies various sensors installed on the public lighting system and a
cloud-based analytical module. While the installed wireless multi-sensor
network gathers information about a number of stressors, the module integrates
and statistically processes the data. The module can handle inconsistent,
missing and noisy data and can extrapolate the measurements in time and space,
namely, it can create short-term forecasts and smoothed maps, both accompanied
by reliability estimates. The resulting database uses geometric representations
and can serve as an information centre for public services.
|
[
{
"version": "v1",
"created": "Fri, 20 Jul 2018 12:51:15 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Csáji",
"Balázs Csanád",
""
],
[
"Kemény",
"Zsolt",
""
],
[
"Pedone",
"Gianfranco",
""
],
[
"Kuti",
"András",
""
],
[
"Váncza",
"József",
""
]
] |
new_dataset
| 0.999304 |
1807.07824
|
Serhiy Semerikov
|
S. O. Semerikov, A. M. Striuk, K. I. Slovak, N. V. Rashevska, Yu. V.
Yechkalo
|
A man with a computer face (to the 80th anniversary of Ivan Edward
Sutherland)
|
16 pages, 8 figures, in Ukrainian
|
New computer technology 16 (2018) 9-24
| null | null |
cs.GL
|
http://creativecommons.org/licenses/by/4.0/
|
The article presents the main milestones of the science and technology
biography of Ivan Edward Sutherland. The influence of the family and the school
on the development of its research competencies is shown, and little-known
biographical facts explaining the evolution of his scientific interests is
presented: from dynamic object-oriented graphic systems through systems of
virtual reality to asynchronous circuits.
|
[
{
"version": "v1",
"created": "Tue, 3 Jul 2018 18:00:40 GMT"
}
] | 2018-07-23T00:00:00 |
[
[
"Semerikov",
"S. O.",
""
],
[
"Striuk",
"A. M.",
""
],
[
"Slovak",
"K. I.",
""
],
[
"Rashevska",
"N. V.",
""
],
[
"Yechkalo",
"Yu. V.",
""
]
] |
new_dataset
| 0.999576 |
1706.07568
|
Mohamed Hassan Dr.
|
Nivedita Sritharan, Anirudh M. Kaushik, Mohamed Hassan, and Hiren
Patel
|
HourGlass: Predictable Time-based Cache Coherence Protocol for
Dual-Critical Multi-Core Systems
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a hardware mechanism called HourGlass to predictably share data in
a multi-core system where cores are explicitly designated as critical or
non-critical. HourGlass is a time-based cache coherence protocol for
dual-critical multi-core systems that ensures worst-case latency (WCL) bounds
for memory requests originating from critical cores. Although HourGlass does
not provide either WCL or bandwidth guarantees for memory requests from
non-critical cores, it promotes the use of timers to improve its bandwidth
utilization while still maintaining WCL bounds for critical cores. This
encourages a trade-off between the WCL bounds for critical cores, and the
improved memory bandwidth for non-critical cores via timer configurations. We
evaluate HourGlass using gem5, and with multithreaded benchmark suites
including SPLASH-2, and synthetic workloads. Our results show that the WCL for
critical cores with HourGlass is always within the analytical WCL bounds, and
provides a tighter WCL bound on critical cores compared to the state-of-the-art
real-time cache coherence protocol. Further, we show that HourGlass enables a
trade-off between provable WCL bounds for critical cores, and improved
bandwidth utilization for non-critical cores. The average-case performance of
HourGlass is comparable to the state-of-the-art real-time cache coherence
protocol, and suffers a slowdown of 1.43x and 1.46x compared to the
conventional MSI and MESI protocols.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2017 05:36:19 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2017 20:58:37 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Jul 2018 21:10:32 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Sritharan",
"Nivedita",
""
],
[
"Kaushik",
"Anirudh M.",
""
],
[
"Hassan",
"Mohamed",
""
],
[
"Patel",
"Hiren",
""
]
] |
new_dataset
| 0.999066 |
1710.00477
|
Santiago Castro
|
Santiago Castro, Luis Chiruzzo, Aiala Ros\'a, Diego Garat and
Guillermo Moncecchi
|
A Crowd-Annotated Spanish Corpus for Humor Analysis
|
Camera-ready version of the paper submitted to SocialNLP 2018, with a
fixed typo
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational Humor involves several tasks, such as humor recognition, humor
generation, and humor scoring, for which it is useful to have human-curated
data. In this work we present a corpus of 27,000 tweets written in Spanish and
crowd-annotated by their humor value and funniness score, with about four
annotations per tweet, tagged by 1,300 people over the Internet. It is equally
divided between tweets coming from humorous and non-humorous accounts. The
inter-annotator agreement Krippendorff's alpha value is 0.5710. The dataset is
available for general use and can serve as a basis for humor detection and as a
first step to tackle subjectivity.
|
[
{
"version": "v1",
"created": "Mon, 2 Oct 2017 04:16:36 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Oct 2017 23:17:52 GMT"
},
{
"version": "v3",
"created": "Mon, 28 May 2018 18:26:21 GMT"
},
{
"version": "v4",
"created": "Thu, 19 Jul 2018 04:52:36 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Castro",
"Santiago",
""
],
[
"Chiruzzo",
"Luis",
""
],
[
"Rosá",
"Aiala",
""
],
[
"Garat",
"Diego",
""
],
[
"Moncecchi",
"Guillermo",
""
]
] |
new_dataset
| 0.974253 |
1806.03255
|
Austin Hounsel
|
Austin Hounsel, Prateek Mittal, Nick Feamster
|
Automatically Generating a Large, Culture-Specific Blocklist for China
| null | null | null | null |
cs.CY cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet censorship measurements rely on lists of websites to be tested, or
"block lists" that are curated by third parties. Unfortunately, many of these
lists are not public, and those that are tend to focus on a small group of
topics, leaving other types of sites and services untested. To increase and
diversify the set of sites on existing block lists, we use natural language
processing and search engines to automatically discover a much wider range of
websites that are censored in China. Using these techniques, we create a list
of 1,125 websites outside the Alexa Top 1,000 that cover Chinese politics,
minority human rights organizations, oppressed religions, and more.
Importantly, $\textit{none of the sites we discover are present on the current
largest block list}$. The list that we develop not only vastly expands the set
of sites that current Internet measurement tools can test, but it also deepens
our understanding of the nature of content that is censored in China. We have
released both this new block list and the code for generating it.
|
[
{
"version": "v1",
"created": "Mon, 4 Jun 2018 20:58:09 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Jul 2018 16:02:31 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Hounsel",
"Austin",
""
],
[
"Mittal",
"Prateek",
""
],
[
"Feamster",
"Nick",
""
]
] |
new_dataset
| 0.998326 |
1807.07333
|
Javid Dadashkarimi
|
Javid Dadashkarimi and Sekhar Tatikonda
|
Sequence to Logic with Copy and Cache
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating logical-form equivalents of natural language is a natural fit for
neural encoder-decoder architectures, in which long short-term memory units
effectively capture dependencies on both the encoder and decoder sides.
The logical form usually preserves information from the natural language
input in the form of similar tokens, and a copying mechanism has recently
been proposed that increases the probability of emitting tokens from the
source input during decoding.
In this paper we propose a caching mechanism as a more general form of the
copying mechanism which also weighs all the words from the source vocabulary
according to their relation to the current decoding context.
Our results confirm that the proposed method achieves improvements in
sequence/token-level accuracy on sequence to logical form tasks. Further
experiments on cross-domain adversarial attacks show substantial improvements
when using the most influential examples of other domains for training.
|
[
{
"version": "v1",
"created": "Thu, 19 Jul 2018 10:32:52 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Dadashkarimi",
"Javid",
""
],
[
"Tatikonda",
"Sekhar",
""
]
] |
new_dataset
| 0.99603 |
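The copy-style mechanisms described in the abstract above blend the decoder's vocabulary distribution with probability mass routed to source tokens; the cache variant generalizes this by weighting all source-vocabulary words by their relation to the decoding context. The following is an illustrative sketch of the basic copy/generate mixture only — the function names, shapes, and gate value are assumptions, not the paper's model.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def mix_copy_distribution(vocab_logits, copy_scores, source_to_vocab, gate):
    """Blend a decoder vocabulary distribution with a copy distribution.

    vocab_logits   : one score per vocabulary word (hypothetical decoder output)
    copy_scores    : one attention score per source position
    source_to_vocab: vocabulary index of each source token
    gate           : probability of generating from the vocabulary (0..1)
    """
    p_vocab = softmax(vocab_logits)
    p_copy_src = softmax(copy_scores)
    mixed = [gate * p for p in p_vocab]
    for pos, vocab_id in enumerate(source_to_vocab):
        mixed[vocab_id] += (1 - gate) * p_copy_src[pos]  # route copy mass
    return mixed

# Source tokens map to vocab ids 2 and 0; the gate leans toward copying,
# so the strongly attended source token (vocab id 2) dominates.
dist = mix_copy_distribution([0.1, 0.2, 0.3], [2.0, 0.5], [2, 0], gate=0.4)
```

A caching mechanism in the sense of the abstract would replace `source_to_vocab` with scores over the full source vocabulary rather than only the tokens present in the current input.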
1807.07336
|
Yinan Qi
|
Yinan Qi, Mythri Hunukumbure, Hyungju Nam, Hyunil Yoo, Saidhiraj Amuru
|
On the Phase Tracking Reference Signal (PT-RS) Design for 5G New Radio
(NR)
|
5 pages, 12 figures, accepted by VTC Fall 2018
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The volume of mobile data traffic has been driven to an unprecedented high
level due to the proliferation of smartphones/mobile devices that support a
wide range of broadband applications and services, requiring a next generation
mobile communication system, i.e., the fifth generation (5G). Millimeter wave
(mmWave) bands can offer much larger available spectrum bandwidth and thus are
considered as one of the most promising approaches to significantly boost the
capacity in 5G NR. However, devices and network radio nodes operating on mmWave
bands suffer from phase noise and without correction of phase noise, the
performance of the network could potentially suffer significant losses. In this
paper, we investigate the effects of phase noise and provide comprehensive
solutions to track the phase noise by using phase tracking reference signals
(PT-RS), as currently standardized in 3GPP Release 15. The design aspects such
as PT-RS pattern, interference randomization, multi-TRP operation, etc., are
investigated and evaluation results are also provided.
|
[
{
"version": "v1",
"created": "Thu, 19 Jul 2018 10:38:01 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Qi",
"Yinan",
""
],
[
"Hunukumbure",
"Mythri",
""
],
[
"Nam",
"Hyungju",
""
],
[
"Yoo",
"Hyunil",
""
],
[
"Amuru",
"Saidhiraj",
""
]
] |
new_dataset
| 0.998396 |
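The PT-RS described in the abstract above lets a receiver estimate the common phase error (CPE) that phase noise imposes across an OFDM symbol. A minimal sketch of CPE estimation from known pilots follows — this is a textbook least-squares correlator, not the paper's or 3GPP's reference implementation, and the pilot values are illustrative.

```python
import cmath

def estimate_cpe(received, pilots):
    """Estimate the common phase rotation from PT-RS-like pilots.

    Correlate received pilot subcarriers against the known transmitted
    pilots and take the angle of the accumulated sum.
    """
    acc = sum(r * p.conjugate() for r, p in zip(received, pilots))
    return cmath.phase(acc)

def derotate(symbols, cpe):
    """Apply the estimated phase correction to data subcarriers."""
    rot = cmath.exp(-1j * cpe)
    return [s * rot for s in symbols]

# Simulate a 0.3 rad common phase error on three BPSK pilots.
pilots = [1 + 0j, -1 + 0j, 1 + 0j]
received = [p * cmath.exp(1j * 0.3) for p in pilots]
cpe_hat = estimate_cpe(received, pilots)   # ≈ 0.3
corrected = derotate(received, cpe_hat)
```

Design aspects the paper studies, such as the PT-RS time/frequency density, trade pilot overhead against the noise on this estimate.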
1807.07438
|
Wei Guo
|
Wei Guo, Weile Zhang, Pengcheng Mu, Feifei Gao, and Hai Lin
|
High-Mobility Wideband Massive MIMO Communications: Doppler
Compensation, Analysis and Scaling Law
|
arXiv admin note: text overlap with arXiv:1704.04725
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we apply angle-domain Doppler compensation for high-mobility
wideband massive multi-input multi-output (MIMO) uplink transmission. The
time-varying multipath channel is considered between high-speed terminal and
static base station (BS), where multiple Doppler frequency offsets (DFOs) are
associated with distinct angle of departures (AoDs). With the aid of the
large-scale uniform linear array (ULA) at the transmitter, we design a
beamforming network to generate multiple parallel beamforming branches, each
transmitting signal pointing to one particular angle. Then, the transmitted
signal in each branch will experience only one dominant DFO when passing over
the time-varying channel, which can be easily compensated before transmission
starts. We theoretically analyze the Doppler spread of the equivalent uplink
channel after angle-domain Doppler compensation, which takes into account both
the mainlobe and sidelobes of the transmit beam in each branch. It is seen that
the channel time-variation can be effectively suppressed if the number of
transmit antennas is sufficiently large. Interestingly, the asymptotic scaling
law of channel variation is obtained, which shows that the Doppler spread is
proportional to the maximum DFO and decreases approximately as $1/\sqrt{M}$
($M$ is the number of transmit antennas) when $M$ is sufficiently large.
Numerical results are provided to corroborate the proposed scheme.
|
[
{
"version": "v1",
"created": "Wed, 18 Jul 2018 09:58:02 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Guo",
"Wei",
""
],
[
"Zhang",
"Weile",
""
],
[
"Mu",
"Pengcheng",
""
],
[
"Gao",
"Feifei",
""
],
[
"Lin",
"Hai",
""
]
] |
new_dataset
| 0.999194 |
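The scaling law in the abstract above says the residual Doppler spread after angle-domain compensation decreases approximately as $1/\sqrt{M}$. A tiny numeric illustration of the proportionality follows; the constant `c` is an assumption, since the abstract gives only the scaling, not the constant.

```python
import math

def doppler_spread_bound(f_d_max_hz, num_tx_antennas, c=1.0):
    """Asymptotic residual Doppler spread: spread ~ c * f_D,max / sqrt(M).

    The constant c is a placeholder assumption; the abstract states only
    the proportionality to the maximum DFO and to 1/sqrt(M).
    """
    return c * f_d_max_hz / math.sqrt(num_tx_antennas)

# Quadrupling the transmit array halves the residual Doppler spread.
ratio = doppler_spread_bound(1000, 64) / doppler_spread_bound(1000, 256)
```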
1807.07521
|
Jo\~ao Bernardino
|
Jo\~ao Bernardino (1), Lu\'is Filipe Teixeira (1 and 2), Hugo Sereno
Ferreira (1 and 2) ((1) DEI - Faculty of Engineering - University of Porto,
(2) INESC TEC)
|
Bio-Measurements Estimation and Support in Knee Recovery through Machine
Learning
|
8 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knee injuries are frequent, varied and often require the patient to undergo
intensive rehabilitation for several months. Treatment protocols usually
contemplate some recurrent measurements in order to assess progress, such as
goniometry. The need for specific equipment or the complexity and duration of
these tasks cause them to often be neglected. A novel deep learning based
solution is presented, supported by the generation of a synthetic image
dataset. A 3D human-body model was used for this purpose, simulating a
recovering patient. For each image, the coordinates of three key points were
registered: the centers of the thigh, the knee and the lower leg. These values
are sufficient to estimate the flexion angle. Convolutional neural networks
were then trained for predicting these six coordinates. Transfer learning was
used with the VGG16 and InceptionV3 models pre-trained on the ImageNet dataset,
along with an additional custom model trained from scratch. All models were tested
with different combinations of data augmentation techniques applied on the
training sets. InceptionV3 achieved the best overall results, producing
considerably good predictions even on real unedited pictures.
|
[
{
"version": "v1",
"created": "Thu, 19 Jul 2018 16:24:22 GMT"
}
] | 2018-07-20T00:00:00 |
[
[
"Bernardino",
"João",
"",
"1 and 2"
],
[
"Teixeira",
"Luís Filipe",
"",
"1 and 2"
],
[
"Ferreira",
"Hugo Sereno",
"",
"1 and 2"
]
] |
new_dataset
| 0.965477 |
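The abstract above notes that the three predicted keypoints (centers of the thigh, knee, and lower leg) suffice to estimate the flexion angle. A sketch of that geometry follows: it computes the angle at the knee between the two limb directions. Whether clinical flexion is reported as this angle or as 180° minus it is a convention; the choice here is an assumption, as is the 2D keypoint format.

```python
import math

def knee_angle_deg(thigh, knee, lower_leg):
    """Angle (degrees) at the knee between the knee->thigh and
    knee->lower-leg directions, from 2D keypoints (x, y).

    Note: clinical flexion is often reported as 180 minus this value
    (0 degrees = fully extended); that convention is an assumption here.
    """
    v1 = (thigh[0] - knee[0], thigh[1] - knee[1])
    v2 = (lower_leg[0] - knee[0], lower_leg[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos_theta))

# Straight leg: the three centers are collinear -> 180 degrees.
angle = knee_angle_deg((0, 2), (0, 0), (0, -2))  # → 180.0
```

This is why regressing the six coordinates is enough: the angle is a pure function of the three points, so no separate angle head is needed.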