id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1709.00488
|
Fangda Li
|
Fangda Li, Ankit V. Manerikar and Avinash C. Kak
|
RMPD - A Recursive Mid-Point Displacement Algorithm for Path Planning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by what is required for real-time path planning, the paper starts
out by presenting sRMPD, a new recursive "local" planner founded on the key
notion that, unless made necessary by an obstacle, there must be no deviation
from the shortest path between any two points, which would normally be a
straight line path in the configuration space. Subsequently, we increase the
power of sRMPD by using it as a "connect" subroutine call in a higher-level
sampling-based algorithm mRMPD that is inspired by multi-RRT. As a consequence,
mRMPD spawns a larger number of space exploring trees in regions of the
configuration space that are characterized by a higher density of obstacles.
The overall effect is a hybrid tree growing strategy with a trade-off between
random exploration as made possible by multi-RRT based logic and immediate
exploitation of opportunities to connect two states as made possible by sRMPD.
The mRMPD planner can be biased with regard to this trade-off for solving
different kinds of planning problems efficiently. Our experiments show that, on
the test cases we have run, mRMPD can reduce planning time by up to 80%
compared to basic RRT.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2017 21:35:04 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2018 01:31:52 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Li",
"Fangda",
""
],
[
"Manerikar",
"Ankit V.",
""
],
[
"Kak",
"Avinash C.",
""
]
] |
new_dataset
| 0.994969 |
1709.06283
|
Douglas Morrison
|
D. Morrison, A.W. Tow, M. McTaggart, R. Smith, N. Kelly-Boxall, S.
Wade-McCue, J. Erskine, R. Grinover, A. Gurman, T. Hunn, D. Lee, A. Milan, T.
Pham, G. Rallos, A. Razjigaev, T. Rowntree, K. Vijay, Z. Zhuang, C. Lehnert,
I. Reid, P. Corke and J. Leitner
|
Cartman: The low-cost Cartesian Manipulator that won the Amazon Robotics
Challenge
|
To appear at the IEEE International Conference on Robotics and
Automation (ICRA) 2018. 8 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Amazon Robotics Challenge enlisted sixteen teams to each design a
pick-and-place robot for autonomous warehousing, addressing development in
robotic vision and manipulation. This paper presents the design of our
custom-built, cost-effective, Cartesian robot system Cartman, which won first
place in the competition finals by stowing 14 (out of 16) and picking all 9
items in 27 minutes, scoring a total of 272 points. We highlight our
experience-centred design methodology and key aspects of our system that
contributed to our competitiveness. We believe these aspects are crucial to
building robust and effective robotic systems.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2017 08:01:43 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2018 04:02:06 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Morrison",
"D.",
""
],
[
"Tow",
"A. W.",
""
],
[
"McTaggart",
"M.",
""
],
[
"Smith",
"R.",
""
],
[
"Kelly-Boxall",
"N.",
""
],
[
"Wade-McCue",
"S.",
""
],
[
"Erskine",
"J.",
""
],
[
"Grinover",
"R.",
""
],
[
"Gurman",
"A.",
""
],
[
"Hunn",
"T.",
""
],
[
"Lee",
"D.",
""
],
[
"Milan",
"A.",
""
],
[
"Pham",
"T.",
""
],
[
"Rallos",
"G.",
""
],
[
"Razjigaev",
"A.",
""
],
[
"Rowntree",
"T.",
""
],
[
"Vijay",
"K.",
""
],
[
"Zhuang",
"Z.",
""
],
[
"Lehnert",
"C.",
""
],
[
"Reid",
"I.",
""
],
[
"Corke",
"P.",
""
],
[
"Leitner",
"J.",
""
]
] |
new_dataset
| 0.99904 |
1710.03103
|
Mahdi Azari
|
Mohammad Mahdi Azari, Fernando Rosas, Alessandro Chiumento, Sofie
Pollin
|
Coexistence of Terrestrial and Aerial Users in Cellular Networks
|
Accepted for presentation at the IEEE GLOBECOM 2017 workshops
| null |
10.1109/GLOCOMW.2017.8269068
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enabling the integration of aerial mobile users into existing cellular
networks would make possible a number of promising applications. However,
current cellular networks have not been designed to serve aerial users, and
hence an exploration of design parameters is required in order to allow network
providers to modify their current infrastructure. As a first step in this
direction, this paper provides an in-depth analysis of the coverage probability
of the downlink of a cellular network that serves both aerial and ground users.
We present an exact mathematical characterization of the coverage probability,
which includes the effect of base station (BS) height, antenna pattern, and
drone altitude for various types of urban environments. Interestingly, our
results show that the favorable propagation conditions that aerial users enjoy
due to their altitude are also their strongest limiting factor, as they leave
them vulnerable to interference. This negative effect can be substantially
reduced by optimizing the flying altitude, the base station height, and the
antenna down-tilt. Moreover, lowering the base station height and increasing
the down-tilt angle are in general beneficial for both terrestrial and aerial
users, pointing out a possible path to enabling their coexistence.
|
[
{
"version": "v1",
"created": "Mon, 9 Oct 2017 14:03:59 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Azari",
"Mohammad Mahdi",
""
],
[
"Rosas",
"Fernando",
""
],
[
"Chiumento",
"Alessandro",
""
],
[
"Pollin",
"Sofie",
""
]
] |
new_dataset
| 0.955152 |
1710.07756
|
Yuanxing Zhang
|
Yuanxing Zhang, Zhuqi Li, Chengliang Gao, Kaigui Bian, Lingyang Song,
Shaoling Dong, Xiaoming Li
|
Mobile Social Big Data: WeChat Moments Dataset, Network Applications,
and Opportunities
|
Accepted by IEEE Network
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In parallel to the increase of various mobile technologies, the mobile social
network (MSN) service has brought us into an era of mobile social big data,
where people are creating new social data every second and everywhere. It is of
vital importance for businesses, governments, and institutions to understand how
people's behaviors in online cyberspace can affect the underlying computer
network, or their offline behaviors at large. To study this problem, we collect
a dataset from WeChat Moments, called WeChatNet, which involves 25,133,330
WeChat users with 246,369,415 records of link reposting on their pages. We
revisit three network applications based on the data analytics over WeChatNet,
i.e., the information dissemination in mobile cellular networks, the network
traffic prediction in backbone networks, and the mobile population distribution
projection. Meanwhile, we discuss the potential research opportunities for
developing new applications using the released dataset.
|
[
{
"version": "v1",
"created": "Sat, 21 Oct 2017 05:55:18 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2017 01:09:29 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Feb 2018 06:21:15 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Zhang",
"Yuanxing",
""
],
[
"Li",
"Zhuqi",
""
],
[
"Gao",
"Chengliang",
""
],
[
"Bian",
"Kaigui",
""
],
[
"Song",
"Lingyang",
""
],
[
"Dong",
"Shaoling",
""
],
[
"Li",
"Xiaoming",
""
]
] |
new_dataset
| 0.997918 |
1711.02162
|
Prafulla Kumar Choubey
|
Prafulla Kumar Choubey and Ruihong Huang
|
TAMU at KBP 2017: Event Nugget Detection and Coreference Resolution
|
TAC KBP 2017
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe TAMU's system submitted to the TAC KBP 2017 event
nugget detection and coreference resolution task. Our system builds on the
statistical and empirical observations made on training and development data.
We found that modifiers of event nuggets tend to have a unique syntactic
distribution. Their part-of-speech tags and dependency relations provide
essential characteristics that are useful in identifying their span and also in
defining their types and realis status. We further found that the joint
modeling of event span detection and realis status identification performs
better than the individual models for both tasks. Our simple system designed
using minimal features achieved micro-average F1 scores of 57.72, 44.27, and
42.47 for the event span detection, type identification, and realis status
classification tasks, respectively. Our system also achieved a CoNLL F1 score
of 27.20 in the event coreference resolution task.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 20:30:50 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Feb 2018 06:02:10 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Choubey",
"Prafulla Kumar",
""
],
[
"Huang",
"Ruihong",
""
]
] |
new_dataset
| 0.998233 |
1802.08690
|
Chenhao Tan
|
Chenhao Tan and Hao Peng and Noah A. Smith
|
"You are no Jack Kennedy": On Media Selection of Highlights from
Presidential Debates
|
10 pages, 5 figures, to appear in Proceedings of WWW 2018, data and
more at https://chenhaot.com/papers/debate-quotes.html
| null |
10.1145/3178876.3186142
| null |
cs.SI cs.CL physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Political speeches and debates play an important role in shaping the images
of politicians, and the public often relies on media outlets to select bits of
political communication from a large pool of utterances. It is an important
research question to understand what factors impact this selection process.
To quantitatively explore the selection process, we build a three-decade
dataset of presidential debate transcripts and post-debate coverage. We first
examine the effect of wording and propose a binary classification framework
that controls for both the speaker and the debate situation. We find that
crowdworkers can only achieve an accuracy of 60% in this task, indicating that
media choices are not entirely obvious. Our classifiers outperform crowdworkers
on average, mainly in primary debates. We also compare important factors from
crowdworkers' free-form explanations with those from data-driven methods and
find interesting differences. Few crowdworkers mentioned that "context
matters", whereas our data show that well-quoted sentences are more distinct
from the previous utterance by the same speaker than less-quoted sentences.
Finally, we examine the aggregate effect of media preferences towards different
wordings to understand the extent of fragmentation among media outlets. By
analyzing a bipartite graph built from quoting behavior in our data, we observe
a decreasing trend in bipartisan coverage.
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 19:00:01 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Tan",
"Chenhao",
""
],
[
"Peng",
"Hao",
""
],
[
"Smith",
"Noah A.",
""
]
] |
new_dataset
| 0.994879 |
1802.08751
|
Lili Wang
|
L. Wang, J. Liu, A. S. Morse, B. D. O. Anderson and D. Fullmer
|
A Generalized Discrete-Time Altafini Model
|
7 pages, 3 figures, ECC paper
| null | null | null |
cs.SY cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A discrete-time modulus consensus model is considered in which the
interaction among a family of networked agents is described by a time-dependent
gain graph whose vertices correspond to agents and whose arcs are assigned
complex numbers from a cyclic group. Limiting behavior of the model is studied
using a graphical approach. It is shown that, under appropriate connectedness,
a certain type of clustering will be reached exponentially fast for almost all
initial conditions if and only if the sequence of gain graphs is "repeatedly
jointly structurally balanced" corresponding to that type of clustering, where
the number of clusters is at most the order of a cyclic group. It is also shown
that the model will reach a consensus asymptotically at zero if the sequence of
gain graphs is repeatedly jointly strongly connected and structurally
unbalanced. In the special case when the cyclic group is of order two, the
model simplifies to the so-called Altafini model whose gain graph is simply a
signed graph.
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 22:27:47 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Wang",
"L.",
""
],
[
"Liu",
"J.",
""
],
[
"Morse",
"A. S.",
""
],
[
"Anderson",
"B. D. O.",
""
],
[
"Fullmer",
"D.",
""
]
] |
new_dataset
| 0.972467 |
1802.08781
|
Ligang Zhang
|
Ligang Zhang, Brijesh Verma
|
Superpixel based Class-Semantic Texton Occurrences for Natural Roadside
Vegetation Segmentation
|
This is a pre-print of an article published in Machine Vision and
Applications. The final authenticated version is available online at:
https://doi.org/10.1007/s00138-017-0833-7
|
Machine Vision and Applications (2017) 28: 293
|
10.1007/s00138-017-0833-7
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vegetation segmentation from roadside data is a field that has received
relatively little attention in present studies, but can be of great potential
in a wide range of real-world applications, such as road safety assessment and
vegetation condition monitoring. In this paper, we present a novel approach
that generates class-semantic color-texture textons and aggregates superpixel
based texton occurrences for vegetation segmentation in natural roadside
images. Pixel-level class-semantic textons are first learnt by generating two
individual sets of bag-of-word visual dictionaries from color and filter-bank
texture features separately for each object class using manually cropped
training data. A test image is first oversegmented into a set of
homogeneous superpixels. The color and texture features of all pixels in each
superpixel are extracted and further mapped to one of the learnt textons using
the nearest distance metric, resulting in a color and a texture texton
occurrence matrix. The color and texture texton occurrences are aggregated
using a linear mixing method over each superpixel and the segmentation is
finally achieved using a simple yet effective majority voting strategy.
Evaluations on two public image datasets from videos collected by the
Department of Transport and Main Roads (DTMR), Queensland, Australia, and a
public roadside grass dataset show high accuracy of the proposed approach. We
also demonstrate the effectiveness of the approach for vegetation segmentation
in real-world scenarios.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 01:51:41 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Zhang",
"Ligang",
""
],
[
"Verma",
"Brijesh",
""
]
] |
new_dataset
| 0.998968 |
1802.08799
|
Boris Aronov
|
Boris Aronov and Anirudh Donakonda and Esther Ezra and Rom Pinchasi
|
On Pseudo-disk Hypergraphs
|
Submitted for publication
| null | null | null |
cs.CG math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $F$ be a family of pseudo-disks in the plane, and $P$ be a finite subset
of $F$. Consider the hypergraph $H(P,F)$ whose vertices are the pseudo-disks in
$P$ and the edges are all subsets of $P$ of the form $\{D \in P \mid D \cap S
\neq \emptyset\}$, where $S$ is a pseudo-disk in $F$. We give an upper bound of
$O(nk^3)$ for the number of edges in $H(P,F)$ of cardinality at most $k$. This
generalizes a result of Buzaglo et al. (2013).
As an application of our bound, we obtain an algorithm that computes a
constant-factor approximation to the smallest _weighted_ dominating set in a
collection of pseudo-disks in the plane, in expected polynomial time.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 04:51:48 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Aronov",
"Boris",
""
],
[
"Donakonda",
"Anirudh",
""
],
[
"Ezra",
"Esther",
""
],
[
"Pinchasi",
"Rom",
""
]
] |
new_dataset
| 0.970838 |
1802.08824
|
Kaichun Mo
|
Kaichun Mo, Haoxiang Li, Zhe Lin and Joon-Young Lee
|
The AdobeIndoorNav Dataset: Towards Deep Reinforcement Learning based
Real-world Indoor Robot Visual Navigation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep reinforcement learning (DRL) demonstrates its potential in learning a
model-free navigation policy for robot visual navigation. However, the
data-demanding algorithm relies on a large number of navigation trajectories in
training. Existing datasets supporting training such robot navigation
algorithms consist of either 3D synthetic scenes or reconstructed scenes.
Synthetic data suffers from a domain gap with real-world scenes, while visual
inputs rendered from 3D reconstructed scenes have undesired holes and
artifacts. In this paper, we present a new dataset collected in the real world
to facilitate research in DRL based visual navigation. Our dataset includes 3D
reconstruction for real-world scenes as well as densely captured real 2D images
from the scenes. It provides high-quality visual inputs with real-world scene
complexity to the robot at dense grid locations. We further study and benchmark
one recent DRL based navigation algorithm and present our attempts and thoughts
on improving its generalizability to unseen test targets in the scenes.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 09:42:18 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Mo",
"Kaichun",
""
],
[
"Li",
"Haoxiang",
""
],
[
"Lin",
"Zhe",
""
],
[
"Lee",
"Joon-Young",
""
]
] |
new_dataset
| 0.99949 |
1802.08872
|
Hamid Hamraz
|
Hamid Hamraz, Nathan B. Jacobs, Marco A. Contreras, and Chase H. Clark
|
Deep learning for conifer/deciduous classification of airborne LiDAR 3D
point clouds representing individual trees
|
Under review as of the date of submission
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of this study was to investigate the use of deep learning for
coniferous/deciduous classification of individual trees from airborne LiDAR
data. To enable efficient processing by a deep convolutional neural network
(CNN), we designed two discrete representations using leaf-off and leaf-on
LiDAR data: a digital surface model with four channels (DSMx4) and a set of
four 2D views (4x2D). A training dataset of labeled tree crowns was generated
via segmentation of tree crowns, followed by co-registration with field data.
Potential mislabels due to GPS error or tree leaning were corrected using a
statistical ensemble filtering procedure. Because the training data was heavily
unbalanced (~8% conifers), we trained an ensemble of CNNs on random balanced
sub-samples of augmented data (180 rotational variations per instance). The
4x2D representation yielded similar classification accuracies to the DSMx4
representation (~82% coniferous and ~90% deciduous) while converging faster.
The data augmentation improved the classification accuracies, but more real
training instances (especially coniferous) would likely result in much stronger
improvements. Leaf-off LiDAR data were the primary source of useful
information, which is likely due to the perennial nature of coniferous foliage.
LiDAR intensity values also proved to be useful, but normalization yielded no
significant improvements. Lastly, the classification accuracies of overstory
trees (~90%) were more balanced than those of understory trees (~90% deciduous
and ~65% coniferous), which is likely due to the incomplete capture of
understory tree crowns via airborne LiDAR. Automatic derivation of optimal
features via deep learning provides the opportunity for remarkable improvements
in prediction tasks where the captured data are not friendly to the human
visual system, and hand-designed features are therefore likely sub-optimal.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 16:10:39 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Hamraz",
"Hamid",
""
],
[
"Jacobs",
"Nathan B.",
""
],
[
"Contreras",
"Marco A.",
""
],
[
"Clark",
"Chase H.",
""
]
] |
new_dataset
| 0.998025 |
1802.08909
|
Sunrita Poddar
|
Sunrita Poddar, Yasir Mohsin, Deidra Ansah, Bijoy Thattaliyath, Ravi
Ashwath, Mathews Jacob
|
Free-breathing cardiac MRI using bandlimited manifold modelling
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel bandlimited manifold framework and an algorithm to
recover free-breathing and ungated cardiac MR images from highly undersampled
measurements. The image frames in the free-breathing and ungated dataset are
assumed to be points on a bandlimited manifold. We introduce a novel kernel
low-rank algorithm to estimate the manifold structure (Laplacian) from a
navigator-based acquisition scheme. The structure of the manifold is then used
to recover the images from highly undersampled measurements. A computationally
efficient algorithm, which relies on the bandlimited approximation of the
Laplacian matrix, is used to recover the images. The proposed scheme is
demonstrated on several patients with different breathing patterns and cardiac
rates, without the need to manually tune the reconstruction
parameters in each case. The proposed scheme enabled the recovery of
free-breathing and ungated data, providing reconstructions that are
qualitatively similar to breath-held scans performed on the same patients. This
shows the potential of the technique as a clinical protocol for free-breathing
cardiac scans.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 20:43:23 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Poddar",
"Sunrita",
""
],
[
"Mohsin",
"Yasir",
""
],
[
"Ansah",
"Deidra",
""
],
[
"Thattaliyath",
"Bijoy",
""
],
[
"Ashwath",
"Ravi",
""
],
[
"Jacob",
"Mathews",
""
]
] |
new_dataset
| 0.999623 |
1802.08916
|
Shahrzad Keshavarz
|
Shahrzad Keshavarz, Falk Schellenberg, Bastian Richter, Christof Paar,
Daniel Holcomb
|
SAT-based Reverse Engineering of Gate-Level Schematics using Fault
Injection and Probing
|
IEEE International Symposium on Hardware Oriented Security and Trust
(HOST)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gate camouflaging is a known security enhancement technique that tries to
thwart reverse engineering by hiding the functions of gates or the connections
between them. A number of works on SAT-based attacks have shown that it is
often possible to reverse engineer a circuit function by combining a
camouflaged circuit model and the ability to have oracle access to the
obfuscated combinational circuit. Especially in small circuits it is easy to
reverse engineer the circuit function in this way, but SAT-based reverse
engineering techniques provide no guarantees of recovering a circuit that is
gate-by-gate equivalent to the original design. In this work we show that an
attacker who does not know gate functions or connections of an aggressively
camouflaged circuit cannot learn the correct gate-level schematic even if able
to control inputs and probe all combinational nodes of the circuit. We then
present a stronger attack that extends SAT-based reverse engineering with fault
analysis to allow an attacker to recover the correct gate-level schematic. We
analyze our reverse engineering approach on an S-Box circuit.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 21:24:48 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Keshavarz",
"Shahrzad",
""
],
[
"Schellenberg",
"Falk",
""
],
[
"Richter",
"Bastian",
""
],
[
"Paar",
"Christof",
""
],
[
"Holcomb",
"Daniel",
""
]
] |
new_dataset
| 0.999354 |
1802.08925
|
Aaron Lee
|
Cecilia S. Lee, Ariel J. Tyring, Yue Wu, Sa Xiao, Ariel S. Rokem,
Nicolaas P. Deruyter, Qinqin Zhang, Adnan Tufail, Ruikang K. Wang, Aaron Y.
Lee
|
Generating retinal flow maps from structural optical coherence
tomography with artificial intelligence
|
Under revision at Nature Communications. Submitted on June 5th 2017
| null | null | null |
cs.CV cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite significant advances in artificial intelligence (AI) for computer
vision, its application in medical imaging has been limited by the burden and
limits of expert-generated labels. We used images from optical coherence
tomography angiography (OCTA), a relatively new imaging modality that measures
perfusion of the retinal vasculature, to train an AI algorithm to generate
vasculature maps from standard structural optical coherence tomography (OCT)
images of the same retinae, both exceeding the ability and bypassing the need
for expert labeling. Deep learning was able to infer perfusion of
microvasculature from structural OCT images with similar fidelity to OCTA and
significantly better than expert clinicians (P < 0.00001). OCTA suffers from
the need for specialized hardware, laborious acquisition protocols, and motion
artifacts, whereas our model works directly from standard OCT images, which are
ubiquitous and quick to obtain, and allows the unlocking of large volumes of
previously collected standard OCT data both in existing clinical trials and
clinical practice. This finding demonstrates a novel application of AI to
medical imaging, whereby subtle regularities between different modalities are
used to image the same body part and AI is used to generate detailed and
accurate inferences of tissue function from structural imaging.
|
[
{
"version": "v1",
"created": "Sat, 24 Feb 2018 22:51:43 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Lee",
"Cecilia S.",
""
],
[
"Tyring",
"Ariel J.",
""
],
[
"Wu",
"Yue",
""
],
[
"Xiao",
"Sa",
""
],
[
"Rokem",
"Ariel S.",
""
],
[
"Deruyter",
"Nicolaas P.",
""
],
[
"Zhang",
"Qinqin",
""
],
[
"Tufail",
"Adnan",
""
],
[
"Wang",
"Ruikang K.",
""
],
[
"Lee",
"Aaron Y.",
""
]
] |
new_dataset
| 0.97859 |
1802.09043
|
Timo Hinzmann
|
Timo Hinzmann, Thomas Stastny, Cesar Cadena, Roland Siegwart, and Igor
Gilitschenski
|
Free LSD: Prior-Free Visual Landing Site Detection for Autonomous Planes
|
Accepted for publication in IEEE International Conference on Robotics
and Automation (ICRA), 2018, Brisbane and IEEE Robotics and Automation
Letters (RA-L), 2018
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Full autonomy for fixed-wing unmanned aerial vehicles (UAVs) requires the
capability to autonomously detect potential landing sites in unknown and
unstructured terrain, allowing for self-governed mission completion or handling
of emergency situations. In this work, we propose a perception system
addressing this challenge by detecting landing sites based on their texture and
geometric shape without using any prior knowledge about the environment. The
proposed method considers hazards within the landing region such as terrain
roughness and slope, surrounding obstacles that obscure the landing approach
path, and the local wind field that is estimated by the on-board EKF. The
latter enables applicability of the proposed method on small-scale autonomous
planes without landing gear. A safe approach path is computed based on the UAV
dynamics, expected state estimation and actuator uncertainty, and the on-board
computed elevation map. The proposed framework has been successfully tested on
photo-realistic synthetic datasets and in challenging real-world environments.
|
[
{
"version": "v1",
"created": "Sun, 25 Feb 2018 17:00:54 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Hinzmann",
"Timo",
""
],
[
"Stastny",
"Thomas",
""
],
[
"Cadena",
"Cesar",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Gilitschenski",
"Igor",
""
]
] |
new_dataset
| 0.999562 |
1802.09087
|
Ahmed Roushdy
|
Ahmed Roushdy, Abolfazl Seyed Motahari, Mohammed Nafie and Deniz
Gunduz
|
Cache-Aided Fog Radio Access Networks with Partial Connectivity
|
To appear at the 2018 IEEE Wireless Communications and Networking
Conference (WCNC)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Centralized coded caching and delivery is studied for a partially-connected
fog radio access network (F-RAN), whereby a set of H edge nodes (ENs) (without
caches), connected to a cloud server via orthogonal fronthaul links, serve K
users over the wireless edge. The cloud server is assumed to hold a library of
N files, each of size F bits; and each user, equipped with a cache of size MF
bits, is connected to a distinct set of r ENs; or equivalently, the wireless
edge from the ENs to the users is modeled as a partial interference channel.
The objective is to minimize the normalized delivery time (NDT), which refers
to the worst case delivery latency, when each user requests a single file from
the library. An achievable coded caching and transmission scheme is proposed,
which utilizes maximum distance separable (MDS) codes in the placement phase,
and real interference alignment (IA) in the delivery phase, and its achievable
NDT is presented for r = 2 and arbitrary cache size M, and also for arbitrary
values of r when the cache capacity is sufficiently large.
|
[
{
"version": "v1",
"created": "Sun, 25 Feb 2018 21:33:31 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Roushdy",
"Ahmed",
""
],
[
"Motahari",
"Abolfazl Seyed",
""
],
[
"Nafie",
"Mohammed",
""
],
[
"Gunduz",
"Deniz",
""
]
] |
new_dataset
| 0.998291 |
1802.09118
|
Yonatan Naamad
|
Moses Charikar, Yonatan Naamad, Jennifer Rexford, X. Kelvin Zou
|
Multi-Commodity Flow with In-Network Processing
| null | null | null | null |
cs.DS cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern networks run "middleboxes" that offer services ranging from network
address translation and server load balancing to firewalls, encryption, and
compression. In an industry trend known as Network Functions Virtualization
(NFV), these middleboxes run as virtual machines on any commodity server, and
the switches steer traffic through the relevant chain of services. Network
administrators must decide how many middleboxes to run, where to place them,
and how to direct traffic through them, based on the traffic load and the
server and network capacity. Rather than placing specific kinds of middleboxes
on each processing node, we argue that server virtualization allows each server
node to host all middlebox functions, and simply vary the fraction of resources
devoted to each one. This extra flexibility fundamentally changes the
optimization problem the network administrators must solve to a new kind of
multi-commodity flow problem, where the traffic flows consume bandwidth on the
links as well as processing resources on the nodes. We show that allocating
resources to maximize the processed flow can be optimized exactly via a linear
programming formulation, and to arbitrary accuracy via an efficient
combinatorial algorithm. Our experiments with real traffic and topologies show
that a joint optimization of node and link resources leads to an efficient use
of bandwidth and processing capacity. We also study a class of design problems
that decide where to provide node capacity to best process and route a given
set of demands, and demonstrate both approximation algorithms and hardness
results for these problems.
|
[
{
"version": "v1",
"created": "Mon, 26 Feb 2018 01:07:32 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Charikar",
"Moses",
""
],
[
"Naamad",
"Yonatan",
""
],
[
"Rexford",
"Jennifer",
""
],
[
"Zou",
"X. Kelvin",
""
]
] |
new_dataset
| 0.993639 |
1802.09180
|
Tomer Kaftan
|
Tomer Kaftan, Magdalena Balazinska, Alvin Cheung, Johannes Gehrke
|
Cuttlefish: A Lightweight Primitive for Adaptive Query Processing
| null | null | null | null |
cs.DB cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern data processing applications execute increasingly sophisticated
analysis that requires operations beyond traditional relational algebra. As a
result, operators in query plans grow in diversity and complexity. Designing
query optimizer rules and cost models to choose physical operators for all of
these novel logical operators is impractical. To address this challenge, we
develop Cuttlefish, a new primitive for adaptively processing online query
plans that explores candidate physical operator instances during query
execution and exploits the fastest ones using multi-armed bandit reinforcement
learning techniques. We prototype Cuttlefish in Apache Spark and adaptively
choose operators for image convolution, regular expression matching, and
relational joins. Our experiments show Cuttlefish-based adaptive convolution
and regular expression operators can reach 72-99% of the throughput of an
all-knowing oracle that always selects the optimal algorithm, even when
individual physical operators are up to 105x slower than the optimal.
Additionally, Cuttlefish achieves join throughput improvements of up to 7.5x
compared with Spark SQL's query optimizer.
|
[
{
"version": "v1",
"created": "Mon, 26 Feb 2018 06:50:43 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Kaftan",
"Tomer",
""
],
[
"Balazinska",
"Magdalena",
""
],
[
"Cheung",
"Alvin",
""
],
[
"Gehrke",
"Johannes",
""
]
] |
new_dataset
| 0.99928 |
1802.09348
|
Dian Pratiwi
|
Risky Armansyah, Dian Pratiwi
|
Game of the Cursed Prince based on Android
|
6 pages, 17 figures
|
International Journal of Computer Applications, Volume 179 -
Number 19, 2018
|
10.5120/ijca2018916333
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, games have become an entertainment alternative for many circles, and
the game development business is also a profitable industry. In Indonesia the
amount of game consumption is very high, especially for console games of the
RPG (Role Playing Game) type. The task of this research is to develop game
software using Unity 3D to create an Android-based RPG game app. The story is
packed in the RPG genre so the player can feel the main role in the story's
imagination. The game to be built is titled The Cursed Prince, in which users
will get the sensation of a royal adventure. The game offers a multiplayer
system and 3D graphics; the main character is the Prince, the enemies are
wizards and monsters, and there is no time limit to complete the game. The game
can also be saved, so it can be reopened later. The Cursed Prince can be part
of the development of the Indonesian gaming industry.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 14:24:52 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Armansyah",
"Risky",
""
],
[
"Pratiwi",
"Dian",
""
]
] |
new_dataset
| 0.999769 |
1802.09353
|
Johannes Pillmann
|
Johannes Pillmann and Christian Wietfeld and Adrian Zarcula and Thomas
Raugust and Daniel Calvo Alonso
|
Novel Common Vehicle Information Model (CVIM) for Future Automotive
Vehicle Big Data Marketplaces
| null |
Intelligent Vehicles Symposium (IV), 2017 IEEE
|
10.1109/IVS.2017.7995984
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Even though connectivity services have been introduced in many of the most
recent car models, access to vehicle data is currently limited due to its
proprietary nature. The European project AutoMat has therefore developed an
open Marketplace providing a single point of access for brand-independent
vehicle data. Thereby, vehicle sensor data can be leveraged for the design and
implementation of entirely new services even beyond traffic-related applications
(such as hyper-local traffic forecasts). This paper presents the architecture
for a Vehicle Big Data Marketplace as enabler of cross-sectorial and innovative
vehicle data services. Therefore, the novel Common Vehicle Information Model
(CVIM) is defined as an open and harmonized data model, allowing the
aggregation of brand-independent and generic data sets. Within this work the
realization of a prototype CVIM and Marketplace implementation is presented.
The two use cases of local weather prediction and road quality measurement are
introduced to show the applicability of the AutoMat concept and prototype to
non-automotive applications.
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 10:37:00 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Pillmann",
"Johannes",
""
],
[
"Wietfeld",
"Christian",
""
],
[
"Zarcula",
"Adrian",
""
],
[
"Raugust",
"Thomas",
""
],
[
"Alonso",
"Daniel Calvo",
""
]
] |
new_dataset
| 0.998602 |
1802.09358
|
Erkan Bostanci
|
Egemen Turkyilmaz, Alper Akgul, Erkan Bostanci and Mehmet Serdar Guzel
|
Detection of Light Sleep Periods Using an Accelerometer Based Alarm
System
|
5 pages, 11 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light sleep is a sleeping period which occurs within each hour during sleep.
This is the period when people are closest to awakening. This being the case,
people tend to move more frequently and aggressively during these periods. The
characteristics of sleeping stages, the detection of light sleep periods, and
the analysis of light sleep periods were clarified. The sleeping
patterns of different subjects were analyzed. In this paper the most suitable
moment for waking a person up will be described. The detection of this moment
and the development process of a system dedicated to this purpose will be
explained, and also some experimental results that are acquired via different
tests will be shared and analyzed.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 12:37:14 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Turkyilmaz",
"Egemen",
""
],
[
"Akgul",
"Alper",
""
],
[
"Bostanci",
"Erkan",
""
],
[
"Guzel",
"Mehmet Serdar",
""
]
] |
new_dataset
| 0.998473 |
1802.09375
|
Johannes Bjerva
|
Johannes Bjerva and Isabelle Augenstein
|
From Phonology to Syntax: Unsupervised Linguistic Typology at Different
Levels with Language Embeddings
|
Accepted to NAACL 2018 (long paper). arXiv admin note: text overlap
with arXiv:1711.05468
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A core part of linguistic typology is the classification of languages
according to linguistic properties, such as those detailed in the World Atlas
of Language Structure (WALS). Doing this manually is prohibitively
time-consuming, which is in part evidenced by the fact that only 100 out of
over 7,000 languages spoken in the world are fully covered in WALS.
We learn distributed language representations, which can be used to predict
typological properties on a massively multilingual scale. Additionally,
quantitative and qualitative analyses of these language embeddings can tell us
how language similarities are encoded in NLP models for tasks at different
typological levels. The representations are learned in an unsupervised manner
alongside tasks at three typological levels: phonology (grapheme-to-phoneme
prediction, and phoneme reconstruction), morphology (morphological inflection),
and syntax (part-of-speech tagging).
We consider more than 800 languages and find significant differences in the
language representations encoded, depending on the target task. For instance,
although Norwegian Bokm{\aa}l and Danish are typologically close to one
another, they are phonologically distant, which is reflected in their language
embeddings growing relatively distant in a phonological task. We are also able
to predict typological features in WALS with high accuracies, even for unseen
language families.
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 11:55:44 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Bjerva",
"Johannes",
""
],
[
"Augenstein",
"Isabelle",
""
]
] |
new_dataset
| 0.99178 |
1802.09435
|
Pedro Piacenza
|
Pedro Piacenza, Sydney Sherman, Matei Ciocarlie
|
Data-driven Super-resolution on a Tactile Dome
|
8 pages, 9 figures
|
IEEE Robotics and Automation Letters, vol. 3, no. 3, pp.
1434-1441, July 2018
|
10.1109/LRA.2018.2800081
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While tactile sensor technology has made great strides over the past decades,
applications in robotic manipulation are limited by aspects such as blind
spots, difficult integration into hands, and low spatial resolution. We present
a method for localizing contact with high accuracy over curved, three
dimensional surfaces, with a low wire count and reduced integration complexity.
To achieve this, we build a volume of soft material embedded with individual
off-the-shelf pressure sensors. Using data driven techniques, we map the raw
signals from these pressure sensors to known surface locations and indentation
depths. Additionally, we show that a finite element model can be used to
improve the placement of the pressure sensors inside the volume and to explore
the design space in simulation. We validate our approach on physically
implemented tactile domes which achieve high contact localization accuracy
($1.1mm$ in the best case) over a large, curved sensing area ($1,300mm^2$
hemisphere). We believe this approach can be used to deploy tactile sensing
capabilities over three dimensional surfaces such as a robotic finger or palm.
|
[
{
"version": "v1",
"created": "Mon, 26 Feb 2018 16:23:57 GMT"
}
] | 2018-02-27T00:00:00 |
[
[
"Piacenza",
"Pedro",
""
],
[
"Sherman",
"Sydney",
""
],
[
"Ciocarlie",
"Matei",
""
]
] |
new_dataset
| 0.986439 |
1708.02136
|
Weipeng Xu
|
Weipeng Xu, Avishek Chatterjee, Michael Zollh\"ofer, Helge Rhodin,
Dushyant Mehta, Hans-Peter Seidel, Christian Theobalt
|
MonoPerfCap: Human Performance Capture from Monocular Video
|
Accepted to ACM TOG 2018, to be presented on SIGGRAPH 2018
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first marker-less approach for temporally coherent 3D
performance capture of a human with general clothing from monocular video. Our
approach reconstructs articulated human skeleton motion as well as medium-scale
non-rigid surface deformations in general scenes. Human performance capture is
a challenging problem due to the large range of articulation, potentially fast
motion, and considerable non-rigid deformations, even from multi-view data.
Reconstruction from monocular video alone is drastically more challenging,
since strong occlusions and the inherent depth ambiguity lead to a highly
ill-posed reconstruction problem. We tackle these challenges by a novel
approach that employs sparse 2D and 3D human pose detections from a
convolutional neural network using a batch-based pose estimation strategy.
Joint recovery of per-batch motion allows us to resolve the ambiguities of the
monocular reconstruction problem based on a low-dimensional trajectory
subspace. In addition, we propose refinement of the surface geometry based on
fully automatically extracted silhouettes to enable medium-scale non-rigid
alignment. We demonstrate state-of-the-art performance capture results that
enable exciting applications such as video editing and free viewpoint video,
previously infeasible from monocular video. Our qualitative and quantitative
evaluation demonstrates that our approach significantly outperforms previous
monocular methods in terms of accuracy, robustness and scene complexity that
can be handled.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2017 14:43:57 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Feb 2018 12:40:25 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Xu",
"Weipeng",
""
],
[
"Chatterjee",
"Avishek",
""
],
[
"Zollhöfer",
"Michael",
""
],
[
"Rhodin",
"Helge",
""
],
[
"Mehta",
"Dushyant",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.99929 |
1710.07300
|
Vincent Michalski
|
Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Akos Kadar,
Adam Trischler, Yoshua Bengio
|
FigureQA: An Annotated Figure Dataset for Visual Reasoning
|
workshop paper at ICLR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce FigureQA, a visual reasoning corpus of over one million
question-answer pairs grounded in over 100,000 images. The images are
synthetic, scientific-style figures from five classes: line plots, dot-line
plots, vertical and horizontal bar graphs, and pie charts. We formulate our
reasoning task by generating questions from 15 templates; questions concern
various relationships between plot elements and examine characteristics like
the maximum, the minimum, area-under-the-curve, smoothness, and intersection.
Resolving such questions often requires reference to multiple plot elements
and synthesis of information distributed spatially throughout a figure. To
facilitate the training of machine learning systems, the corpus also includes
side data that can be used to formulate auxiliary objectives. In particular, we
provide the numerical data used to generate each figure as well as bounding-box
annotations for all plot elements. We study the proposed visual reasoning task
by training several models, including the recently proposed Relation Network as
a strong baseline. Preliminary results indicate that the task poses a
significant machine learning challenge. We envision FigureQA as a first step
towards developing models that can intuitively recognize patterns from visual
representations of data.
|
[
{
"version": "v1",
"created": "Thu, 19 Oct 2017 18:01:38 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 22:50:42 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Kahou",
"Samira Ebrahimi",
""
],
[
"Michalski",
"Vincent",
""
],
[
"Atkinson",
"Adam",
""
],
[
"Kadar",
"Akos",
""
],
[
"Trischler",
"Adam",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
new_dataset
| 0.999868 |
1802.07693
|
Souvik Bhattacherjee
|
Souvik Bhattacherjee and Amol Deshpande
|
RStore: A Distributed Multi-version Document Store
|
A shorter version of the paper is to appear in ICDE 2018
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of compactly storing a large number of versions
(snapshots) of a collection of keyed documents or records in a distributed
environment, while efficiently answering a variety of retrieval queries over
those, including retrieving full or partial versions, and evolution histories
for specific keys. We motivate the increasing need for such a system in a
variety of application domains, carefully explore the design space for building
such a system and the various storage-computation-retrieval trade-offs, and
discuss how different storage layouts influence those trade-offs. We propose a
novel system architecture that satisfies the key desiderata for such a system,
and offers simple tuning knobs that allow adapting to a specific data and query
workload. Our system is intended to act as a layer on top of a distributed
key-value store that houses the raw data as well as any indexes. We design
novel off-line storage layout algorithms for efficiently partitioning the data
to minimize the storage costs while keeping the retrieval costs low. We also
present an online algorithm to handle new versions being added to system. Using
extensive experiments on large datasets, we demonstrate that our system
operates at the scale required in most practical scenarios and often
outperforms standard baselines, including a delta-based storage engine, by
orders-of-magnitude.
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 17:50:44 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Feb 2018 01:01:00 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Bhattacherjee",
"Souvik",
""
],
[
"Deshpande",
"Amol",
""
]
] |
new_dataset
| 0.999718 |
1802.07858
|
Sudipta Kar
|
Sudipta Kar and Suraj Maharjan and A. Pastor L\'opez-Monroy and Thamar
Solorio
|
MPST: A Corpus of Movie Plot Synopses with Tags
|
Accepted at LREC 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social tagging of movies reveals a wide range of heterogeneous information
about movies, like the genre, plot structure, soundtracks, metadata, visual and
emotional experiences. Such information can be valuable in building automatic
systems to create tags for movies. Automatic tagging systems can help
recommendation engines to improve the retrieval of similar movies as well as
help viewers to know what to expect from a movie in advance. In this paper, we
set out to the task of collecting a corpus of movie plot synopses and tags. We
describe a methodology that enabled us to build a fine-grained set of around 70
tags exposing heterogeneous characteristics of movie plots and the multi-label
associations of these tags with some 14K movie plot synopses. We investigate
how these tags correlate with movies and the flow of emotions throughout
different types of movies. Finally, we use this corpus to explore the
feasibility of inferring tags from plot synopses. We expect the corpus will be
useful in other tasks where analysis of narratives is relevant.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 00:27:54 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Feb 2018 04:04:44 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Kar",
"Sudipta",
""
],
[
"Maharjan",
"Suraj",
""
],
[
"López-Monroy",
"A. Pastor",
""
],
[
"Solorio",
"Thamar",
""
]
] |
new_dataset
| 0.984924 |
1802.08286
|
Ashkan Zeinalzadeh
|
Ashkan Zeinalzadeh, Donya Ghavidel, and Vijay Gupta
|
Reliability and Market Price of Energy in the Presence of Intermittent
and Non-Dispatchable Renewable Energies
|
11 pages
| null | null | null |
cs.SY stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The intermittent nature of the renewable energies increases the operation
costs of conventional generators. As the share of energy supplied by renewable
sources increases, these costs also increase. In this paper, we quantify these
costs by developing a market clearing price of energy in the presence of
renewable energy and congestion constraints. We consider an electricity market
where generators propose their asking price per unit of energy to an
independent system operator (ISO). The ISO solves an optimization problem to
dispatch energy from each generator to minimize the total cost of energy
purchased on behalf of the consumers.
To ensure that the generators are able to meet the load within a desired
confidence level, we incorporate the notion of load variance using the
Conditional Value-at-Risk (CVAR) measure in an electricity market and we derive
the amount of committed power and market clearing price of energy as a function
of CVAR. It is shown that a higher penetration of renewable energies may
increase the committed power, market clearing price of energy and consumer cost
of energy due to renewable generation uncertainties. We also obtain an
upper-bound on the amount that congestion constraints can affect the committed
power. We present descriptive simulations to illustrate the impact of renewable
energy penetration and reliability levels on committed power by the
non-renewable generators, difference between the dispatched and committed
power, market price of energy and profit of renewable and non-renewable
generators.
|
[
{
"version": "v1",
"created": "Mon, 5 Feb 2018 19:22:23 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Zeinalzadeh",
"Ashkan",
""
],
[
"Ghavidel",
"Donya",
""
],
[
"Gupta",
"Vijay",
""
]
] |
new_dataset
| 0.993846 |
1802.08307
|
Berkay Celik
|
Z. Berkay Celik, Leonardo Babun, Amit K. Sikder, Hidayet Aksu, Gang
Tan, Patrick McDaniel, A. Selcuk Uluagac
|
Sensitive Information Tracking in Commodity IoT
|
first submission
| null | null | null |
cs.CR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Broadly defined as the Internet of Things (IoT), the growth of commodity
devices that integrate physical processes with digital connectivity has had
profound effects on society--smart homes, personal monitoring devices, enhanced
manufacturing and other IoT apps have changed the way we live, play, and work.
Yet extant IoT platforms provide few means of evaluating the use (and potential
avenues for misuse) of sensitive information. Thus, consumers and organizations
have little information to assess the security and privacy risks these devices
present. In this paper, we present SainT, a static taint analysis tool for IoT
applications. SainT operates in three phases: (a) translation of
platform-specific IoT source code into an intermediate representation (IR), (b)
identifying sensitive sources and sinks, and (c) performing static analysis to
identify sensitive data flows. We evaluate SainT on 230 SmartThings market apps
and find 138 (60%) include sensitive data flows. In addition, we demonstrate
SainT on IoTBench, a novel open-source test suite containing 19 apps with 27
unique data leaks. Through this effort, we introduce a rigorously grounded
framework for evaluating the use of sensitive information in IoT apps---and
therein provide developers, markets, and consumers a means of identifying
potential threats to security and privacy.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 21:26:44 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Celik",
"Z. Berkay",
""
],
[
"Babun",
"Leonardo",
""
],
[
"Sikder",
"Amit K.",
""
],
[
"Aksu",
"Hidayet",
""
],
[
"Tan",
"Gang",
""
],
[
"McDaniel",
"Patrick",
""
],
[
"Uluagac",
"A. Selcuk",
""
]
] |
new_dataset
| 0.965035 |
1802.08415
|
Chen Chen
|
Chen Chen and Daniele E. Asoni, and Adrian Perrig, and David Barrera,
and George Danezis, and Carmela Troncoso
|
TARANET: Traffic-Analysis Resistant Anonymity at the NETwork layer
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern low-latency anonymity systems, no matter whether constructed as an
overlay or implemented at the network layer, offer limited security guarantees
against traffic analysis. On the other hand, high-latency anonymity systems
offer strong security guarantees at the cost of computational overhead and long
delays, which are excessive for interactive applications. We propose TARANET,
an anonymity system that implements protection against traffic analysis at the
network layer, and limits the incurred latency and overhead. In TARANET's setup
phase, traffic analysis is thwarted by mixing. In the data transmission phase,
end hosts and ASes coordinate to shape traffic into constant-rate transmission
using packet splitting. Our prototype implementation shows that TARANET can
forward anonymous traffic at over 50~Gbps using commodity hardware.
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 07:22:42 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Chen",
"Chen",
""
],
[
"Asoni",
"Daniele E.",
""
],
[
"Perrig",
"Adrian",
""
],
[
"Barrera",
"David",
""
],
[
"Danezis",
"George",
""
],
[
"Troncoso",
"Carmela",
""
]
] |
new_dataset
| 0.999215 |
1802.08522
|
Johann Briffa
|
Johann A. Briffa and Stephan Wesemeyer
|
SimCommSys: Taking the errors out of error-correcting code simulations
| null |
J. A. Briffa and S. Wesemeyer, "Simcommsys: Taking the errors out
of error-correcting code simulations", IET Journal of Engineering, Jun. 2014
|
10.1049/joe.2014.0055
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present SimCommSys, a Simulator of Communication Systems
that we are releasing under an open source license. The core of the project is
a set of C++ libraries defining communication system components and a
distributed Monte Carlo simulator. Of principal interest is the error-control
coding component, where various kinds of binary and non-binary codes are
implemented, including turbo, LDPC, repeat-accumulate, and Reed-Solomon. The
project also contains a number of ready-to-build binaries implementing various
stages of the communication system (such as the encoder and decoder), a
complete simulator, and a system benchmark. Finally, SimCommSys also provides a
number of shell and python scripts to encapsulate routine use cases. As long as
the required components are already available in SimCommSys, the user may
simulate complete communication systems of their own design without any
additional programming. The strict separation of development (needed only to
implement new components) and use (to simulate specific constructions)
encourages reproducibility of experimental work and reduces the likelihood of
error. Following an overview of the framework, we provide some examples of its
use, including the implementation of a simple codec, the specification of
communication systems, and their simulation.
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 13:27:03 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Briffa",
"Johann A.",
""
],
[
"Wesemeyer",
"Stephan",
""
]
] |
new_dataset
| 0.967887 |
1802.08540
|
Suttinee Sawadsitang
|
Suttinee Sawadsitang, Rakpong Kaewpuang, Siwei Jiang, Dusit Niyato,
Ping Wang
|
Optimal Stochastic Delivery Planning in Full-Truckload and
Less-Than-Truckload Delivery
|
5 pages, 6 figures, Vehicular Technology Conference (VTC Spring),
2017 IEEE 85th
| null |
10.1109/VTCSpring.2017.8108576
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With an increasing demand from emerging logistics businesses, Vehicle Routing
Problem with Private fleet and common Carrier (VRPPC) has been introduced to
manage package delivery services from a supplier to customers. However, almost
all existing studies focus on the deterministic problem that assumes all
parameters are known perfectly at the time when the planning and routing
decisions are made. In reality, some parameters are random and unknown.
Therefore, in this paper, we consider VRPPC with hard time windows and random
demand, called Optimal Delivery Planning (ODP). The proposed ODP aims to
minimize the total package delivery cost while meeting the customer time window
constraints. We use stochastic integer programming to formulate the
optimization problem incorporating the customer demand uncertainty. Moreover,
we evaluate the performance of the ODP using test data from a benchmark dataset
and from an actual Singapore road map.
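For orientation, a generic two-stage stochastic integer program of the kind
referred to above takes the following textbook shape (a sketch only; the
paper's exact decision variables and constraints are not reproduced here):
    $\min_{x \in X}\; c^{\top}x + \mathbb{E}_{\xi}\!\left[\,Q(x,\xi)\,\right],
     \qquad Q(x,\xi) = \min_{y \in Y(x,\xi)} q(\xi)^{\top} y,$
where $x$ collects first-stage planning and routing decisions fixed before the
random demand $\xi$ is realized, and $y$ collects second-stage recourse
decisions, for instance reassigning a customer from the private fleet to the
common carrier.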
|
[
{
"version": "v1",
"created": "Sun, 4 Feb 2018 08:45:19 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Sawadsitang",
"Suttinee",
""
],
[
"Kaewpuang",
"Rakpong",
""
],
[
"Jiang",
"Siwei",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Wang",
"Ping",
""
]
] |
new_dataset
| 0.99433 |
1802.08558
|
Walter Mascarenhas
|
Walter F. Mascarenhas
|
Moore: Interval Arithmetic in C++20
|
arXiv admin note: text overlap with arXiv:1611.09567
| null | null | null |
cs.MS cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents the Moore library for interval arithmetic in C++20. It
gives examples of how the library can be used, and explains the basic
principles underlying its design.
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 19:02:45 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Mascarenhas",
"Walter F.",
""
]
] |
new_dataset
| 0.960527 |
1802.08659
|
Om Prakash
|
Om Prakash and Habibul Islam
|
Skew cyclic codes over F_{p}+uF_{p}+\dots +u^{k-1}F_{p}
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we study the skew cyclic codes over $R_{k}=F_{p}+uF_{p}+\dots
+u^{k-1}F_{p}$ of length $n$. We characterize the skew cyclic codes of length
$n$ over $R_{k}$ as free left $R_{k}[x;\theta]$-submodules of
$R_{k}[x;\theta]/\langle x^{n}-1\rangle$ and construct their generators and
minimal generating sets. Also, an algorithm has been provided to encode and
decode these skew cyclic codes.
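As a brief, standard illustration of the algebraic setting (not specific to
this paper): a code $C \subseteq R_{k}^{n}$ is skew cyclic with respect to
$\theta$ if
    $(c_{0},c_{1},\dots,c_{n-1}) \in C \;\Longrightarrow\;
     (\theta(c_{n-1}),\theta(c_{0}),\dots,\theta(c_{n-2})) \in C,$
which, after identifying codewords with polynomials, is the same as $C$ being
a left submodule of $R_{k}[x;\theta]/\langle x^{n}-1\rangle$, where
multiplication is twisted by the rule $x\,a = \theta(a)\,x$ for all
$a \in R_{k}$.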
|
[
{
"version": "v1",
"created": "Fri, 23 Feb 2018 17:53:57 GMT"
}
] | 2018-02-26T00:00:00 |
[
[
"Prakash",
"Om",
""
],
[
"Islam",
"Habibul",
""
]
] |
new_dataset
| 0.985134 |
1703.05916
|
Mamoru Komachi
|
Yuya Sakaizawa and Mamoru Komachi
|
Construction of a Japanese Word Similarity Dataset
|
LREC 2018; 4 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
An evaluation of distributed word representation is generally conducted using
a word similarity task and/or a word analogy task. There are many datasets
readily available for these tasks in English. However, evaluating distributed
representation in languages that do not have such resources (e.g., Japanese) is
difficult. Therefore, as a first step toward evaluating distributed
representations in Japanese, we constructed a Japanese word similarity dataset.
To the best of our knowledge, our dataset is the first resource that can be
used to evaluate distributed representations in Japanese. Moreover, our dataset
contains various parts of speech and includes rare words in addition to common
words.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2017 07:53:03 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 07:55:54 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Sakaizawa",
"Yuya",
""
],
[
"Komachi",
"Mamoru",
""
]
] |
new_dataset
| 0.999218 |
1712.05591
|
Ivor Hoog V.D.
|
Ivor Hoog v.d., Elena Khramtcova, Maarten L\"offler
|
Dynamic smooth compressed quadtrees (Full version)
|
Full version of the accepted SOCG submission
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce dynamic smooth (a.k.a. balanced) compressed quadtrees with
worst-case constant time updates in constant dimensions. We distinguish two
versions of the problem. First, we show that quadtrees as a space-division data
structure can be made smooth and dynamic subject to split and merge operations
on the quadtree cells. Second, we show that quadtrees used to store a set of
points in $\mathbb{R}^d$ can be made smooth and dynamic subject to insertions
and deletions of points. The second version uses the first but must
additionally deal with compression and alignment of quadtree components. In
both cases our updates take $2^{\mathcal{O}(d\log d )}$ time, except for the
point location part in the second version which has a lower bound of $\Theta
(\log n)$---but if a pointer (finger) to the correct quadtree cell is given,
the rest of the updates take worst-case constant time. Our result implies that
several classic and recent results (ranging from ray tracing to planar point
location) in computational geometry which use quadtrees can deal with arbitrary
point sets on a real RAM pointer machine.
|
[
{
"version": "v1",
"created": "Fri, 15 Dec 2017 09:30:04 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 13:22:35 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"d.",
"Ivor Hoog v.",
""
],
[
"Khramtcova",
"Elena",
""
],
[
"Löffler",
"Maarten",
""
]
] |
new_dataset
| 0.975131 |
1802.06424
|
Stavros Petridis
|
Stavros Petridis, Themos Stafylakis, Pingchuan Ma, Feipeng Cai,
Georgios Tzimiropoulos, Maja Pantic
|
End-to-end Audiovisual Speech Recognition
|
Accepted to ICASSP 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several end-to-end deep learning approaches have been recently presented
which extract either audio or visual features from the input images or audio
signals and perform speech recognition. However, research on end-to-end
audiovisual models is very limited. In this work, we present an end-to-end
audiovisual model based on residual networks and Bidirectional Gated Recurrent
Units (BGRUs). To the best of our knowledge, this is the first audiovisual
fusion model which simultaneously learns to extract features directly from the
image pixels and audio waveforms and performs within-context word recognition
on a large publicly available dataset (LRW). The model consists of two streams,
one for each modality, which extract features directly from mouth regions and
raw waveforms. The temporal dynamics in each stream/modality are modeled by a
2-layer BGRU and the fusion of multiple streams/modalities takes place via
another 2-layer BGRU. A slight improvement in the classification rate over an
end-to-end audio-only and MFCC-based model is reported in clean audio
conditions and low levels of noise. In the presence of high levels of noise, the
end-to-end audiovisual model significantly outperforms both audio-only models.
|
[
{
"version": "v1",
"created": "Sun, 18 Feb 2018 19:07:31 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 11:58:14 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Petridis",
"Stavros",
""
],
[
"Stafylakis",
"Themos",
""
],
[
"Ma",
"Pingchuan",
""
],
[
"Cai",
"Feipeng",
""
],
[
"Tzimiropoulos",
"Georgios",
""
],
[
"Pantic",
"Maja",
""
]
] |
new_dataset
| 0.970885 |
1802.07778
|
Shadrokh Samavi
|
Mina Nasr-Esfahani, Majid Mohrekesh, Mojtaba Akbari, S.M.Reza
Soroushmehr, Ebrahim Nasr-Esfahani, Nader Karimi, Shadrokh Samavi, Kayvan
Najarian
|
Left Ventricle Segmentation in Cardiac MR Images Using Fully
Convolutional Network
|
4 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical image analysis, especially segmenting a specific organ, has an
important role in developing clinical decision support systems. In cardiac
magnetic resonance (MR) imaging, segmenting the left and right ventricles helps
physicians diagnose different heart abnormalities. There are challenges for
this task, including the intensity and shape similarity between the left
ventricle and other organs, inaccurate boundaries, and the presence of noise in most of the
images. In this paper we propose an automated method for segmenting the left
ventricle in cardiac MR images. We first automatically extract the region of
interest, and then employ it as an input of a fully convolutional network. We
train the network accurately despite the small number of left ventricle pixels
in comparison with the whole image. Thresholding on the output map of the fully
convolutional network and selection of regions based on their roundness are
performed in our proposed post-processing phase. The Dice score of our method
reaches 87.24% by applying this algorithm on the York dataset of heart images.
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 20:01:35 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Nasr-Esfahani",
"Mina",
""
],
[
"Mohrekesh",
"Majid",
""
],
[
"Akbari",
"Mojtaba",
""
],
[
"Soroushmehr",
"S. M. Reza",
""
],
[
"Nasr-Esfahani",
"Ebrahim",
""
],
[
"Karimi",
"Nader",
""
],
[
"Samavi",
"Shadrokh",
""
],
[
"Najarian",
"Kayvan",
""
]
] |
new_dataset
| 0.980898 |
1802.07852
|
Siddharth Siddharth
|
Siddharth, Aashish Patel, Tzyy-Ping Jung, and Terrence J. Sejnowski
|
An Affordable Bio-Sensing and Activity Tagging Platform for HCI Research
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel multi-modal bio-sensing platform capable of integrating
multiple data streams for use in real-time applications. The system is composed
of a central compute module and a companion headset. The compute node collects,
time-stamps and transmits the data while also providing an interface for a wide
range of sensors including electroencephalogram, photoplethysmogram,
electrocardiogram, and eye gaze among others. The companion headset contains
the gaze tracking cameras. By integrating many of the measurements systems into
an accessible package, we are able to explore previously unanswerable questions
ranging from open-environment interactions to emotional response studies.
Though some of the integrated sensors are designed from the ground-up to fit
into a compact form factor, we validate the accuracy of the sensors and find
that they perform similarly to, and in some cases better than, alternatives.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 00:08:42 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Siddharth",
"",
""
],
[
"Patel",
"Aashish",
""
],
[
"Jung",
"Tzyy-Ping",
""
],
[
"Sejnowski",
"Terrence J.",
""
]
] |
new_dataset
| 0.988518 |
1802.07855
|
Tao Gong
|
Song Han and Tao Gong and Mark Nixon and Eric Rotvold and Kam-yiu Lam
and Krithi Ramamritham
|
RT-DAP: A Real-Time Data Analytics Platform for Large-scale Industrial
Process Monitoring and Control
| null | null | null | null |
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In most process control systems nowadays, process measurements are
periodically collected and archived in historians. Analytics applications
process the data, and provide results offline or in a time period that is
considerably slow in comparison to the performance of the manufacturing
process. Along with the proliferation of Internet-of-Things (IoT) and the
introduction of "pervasive sensors" technology in process industries,
an increasing number of sensors and actuators are installed in process plants for
pervasive sensing and control, and the volume of produced process data is
growing exponentially. To digest these data and meet the ever-growing
requirements to increase production efficiency and improve product quality,
there needs to be a way to both improve the performance of the analytics system
and scale the system to closely monitor a much larger set of plant resources.
In this paper, we present a real-time data analytics platform, called RT-DAP,
to support large-scale continuous data analytics in process industries. RT-DAP
is designed to be able to stream, store, process and visualize a large volume
of real-time data flows collected from heterogeneous plant resources, and feed
back to the control system and operators in a real-time manner. A prototype
of the platform is implemented on Microsoft Azure. Our extensive experiments
validate the design methodologies of RT-DAP and demonstrate its efficiency in
both component and system levels.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 00:25:25 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Han",
"Song",
""
],
[
"Gong",
"Tao",
""
],
[
"Nixon",
"Mark",
""
],
[
"Rotvold",
"Eric",
""
],
[
"Lam",
"Kam-yiu",
""
],
[
"Ramamritham",
"Krithi",
""
]
] |
new_dataset
| 0.999193 |
1802.07856
|
Darius Lam
|
Darius Lam, Richard Kuzma, Kevin McGee, Samuel Dooley, Michael
Laielli, Matthew Klaric, Yaroslav Bulatov, Brendan McCord
|
xView: Objects in Context in Overhead Imagery
|
Initial submission
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new large-scale dataset for the advancement of object
detection techniques and overhead object detection research. This satellite
imagery dataset enables research progress pertaining to four key computer
vision frontiers. We utilize a novel process for geospatial category detection
and bounding box annotation with three stages of quality control. Our data is
collected from WorldView-3 satellites at 0.3m ground sample distance, providing
higher resolution imagery than most public satellite imagery datasets. We
compare xView to other object detection datasets in both natural and overhead
imagery domains and then provide a baseline analysis using the Single Shot
MultiBox Detector. xView is one of the largest and most diverse publicly
available object-detection datasets to date, with over 1 million objects across
60 classes in over 1,400 km^2 of imagery.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 00:26:46 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Lam",
"Darius",
""
],
[
"Kuzma",
"Richard",
""
],
[
"McGee",
"Kevin",
""
],
[
"Dooley",
"Samuel",
""
],
[
"Laielli",
"Michael",
""
],
[
"Klaric",
"Matthew",
""
],
[
"Bulatov",
"Yaroslav",
""
],
[
"McCord",
"Brendan",
""
]
] |
new_dataset
| 0.999704 |
1802.07862
|
Seungwhan Moon
|
Seungwhan Moon, Leonardo Neves, Vitor Carvalho
|
Multimodal Named Entity Recognition for Short Social Media Posts
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new task called Multimodal Named Entity Recognition (MNER) for
noisy user-generated data such as tweets or Snapchat captions, which comprise
short text with accompanying images. These social media posts often come in
inconsistent or incomplete syntax and lexical notations with very limited
surrounding textual contexts, bringing significant challenges for NER. To this
end, we create a new dataset for MNER called SnapCaptions (Snapchat
image-caption pairs submitted to public and crowd-sourced stories with fully
annotated named entities). We then build upon the state-of-the-art Bi-LSTM
word/character based NER models with 1) a deep image network which incorporates
relevant visual context to augment textual information, and 2) a generic
modality-attention module which learns to attenuate irrelevant modalities while
amplifying the most informative ones to extract contexts from, adaptive to each
sample and token. The proposed MNER model with modality attention significantly
outperforms the state-of-the-art text-only NER models by successfully
leveraging provided visual contexts, opening up potential applications of MNER
on a myriad of social media platforms.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 00:54:47 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Moon",
"Seungwhan",
""
],
[
"Neves",
"Leonardo",
""
],
[
"Carvalho",
"Vitor",
""
]
] |
new_dataset
| 0.99956 |
1802.08112
|
Jos\'e Vuelvas
|
Jos\'e Vuelvas and Fredy Ruiz
|
Rational consumer decisions in a peak time rebate program
| null |
Vuelvas, J., & Ruiz, F. (2017). Rational consumer decisions in a
peak time rebate program. Electric Power Systems Research, 143.
https://doi.org/10.1016/j.epsr.2016.11.001
| null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A rational behavior of a consumer is analyzed when the user participates in a
Peak Time Rebate (PTR) mechanism, which is a demand response (DR) incentive
program based on a baseline. A multi-stage stochastic programming is proposed
from the demand side in order to understand the rational decisions. The
consumer preferences are modeled as a risk-averse function under additive
uncertainty. The user chooses the optimal consumption profile to maximize his
economic benefits for each period. The stochastic optimization problem is
solved backward in time. A particular situation is developed when the System
Operator (SO) uses consumption of the previous interval as the
household-specific baseline for the DR program. It is found that a rational
consumer alters the baseline in order to increase his well-being when there is
an economic incentive. As a result, if the incentive is lower than the
retail price, the user shifts his load requirement to the baseline setting
period. On the other hand, if the incentive is greater than the regular energy
price, the optimal decision is that the user spends the maximum possible energy
in the baseline setting period and reduces the consumption at the PTR time.
This consumer behavior produces more energy consumption in total considering
all periods. In addition, a user with a high uncertainty level in his energy
pattern should spend less energy than a predictable consumer when the incentive
is lower than the retail price.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 15:52:07 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Vuelvas",
"José",
""
],
[
"Ruiz",
"Fredy",
""
]
] |
new_dataset
| 0.984744 |
1802.08138
|
Muhammed Omer Sayin
|
Muhammed O. Sayin, Chung-Wei Lin, Shinichi Shiraishi, and Tamer
Ba\c{s}ar
|
Reliable Intersection Control in Non-cooperative Environments
|
Extended version (including proofs of theorems and lemmas) of the
paper: M. O. Sayin, C.-W. Lin, S. Shiraishi, and T. Basar, "Reliable
intersection control in non-cooperative environments", to appear in the
Proceedings of American Control Conference, 2018
| null | null | null |
cs.AI cs.GT cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a reliable intersection control mechanism for strategic autonomous
and connected vehicles (agents) in non-cooperative environments. Each agent has
access to his/her earliest possible and desired passing times, and reports a
passing time to the intersection manager, who allocates the intersection
temporally to the agents on a First-Come-First-Serve basis. However, the agents
might have conflicting interests and can take actions strategically. To this
end, we analyze the strategic behaviors of the agents and formulate Nash
equilibria for all possible scenarios. Furthermore, among all Nash equilibria
we identify a socially optimal equilibrium that leads to a fair intersection
allocation, and correspondingly we describe a strategy-proof intersection
mechanism, which achieves reliable intersection control such that the strategic
agents do not have any incentive to misreport their passing times
strategically.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 16:23:39 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Sayin",
"Muhammed O.",
""
],
[
"Lin",
"Chung-Wei",
""
],
[
"Shiraishi",
"Shinichi",
""
],
[
"Başar",
"Tamer",
""
]
] |
new_dataset
| 0.996807 |
1802.08148
|
Diego Moussallem
|
Diego Moussallem, Mohamed Ahmed Sherif, Diego Esteves, Marcos Zampieri
and Axel-Cyrille Ngonga Ngomo
|
LIDIOMS: A Multilingual Linked Idioms Data Set
|
Accepted for publication in Language Resources and Evaluation
Conference (LREC) 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe the LIDIOMS data set, a multilingual RDF
representation of idioms currently containing five languages: English, German,
Italian, Portuguese, and Russian. The data set is intended to support natural
language processing applications by providing links between idioms across
languages. The underlying data was crawled and integrated from various sources.
To ensure the quality of the crawled data, all idioms were evaluated by at
least two native speakers. Herein, we present the model devised for structuring
the data. We also provide the details of linking LIDIOMS to well-known
multilingual data sets such as BabelNet. The resulting data set complies with
best practices according to the Linguistic Linked Open Data Community.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 16:38:40 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Moussallem",
"Diego",
""
],
[
"Sherif",
"Mohamed Ahmed",
""
],
[
"Esteves",
"Diego",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] |
new_dataset
| 0.998421 |
1802.08150
|
Diego Moussallem
|
Diego Moussallem, Thiago Castro Ferreira, Marcos Zampieri, Maria
Claudia Cavalcanti, Geraldo Xex\'eo, Mariana Neves, Axel-Cyrille Ngonga Ngomo
|
RDF2PT: Generating Brazilian Portuguese Texts from RDF Data
|
Accepted for publication in Language Resources and Evaluation
Conference (LREC) 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generation of natural language from Resource Description Framework (RDF)
data has recently gained significant attention due to the continuous growth of
Linked Data. A number of these approaches generate natural language in
languages other than English, however, no work has been proposed to generate
Brazilian Portuguese texts out of RDF. We address this research gap by
presenting RDF2PT, an approach that verbalizes RDF data into the Brazilian
Portuguese language. We evaluated RDF2PT in an open questionnaire with 44 native speakers
divided into experts and non-experts. Our results suggest that RDF2PT is able
to generate text which is similar to that generated by humans and can hence be
easily understood.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 16:41:56 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Moussallem",
"Diego",
""
],
[
"Ferreira",
"Thiago Castro",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"Cavalcanti",
"Maria Claudia",
""
],
[
"Xexéo",
"Geraldo",
""
],
[
"Neves",
"Mariana",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] |
new_dataset
| 0.999476 |
1802.08204
|
Andrew Tomkins
|
Alex Fabrikant, Mohammad Mahdian and Andrew Tomkins
|
SCRank: Spammer and Celebrity Ranking in Directed Social Networks
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many online social networks allow directed edges: Alice can unilaterally add
an "edge" to Bob, typically indicating interest in Bob or Bob's content,
without Bob's permission or reciprocation. In directed social networks we
observe the rise of two distinctive classes of users: celebrities who accrue
unreciprocated incoming links, and follow spammers, who generate unreciprocated
outgoing links. Identifying users in these two classes is important for abuse
detection, user and content ranking, privacy choices, and other social network
features.
In this paper we develop SCRank, an iterative algorithm to identify such
users. We analyze SCRank both theoretically and experimentally. The
spammer-celebrity definition is not amenable to analysis using standard power
iteration, so we develop a novel potential function argument to show
convergence to an approximate equilibrium point for a class of algorithms
including SCRank. We then use experimental evaluation on a real global-scale
social network and on synthetically generated graphs to observe that the
algorithm converges quickly and consistently. Using synthetic data with
built-in ground truth, we also experimentally show that the algorithm provides
a good approximation to planted celebrities and spammers.
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 17:58:55 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Fabrikant",
"Alex",
""
],
[
"Mahdian",
"Mohammad",
""
],
[
"Tomkins",
"Andrew",
""
]
] |
new_dataset
| 0.964052 |
1802.08236
|
Xin Jin
|
Xin Jin, Xiaozhou Li, Haoyu Zhang, Nate Foster, Jeongkeun Lee, Robert
Soule, Changhoon Kim, Ion Stoica
|
NetChain: Scale-Free Sub-RTT Coordination (Extended Version)
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coordination services are a fundamental building block of modern cloud
systems, providing critical functionalities like configuration management and
distributed locking. The major challenge is to achieve low latency and high
throughput while providing strong consistency and fault-tolerance. Traditional
server-based solutions require multiple round-trip times (RTTs) to process a
query. This paper presents NetChain, a new approach that provides scale-free
sub-RTT coordination in datacenters. NetChain exploits recent advances in
programmable switches to store data and process queries entirely in the network
data plane. This eliminates the query processing at coordination servers and
cuts the end-to-end latency to as little as half of an RTT---clients only
experience processing delay from their own software stack plus network delay,
which in a datacenter setting is typically much smaller. We design new
protocols and algorithms based on chain replication to guarantee strong
consistency and to efficiently handle switch failures. We implement a prototype
with four Barefoot Tofino switches and four commodity servers. Evaluation
results show that compared to traditional server-based solutions like
ZooKeeper, our prototype provides orders of magnitude higher throughput and
lower latency, and handles failures gracefully.
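As background, the classic chain replication protocol that NetChain adapts can
be sketched in a few lines of host-side Python (an illustration of the protocol
only, not of the in-switch implementation or its failure-handling variants):

    class ChainNode:
        def __init__(self, name, successor=None):
            self.name = name
            self.successor = successor  # next replica in the chain; None for the tail
            self.store = {}

        def write(self, key, value):
            # Writes enter at the head and propagate down the chain.
            self.store[key] = value
            if self.successor is not None:
                return self.successor.write(key, value)
            return "ack"  # the tail acknowledges once every replica holds the value

        def read(self, key):
            # Reads are served by the tail, which only exposes fully replicated values.
            return self.store.get(key)

    # A three-node chain: head -> middle -> tail.
    tail = ChainNode("tail")
    head = ChainNode("head", ChainNode("middle", tail))
    assert head.write("lock:r1", "client-42") == "ack"
    assert tail.read("lock:r1") == "client-42"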
|
[
{
"version": "v1",
"created": "Thu, 22 Feb 2018 18:46:39 GMT"
}
] | 2018-02-23T00:00:00 |
[
[
"Jin",
"Xin",
""
],
[
"Li",
"Xiaozhou",
""
],
[
"Zhang",
"Haoyu",
""
],
[
"Foster",
"Nate",
""
],
[
"Lee",
"Jeongkeun",
""
],
[
"Soule",
"Robert",
""
],
[
"Kim",
"Changhoon",
""
],
[
"Stoica",
"Ion",
""
]
] |
new_dataset
| 0.985139 |
1607.01223
|
Benjamin Sliwa
|
Benjamin Sliwa, Daniel Behnke, Christoph Ide and Christian Wietfeld
|
B.A.T.Mobile: Leveraging Mobility Control Knowledge for Efficient
Routing in Mobile Robotic Networks
| null |
Globecom Workshops (GC Wkshps), 2016 IEEE
|
10.1109/GLOCOMW.2016.7848845
| null |
cs.NI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient routing is one of the key challenges of wireless networking for
unmanned autonomous vehicles (UAVs) due to dynamically changing channel and
network topology characteristics. Various well known mobile-ad-hoc routing
protocols, such as AODV, OLSR and B.A.T.M.A.N. have been proposed to allow for
proactive and reactive routing decisions. In this paper, we present a novel
approach which leverages application layer knowledge derived from mobility
control algorithms guiding the behavior of UAVs to fulfill a dedicated task.
Thereby a prediction of future trajectories of the UAVs can be integrated with
the routing protocol to avoid unexpected route breaks and packet loss. The
proposed extension of the B.A.T.M.A.N. routing protocol by a mobility
prediction component - called B.A.T.Mobile - has shown to be very effective to
realize this concept. The results of in-depth simulation studies show that the
proposed protocol reaches a distinctly higher availability compared to the
established approaches and shows robust behavior even in challenging channel
conditions.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2016 12:39:25 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jan 2017 08:14:27 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Feb 2018 09:22:40 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Sliwa",
"Benjamin",
""
],
[
"Behnke",
"Daniel",
""
],
[
"Ide",
"Christoph",
""
],
[
"Wietfeld",
"Christian",
""
]
] |
new_dataset
| 0.995435 |
1702.05235
|
Benjamin Sliwa
|
Benjamin Sliwa and Robert Falkenberg and Christian Wietfeld
|
A Simple Scheme for Distributed Passive Load Balancing in Mobile Ad-hoc
Networks
| null |
Vehicular Technology Conference (VTC Spring), 2017 IEEE 85th
|
10.1109/VTCSpring.2017.8108553
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient routing is one of the key challenges for next generation vehicular
networks in order to provide fast and reliable communication in a smart city
context. Various routing protocols have been proposed for determining optimal
routing paths in highly dynamic topologies. However, it is the dilemma of those
kinds of networks that good paths are used intensively, resulting in congestion
and path quality degradation. In this paper, we adopt ideas from multipath
routing and propose a simple decentralized scheme for Mobile Ad-hoc Network (MANET)
routing, which handles passive load balancing without requiring additional
communication effort. It can easily be applied to existing routing protocols to
achieve load balancing without changing the routing process itself. In
comprehensive simulation studies, we apply the proposed load balancing
technique to multiple example protocols and evaluate its effects on the network
performance. The results show that all considered protocols can achieve
significantly higher reliability and improved Packet Delivery Ratio (PDR)
values by applying the proposed load balancing scheme.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2017 06:45:30 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2018 09:21:51 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Sliwa",
"Benjamin",
""
],
[
"Falkenberg",
"Robert",
""
],
[
"Wietfeld",
"Christian",
""
]
] |
new_dataset
| 0.995077 |
1709.06841
|
Ruihao Li
|
Ruihao Li, Sen Wang, Zhiqiang Long and Dongbing Gu
|
UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning
|
6 pages, 6 figures, Accepted by ICRA18. Video:
(https://www.youtube.com/watch?v=5RdjO93wJqo) Website:
(http://senwang.gitlab.io/UnDeepVO/)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel monocular visual odometry (VO) system called UnDeepVO in
this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera
and the depth of its view by using deep neural networks. There are two salient
features of the proposed UnDeepVO: one is the unsupervised deep learning
scheme, and the other is the absolute scale recovery. Specifically, we train
UnDeepVO by using stereo image pairs to recover the scale but test it by using
consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss
function defined for training the networks is based on spatial and temporal
dense information. A system overview is shown in Fig. 1. The experiments on the
KITTI dataset show that our UnDeepVO achieves good performance in terms of pose
accuracy.
|
[
{
"version": "v1",
"created": "Wed, 20 Sep 2017 12:54:26 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2018 14:44:30 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Li",
"Ruihao",
""
],
[
"Wang",
"Sen",
""
],
[
"Long",
"Zhiqiang",
""
],
[
"Gu",
"Dongbing",
""
]
] |
new_dataset
| 0.987651 |
1710.05519
|
Kiem-Hieu Nguyen
|
Kiem-Hieu Nguyen
|
BKTreebank: Building a Vietnamese Dependency Treebank
|
Accepted for LREC 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dependency treebank is an important resource in any language. In this paper,
we present our work on building BKTreebank, a dependency treebank for
Vietnamese. Important points on designing POS tagset, dependency relations, and
annotation guidelines are discussed. We describe experiments on POS tagging and
dependency parsing on the treebank. Experimental results show that the treebank
is a useful resource for Vietnamese language processing.
|
[
{
"version": "v1",
"created": "Mon, 16 Oct 2017 05:49:29 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2018 10:45:32 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Nguyen",
"Kiem-Hieu",
""
]
] |
new_dataset
| 0.992786 |
1710.10639
|
Reid Pryzant
|
Reid Pryzant, Yongjoo Chung, Dan Jurafsky, and Denny Britz
|
JESC: Japanese-English Subtitle Corpus
|
To appear at LREC 2018. Project website updated
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we describe the Japanese-English Subtitle Corpus (JESC). JESC
is a large Japanese-English parallel corpus covering the underrepresented
domain of conversational dialogue. It consists of more than 3.2 million
examples, making it the largest freely available dataset of its kind. The
corpus was assembled by crawling and aligning subtitles found on the web. The
assembly process incorporates a number of novel preprocessing elements to
ensure high monolingual fluency and accurate bilingual alignments. We summarize
its contents and evaluate its quality using human experts and baseline machine
translation (MT) systems.
|
[
{
"version": "v1",
"created": "Sun, 29 Oct 2017 16:15:30 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Oct 2017 01:04:43 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Dec 2017 15:50:39 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Feb 2018 16:23:56 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Pryzant",
"Reid",
""
],
[
"Chung",
"Yongjoo",
""
],
[
"Jurafsky",
"Dan",
""
],
[
"Britz",
"Denny",
""
]
] |
new_dataset
| 0.999809 |
1711.00238
|
Qianhui Luo
|
Qianhui Luo, Huifang Ma, Yue Wang, Li Tang and Rong Xiong
|
3D-SSD: Learning Hierarchical Features from RGB-D Images for Amodal 3D
Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims at developing a faster and a more accurate solution to the
amodal 3D object detection problem for indoor scenes. It is achieved through a
novel neural network that takes a pair of RGB-D images as the input and
delivers oriented 3D bounding boxes as the output. The network, named 3D-SSD,
composed of two parts: hierarchical feature fusion and multi-layer prediction.
The hierarchical feature fusion combines appearance and geometric features from
RGB-D images while the multi-layer prediction utilizes multi-scale features for
object detection. As a result, the network can exploit 2.5D representations in
a synergetic way to improve the accuracy and efficiency. The issue of object
sizes is addressed by attaching a set of 3D anchor boxes with varying sizes to
every location of the prediction layers. At the end stage, the category scores
for 3D anchor boxes are generated with adjusted positions, sizes and
orientations respectively, leading to the final detections using non-maximum
suppression. In the training phase, the positive samples are identified with
the aid of 2D ground truth to avoid the noisy estimation of depth from raw
data, which leads to a better converged model. Experiments performed on the
challenging SUN RGB-D dataset show that our algorithm outperforms the
state-of-the-art Deep Sliding Shape by 10.2% mAP while being 88x faster. Further,
experiments also suggest our approach achieves comparable accuracy and is 386x
faster than the state-of-the-art method on the NYUv2 dataset even with a smaller
input image size.
|
[
{
"version": "v1",
"created": "Wed, 1 Nov 2017 07:57:25 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2018 09:06:33 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Luo",
"Qianhui",
""
],
[
"Ma",
"Huifang",
""
],
[
"Wang",
"Yue",
""
],
[
"Tang",
"Li",
""
],
[
"Xiong",
"Rong",
""
]
] |
new_dataset
| 0.983743 |
1801.03317
|
Benjamin Sliwa
|
Marcus Haferkamp and Manar Al-Askary and Dennis Dorn and Benjamin
Sliwa and Lars Habel and Michael Schreckenberg and Christian Wietfeld
|
Radio-based Traffic Flow Detection and Vehicle Classification for Future
Smart Cities
| null |
Vehicular Technology Conference (VTC Spring), 2017 IEEE 85th
|
10.1109/VTCSpring.2017.8108633
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent Transportation Systems (ITSs) providing vehicle-related
statistical data are one of the key components for future smart cities. In this
context, knowledge about the current traffic flow is used for travel time
reduction and proactive jam avoidance by intelligent traffic control
mechanisms. In addition, the monitoring and classification of vehicles can be
used in the field of smart parking systems. The required data is measured using
networks with a wide range of sensors. Nevertheless, in the context of smart
cities no existing solution for traffic flow detection and vehicle
classification is able to guarantee high classification accuracy, low
deployment and maintenance costs, low power consumption and a
weather-independent operation while respecting privacy. In this paper, we
propose a radio-based approach for traffic flow detection and vehicle
classification using signal attenuation measurements and machine learning
algorithms. The results of comprehensive measurements in the field prove its
high classification success rate of about 99%.
|
[
{
"version": "v1",
"created": "Wed, 10 Jan 2018 11:39:55 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2018 09:20:31 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Haferkamp",
"Marcus",
""
],
[
"Al-Askary",
"Manar",
""
],
[
"Dorn",
"Dennis",
""
],
[
"Sliwa",
"Benjamin",
""
],
[
"Habel",
"Lars",
""
],
[
"Schreckenberg",
"Michael",
""
],
[
"Wietfeld",
"Christian",
""
]
] |
new_dataset
| 0.980518 |
1802.06042
|
Karthikeyan Sundaresan
|
Karthikeyan Sundaresan, Eugene Chai, Ayon Chakraborty, Sampath
Rangarajan
|
SkyLiTE: End-to-End Design of Low-Altitude UAV Networks for Providing
LTE Connectivity
| null | null | null |
NEC Labs America Technical Report 2018-TR001
|
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) have the potential to change the landscape of
wide-area wireless connectivity by bringing it to areas where connectivity
was sparse or non-existent (e.g. rural areas) or has been compromised due to
disasters. While Google's Project Loon and Facebook's Project Aquila are
examples of high-altitude, long-endurance UAV-based connectivity efforts in
this direction, the telecom operators (e.g. AT&T and Verizon) have been
exploring low-altitude UAV-based LTE solutions for on-demand deployments.
Understandably, these projects are in their early stages and face formidable
challenges in their realization and deployment. The goal of this document is to
expose the reader to both the challenges as well as the potential offered by
these unconventional connectivity solutions. We aim to explore the end-to-end
design of such UAV-based connectivity networks particularly in the context of
low-altitude UAV networks providing LTE connectivity. Specifically, we aim to
highlight the challenges that span across multiple layers (access, core
network, and backhaul) in an intertwined manner as well as the richness and
complexity of the design space itself. To help interested readers navigate this
complex design space towards a solution, we also articulate the overview of one
such end-to-end design, namely SkyLiTE-- a self-organizing network of
low-altitude UAVs that provide optimized LTE connectivity in a desired region.
|
[
{
"version": "v1",
"created": "Fri, 16 Feb 2018 17:34:35 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2018 20:50:48 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Sundaresan",
"Karthikeyan",
""
],
[
"Chai",
"Eugene",
""
],
[
"Chakraborty",
"Ayon",
""
],
[
"Rangarajan",
"Sampath",
""
]
] |
new_dataset
| 0.999399 |
1802.07280
|
Joseph Shaheen
|
Joseph A.E. Shaheen
|
Simulating the Ridesharing Economy: The Individual Agent
Metro-Washington Area Ridesharing Model
|
28 pages. Please cite as Shaheen, J. A. E., Simulating the
Ride-sharing Economy: The Individual Agent Metro-Washington Area Ride-sharing
Model, Complex Adaptive Systems: Views from the Physical, Natural, and Social
Sciences, 2018. forthcoming
| null | null | null |
cs.MA nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ridesharing economy is experiencing rapid growth and innovation.
Companies such as Uber and Lyft are continuing to grow at a considerable pace
while providing their platform as an organizing medium for ridesharing
services, increasing consumer utility as well as employing thousands in
part-time positions. However, many challenges remain in the modeling of
ridesharing services, many of which are not currently under wide consideration.
In this paper, an agent-based model is developed to simulate a ridesharing
service in the Washington D.C. metropolitan region. The model is used to
examine levels of utility gained for both riders (customers) and drivers
(service providers) of a generic ridesharing service. A description of the
Individual Agent Metro-Washington Area Ridesharing Model (IAMWARM) is provided,
as well as a description of a typical simulation run. We investigate the
financial gains of drivers for a 24-hour period under two scenarios and two
spatial movement behaviors. The two spatial behaviors were random movement and
Voronoi movement, which we describe. Both movement behaviors were tested under
a stationary run conditions scenario and a variable run conditions scenario. We
find that Voronoi movement increased drivers' utility gained but that emergence
of this system property was only viable under variable scenario conditions.
This result provides two important insights: The first is that driver movement
decisions prior to passenger pickup can impact financial gain for the service
and drivers, and consequently, rate of successful pickup for riders. The second
is that this phenomenon is only evident under experimentation conditions where
variability in passenger and driver arrival rates is administered.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 01:58:28 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Shaheen",
"Joseph A. E.",
""
]
] |
new_dataset
| 0.979017 |
1802.07389
|
Hyeontaek Lim
|
Hyeontaek Lim and David G. Andersen and Michael Kaminsky
|
3LC: Lightweight and Effective Traffic Compression for Distributed
Machine Learning
| null | null | null | null |
cs.LG cs.DC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The performance and efficiency of distributed machine learning (ML) depends
significantly on how long it takes for nodes to exchange state changes.
Overly-aggressive attempts to reduce communication often sacrifice final model
accuracy and necessitate additional ML techniques to compensate for this loss,
limiting their generality. Some attempts to reduce communication incur high
computation overhead, which makes their performance benefits visible only over
slow networks.
We present 3LC, a lossy compression scheme for state change traffic that
strikes a balance between multiple goals: traffic reduction, accuracy,
computation overhead, and generality. It combines three new
techniques---3-value quantization with sparsity multiplication, quartic
encoding, and zero-run encoding---to leverage the strengths of quantization and
sparsification techniques and avoid their drawbacks. It achieves a data
compression ratio of up to 39--107X, almost the same test accuracy of trained
models, and high compression speed. Distributed ML frameworks can employ 3LC
without modifications to existing ML algorithms. Our experiments show that 3LC
reduces wall-clock training time of ResNet-110--based image classifiers for
CIFAR-10 on a 10-GPU cluster by up to 16--23X compared to TensorFlow's baseline
design.
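To give a feel for the kind of state-change compression described above, the
following is a hedged sketch of 3-value quantization followed by zero-run
encoding (a generic illustration under assumed details; the actual 3LC design
also includes sparsity multiplication, quartic encoding and other components
that are omitted here):

    import numpy as np

    def three_value_quantize(grad, threshold=0.5):
        # Map each value to {-1, 0, +1} times a per-tensor scale factor.
        scale = float(np.max(np.abs(grad))) if grad.size else 0.0
        q = np.zeros_like(grad, dtype=np.int8)
        q[grad > threshold * scale] = 1
        q[grad < -threshold * scale] = -1
        return scale, q

    def zero_run_encode(q):
        # Collapse runs of zeros into (0, run_length) pairs to exploit sparsity.
        out, flat, i = [], q.ravel(), 0
        while i < len(flat):
            if flat[i] == 0:
                j = i
                while j < len(flat) and flat[j] == 0:
                    j += 1
                out.append((0, j - i))
                i = j
            else:
                out.append((int(flat[i]), 1))
                i += 1
        return out

    def decode(scale, runs, shape):
        # Reverse the run-length coding and rescale into a dense update.
        vals = []
        for v, n in runs:
            vals.extend([v] * n)
        return scale * np.array(vals, dtype=np.float32).reshape(shape)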
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 01:08:58 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Lim",
"Hyeontaek",
""
],
[
"Andersen",
"David G.",
""
],
[
"Kaminsky",
"Michael",
""
]
] |
new_dataset
| 0.985622 |
1802.07508
|
Marianna Nicolosi Asmundo
|
Domenico Cantone, Marianna Nicolosi-Asmundo, Ewa Or{\l}owska
|
A Dual Tableau-based Decision Procedure for a Relational Logic with the
Universal Relation (Extended Version)
|
Extended version of the conference paper: D. Cantone, M.
Nicolosi-Asmundo, E. Or{\l}owska. A Dual Tableau-based Decision Procedure for
a Relational Logic with the Universal Relation. In Proceedings of the 29th
Italian Conference on Computational Logic, Torino, Italy, June 16-18, 2014.
CEUR Workshop Proceedings Vol. 1195, pp. 194-209 (2014)
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a first result towards the use of entailment inside relational
dual tableau-based decision procedures. To this end, we introduce a fragment of
RL(1) which admits a restricted form of composition, (R ; S) or (R ; 1), where
the left subterm R of (R ; S) is only allowed to be either the constant 1, or a
Boolean term neither containing the complement operator nor the constant 1,
while in the case of (R ; 1), R can only be a Boolean term involving relational
variables and the operators of intersection and of union. We prove the
decidability of the fragment by defining a dual tableau-based decision
procedure with a suitable blocking mechanism and where the rules to decompose
compositional formulae are modified so to deal with the constant 1 while
preserving termination. The fragment properly includes the logics presented in
previous work and, therefore, it allows one to express, among others, the
multi-modal logic K with union and intersection of accessibility relations, and
the description logic ALC with union and intersection of roles.
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 10:57:05 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Cantone",
"Domenico",
""
],
[
"Nicolosi-Asmundo",
"Marianna",
""
],
[
"Orłowska",
"Ewa",
""
]
] |
new_dataset
| 0.992337 |
1802.07545
|
Omar Reyad
|
Omar Reyad, M. A. Mofaddel, W. M. Abd-Elhafiez, Mohamed Fathy
|
A Novel Image Encryption Scheme Based on Different Block Sizes for
Grayscale and Color Images
|
7 pages, 4 figures, conference
|
12th International Conference on Computer Engineering and Systems
(ICCES) 2017
|
10.1109/ICCES.2017.8275351
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, two image encryption schemes are proposed for grayscale and
color images. The two encryption schemes are based on dividing each image into
blocks of different sizes. In the first scheme, the two dimension ($2$D) input
image is divided into various blocks of size $N \times N$. Each block is
transformed into a one dimensional ($1$D) array by using the Zigzag pattern.
Then, the exclusive or (XOR) logical operation is used to encrypt each block
with the analogous secret key. In the second scheme, after the transformation
process, the first block of each image is encrypted by the corresponding secret
key. Then, before the next block is encrypted, it is XORed with the first
encrypted block to become the next input to the encrypting routine and so on.
This feedback mechanism depends on the cipher block chaining (CBC) mode of
operation which considers the heart of some ciphers because it is highly
nonlinear. In the case of color images, the color component is separated into
blocks with the same size and different secret keys. The used secret key
sequences are generated from elliptic curves (EC) over a \textit{binary} finite
field $\mathbb{F}_{2^{m}}$. Finally, experiments are carried out, and a
security analysis of the ciphered images demonstrates that the two proposed
schemes achieve better performance in terms of security, sensitivity and
robustness.
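A minimal sketch of the second (CBC-style) scheme described above, assuming a
plain row-major block scan in place of the Zigzag pattern and a precomputed key
block in place of the EC-based key sequence (an illustration only, not the
authors' implementation):

    import numpy as np

    def encrypt_blocks_cbc(image, key, n=8):
        # image: 2D uint8 array whose sides are multiples of n; key: n*n key bytes.
        h, w = image.shape
        key = np.asarray(key, dtype=np.uint8)[: n * n]
        cipher = np.empty_like(image)
        prev = None
        for r in range(0, h, n):
            for c in range(0, w, n):
                block = image[r:r + n, c:c + n].ravel()  # stand-in for the zigzag scan
                if prev is not None:
                    block = np.bitwise_xor(block, prev)  # chain with previous cipher block
                enc = np.bitwise_xor(block, key)         # the "encryption" step is an XOR
                cipher[r:r + n, c:c + n] = enc.reshape(n, n)
                prev = enc
        return cipher

    # Decryption reverses the two XOR steps block by block in the same order.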
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 12:52:18 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Reyad",
"Omar",
""
],
[
"Mofaddel",
"M. A.",
""
],
[
"Abd-Elhafiez",
"W. M.",
""
],
[
"Fathy",
"Mohamed",
""
]
] |
new_dataset
| 0.988877 |
1802.07592
|
Ioannis Tamvakis Mr
|
Ioannis Tamvakis
|
"How to squash a mathematical tomato", Rubic's cube-like surfaces and
their connection to reversible computation
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Here we show how reversible computation processes, like Margolus diffusion,
can be envisioned as physical turning operations on a 2-dimensional rigid
surface that is cut by a regular pattern of intersecting circles. We then
briefly explore the design-space of these patterns, and report on the discovery
of an interesting fractal subdivision of space by iterative circle packings. We
devise two different ways for creating this fractal, both showing interesting
properties, some resembling properties of the dragon curve. The patterns
presented here can have interesting applications to the engineering of modular,
kinetic, active surfaces.
|
[
{
"version": "v1",
"created": "Wed, 7 Feb 2018 17:14:01 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Tamvakis",
"Ioannis",
""
]
] |
new_dataset
| 0.985161 |
1802.07673
|
Marshall Ball
|
Marshall Ball, Dana Dachman-Soled, Siyao Guo, Tal Malkin, Li-Yang Tan
|
Non-Malleable Codes for Small-Depth Circuits
|
26 pages, 4 figures
| null | null | null |
cs.CC cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct efficient, unconditional non-malleable codes that are secure
against tampering functions computed by small-depth circuits. For
constant-depth circuits of polynomial size (i.e. $\mathsf{AC^0}$ tampering
functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit
message. This is an exponential improvement of the previous best construction
due to Chattopadhyay and Li (STOC 2017), which had codeword length
$2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as
large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains
$n\leq k^{1+\epsilon})$, and extending our result beyond this would require
separating $\mathsf{P}$ from $\mathsf{NC^1}$.
We obtain our codes via a new efficient non-malleable reduction from
small-depth tampering to split-state tampering. A novel aspect of our work is
the incorporation of techniques from unconditional derandomization into the
framework of non-malleable reductions. In particular, a key ingredient in our
analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC
2013), a derandomization of the influential switching lemma from circuit
complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.
|
[
{
"version": "v1",
"created": "Wed, 21 Feb 2018 17:11:52 GMT"
}
] | 2018-02-22T00:00:00 |
[
[
"Ball",
"Marshall",
""
],
[
"Dachman-Soled",
"Dana",
""
],
[
"Guo",
"Siyao",
""
],
[
"Malkin",
"Tal",
""
],
[
"Tan",
"Li-Yang",
""
]
] |
new_dataset
| 0.997465 |
1801.10202
|
Alex Zihao Zhu
|
Alex Zihao Zhu, Dinesh Thakur, Tolga Ozaslan, Bernd Pfrommer, Vijay
Kumar and Kostas Daniilidis
|
The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset
for 3D Perception
|
8 pages, 7 figures, 2 tables. Website:
https://daniilidis-group.github.io/mvsec/. Video:
https://www.youtube.com/watch?v=AwRMO5vFgak. Updated website and video in
comments, DOI
| null |
10.1109/LRA.2018.2800793
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event based cameras are a new passive sensing modality with a number of
benefits over traditional cameras, including extremely low latency,
asynchronous data acquisition, high dynamic range and very low power
consumption. There has been a lot of recent interest and development in
applying algorithms to use the events to perform a variety of 3D perception
tasks, such as feature tracking, visual odometry, and stereo depth estimation.
However, event based cameras currently lack the wealth of labeled data that
exists for traditional cameras for use in both testing and development. In this paper,
we present a large dataset with a synchronized stereo pair event based camera
system, carried on a handheld rig, flown by a hexacopter, driven on top of a
car and mounted on a motorcycle, in a variety of different illumination levels
and environments. From each camera, we provide the event stream, grayscale
images and IMU readings. In addition, we utilize a combination of IMU, a
rigidly mounted lidar system, indoor and outdoor motion capture and GPS to
provide accurate pose and depth images for each camera at up to 100Hz. For
comparison, we also provide synchronized grayscale images and IMU readings from
a frame based stereo camera system.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 20:09:30 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2018 23:00:01 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Zhu",
"Alex Zihao",
""
],
[
"Thakur",
"Dinesh",
""
],
[
"Ozaslan",
"Tolga",
""
],
[
"Pfrommer",
"Bernd",
""
],
[
"Kumar",
"Vijay",
""
],
[
"Daniilidis",
"Kostas",
""
]
] |
new_dataset
| 0.99973 |
1802.03014
|
Nitin Darkunde
|
Nitin S. Darkunde, Arunkumar R. Patil
|
On Some Ternary LCD Codes
|
Corrected typos from earlier version. arXiv admin note: substantial
text overlap with arXiv:1801.05271
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main aim of this paper is to study $LCD$ codes. Linear codes with
complementary dual ($LCD$) are those codes whose intersection with their dual
code is $\{0\}$. In this paper we give a rather alternative proof of Massey's
theorem\cite{8}, which is one of the most important characterizations of $LCD$
codes. Let $LCD[n,k]_3$ denote the maximum of possible values of $d$ among
$[n,k,d]$ ternary $LCD$ codes. In \cite{4}, the authors have given an upper
bound on $LCD[n,k]_2$ and extended this result to $LCD[n,k]_q$ for any $q$,
where $q$ is some prime power. We discuss the cases in which this bound is
attained for $q=3$.
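For reference, the widely cited form of Massey's characterization alluded to
above is the following: a linear code $C$ with generator matrix $G$ is LCD,
i.e.
    $C \cap C^{\perp} = \{0\},$
if and only if the $k \times k$ matrix $G G^{T}$ is nonsingular.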
|
[
{
"version": "v1",
"created": "Thu, 8 Feb 2018 13:05:42 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Feb 2018 09:24:35 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Darkunde",
"Nitin S.",
""
],
[
"Patil",
"Arunkumar R.",
""
]
] |
new_dataset
| 0.997224 |
1802.06852
|
Zeeshan Bhatti
|
Zeeshan Bhatti, Ahsan Abro, Abdul Rehman Gillal, Mostafa Karbasi
|
Be-Educated: Multimedia Learning through 3D Animation
|
10 pages, 32 figures
|
INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND EMERGING
TECHNOLOGIES,(IJCET)- VOL1(1) DECEMBER 2017- 13-22
| null | null |
cs.GR cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multimedia learning tools and techniques are gaining importance at a large
scale in the education sector. With the help of multimedia learning, various
complex phenomena and theories can be explained and taught easily and
conveniently. This project aims to teach and spread the importance of education
and of respecting the tools of education: pen, paper, pencil, and rubber. To
achieve this cognitive learning, a 3D animated movie has been developed using
principles of multimedia learning, with 3D cartoon characters resembling the
actual educational objects and buildings modelled to resemble real books and
diaries. For the modelling and animation of these characters, polygon mesh
tools are used in 3D Studio Max. Additionally, the final composition of video
and audio is performed in Adobe Premiere. This 3D animated video aims to
highlight a message about the importance of education and stationery. The moral
of the movie is to not waste your stationery material and to use your pen and
paper for the purpose they are made for. To be a good citizen you have to
Be-Educated yourself, and for that you need to give value to the pen. The final
rendered and composited 3D animated video reflects this moral and portrays the
intended message with very vibrant visuals.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 21:08:50 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Bhatti",
"Zeeshan",
""
],
[
"Abro",
"Ahsan",
""
],
[
"Gillal",
"Abdul Rehman",
""
],
[
"Karbasi",
"Mostafa",
""
]
] |
new_dataset
| 0.993295 |
1802.06902
|
Roman Kovalchukov
|
Antonino Orsino, Roman Kovalchukov, Andrey Samuylov, Dmitri
Moltchanov, Sergey Andreev, Yevgeni Koucheryavy and Mikko Valkama
|
Caching-Aided Collaborative D2D Operation for Predictive Data
Dissemination in Industrial IoT
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Industrial automation deployments constitute challenging environments where
moving IoT machines may produce high-definition video and other heavy sensor
data during surveying and inspection operations. Transporting massive contents
to the edge network infrastructure and then eventually to the remote human
operator requires reliable and high-rate radio links supported by intelligent
data caching and delivery mechanisms. In this work, we address the challenges
of contents dissemination in characteristic factory automation scenarios by
proposing to engage moving industrial machines as device-to-device (D2D)
caching helpers. With the goal to improve reliability of high-rate
millimeter-wave (mmWave) data connections, we introduce the alternative
contents dissemination modes and then construct a novel mobility-aware
methodology that helps develop predictive mode selection strategies based on
the anticipated radio link conditions. We also conduct a thorough system-level
evaluation of representative data dissemination strategies to confirm the
benefits of predictive solutions that employ D2D-enabled collaborative caching
at the wireless edge to lower contents delivery latency and improve data
acquisition reliability.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 22:58:41 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Orsino",
"Antonino",
""
],
[
"Kovalchukov",
"Roman",
""
],
[
"Samuylov",
"Andrey",
""
],
[
"Moltchanov",
"Dmitri",
""
],
[
"Andreev",
"Sergey",
""
],
[
"Koucheryavy",
"Yevgeni",
""
],
[
"Valkama",
"Mikko",
""
]
] |
new_dataset
| 0.994866 |
1802.06950
|
Tirthankar Ghosal
|
Tirthankar Ghosal, Amitra Salam, Swati Tiwari, Asif Ekbal, Pushpak
Bhattacharyya
|
TAP-DLND 1.0 : A Corpus for Document Level Novelty Detection
|
Accepted for publication in Language Resources and Evaluation
Conference (LREC) 2018
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Detecting novelty of an entire document is an Artificial Intelligence (AI)
frontier problem that has widespread NLP applications, such as extractive
document summarization, tracking development of news events, predicting impact
of scholarly articles, etc. Important though the problem is, we are unaware of
any benchmark document level data that correctly addresses the evaluation of
automatic novelty detection techniques in a classification framework. To bridge
this gap, we present here a resource for benchmarking the techniques for
document level novelty detection. We create the resource via event-specific
crawling of news documents across several domains in a periodic manner. We
release the annotated corpus with necessary statistics and show its use with a
developed system for the problem in concern.
|
[
{
"version": "v1",
"created": "Tue, 20 Feb 2018 03:42:11 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Ghosal",
"Tirthankar",
""
],
[
"Salam",
"Amitra",
""
],
[
"Tiwari",
"Swati",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
new_dataset
| 0.968832 |
1802.06960
|
Pingping Zhang
|
Pingping Zhang, Luyao Wang, Dong Wang, Huchuan Lu, Chunhua Shen
|
Agile Amulet: Real-Time Salient Object Detection with Contextual
Attention
|
10 pages, 4 figures and 3 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an Agile Aggregating Multi-Level feaTure framework (Agile
Amulet) for salient object detection. The Agile Amulet builds on previous works
to predict saliency maps using multi-level convolutional features. Compared to
previous works, Agile Amulet employs some key innovations to improve training
and testing speed while also increase prediction accuracy. More specifically,
we first introduce a contextual attention module that can rapidly highlight
most salient objects or regions with contextual pyramids. Thus, it effectively
guides the learning of low-layer convolutional features and tells the backbone
network where to look. The contextual attention module is a fully convolutional
mechanism that simultaneously learns complementary features and predicts
saliency scores at each pixel. In addition, we propose a novel method to
aggregate multi-level deep convolutional features. As a result, we are able to
use the integrated side-output features of pre-trained convolutional networks
alone, which significantly reduces the model parameters leading to a model size
of 67 MB, about half of Amulet. Compared to other deep learning based saliency
methods, Agile Amulet is much lighter weight, runs faster (30 fps in real
time) and achieves higher performance on seven public benchmarks in terms
of both quantitative and qualitative evaluation.
|
[
{
"version": "v1",
"created": "Tue, 20 Feb 2018 04:14:08 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Zhang",
"Pingping",
""
],
[
"Wang",
"Luyao",
""
],
[
"Wang",
"Dong",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Shen",
"Chunhua",
""
]
] |
new_dataset
| 0.992951 |
1802.07023
|
Gewu Bu
|
Gewu Bu, Maria Potop-Butucaru
|
BAN-GZKP: Optimal Zero Knowledge Proof based Scheme for Wireless Body
Area Networks
| null | null | null | null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
BANZKP is the best-to-date Zero Knowledge Proof (ZKP) based secure,
lightweight and energy-efficient authentication scheme designed for Wireless
Body Area Networks (WBAN). It is vulnerable to several security attacks such as the
replay attack, Distributed Denial-of-Service (DDoS) attacks at sink and
redundancy information crack. However, BANZKP needs an end-to-end
authentication which is not compliant with the human body postural mobility. We
propose a new scheme BAN-GZKP. Our scheme improves both the security and
postural mobility resilience of BANZKP. Moreover, BAN-GZKP uses only a
three-phase authentication which is optimal in the class of ZKP protocols. To
fix the security vulnerabilities of BANZKP, BAN-GZKP uses a novel random key
allocation and a hop-by-hop authentication definition. We further prove the
resilience of our scheme against various attacks, including those to which BANZKP is
vulnerable. Furthermore, via extensive simulations we show that our scheme,
BAN-GZKP, outperforms BANZKP in terms of resilience to human body postural
mobility for various network parameters (end-to-end delay, number of packets
exchanged in the network, number of transmissions). We compared both schemes
using representative convergecast strategies with various transmission rates
and human postural mobility. Finally, it is important to mention that BAN-GZKP
has no additional cost compared to BANZKP in terms of memory, computational
complexity or energy consumption.
|
[
{
"version": "v1",
"created": "Tue, 20 Feb 2018 09:19:11 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Bu",
"Gewu",
""
],
[
"Potop-Butucaru",
"Maria",
""
]
] |
new_dataset
| 0.999493 |
1802.07038
|
Uli Fahrenberg
|
Uli Fahrenberg
|
Higher-Dimensional Timed Automata
| null | null | null | null |
cs.LO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new formalism of higher-dimensional timed automata, based on
van Glabbeek's higher-dimensional automata and Alur's timed automata. We prove
that their reachability is PSPACE-complete and can be decided using zone-based
algorithms. We also show how to use tensor products to combat state-space
explosion and how to extend the setting to higher-dimensional hybrid automata.
|
[
{
"version": "v1",
"created": "Tue, 20 Feb 2018 10:06:31 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Fahrenberg",
"Uli",
""
]
] |
new_dataset
| 0.992289 |
1802.07064
|
Xiaochuan Yin
|
Xiaochuan Yin, Henglai Wei, Penghong lin, Xiangwei Wang, Qijun Chen
|
Novel View Synthesis for Large-scale Scene using Adversarial Loss
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Novel view synthesis aims to synthesize new images from different viewpoints
of given images. Most previous works focus on generating novel views of
certain objects with a fixed background. However, for some applications, such
as virtual reality or robotic manipulations, large changes in background may
occur due to the egomotion of the camera. Generated images of a large-scale
environment from novel views may be distorted if the structure of the
environment is not considered. In this work, we propose a novel fully
convolutional network that can explicitly take advantage of the structural
information by incorporating inverse depth features. The inverse depth
features are obtained from CNNs trained with sparse labeled depth values. This
framework can easily fuse multiple images from different viewpoints. To fill
the missing textures in the generated image, adversarial loss is applied, which
can also improve the overall image quality. Our method is evaluated on the
KITTI dataset. The results show that our method can generate novel views of
large-scale scene without distortion. The effectiveness of our approach is
demonstrated through qualitative and quantitative evaluation.
|
[
{
"version": "v1",
"created": "Tue, 20 Feb 2018 11:21:11 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Yin",
"Xiaochuan",
""
],
[
"Wei",
"Henglai",
""
],
[
"lin",
"Penghong",
""
],
[
"Wang",
"Xiangwei",
""
],
[
"Chen",
"Qijun",
""
]
] |
new_dataset
| 0.996897 |
1802.07233
|
Mustafa A. Mustafa
|
Tim Van hamme and Vera Rimmer and Davy Preuveneers and Wouter Joosen
and Mustafa A. Mustafa and Aysajan Abidin and Enrique Argones R\'ua
|
Frictionless Authentication Systems: Emerging Trends, Research
Challenges and Opportunities
|
published at the 11th International Conference on Emerging Security
Information, Systems and Technologies (SECURWARE 2017)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Authentication and authorization are critical security layers to protect a
wide range of online systems, services and content. However, the increased
prevalence of wearable and mobile devices, the expectations of a frictionless
experience and the diverse user environments will challenge the way users are
authenticated. Consumers demand secure and privacy-aware access from any
device, whenever and wherever they are, without any obstacles. This paper
reviews emerging trends and challenges with frictionless authentication systems
and identifies opportunities for further research related to the enrollment of
users, the usability of authentication schemes, as well as security and privacy
trade-offs of mobile and wearable continuous authentication systems.
|
[
{
"version": "v1",
"created": "Tue, 20 Feb 2018 18:27:04 GMT"
}
] | 2018-02-21T00:00:00 |
[
[
"Van hamme",
"Tim",
""
],
[
"Rimmer",
"Vera",
""
],
[
"Preuveneers",
"Davy",
""
],
[
"Joosen",
"Wouter",
""
],
[
"Mustafa",
"Mustafa A.",
""
],
[
"Abidin",
"Aysajan",
""
],
[
"Rúa",
"Enrique Argones",
""
]
] |
new_dataset
| 0.981371 |
1512.06271
|
Sahil Singla
|
Guru Guruganesh, Sahil Singla
|
Online Matroid Intersection: Beating Half for Random Arrival
|
39 pages, 3 figures, 1 notation table, Part of this appeared in IPCO
2017
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For two matroids $\mathcal{M}_1$ and $\mathcal{M}_2$ defined on the same
ground set $E$, the online matroid intersection problem is to design an
algorithm that constructs a large common independent set in an online fashion.
The algorithm is presented with the ground set elements one-by-one in a
uniformly random order. At each step, the algorithm must irrevocably decide
whether to pick the element, while always maintaining a common independent set.
While the natural greedy algorithm---pick an element whenever possible---is
half competitive, nothing better was previously known, even for the special
case of online bipartite matching in the edge arrival model. We present the
first randomized online algorithm that has a $\frac12 + \delta$ competitive
ratio in expectation, where $\delta >0$ is a constant. The expectation is over
the random order and the coin tosses of the algorithm. As a corollary, we also
obtain the first linear time algorithm that beats half competitiveness for
offline matroid intersection.
|
[
{
"version": "v1",
"created": "Sat, 19 Dec 2015 17:09:41 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2016 02:48:24 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jul 2017 03:37:28 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Feb 2018 18:24:36 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Guruganesh",
"Guru",
""
],
[
"Singla",
"Sahil",
""
]
] |
new_dataset
| 0.965962 |
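The abstract above contrasts the authors' randomized algorithm with the natural greedy baseline, which picks an arriving element whenever the picked set stays independent in both matroids. The Python sketch below illustrates only that half-competitive greedy baseline, not the paper's improved algorithm; the independence oracles and the bipartite-matching toy data are illustrative assumptions.

```python
# Greedy baseline for online matroid intersection: keep an arriving element
# whenever the current set remains independent in both matroids.

def greedy_online_intersection(elements, indep1, indep2):
    """elements: iterable in arrival order;
    indep1, indep2: oracles taking a set and returning True if it is independent."""
    picked = set()
    for e in elements:
        candidate = picked | {e}
        if indep1(candidate) and indep2(candidate):
            picked.add(e)  # irrevocable decision, as in the online model
    return picked

# Toy example: bipartite matching as the intersection of two partition matroids.
# An edge set is independent for the left (right) matroid if no vertex on that
# side is used twice.
edges = [("a", 1), ("a", 2), ("b", 1), ("c", 2)]
left_ok = lambda s: len({u for u, _ in s}) == len(s)
right_ok = lambda s: len({v for _, v in s}) == len(s)
print(sorted(greedy_online_intersection(edges, left_ok, right_ok)))
# [('a', 1), ('c', 2)]
```

Beating this baseline in expectation under random arrival order is exactly the improvement the record above claims.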
1703.03504
|
Nathaniel Wendt
|
Nathaniel Wendt, Christine Julien
|
PACO: A System-Level Abstraction for On-Loading Contextual Data to
Mobile Devices
|
14 pages, 11 figures
| null |
10.1109/TMC.2018.2795604
| null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatiotemporal context is crucial in modern mobile applications that utilize
increasing amounts of context to better predict events and user behaviors,
requiring rich records of users' or devices' spatiotemporal histories.
Maintaining these rich histories requires frequent sampling and indexed storage
of spatiotemporal data that pushes the limits of resource-constrained mobile
devices. Today's apps offload processing and storing contextual information,
but this increases response time, often relies on the user's data connection,
and runs the very real risk of revealing sensitive information. In this paper
we motivate the feasibility of on-loading large amounts of context and
introduce PACO (Programming Abstraction for Contextual On-loading), an
architecture for on-loading data that optimizes for location and time while
allowing flexibility in storing additional context. The PACO API's innovations
enable on-loading very dense traces of information, even given devices'
resource constraints. Using real-world traces and our implementation for
Android, we demonstrate that PACO can support expressive application queries
entirely on-device. Our quantitative evaluation assesses PACO's energy
consumption, execution time, and spatiotemporal query accuracy. Further, PACO
facilitates unified contextual reasoning across multiple applications and also
supports user-controlled release of contextual data to other devices or the
cloud; we demonstrate these assets through a proof-of-concept case study.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2017 01:29:11 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Wendt",
"Nathaniel",
""
],
[
"Julien",
"Christine",
""
]
] |
new_dataset
| 0.978955 |
1704.01238
|
Bin Dai
|
Bin Dai, Zheng Ma, Ming Xiao, Xiaohu Tang, Pingzhi Fan
|
Finite State Multiple-Access Wiretap Channel with Delayed Feedback
|
Accepted by IEEE JSAC, special issue on physical layer security for
5G wireless networks
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, it has been shown that the time-varying multiple-access channel
(MAC) with perfect channel state information (CSI) at the receiver and delayed
feedback CSI at the transmitters can be modeled as the finite state MAC
(FS-MAC) with delayed state feedback, where the time variation of the channel
is characterized by the statistics of the underlying state process. To study
the fundamental limit of the secure transmission over multi-user wireless
communication systems, we re-visit the FS-MAC with delayed state feedback by
considering an external eavesdropper, which we call the finite state
multiple-access wiretap channel (FS-MAC-WT) with delayed feedback. The main
contribution of this paper is to show that taking full advantage of the delayed
channel output feedback helps to increase the secrecy rate region of the
FS-MAC-WT with delayed state feedback, and the results of this paper are
further illustrated by a degraded Gaussian fading example.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2017 01:38:00 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2017 17:25:19 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Feb 2018 04:04:13 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Dai",
"Bin",
""
],
[
"Ma",
"Zheng",
""
],
[
"Xiao",
"Ming",
""
],
[
"Tang",
"Xiaohu",
""
],
[
"Fan",
"Pingzhi",
""
]
] |
new_dataset
| 0.986176 |
1707.00421
|
Matthias Grezet
|
Matthias Grezet, Ragnar Freij-Hollanti, Thomas Westerb\"ack and
Camilla Hollanti
|
On Binary Matroid Minors and Applications to Data Storage over Small
Fields
|
14 pages, 2 figures
|
Coding Theory and Applications, 5 ICMCTA (2017). Proceedings, pp.
139-153
|
10.1007/978-3-319-66278-7_13
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Locally repairable codes for distributed storage systems have gained a lot of
interest recently, and various constructions can be found in the literature.
However, most of the constructions result in either large field sizes and hence
too high computational complexity for practical implementation, or in low rates
translating into waste of the available storage space. In this paper we address
this issue by developing theory towards code existence and design over a given
field. This is done via exploiting recently established connections between
linear locally repairable codes and matroids, and using matroid-theoretic
characterisations of linearity over small fields. In particular, nonexistence
can be shown by finding certain forbidden uniform minors within the lattice of
cyclic flats. It is shown that the lattice of cyclic flats of a binary matroid
has additional structure that significantly restricts the possible locality
properties of $\mathbb{F}_{2}$-linear storage codes. Moreover, a collection of
criteria for detecting uniform minors from the lattice of cyclic flats of a
given matroid is given, which is interesting in its own right.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2017 06:47:36 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2018 09:04:49 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Grezet",
"Matthias",
""
],
[
"Freij-Hollanti",
"Ragnar",
""
],
[
"Westerbäck",
"Thomas",
""
],
[
"Hollanti",
"Camilla",
""
]
] |
new_dataset
| 0.996765 |
1801.01665
|
Kiran Garimella
|
Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis,
Michael Mathioudakis
|
Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the
Price of Bipartisanship
|
Published at The Web Conference 2018 (WWW2018). Please cite the WWW
version
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Echo chambers, i.e., situations where one is exposed only to opinions that
agree with their own, are an increasing concern for the political discourse in
many democratic countries. This paper studies the phenomenon of political echo
chambers on social media. We identify the two components in the phenomenon: the
opinion that is shared ('echo'), and the place that allows its exposure
('chamber' --- the social network), and examine closely how these two
components interact. We define a production and consumption measure for
social-media users, which captures the political leaning of the content shared
and received by them. By comparing the two, we find that Twitter users are, to
a large degree, exposed to political opinions that agree with their own. We
also find that users who try to bridge the echo chambers, by sharing content
with diverse leaning, have to pay a 'price of bipartisanship' in terms of their
network centrality and content appreciation. In addition, we study the role of
'gatekeepers', users who consume content with diverse leaning but produce
partisan content (with a single-sided leaning), in the formation of echo
chambers. Finally, we apply these findings to the task of predicting partisans
and gatekeepers from social and content features. While partisan users turn out
to be relatively easy to identify, gatekeepers prove to be more challenging.
|
[
{
"version": "v1",
"created": "Fri, 5 Jan 2018 08:24:55 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2018 11:12:41 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Garimella",
"Kiran",
""
],
[
"Morales",
"Gianmarco De Francisci",
""
],
[
"Gionis",
"Aristides",
""
],
[
"Mathioudakis",
"Michael",
""
]
] |
new_dataset
| 0.999203 |
1801.03650
|
Azat Khusnutdinov
|
Denis Usachev, Azat Khusnutdinov, Manuel Mazzara, Adil Khan, Ivan
Panchenko
|
Open source platform Digital Personal Assistant
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays Digital Personal Assistants (DPAs) are becoming more and more popular.
DPAs help to increase quality of life, especially for elderly or disabled people.
In this paper we develop an open-source DPA and a smart home system as a 3rd-party
extension to show the functionality of the assistant. The system is designed to
use the DPA as a learning platform for engineers, providing them with the
opportunity to create and test their own hypotheses. The DPA is able to
recognize users' commands in natural language and transform them into a set of
machine commands that can be used to control different 3rd-party applications.
We use the smart home system as an example of such a 3rd-party application. We demonstrate that
the system is able to control home appliances, like lights, or to display
information about the current state of the home, like temperature, through a
dialogue between a user and the Digital Personal Assistant.
|
[
{
"version": "v1",
"created": "Thu, 11 Jan 2018 07:43:41 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Feb 2018 18:33:06 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Feb 2018 18:01:02 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Usachev",
"Denis",
""
],
[
"Khusnutdinov",
"Azat",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Khan",
"Adil",
""
],
[
"Panchenko",
"Ivan",
""
]
] |
new_dataset
| 0.985045 |
1802.05022
|
Ali Al-Azzawi Fouad
|
A. F. Al-Azzawi
|
PyFml - a Textual Language For Feature Modeling
|
13 pages, 13 figures, 29 references
|
International Journal of Software Engineering & Applications
(IJSEA), Vol.9, No.1, January 2018
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The feature model is a typical approach to capturing variability in software
product line design and implementation. For that, most works automate the feature
model using a limited graphical notation represented by propositional logic and
implemented in the Prolog or Java programming languages. These works do not
properly combine the extensions of classical feature models and do not provide
the scalability needed to handle large problems. In this work, we propose a
textual feature modeling language based on the Python programming language (PyFML)
that generalizes classical feature models with instance feature
cardinalities and attributes, extended with support for replication and with
complex logical and mathematical cross-tree constraints. The textX
meta-language is used to build PyFML, which describes and organizes feature model
dependencies, and the PyConstraint problem solver is used to implement feature
model variability and the validation of its constraints. The work provides a textual,
human-readable language to represent feature models and maps the feature model
descriptions directly into an object-oriented representation to be used by the
constraint problem solver for computation. Furthermore, the proposed PyFML
makes the notation of feature modeling more expressive for dealing with complex
software product line representations using the PyConstraint problem solver.
|
[
{
"version": "v1",
"created": "Wed, 14 Feb 2018 10:21:51 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Feb 2018 11:48:55 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Al-Azzawi",
"A. F.",
""
]
] |
new_dataset
| 0.960512 |
1802.05219
|
Michael Green
|
Gabriella A. B. Barros, Michael Cerny Green, Antonios Liapis, and
Julian Togelius
|
Who Killed Albert Einstein? From Open Data to Murder Mystery Games
|
11 pages, 6 figures, 2 tables
| null |
10.1109/TG.2018.2806190
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a framework for generating adventure games from open
data. Focusing on the murder mystery type of adventure games, the generator is
able to transform open data from Wikipedia articles, OpenStreetMap and images
from Wikimedia Commons into WikiMysteries. Every WikiMystery game revolves
around the murder of a person with a Wikipedia article and populates the game
with suspects who must be arrested by the player if guilty of the murder or
absolved if innocent. Starting from only one person as the victim, an extensive
generative pipeline finds suspects, their alibis, and paths connecting them
from open data, transforms open data into cities, buildings, non-player
characters, locks and keys and dialog options. The paper describes in detail
each generative step, provides a specific playthrough of one WikiMystery where
Albert Einstein is murdered, and evaluates the outcomes of games generated for
the 100 most influential people of the 20th century.
|
[
{
"version": "v1",
"created": "Wed, 14 Feb 2018 17:17:54 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Barros",
"Gabriella A. B.",
""
],
[
"Green",
"Michael Cerny",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.998194 |
1802.06185
|
Amrith Krishna
|
Vikas Reddy, Amrith Krishna, Vishnu Dutt Sharma, Prateek Gupta,
Vineeth M R, Pawan Goyal
|
Building a Word Segmenter for Sanskrit Overnight
|
The work is accepted at LREC 2018, Miyazaki, Japan
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is an abundance of digitised texts available in Sanskrit. However, the
word segmentation task in such texts is challenging due to the issue of
'Sandhi'. In Sandhi, words in a sentence often fuse together to form a single
chunk of text, where the word delimiter vanishes and sounds at the word
boundaries undergo transformations, which is also reflected in the written
text. Here, we propose an approach that uses a deep sequence to sequence
(seq2seq) model that takes only the sandhied string as the input and predicts
the unsandhied string. The state of the art models are linguistically involved
and have external dependencies for the lexical and morphological analysis of
the input. Our model can be trained "overnight" and be used for production. In
spite of the knowledge-lean approach, our system performs better than the
current state of the art, achieving a 16.79% improvement over it.
|
[
{
"version": "v1",
"created": "Sat, 17 Feb 2018 04:05:36 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Reddy",
"Vikas",
""
],
[
"Krishna",
"Amrith",
""
],
[
"Sharma",
"Vishnu Dutt",
""
],
[
"Gupta",
"Prateek",
""
],
[
"R",
"Vineeth M",
""
],
[
"Goyal",
"Pawan",
""
]
] |
new_dataset
| 0.995765 |
1802.06195
|
Jonti Talukdar
|
Bhavana Mehta, Jonti Talukdar, Sachin Gajjar
|
High Speed SRT Divider for Intelligent Embedded System
|
IEEE Int. Conf. Soft Comp. 17 (5 Pages)
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Increasing development in embedded systems, VLSI and processor design has
given rise to increased demands on the system in terms of power, speed, area,
throughput, etc. Most sophisticated embedded system applications contain
processors, which now need an arithmetic unit with the ability to execute
complex division operations with maximum efficiency. Hence the speed of the
arithmetic unit is critically dependent on the division operation. Most
dividers use the SRT division algorithm. In IoT and other embedded
applications, radix-2 and radix-4 division algorithms are typically used. The
proposed algorithm relies on parallel execution of various steps so as to shorten
the time-critical path, and it uses fuzzy logic to solve the overlap problem in
quotient selection, hence reducing the maximum delay and increasing the accuracy.
Every logical circuit has a maximum delay on which its timing depends, and the
path causing that maximum delay is known as the critical path. Our approach uses
previous SRT algorithm methods to build a highly parallel pipelined design and
uses a Mamdani model to resolve the overlapping problem, reducing the overall
execution time of radix-4 SRT division on 64-bit double-precision floating-point
numbers to 281 ns. The
design is made using Bluespec System Verilog, synthesized and simulated using
Vivado v.2016.1 and implemented on Xilinx Virtex UltraScale FPGA board.
|
[
{
"version": "v1",
"created": "Sat, 17 Feb 2018 05:20:34 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Mehta",
"Bhavana",
""
],
[
"Talukdar",
"Jonti",
""
],
[
"Gajjar",
"Sachin",
""
]
] |
new_dataset
| 0.997894 |
1802.06223
|
Eunjin Oh
|
Eunjin Oh and Luis Barba and Hee-Kap Ahn
|
The Geodesic Farthest-point Voronoi Diagram in a Simple Polygon
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a set of point sites in a simple polygon, the geodesic farthest-point
Voronoi diagram partitions the polygon into cells, at most one cell per site,
such that every point in a cell has the same farthest site with respect to the
geodesic metric. We present an $O(n\log\log n+m\log m)$-time algorithm to
compute the geodesic farthest-point Voronoi diagram of $m$ point sites in a
simple $n$-gon. This improves the previously best known algorithm by Aronov et
al. [Discrete Comput. Geom. 9(3):217-255, 1993]. In the case that all point
sites are on the boundary of the simple polygon, we can compute the geodesic
farthest-point Voronoi diagram in $O((n + m) \log \log n)$ time.
|
[
{
"version": "v1",
"created": "Sat, 17 Feb 2018 11:36:42 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Oh",
"Eunjin",
""
],
[
"Barba",
"Luis",
""
],
[
"Ahn",
"Hee-Kap",
""
]
] |
new_dataset
| 0.993106 |
1802.06224
|
Ali Al-Azzawi Fouad
|
A.F. Al Azzawi, M. Bettaz and H. M. Al-Refai
|
Generating Python Code From Object-Z Specifications
|
12 pages, 3 figures
|
International Journal of Software Engineering & Applications
(IJSEA), Vol.8, No.4, July 2017
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object-Z is an object-oriented specification language which extends the Z
language with classes, objects, inheritance and polymorphism that can be used
to represent the specification of a complex system as collections of objects.
There are a number of existing works that mapped Object-Z to C++ and Java
programming languages. Since Python and Object-Z share many similarities, both
are object-oriented paradigm, support set theory and predicate calculus
moreover, Python is a functional programming language which is naturally closer
to formal specifications, we propose a mapping from Object-Z specifications to
Python code that covers some Object-Z constructs and express its specifications
in Python to validate these specifications. The validations are used in the
mapping covered preconditions, post-conditions, and invariants that are built
using lambda function and Python's decorator. This work has found Python is an
excellent language for developing libraries to map Object-Z specifications to
Python.
|
[
{
"version": "v1",
"created": "Sat, 17 Feb 2018 11:41:24 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Azzawi",
"A. F. Al",
""
],
[
"Bettaz",
"M.",
""
],
[
"Al-Refai",
"H. M.",
""
]
] |
new_dataset
| 0.999293 |
1802.06314
|
Sarah Thornton
|
Sarah Thornton
|
Autonomous Vehicle Speed Control for Safe Navigation of Occluded
Pedestrian Crosswalk
|
6 pages, 9 figures
| null | null | null |
cs.RO cs.AI cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Both humans and the sensors on an autonomous vehicle have limited sensing
capabilities. When these limitations coincide with scenarios involving
vulnerable road users, it becomes important to account for these limitations in
the motion planner. For the scenario of an occluded pedestrian crosswalk, the
speed of the approaching vehicle should be a function of the amount of
uncertainty on the roadway. In this work, the longitudinal controller is
formulated as a partially observable Markov decision process and dynamic
programming is used to compute the control policy. The control policy scales
the speed profile to be used by a model predictive steering controller.
|
[
{
"version": "v1",
"created": "Sun, 18 Feb 2018 00:18:01 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Thornton",
"Sarah",
""
]
] |
new_dataset
| 0.998511 |
1802.06328
|
Peter Clote
|
Amir H. Bayegan and Peter Clote
|
Minimum length RNA folding trajectories
|
38 pages with 26 figures and additional 11 page appendix containing 3
tables and supplementary figures
| null | null | null |
cs.DS q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Kinfold and KFOLD programs for RNA folding kinetics implement the
Gillespie algorithm to generate stochastic folding trajectories from an initial
structure s to a target structure t, in which each intermediate secondary
structure is obtained from its predecessor by the addition, removal or shift of
a single base pair. Define MS2 distance between secondary structures s and t to
be the minimum path length to refold s to t, where a move from MS2 is applied
in each step. We describe algorithms to compute the shortest MS2 folding
trajectory between any two given RNA secondary structures. These algorithms
include an optimal integer programming (IP) algorithm, an accurate and
efficient near-optimal algorithm, a greedy algorithm, a branch-and-bound
algorithm, and an optimal algorithm if one allows intermediate structures to
contain pseudoknots. Our optimal IP [resp. near-optimal IP] algorithm maximizes
[resp. approximately maximizes] the number of shifts and minimizes [resp.
approximately minimizes] the number of base pair additions and removals by
applying integer programming to (essentially) solve the minimum feedback vertex
set (FVS) problem for the RNA conflict digraph, then applies topological sort
to tether subtrajectories into the final optimal folding trajectory. We prove
NP-hardness of the problem to determine the minimum barrier energy over all
possible MS2 folding pathways, and conjecture that computing the MS2 distance
between arbitrary secondary structures is NP-hard. Since our optimal IP
algorithm relies on the FVS, known to be NP-complete for arbitrary digraphs, we
compare the family of RNA conflict digraphs with the following classes of
digraphs (planar, reducible flow graph, Eulerian, and tournament) for which FVS
is known to be either polynomial time computable or NP-hard. Source code
available at http://bioinformatics.bc.edu/clotelab/MS2distance/.
|
[
{
"version": "v1",
"created": "Sun, 18 Feb 2018 03:41:43 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Bayegan",
"Amir H.",
""
],
[
"Clote",
"Peter",
""
]
] |
new_dataset
| 0.99176 |
1802.06392
|
Dimitrios Kanoulas
|
Dimitrios Kanoulas, Jinoh Lee, Darwin G. Caldwell, Nikos G. Tsagarakis
|
Center-of-Mass-Based Grasp Pose Adaptation Using 3D Range and
Force/Torque Sensing
|
25 pages, 10 figures, International Journal of Humanoid Robotics
(IJHR)
|
International Journal of Humanoid Robotics Vol. 15 (2018) 1850013
(25 pages), World Scientific Publishing Company
|
10.1142/S0219843618500135
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifting objects, whose mass may produce high wrist torques that exceed the
hardware strength limits, could lead to unstable grasps or serious robot
damage. This work introduces a new Center-of-Mass (CoM)-based grasp pose
adaptation method, for picking up objects using a combination of exteroceptive
3D perception and proprioceptive force/torque sensor feedback. The method works
in two iterative stages to provide reliable and wrist torque efficient grasps.
Initially, a geometric object CoM is estimated from the input range data. In
the first stage, a set of hand-size handle grasps are localized on the object
and the closest to its CoM is selected for grasping. In the second stage, the
object is lifted using a single arm, while the force and torque readings from
the sensor on the wrist are monitored. Based on these readings, a displacement
to the new CoM estimation is calculated. The object is released and the process
is repeated until the wrist torque effort is minimized. The advantage of our
method is the blending of both exteroceptive (3D range) and proprioceptive
(force/torque) sensing for finding the grasp location that minimizes the wrist
effort, potentially improving the reliability of the grasping and the
subsequent manipulation task. We experimentally validate the proposed method by
executing a number of tests on a set of objects that include handles, using the
humanoid robot WALK-MAN.
|
[
{
"version": "v1",
"created": "Sun, 18 Feb 2018 15:34:32 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Kanoulas",
"Dimitrios",
""
],
[
"Lee",
"Jinoh",
""
],
[
"Caldwell",
"Darwin G.",
""
],
[
"Tsagarakis",
"Nikos G.",
""
]
] |
new_dataset
| 0.996705 |
1802.06446
|
Jakob Weiss
|
Jakob Weiss, Nicola Rieke, Mohammad Ali Nasseri, Mathias Maier,
Abouzar Eslami, Nassir Navab
|
Fast 5DOF Needle Tracking in iOCT
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose. Intraoperative Optical Coherence Tomography (iOCT) is an
increasingly available imaging technique for ophthalmic microsurgery that
provides high-resolution cross-sectional information of the surgical scene. We
propose to build on its desirable qualities and present a method for tracking
the orientation and location of a surgical needle. Thereby, we enable direct
analysis of instrument-tissue interaction directly in OCT space without complex
multimodal calibration that would be required with traditional instrument
tracking methods. Method. The intersection of the needle with the iOCT scan is
detected by a peculiar multi-step ellipse fitting that takes advantage of the
directionality of the modality. The geometric modelling allows us to use the
ellipse parameters and feed them into a latency-aware estimator to infer the
5DOF pose during needle movement. Results. Experiments on phantom data and
ex-vivo porcine eyes indicate that the algorithm retains angular precision
especially during lateral needle movement and provides a more robust and
consistent estimation than baseline methods. Conclusion. Using solely
cross-sectional iOCT information, we are able to successfully and robustly
estimate a 5DOF pose of the instrument in less than 5.5 ms on a CPU.
|
[
{
"version": "v1",
"created": "Sun, 18 Feb 2018 21:15:54 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Weiss",
"Jakob",
""
],
[
"Rieke",
"Nicola",
""
],
[
"Nasseri",
"Mohammad Ali",
""
],
[
"Maier",
"Mathias",
""
],
[
"Eslami",
"Abouzar",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.988033 |
1802.06488
|
Alexander Wong
|
Alexander Wong, Mohammad Javad Shafiee, Francis Li, Brendan Chwyl
|
Tiny SSD: A Tiny Single-shot Detection Deep Convolutional Neural Network
for Real-time Embedded Object Detection
|
7 pages
| null | null | null |
cs.CV cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection is a major challenge in computer vision, involving both
object classification and object localization within a scene. While deep neural
networks have been shown in recent years to yield very powerful techniques for
tackling the challenge of object detection, one of the biggest challenges with
enabling such object detection networks for widespread deployment on embedded
devices is high computational and memory requirements. Recently, there has been
an increasing focus in exploring small deep neural network architectures for
object detection that are more suitable for embedded devices, such as Tiny YOLO
and SqueezeDet. Inspired by the efficiency of the Fire microarchitecture
introduced in SqueezeNet and the object detection performance of the
single-shot detection macroarchitecture introduced in SSD, this paper
introduces Tiny SSD, a single-shot detection deep convolutional neural network
for real-time embedded object detection that is composed of a highly optimized,
non-uniform Fire sub-network stack and a non-uniform sub-network stack of
highly optimized SSD-based auxiliary convolutional feature layers designed
specifically to minimize model size while maintaining object detection
performance. The resulting Tiny SSD possesses a model size of 2.3MB (~26X smaller
than Tiny YOLO) while still achieving an mAP of 61.3% on VOC 2007 (~4.2% higher
than Tiny YOLO). These experimental results show that very small deep neural
network architectures can be designed for real-time object detection that are
well-suited for embedded scenarios.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 01:57:46 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Wong",
"Alexander",
""
],
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Li",
"Francis",
""
],
[
"Chwyl",
"Brendan",
""
]
] |
new_dataset
| 0.986285 |
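The Tiny SSD record above builds on the Fire microarchitecture introduced in SqueezeNet. The PyTorch sketch below shows a standard Fire module of that kind for illustration only; the channel widths are placeholder values, since the actual Tiny SSD uses a highly optimized, non-uniform stack rather than these numbers.

```python
# Minimal SqueezeNet-style Fire module of the kind Tiny SSD builds on.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))            # 1x1 bottleneck reduces channels
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)  # channel concat

# Example: 64 input channels -> 16 squeeze -> 64 + 64 = 128 output channels.
fire = Fire(64, 16, 64, 64)
out = fire(torch.randn(1, 64, 38, 38))
print(out.shape)   # torch.Size([1, 128, 38, 38])
```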
1802.06624
|
Dian Pratiwi
|
Putri Kurniasih, Dian Pratiwi
|
Osteoarthritis Disease Detection System using Self Organizing Maps
Method based on Ossa Manus X-Ray
|
6 pages, 12 figures, 1 table
|
International Journal of Computer Applications, Foundation of
Computer Science (FCS), NY, USA. Volume 173 - Number 3, 2017
|
10.5120/ijca2017915278
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Osteoarthritis is a disease found throughout the world, including in Indonesia.
The purpose of this study was to detect osteoarthritis using the Self-Organizing
Map (SOM) method and to describe the artificial intelligence procedure behind it.
The system comprises several stages for detecting osteoarthritis with SOM: X-ray
(rontgen) images of the Ossa Manus, both normal and diseased, with a resolution
of 150 x 200 pixels undergo contrast repair, grayscale conversion, thresholding
and histogram processing, and the resulting data, stored as text files (.text),
are then used in the final stage of training and testing. Testing used 42 images
in total, of which 12 were normal and 30 diseased. In the training process, 8
X-ray images were correctly identified as normal and 19 X-ray images were
correctly identified as diseased, giving a training accuracy of 96.42%. In the
testing process, 4 images were correctly identified as normal, 9 were correctly
identified as diseased and 1 diseased image was identified incorrectly, so the
accuracy obtained from the testing results is 92.8%.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 13:43:05 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Kurniasih",
"Putri",
""
],
[
"Pratiwi",
"Dian",
""
]
] |
new_dataset
| 0.996345 |
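The record above classifies Ossa Manus X-ray features with a Self-Organizing Map. The NumPy sketch below shows only the generic SOM (Kohonen) training step, assuming feature vectors have already been extracted; the grid size, learning rate and neighbourhood radius are illustrative choices, and the paper's image preprocessing pipeline is not reproduced.

```python
# Generic Self-Organizing Map training loop in NumPy (sketch only).
import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, radius0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    dim = data.shape[1]
    weights = rng.random((n_rows, n_cols, dim))
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                  indexing="ij"), axis=-1)          # node grid coordinates
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        radius = radius0 * np.exp(-t / epochs)
        for x in data:
            dists = np.linalg.norm(weights - x, axis=-1)            # distance to every node
            bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best matching unit
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * influence[..., None] * (x - weights)    # pull nodes toward x
    return weights

# Toy usage: 42 histogram-like feature vectors of length 16 (placeholder data).
features = np.random.default_rng(1).random((42, 16))
som = train_som(features)
print(som.shape)   # (10, 10, 16)
```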
1802.06651
|
Domenico Sacca'
|
Domenico Sacca' and Angelo Furfaro
|
CalcuList: a Functional Language Extended with Imperative Features
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CalcuList (Calculator with List manipulation) is an educational language for
teaching functional programming extended with some imperative and side-effect
features, which are enabled under explicit request by the programmer. In
addition to strings and lists, the language natively supports json objects. The
language adopts a Python-like syntax and enables interactive computation
sessions with the user through a REPL (Read-Evaluate-Print-Loop) shell. The
object code produced by a compilation is a program that will be eventually
executed by the CalcuList Virtual Machine (CLVM).
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 14:42:34 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Sacca'",
"Domenico",
""
],
[
"Furfaro",
"Angelo",
""
]
] |
new_dataset
| 0.974627 |
1802.06691
|
Mario Werner
|
Mario Werner, Thomas Unterluggauer, David Schaffenrath and Stefan
Mangard
|
Sponge-Based Control-Flow Protection for IoT Devices
|
accepted at IEEE EuroS&P 2018
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embedded devices in the Internet of Things (IoT) face a wide variety of
security challenges. For example, software attackers perform code injection and
code-reuse attacks on their remote interfaces, and physical access to IoT
devices allows to tamper with code in memory, steal confidential Intellectual
Property (IP), or mount fault attacks to manipulate a CPU's control flow.
In this work, we present Sponge-based Control Flow Protection (SCFP). SCFP is
a stateful, sponge-based scheme to ensure the confidentiality of software IP
and its authentic execution on IoT devices. At compile time, SCFP encrypts and
authenticates software with instruction-level granularity. During execution, an
SCFP hardware extension between the CPU's fetch and decode stage continuously
decrypts and authenticates instructions. Sponge-based authenticated encryption
in SCFP yields fine-grained control-flow integrity and thus prevents
code-reuse, code-injection, and fault attacks on the code and the control flow.
In addition, SCFP withstands any modification of software in memory. For
evaluation, we extended a RISC-V core with SCFP and fabricated a real System on
Chip (SoC). The average overhead in code size and execution time of SCFP on
this design is 19.8% and 9.1%, respectively, and thus meets the requirements of
embedded IoT devices.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 16:28:48 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Werner",
"Mario",
""
],
[
"Unterluggauer",
"Thomas",
""
],
[
"Schaffenrath",
"David",
""
],
[
"Mangard",
"Stefan",
""
]
] |
new_dataset
| 0.999562 |
1802.06708
|
Luca Pedrelli
|
Claudio Gallicchio, Alessio Micheli, Luca Pedrelli
|
Deep Echo State Networks for Diagnosis of Parkinson's Disease
|
This is a pre-print of the paper submitted to the European Symposium
on Artificial Neural Networks, Computational Intelligence and Machine
Learning, ESANN 2018
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a novel approach for diagnosis of Parkinson's
Disease (PD) based on deep Echo State Networks (ESNs). The identification of PD
is performed by analyzing the whole time-series collected from a tablet device
during the sketching of spiral tests, without the need for feature extraction
and data preprocessing. We evaluated the proposed approach on a public dataset
of spiral tests. The results of the experimental analysis show that DeepESNs
perform significantly better than shallow ESN models. Overall, the proposed
approach obtains state-of-the-art results in the identification of PD on this
kind of temporal data.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 17:10:52 GMT"
}
] | 2018-02-20T00:00:00 |
[
[
"Gallicchio",
"Claudio",
""
],
[
"Micheli",
"Alessio",
""
],
[
"Pedrelli",
"Luca",
""
]
] |
new_dataset
| 0.989295 |
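The DeepESN record above stacks multiple reservoirs; the NumPy sketch below shows only a single-reservoir Echo State Network, i.e. the basic state update x(t) = tanh(W_in u(t) + W x(t-1)) with a ridge-regression readout. The reservoir size, spectral radius and regularization are assumed values, not the paper's settings.

```python
# Minimal single-reservoir Echo State Network in NumPy (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 200
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius to 0.9

def run_reservoir(U):
    """U: (T, n_in) input time series -> (T, n_res) reservoir states."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)              # reservoir state update
        states.append(x)
    return np.array(states)

# Readout trained with ridge regression on the collected states.
U_train = rng.standard_normal((500, n_in))
y_train = rng.standard_normal(500)                 # placeholder targets
X = run_reservoir(U_train)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_train)
print((X @ W_out).shape)                           # readout predictions, shape (500,)
```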
1705.03202
|
Ruobing Xie
|
Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin
|
Does William Shakespeare REALLY Write Hamlet? Knowledge Representation
Learning with Confidence
|
8 pages
|
AAAI-2018
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge graphs (KGs), which could provide essential relational information
between entities, have been widely utilized in various knowledge-driven
applications. Since the overall human knowledge is innumerable that still grows
explosively and changes frequently, knowledge construction and update
inevitably involve automatic mechanisms with less human supervision, which
usually bring in plenty of noises and conflicts to KGs. However, most
conventional knowledge representation learning methods assume that all triple
facts in existing KGs share the same significance without any noises. To
address this problem, we propose a novel confidence-aware knowledge
representation learning framework (CKRL), which detects possible noises in KGs
while learning knowledge representations with confidence simultaneously.
Specifically, we introduce the triple confidence to conventional
translation-based methods for knowledge representation learning. To make triple
confidence more flexible and universal, we only utilize the internal structural
information in KGs, and propose three kinds of triple confidences considering
both local and global structural information. In experiments, we evaluate our
models on knowledge graph noise detection, knowledge graph completion and
triple classification. Experimental results demonstrate that our
confidence-aware models achieve significant and consistent improvements on all
tasks, which confirms the capability of CKRL modeling confidence with
structural information in both KG noise detection and knowledge representation
learning.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 06:46:21 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Feb 2018 16:15:36 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Xie",
"Ruobing",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Lin",
"Fen",
""
],
[
"Lin",
"Leyu",
""
]
] |
new_dataset
| 0.993725 |
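CKRL, described above, extends translation-based models such as TransE with per-triple confidences. The sketch below shows a TransE-style score weighted by a given confidence inside a margin loss; the confidence value, margin and random embeddings are placeholders, and the paper's structural confidence terms are not implemented here.

```python
# Confidence-weighted translation-based (TransE-style) objective, as a toy sketch.
import numpy as np

def transe_score(h, r, t):
    """Smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

def confidence_weighted_margin_loss(pos, neg, conf, margin=1.0):
    """pos, neg: (h, r, t) embedding triples; conf: confidence of the positive triple."""
    return conf * max(0.0, margin + transe_score(*pos) - transe_score(*neg))

rng = np.random.default_rng(0)
emb = lambda: rng.standard_normal(50) / np.sqrt(50)     # random 50-d embeddings
h, r, t, t_corrupt = emb(), emb(), emb(), emb()
print(confidence_weighted_margin_loss((h, r, t), (h, r, t_corrupt), conf=0.8))
```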
1712.05884
|
Jonathan Shen
|
Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep
Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan,
Rif A. Saurous, Yannis Agiomyrgiannakis, Yonghui Wu
|
Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram
Predictions
|
Accepted to ICASSP 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes Tacotron 2, a neural network architecture for speech
synthesis directly from text. The system is composed of a recurrent
sequence-to-sequence feature prediction network that maps character embeddings
to mel-scale spectrograms, followed by a modified WaveNet model acting as a
vocoder to synthesize time-domain waveforms from those spectrograms. Our model
achieves a mean opinion score (MOS) of $4.53$ comparable to a MOS of $4.58$ for
professionally recorded speech. To validate our design choices, we present
ablation studies of key components of our system and evaluate the impact of
using mel spectrograms as the input to WaveNet instead of linguistic, duration,
and $F_0$ features. We further demonstrate that using a compact acoustic
intermediate representation enables significant simplification of the WaveNet
architecture.
|
[
{
"version": "v1",
"created": "Sat, 16 Dec 2017 00:51:40 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Feb 2018 01:28:23 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Shen",
"Jonathan",
""
],
[
"Pang",
"Ruoming",
""
],
[
"Weiss",
"Ron J.",
""
],
[
"Schuster",
"Mike",
""
],
[
"Jaitly",
"Navdeep",
""
],
[
"Yang",
"Zongheng",
""
],
[
"Chen",
"Zhifeng",
""
],
[
"Zhang",
"Yu",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Skerry-Ryan",
"RJ",
""
],
[
"Saurous",
"Rif A.",
""
],
[
"Agiomyrgiannakis",
"Yannis",
""
],
[
"Wu",
"Yonghui",
""
]
] |
new_dataset
| 0.986562 |
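Tacotron 2, summarized above, conditions a WaveNet vocoder on predicted mel spectrograms. The librosa-based sketch below shows how ground-truth mel-spectrogram targets of that kind can be extracted from audio; the frame, hop and filterbank sizes are common defaults and not necessarily the exact values used in the paper, and the file name is hypothetical.

```python
# Extracting log-mel spectrogram targets of the kind Tacotron 2 predicts.
import numpy as np
import librosa

def mel_targets(wav_path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return np.log(np.clip(mel, 1e-5, None))      # log-compressed mel spectrogram

# mel = mel_targets("sample.wav")   # shape: (80, n_frames); path is a placeholder
```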
1802.02605
|
Jean-Fran\c{c}ois Delpech
|
Jean-Fran\c{c}ois Delpech
|
Unsupervised word sense disambiguation in dynamic semantic spaces
|
7 pages, 1 table, 5 examples
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we are mainly concerned with the ability to quickly and
automatically distinguish word senses in dynamic semantic spaces in which new
terms and new senses appear frequently. Such spaces are built "on the fly"
from constantly evolving data sets such as Wikipedia, repositories of patent
grants and applications, or large sets of legal documents for Technology
Assisted Review and e-discovery. This immediacy rules out supervision as well
as the use of a priori training sets. We show that the various senses of a term
can be automatically made apparent with a simple clustering algorithm, each
sense being a vector in the semantic space. While we only consider here
semantic spaces built by using random vectors, this algorithm should work with
any kind of embedding, provided meaningful similarities between terms can be
computed and fulfill at least the two basic conditions that terms with
close meanings have high similarities and terms with unrelated meanings have
near-zero similarities.
|
[
{
"version": "v1",
"created": "Wed, 7 Feb 2018 19:27:27 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Feb 2018 13:58:10 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Delpech",
"Jean-François",
""
]
] |
new_dataset
| 0.971896 |
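The record above induces word senses by clustering occurrence vectors in a semantic space built from random vectors. The sketch below illustrates only the clustering step, using synthetic context vectors and scikit-learn's k-means as stand-ins for the paper's semantic space and clustering algorithm.

```python
# Sense induction by clustering context vectors of a target term (toy sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row is the context vector of one occurrence of an ambiguous term,
# drawn from two underlying senses (synthetic data).
sense_a = rng.normal(loc=+1.0, scale=0.3, size=(50, 300))
sense_b = rng.normal(loc=-1.0, scale=0.3, size=(50, 300))
occurrences = np.vstack([sense_a, sense_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(occurrences)
sense_vectors = kmeans.cluster_centers_          # one vector per induced sense
print(sense_vectors.shape)                       # (2, 300)
```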
1802.05735
|
Seyed Ali Cheraghi
|
Seyed Ali Cheraghi, Vinod Namboodiri, Kaushik Sinha
|
IBeaconMap: Automated Indoor Space Representation for Beacon-Based
Wayfinding
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditionally, there have been few options for navigational aids for the
blind and visually impaired (BVI) in large indoor spaces. Some recent indoor
navigation systems allow users equipped with smartphones to interact with
low-cost Bluetooth-based beacons deployed strategically within the indoor space of
interest to navigate their surroundings. A major challenge in deploying such
beacon-based navigation systems is the need to employ a time and
labor-expensive beacon planning process to identify potential beacon placement
locations and arrive at a topological structure representing the indoor space.
This work presents a technique called IBeaconMap for creating such topological
structures to use with beacon-based navigation that only needs the floor plans
of the indoor spaces of interest. IBeaconMap employs a combination of computer
vision and machine learning techniques to arrive at the required set of beacon
locations and a weighted connectivity graph (with directional orientations) for
subsequent navigational needs. Evaluations show IBeaconMap to be both fast and
reasonably accurate, potentially proving to be an essential tool to be utilized
before mass deployments of beacon-based indoor wayfinding systems of the
future.
|
[
{
"version": "v1",
"created": "Thu, 15 Feb 2018 19:58:17 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Cheraghi",
"Seyed Ali",
""
],
[
"Namboodiri",
"Vinod",
""
],
[
"Sinha",
"Kaushik",
""
]
] |
new_dataset
| 0.999187 |
1802.05737
|
Kamal Sarkar
|
Kamal Sarkar
|
JU_KS@SAIL_CodeMixed-2017: Sentiment Analysis for Indian Code Mixed
Social Media Texts
|
NLP Tool Contest on Sentiment Analysis for Indian Languages (Code
Mixed) held in conjunction with the 14th International Conference on Natural
Language Processing, 2017
|
Kamal Sarkar, JU_KS@SAIL_CodeMixed-2017: Sentiment Analysis for
Indian Code Mixed Social Media Texts, NLP Tool Contest@the 14th International
Conference on Natural Language Processing, 2017
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports on our work in the NLP Tool Contest @ICON-2017, shared
task on Sentiment Analysis for Indian Languages (SAIL) (code mixed). To
implement our system, we have used a machine learning algorithm called
Multinomial Na\"ive Bayes trained using n-gram and SentiWordnet features. We
have also used a small SentiWordnet for English and a small SentiWordnet for
Bengali. But we have not used any SentiWordnet for Hindi language. We have
tested our system on Hindi-English and Bengali-English code mixed social media
data sets released for the contest. The performance of our system is very close
to the best system participated in the contest. For both Bengali-English and
Hindi-English runs, our system was ranked at the 3rd position out of all
submitted runs and awarded the 3rd prize in the contest.
|
[
{
"version": "v1",
"created": "Thu, 15 Feb 2018 20:02:43 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Sarkar",
"Kamal",
""
]
] |
new_dataset
| 0.998533 |
1802.05802
|
Zhuoqun Cheng
|
Zhuoqun Cheng, Richard West, Craig Einstein
|
End-to-end Analysis and Design of a Drone Flight Controller
| null | null | null | null |
cs.SY cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Timing guarantees are crucial to cyber-physical applications that must bound
the end-to-end delay between sensing, processing and actuation. For example, in
a flight controller for a multirotor drone, the data from a gyro or inertial
sensor must be gathered and processed to determine the attitude of the
aircraft. Sensor data fusion is followed by control decisions that adjust the
flight of a drone by altering motor speeds. If the processing pipeline between
sensor input and actuation is not bounded, the drone will lose control and
possibly fail to maintain flight.
Motivated by the implementation of a multithreaded drone flight controller on
the Quest RTOS, we develop a composable pipe model based on the system's task,
scheduling and communication abstractions. This pipe model is used to analyze
two semantics of end-to-end time: reaction time and freshness time. We also
argue that end-to-end timing properties should be factored in at the early
stage of application design. Thus, we provide a mathematical framework to
derive feasible task periods that satisfy both a given set of end-to-end timing
constraints and the schedulability requirement. We demonstrate the
applicability of our design approach by using it to port the Cleanflight flight
controller firmware to Quest on the Intel Aero board. Experiments show that
Cleanflight ported to Quest is able to achieve end-to-end latencies within the
predicted time bounds derived by analysis.
|
[
{
"version": "v1",
"created": "Thu, 15 Feb 2018 23:38:27 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Cheng",
"Zhuoqun",
""
],
[
"West",
"Richard",
""
],
[
"Einstein",
"Craig",
""
]
] |
new_dataset
| 0.996448 |
1802.05839
|
Michel M\"uller
|
Michel M\"uller, Takayuki Aoki
|
New High Performance GPGPU Code Transformation Framework Applied to
Large Production Weather Prediction Code
|
Preprint as accepted for ACM TOPC
| null | null | null |
cs.DC physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce "Hybrid Fortran", a new approach that allows a high performance
GPGPU port for structured grid Fortran codes. This technique only requires
minimal changes for a CPU targeted codebase, which is a significant advancement
in terms of productivity. It has been successfully applied to both dynamical
core and physical processes of ASUCA, a Japanese mesoscale weather prediction
model with more than 150k lines of code. By means of a minimal weather
application that resembles ASUCA's code structure, Hybrid Fortran is compared
to both a performance model as well as today's commonly used method, OpenACC.
As a result, the Hybrid Fortran implementation is shown to deliver the same or
better performance than OpenACC and its performance agrees with the model both
on CPU and GPU. In a full scale production run, using an ASUCA grid with 1581 x
1301 x 58 cells and real-world weather data at 2 km resolution, 24 NVIDIA Tesla
P100 GPUs running the Hybrid Fortran based GPU port are shown to replace more than
50 18-core Intel Xeon Broadwell E5-2695 v4 CPUs running the reference implementation
- an achievement comparable to more invasive GPGPU rewrites of other weather
models.
|
[
{
"version": "v1",
"created": "Fri, 16 Feb 2018 05:29:38 GMT"
}
] | 2018-02-19T00:00:00 |
[
[
"Müller",
"Michel",
""
],
[
"Aoki",
"Takayuki",
""
]
] |
new_dataset
| 0.999315 |