id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1703.07544
|
Ayan Mahalanobis
|
Ayan Mahalanobis and Vivek Mallick
|
A Las Vegas algorithm to solve the elliptic curve discrete logarithm
problem
| null | null |
10.13140/RG.2.2.22661.86249
| null |
cs.CR cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe a new Las Vegas algorithm to solve the elliptic
curve discrete logarithm problem. The algorithm depends on a property of the
group of rational points of an elliptic curve and is thus not a generic
algorithm. The algorithm that we describe has some similarities with the most
powerful index-calculus algorithm for the discrete logarithm problem over a
finite field.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2017 07:03:29 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2017 11:09:59 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Feb 2018 10:27:33 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Mahalanobis",
"Ayan",
""
],
[
"Mallick",
"Vivek",
""
]
] |
new_dataset
| 0.997862 |
1708.04567
|
Zhen Xu
|
Zhen Xu, Xuhao Chen, Jie Shen, Yang Zhang, Cheng Chen, and Canqun Yang
|
GARDENIA: A Domain-specific Benchmark Suite for Next-generation
Accelerators
|
12 pages, 5 figures, journal
| null | null | null |
cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the Graph Analytics Repository for Designing
Next-generation Accelerators (GARDENIA), a benchmark suite for studying
irregular algorithms on massively parallel accelerators. Existing generic
benchmarks for accelerators have mainly focused on high performance computing
(HPC) applications with limited control and data irregularity, while available
graph analytics benchmarks do not apply state-of-the-art algorithms and/or
optimization techniques. GARDENIA includes emerging irregular applications in
big-data and machine learning domains which mimic massively multithreaded
commercial programs running on modern large-scale datacenters. Our
characterization shows that GARDENIA exhibits irregular microarchitectural
behavior which is quite different from that of structured workloads and
straightforwardly implemented graph benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2017 15:54:19 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jan 2018 19:31:55 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Jan 2018 20:32:55 GMT"
},
{
"version": "v4",
"created": "Sat, 3 Feb 2018 12:26:15 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Xu",
"Zhen",
""
],
[
"Chen",
"Xuhao",
""
],
[
"Shen",
"Jie",
""
],
[
"Zhang",
"Yang",
""
],
[
"Chen",
"Cheng",
""
],
[
"Yang",
"Canqun",
""
]
] |
new_dataset
| 0.999249 |
1710.01968
|
Sebastian Schlag
|
Robin Andre, Sebastian Schlag and Christian Schulz
|
Memetic Multilevel Hypergraph Partitioning
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hypergraph partitioning has a wide range of important applications such as
VLSI design or scientific computing. With a focus on solution quality, we develop
the first multilevel memetic algorithm to tackle the problem. Key components of
our contribution are new effective multilevel recombination and mutation
operations that provide a large amount of diversity. We perform a wide range of
experiments on a benchmark set containing instances from application areas such
as VLSI, SAT solving, social networks, and scientific computing. Compared to the
state-of-the-art hypergraph partitioning tools hMetis, PaToH, and KaHyPar, our
new algorithm computes the best result on almost all instances.
|
[
{
"version": "v1",
"created": "Thu, 5 Oct 2017 11:20:45 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Feb 2018 12:37:55 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Andre",
"Robin",
""
],
[
"Schlag",
"Sebastian",
""
],
[
"Schulz",
"Christian",
""
]
] |
new_dataset
| 0.964067 |
1711.03016
|
Lane Schwartz
|
Richard Wei, Lane Schwartz, Vikram Adve
|
DLVM: A modern compiler infrastructure for deep learning systems
| null | null | null | null |
cs.PL cs.LG cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning software demands reliability and performance. However, many of
the existing deep learning frameworks are software libraries that act as an
unsafe DSL in Python and a computation graph interpreter. We present DLVM, a
design and implementation of a compiler infrastructure with a linear algebra
intermediate representation, algorithmic differentiation by adjoint code
generation, domain-specific optimizations and a code generator targeting GPU
via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM
is more modular and more generic than existing deep learning compiler
frameworks, and supports tensor DSLs with high expressivity. With our
prototypical staged DSL embedded in Swift, we argue that the DLVM system
enables modular, safe and performant frameworks for deep learning.
|
[
{
"version": "v1",
"created": "Wed, 8 Nov 2017 15:33:23 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 14:47:33 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Dec 2017 01:55:59 GMT"
},
{
"version": "v4",
"created": "Mon, 11 Dec 2017 21:49:48 GMT"
},
{
"version": "v5",
"created": "Fri, 2 Feb 2018 21:07:25 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Wei",
"Richard",
""
],
[
"Schwartz",
"Lane",
""
],
[
"Adve",
"Vikram",
""
]
] |
new_dataset
| 0.999717 |
1801.02073
|
Tomasz Jurczyk
|
Tomasz Jurczyk, Amit Deshmane, Jinho D. Choi
|
Analysis of Wikipedia-based Corpora for Question Answering
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper gives comprehensive analyses of corpora based on Wikipedia for
several tasks in question answering. Four recent corpora are collected: WikiQA,
SelQA, SQuAD, and InfoQA, and first analyzed intrinsically by contextual
similarities, question types, and answer categories. These corpora are then
analyzed extrinsically by three question answering tasks, answer retrieval,
selection, and triggering. An indexing-based method for the creation of a
silver-standard dataset for answer retrieval using the entire Wikipedia is also
presented. Our analysis shows the uniqueness of these corpora and suggests a
better use of them for statistical question answering learning.
|
[
{
"version": "v1",
"created": "Sat, 6 Jan 2018 19:28:15 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Feb 2018 07:41:32 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Jurczyk",
"Tomasz",
""
],
[
"Deshmane",
"Amit",
""
],
[
"Choi",
"Jinho D.",
""
]
] |
new_dataset
| 0.994225 |
1801.06734
|
Yixuan Zhang
|
Zhengyuan Yang, Yixuan Zhang, Jerry Yu, Junjie Cai, Jiebo Luo
|
End-to-end Multi-Modal Multi-Task Vehicle Control for Self-Driving Cars
with Visual Perception
|
6 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNN) have been successfully applied to
autonomous driving tasks, many in an end-to-end manner. Previous end-to-end
steering control methods take an image or an image sequence as the input and
directly predict the steering angle with CNN. Although single task learning on
steering angles has reported good performance, the steering angle alone is not
sufficient for vehicle control. In this work, we propose a multi-task learning
framework to predict the steering angle and speed control simultaneously in an
end-to-end manner. Since it is nontrivial to predict accurate speed values with
only visual inputs, we first propose a network to predict discrete speed
commands and steering angles with image sequences. Moreover, we propose a
multi-modal multi-task network to predict speed values and steering angles by
taking previous feedback speeds and visual recordings as inputs. Experiments
are conducted on the public Udacity dataset and a newly collected SAIC dataset.
Results show that the proposed model predicts steering angles and speed values
accurately. Furthermore, we improve the failure data synthesis methods to solve
the problem of error accumulation in real road tests.
|
[
{
"version": "v1",
"created": "Sat, 20 Jan 2018 21:59:08 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Feb 2018 22:13:57 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Yang",
"Zhengyuan",
""
],
[
"Zhang",
"Yixuan",
""
],
[
"Yu",
"Jerry",
""
],
[
"Cai",
"Junjie",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.998258 |
1801.07965
|
Sarah Azouvi
|
Sarah Azouvi, Patrick McCorry and Sarah Meiklejohn
|
Winning the Caucus Race: Continuous Leader Election via Public
Randomness
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consensus protocols inherently rely on the notion of leader election, in
which one or a subset of participants are temporarily elected to authorize and
announce the network's latest state. While leader election is a well-studied
problem, the rise of distributed ledgers (i.e., blockchains) has led to a new
perspective on how to perform large-scale leader elections via solving a
computationally difficult puzzle (i.e., proof of work). In this paper, we
present Caucus, a large-scale leader election protocol with minimal
coordination costs that does not require the computational cost of
proof-of-work. We evaluate Caucus in terms of its security, using a new model
for blockchain-focused leader election, before testing an implementation of
Caucus on an Ethereum private network. Our experiments highlight that one
variant of Caucus costs only $0.10 per leader election if deployed on Ethereum.
|
[
{
"version": "v1",
"created": "Wed, 24 Jan 2018 12:47:37 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jan 2018 12:09:39 GMT"
},
{
"version": "v3",
"created": "Sun, 4 Feb 2018 17:33:54 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Azouvi",
"Sarah",
""
],
[
"McCorry",
"Patrick",
""
],
[
"Meiklejohn",
"Sarah",
""
]
] |
new_dataset
| 0.991828 |
1802.00048
|
Damien Anderson Mr
|
Damien Anderson, Matthew Stephenson, Julian Togelius, Christian Salge,
John Levine and Jochen Renz
|
Deceptive Games
|
16 pages, accepted at EvoStar2018
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deceptive games are games where the reward structure or other aspects of the
game are designed to lead the agent away from a globally optimal policy. While
many games are already deceptive to some extent, we designed a series of games
in the Video Game Description Language (VGDL) implementing specific types of
deception, classified by the cognitive biases they exploit. VGDL games can be
run in the General Video Game Artificial Intelligence (GVGAI) Framework, making
it possible to test a variety of existing AI agents that have been submitted to
the GVGAI Competition on these deceptive games. Our results show that all
tested agents are vulnerable to several kinds of deception, but that different
agents have different weaknesses. This suggests that we can use deception to
understand the capabilities of a game-playing algorithm, and game-playing
algorithms to characterize the deception displayed by a game.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 20:06:05 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Feb 2018 23:12:14 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Anderson",
"Damien",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Togelius",
"Julian",
""
],
[
"Salge",
"Christian",
""
],
[
"Levine",
"John",
""
],
[
"Renz",
"Jochen",
""
]
] |
new_dataset
| 0.999305 |
1802.00478
|
Paul Wild
|
Paul Wild, Lutz Schr\"oder, Dirk Pattinson, Barbara K\"onig
|
A van Benthem Theorem for Fuzzy Modal Logic
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a fuzzy (or quantitative) version of the van Benthem theorem,
which characterizes propositional modal logic as the bisimulation-invariant
fragment of first-order logic. Specifically, we consider a first-order fuzzy
predicate logic along with its modal fragment, and show that the fuzzy
first-order formulas that are non-expansive w.r.t. the natural notion of
bisimulation distance are exactly those that can be approximated by fuzzy modal
formulas.
|
[
{
"version": "v1",
"created": "Thu, 1 Feb 2018 20:35:06 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Feb 2018 14:23:33 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Wild",
"Paul",
""
],
[
"Schröder",
"Lutz",
""
],
[
"Pattinson",
"Dirk",
""
],
[
"König",
"Barbara",
""
]
] |
new_dataset
| 0.952483 |
1802.00893
|
Yuhua Zhang
|
Xiaofei Wang, Yuhua Zhang, Victor C. M. Leung, Nadra Guizani, Tianpeng
Jiang
|
D2D Big Data: Content Deliveries over Wireless Device-to-Device Sharing
in Large Scale Mobile Networks
|
13 pages, 6 figures, IEEE Wireless Communications Magazine
|
IEEE Wireless Commun., vol. 25, no.1, pp. 1-10, 2018
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the topic of how to effectively offload cellular traffic onto
device-to-device (D2D) sharing among users in proximity has been gaining
increasing attention from researchers and engineers worldwide. Users utilize
wireless short-range D2D communications for sharing content locally, owing not
only to the fast sharing experience and zero cost, but also to the high
accuracy of deliveries of interesting and popular content, as well as the
strong social impact among friends. Nevertheless, the existing related studies are mostly
confined to small-scale datasets, limited dimensions of user features, or
unrealistic assumptions and hypotheses on user behaviors. In this article,
driven by emerging Big Data techniques, we propose to design a big data
platform, named D2D Big Data, in order to encourage the wireless D2D
communications among users effectively, to promote contents for providers
accurately, and to carry out offloading intelligence for operators efficiently.
We deploy a big data platform and further utilize a large-scale dataset (3.56
TBytes) from a popular D2D sharing application (APP), which contains 866
million D2D sharing activities on 4.5 million files disseminated via nearly 850
million users in 13 weeks. By abstracting and analyzing multidimensional
features, including online behaviors, content properties, location relations,
structural characteristics, meeting dynamics, social arborescence, privacy
preservation policies and so on, we verify and evaluate the D2D Big Data
platform in terms of predicted content propagation coverage. Finally, we
discuss challenges and opportunities for D2D Big Data and outline a promising
future for wireless D2D communications.
|
[
{
"version": "v1",
"created": "Sat, 3 Feb 2018 01:53:24 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Wang",
"Xiaofei",
""
],
[
"Zhang",
"Yuhua",
""
],
[
"Leung",
"Victor C. M.",
""
],
[
"Guizani",
"Nadra",
""
],
[
"Jiang",
"Tianpeng",
""
]
] |
new_dataset
| 0.999509 |
1802.00923
|
Paul Pu Liang
|
Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria,
Louis-Philippe Morency
|
Multi-attention Recurrent Network for Human Communication Comprehension
|
AAAI 2018 Oral Presentation
| null | null | null |
cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human face-to-face communication is a complex multimodal signal. We use words
(language modality), gestures (vision modality) and changes in tone (acoustic
modality) to convey our intentions. Humans easily process and understand
face-to-face communication; however, comprehending this form of communication
remains a significant challenge for Artificial Intelligence (AI). AI must
understand each modality and the interactions between them that shape human
communication. In this paper, we present a novel neural architecture for
understanding human communication called the Multi-attention Recurrent Network
(MARN). The main strength of our model comes from discovering interactions
between modalities through time using a neural component called the
Multi-attention Block (MAB) and storing them in the hybrid memory of a
recurrent component called the Long-short Term Hybrid Memory (LSTHM). We
perform extensive comparisons on six publicly available datasets for multimodal
sentiment analysis, speaker trait recognition and emotion recognition. MARN
shows state-of-the-art performance on all the datasets.
|
[
{
"version": "v1",
"created": "Sat, 3 Feb 2018 06:29:17 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Zadeh",
"Amir",
""
],
[
"Liang",
"Paul Pu",
""
],
[
"Poria",
"Soujanya",
""
],
[
"Vij",
"Prateek",
""
],
[
"Cambria",
"Erik",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] |
new_dataset
| 0.994352 |
1802.00929
|
Ananthanarayanan Chockalingam
|
K. R. Murali and A. Chockalingam
|
On OTFS Modulation for High-Doppler Fading Channels
|
ITA'2018, San Diego, Feb. 2018
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Orthogonal time frequency space (OTFS) modulation is a 2-dimensional (2D)
modulation scheme designed in the delay-Doppler domain, unlike traditional
modulation schemes which are designed in the time-frequency domain. Through a
series of 2D transformations, OTFS converts a doubly-dispersive channel into an
almost non-fading channel in the delay-Doppler domain. In this domain, each
symbol in a frame experiences an almost constant fade, thus achieving
significant performance gains over existing modulation schemes such as OFDM.
The sparse delay-Doppler impulse response which reflects the actual physical
geometry of the wireless channel enables efficient channel estimation,
especially in high-Doppler fading channels. This paper investigates OTFS from a
signal detection and channel estimation perspective, and proposes a Markov
chain Monte-Carlo sampling based detection scheme and a pseudo-random noise
(PN) pilot based channel estimation scheme in the delay-Doppler domain.
|
[
{
"version": "v1",
"created": "Sat, 3 Feb 2018 06:45:32 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Murali",
"K. R.",
""
],
[
"Chockalingam",
"A.",
""
]
] |
new_dataset
| 0.999592 |
1802.00948
|
Truyen Tran
|
Phuoc Nguyen, Truyen Tran, Svetha Venkatesh
|
Resset: A Recurrent Model for Sequence of Sets with Applications to
Electronic Medical Records
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern healthcare is ripe for disruption by AI. A game changer would be
automatically understanding the latent processes in electronic medical records,
which are being collected for billions of people worldwide. However, these
healthcare processes are complicated by the interaction between at least three
dynamic components: the illness, which involves multiple diseases; the care,
which involves multiple treatments; and the recording practice, which is biased
and erroneous. Existing methods are inadequate in capturing the dynamic
structure of care. We propose Resset, an end-to-end recurrent model that reads
medical records and predicts future risk. The model adopts the algebraic view
in that discrete medical objects are embedded into continuous vectors lying in
the same space. We formulate the problem as modeling sequences of sets, a novel
setting that has rarely, if ever, been addressed. Within Resset, the bag of
diseases recorded at each clinic visit is modeled as a function of sets. The
same holds for the bag of treatments. The interaction between the disease bag
and the treatment bag at a visit is modeled in several ways, one of which is as
the residual of the diseases minus the treatments. Finally, the health
trajectory, which is a sequence of visits, is modeled using a recurrent neural
network. We report results on over a hundred thousand hospital visits by
patients suffering from two costly chronic diseases -- diabetes and mental
health. Resset shows promise in multiple predictive tasks such as readmission
prediction, treatment recommendation and disease progression.
|
[
{
"version": "v1",
"created": "Sat, 3 Feb 2018 10:11:38 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Nguyen",
"Phuoc",
""
],
[
"Tran",
"Truyen",
""
],
[
"Venkatesh",
"Svetha",
""
]
] |
new_dataset
| 0.988121 |
1802.01050
|
Daniel Inge
|
William Fu, Raymond Lin, Daniel Inge
|
TaintAssembly: Taint-Based Information Flow Control Tracking for
WebAssembly
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
WebAssembly (wasm) has recently emerged as a promising, portable,
size-efficient, fast, and safe binary format for the web. As WebAssembly can
interact freely with JavaScript libraries, this gives rise to a potential for
undesirable behavior to occur. It is therefore important to be able to detect
when this might happen. A way to do this is through taint tracking, where we
follow the flow of information by applying taint labels to data. In this paper,
we describe TaintAssembly, a taint tracking engine for interpreted WebAssembly,
that we have created by modifying the V8 JavaScript engine. We implement basic
taint tracking functionality, taint in linear memory, and a probabilistic
variant of taint. We then benchmark our TaintAssembly engine by incorporating
it into a Chromium build and running it on custom test scripts and various real
world WebAssembly applications. We find that our modifications to the V8 engine
do not incur significant overhead with respect to vanilla V8's interpreted
WebAssembly, making TaintAssembly suitable for development and debugging.
|
[
{
"version": "v1",
"created": "Sun, 4 Feb 2018 00:19:34 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Fu",
"William",
""
],
[
"Lin",
"Raymond",
""
],
[
"Inge",
"Daniel",
""
]
] |
new_dataset
| 0.997686 |
1802.01093
|
Piotr Koniusz
|
Piotr Koniusz, Yusuf Tas, Hongguang Zhang, Mehrtash Harandi, Fatih
Porikli, Rui Zhang
|
Museum Exhibit Identification Challenge for Domain Adaptation and Beyond
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we approach an open problem of artwork identification and
propose a new dataset dubbed Open Museum Identification Challenge (Open MIC).
It contains photos of exhibits captured in 10 distinct exhibition spaces of
several museums which showcase paintings, timepieces, sculptures, glassware,
relics, science exhibits, natural history pieces, ceramics, pottery, tools and
indigenous crafts. The goal of Open MIC is to stimulate research in domain
adaptation, egocentric recognition and few-shot learning by providing a testbed
complementary to the famous Office dataset which reaches 90% accuracy. To form
our dataset, we captured a number of images per art piece with a mobile phone
and wearable cameras to form the source and target data splits, respectively.
To achieve robust baselines, we build on a recent approach that aligns
per-class scatter matrices of the source and target CNN streams [15]. Moreover,
we exploit the positive definite nature of such representations by using
end-to-end Bregman divergences and the Riemannian metric. We present baselines
such as training/evaluation per exhibition and training/evaluation on the
combined set covering 866 exhibit identities. As each exhibition poses distinct
challenges, e.g., quality of lighting, motion blur, occlusions, clutter,
viewpoint and scale variations, rotations, glares, transparency, non-planarity,
clipping, we break down results w.r.t. these factors.
|
[
{
"version": "v1",
"created": "Sun, 4 Feb 2018 09:16:44 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Koniusz",
"Piotr",
""
],
[
"Tas",
"Yusuf",
""
],
[
"Zhang",
"Hongguang",
""
],
[
"Harandi",
"Mehrtash",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.993587 |
1802.01167
|
Vahid Yazdanpanah
|
Vahid Yazdanpanah, Devrim Murat Yazan
|
Industrial Symbiotic Relations as Cooperative Games
|
Presented at the 7th International Conference on Industrial
Engineering and Systems Management (IESM-2017), October 11--13, 2017,
Saarbr\"ucken, Germany
| null | null | null |
cs.MA cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a game-theoretical formulation for a specific
form of collaborative industrial relations called "Industrial Symbiotic
Relation (ISR) games" and provide a formal framework to model, verify, and
support collaboration decisions in this new class of two-person operational
games. ISR games are formalized as cooperative cost-allocation games with the
aim to allocate the total ISR-related operational cost to involved industrial
firms in a fair and stable manner by taking into account their contribution to
the total traditional ISR-related cost. We tailor two types of allocation
mechanisms with which firms can implement cost allocations that result in a
collaboration satisfying the fairness and stability properties. Moreover, when
industries receive a particular ISR proposal, our methodology is applicable as
a managerial decision-support tool to systematically verify the quality of the
ISR in question, by analyzing whether the implemented allocation mechanism is a
stable/fair allocation.
|
[
{
"version": "v1",
"created": "Sun, 4 Feb 2018 17:58:45 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Yazdanpanah",
"Vahid",
""
],
[
"Yazan",
"Devrim Murat",
""
]
] |
new_dataset
| 0.999645 |
1802.01185
|
Mansour Ahmadi
|
Mansour Ahmadi, Angelo Sotgiu, and Giorgio Giacinto
|
IntelliAV: Building an Effective On-Device Android Malware Detector
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The importance of employing machine learning for malware detection has become
evident to the security community. Several anti-malware vendors have claimed
and advertised the application of machine learning in their products, in which
the inference phase is performed on servers and high-performance machines, but
the feasibility of such approaches on mobile devices with limited computational
resources has not yet been assessed by the research community, and vendors
remain skeptical. In this paper, we first aim to show the practicality of
devising a learning-based anti-malware tool for Android mobile devices.
Furthermore, we aim to demonstrate the significance of such a tool in stopping
new and evasive malware that cannot easily be caught by signature-based or
offline learning-based security tools. To this end, we first propose the
extraction of a set of lightweight yet powerful features from Android
applications. Then, we embed these features in a vector space to build an
effective as well as efficient model. Hence, the model can perform the
inference on the device for detecting potentially harmful applications. We show
that, without resorting to any signatures and relying only on a training phase
involving a reasonable set of samples, the proposed system, named IntelliAV,
provides more satisfying performance than the popular major anti-malware
products. Moreover, we evaluate the robustness of IntelliAV against common
obfuscation techniques, by which most anti-malware solutions are affected.
|
[
{
"version": "v1",
"created": "Sun, 4 Feb 2018 20:04:56 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Ahmadi",
"Mansour",
""
],
[
"Sotgiu",
"Angelo",
""
],
[
"Giacinto",
"Giorgio",
""
]
] |
new_dataset
| 0.999321 |
1802.01395
|
Thomas Szyrkowiec
|
Mohit Chamania, Thomas Szyrkowiec, Michele Santuari, Domenico
Siracusa, Achim Autenrieth, Victor Lopez, Pontus Sk\"oldstr\"om and
St\'ephane Junique
|
Intent-Based In-flight Service Encryption in Multi-Layer Transport
Networks
|
Optical Fiber Communication Conference
| null |
10.1364/OFC.2017.Tu3L.10
| null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate multi-layer encrypted service provisioning via the ACINO
orchestrator. ACINO combines a novel intent interface with an ONOS-based SDN
orchestrator to facilitate encrypted services at IP, Ethernet and optical
network layers.
|
[
{
"version": "v1",
"created": "Mon, 29 Jan 2018 14:55:43 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Chamania",
"Mohit",
""
],
[
"Szyrkowiec",
"Thomas",
""
],
[
"Santuari",
"Michele",
""
],
[
"Siracusa",
"Domenico",
""
],
[
"Autenrieth",
"Achim",
""
],
[
"Lopez",
"Victor",
""
],
[
"Sköldström",
"Pontus",
""
],
[
"Junique",
"Stéphane",
""
]
] |
new_dataset
| 0.988147 |
1802.01425
|
Abhay Karandikar
|
Pranav Jha, Abhay Karandikar
|
SDN based Control and Management of WLANs in the 3GPP 5G Network
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exponential growth in mobile broadband usage [1] has catalyzed the need
for high data rate communication systems. In this regard, activities for
standardizing the next generation mobile broadband system, known as the Fifth
Generation (5G) system are underway. The 5G system also enables the integration
of Institute of Electrical and Electronic Engineers (IEEE) Wireless Local Area
Networks (WLANs) for providing cost-effective broadband connectivity. It is
therefore imperative to find solutions for control and management of WLANs,
while providing seamless inter-working capabilities with the cellular network.
In this paper, we propose a novel Software Defined Networking (SDN) based
architecture for efficient control and management of IEEE WLANs while providing
a mechanism for smooth integration of WLANs within the 5G system.
|
[
{
"version": "v1",
"created": "Sat, 27 Jan 2018 05:31:11 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Jha",
"Pranav",
""
],
[
"Karandikar",
"Abhay",
""
]
] |
new_dataset
| 0.992148 |
1802.01536
|
Allan Zhou
|
Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, Anca D. Dragan
|
Expressive Robot Motion Timing
| null |
HRI '17 Proceedings of the 2017 ACM/IEEE International Conference
on Human-Robot Interaction Pages 22-31
|
10.1145/2909824.3020221
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our goal is to enable robots to \emph{time} their motion in a way that is
purposefully expressive of their internal states, making them more transparent
to people. We start by investigating what types of states motion timing is
capable of expressing, focusing on robot manipulation and keeping the path
constant while systematically varying the timing. We find that users naturally
pick up on certain properties of the robot (like confidence), of the motion
(like naturalness), or of the task (like the weight of the object that the
robot is carrying). We then conduct a hypothesis-driven experiment to tease out
the directions and magnitudes of these effects, and use our findings to develop
candidate mathematical models for how users make these inferences from the
timing. We find a strong correlation between the models and real user data,
suggesting that robots can leverage these models to autonomously optimize the
timing of their motion to be expressive.
|
[
{
"version": "v1",
"created": "Mon, 5 Feb 2018 18:00:21 GMT"
}
] | 2018-02-06T00:00:00 |
[
[
"Zhou",
"Allan",
""
],
[
"Hadfield-Menell",
"Dylan",
""
],
[
"Nagabandi",
"Anusha",
""
],
[
"Dragan",
"Anca D.",
""
]
] |
new_dataset
| 0.978288 |
1709.03485
|
Eli Gibson
|
Eli Gibson, Wenqi Li, Carole Sudre, Lucas Fidon, Dzhoshkun I. Shakir,
Guotai Wang, Zach Eaton-Rosen, Robert Gray, Tom Doel, Yipeng Hu, Tom Whyntie,
Parashkev Nachev, Marc Modat, Dean C. Barratt, S\'ebastien Ourselin, M. Jorge
Cardoso and Tom Vercauteren
|
NiftyNet: a deep-learning platform for medical imaging
|
Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; Update includes additional applications, updated author list and
formatting for journal submission
| null |
10.1016/j.cmpb.2018.01.025
| null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2017 17:42:10 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Oct 2017 13:46:31 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Gibson",
"Eli",
""
],
[
"Li",
"Wenqi",
""
],
[
"Sudre",
"Carole",
""
],
[
"Fidon",
"Lucas",
""
],
[
"Shakir",
"Dzhoshkun I.",
""
],
[
"Wang",
"Guotai",
""
],
[
"Eaton-Rosen",
"Zach",
""
],
[
"Gray",
"Robert",
""
],
[
"Doel",
"Tom",
""
],
[
"Hu",
"Yipeng",
""
],
[
"Whyntie",
"Tom",
""
],
[
"Nachev",
"Parashkev",
""
],
[
"Modat",
"Marc",
""
],
[
"Barratt",
"Dean C.",
""
],
[
"Ourselin",
"Sébastien",
""
],
[
"Cardoso",
"M. Jorge",
""
],
[
"Vercauteren",
"Tom",
""
]
] |
new_dataset
| 0.994605 |
1711.08717
|
Lukas Fleischer
|
Lukas Fleischer and Manfred Kufleitner
|
The Intersection Problem for Finite Monoids
|
Extended version of a paper accepted to STACS 2018
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the intersection problem for finite monoids, which asks for a
given set of regular languages, represented by recognizing morphisms to finite
monoids from a variety V, whether there exists a word contained in their
intersection. Our main result is that the problem is PSPACE-complete if V is
contained in DS and NP-complete if V is non-trivial and contained in DO. Our
NP-algorithm for the case that V is contained in DO uses novel methods, based
on compression techniques and combinatorial properties of DO. We also show that
the problem is log-space reducible to the intersection problem for
deterministic finite automata (DFA) and that a variant of the problem is
log-space reducible to the membership problem for transformation monoids. In
light of these reductions, our hardness results can be seen as a generalization
of both a classical result by Kozen and a theorem by Beaudry, McKenzie and
Therien.
|
[
{
"version": "v1",
"created": "Thu, 23 Nov 2017 14:48:02 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Feb 2018 12:38:33 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Fleischer",
"Lukas",
""
],
[
"Kufleitner",
"Manfred",
""
]
] |
new_dataset
| 0.988397 |
1802.00521
|
Anil Kumar Chorppath
|
Frank Gabriel, Anil Kumar Chorppath, Ievgenii Tsokalo, Frank H.P.
Fitzek
|
Multipath Communication with Finite Sliding Window Network Coding for
Ultra-Reliability and Low Latency
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use random linear network coding (RLNC) based scheme for multipath
communication in the presence of lossy links with different delay
characteristics to obtain ultra-reliability and low latency. A sliding window
version of RLNC is proposed where the coded packets are generated using packets
in a window size and are inserted among systematic packets in different paths.
The packets are scheduled in the paths in a round robin fashion proportional to
the data rates. We use finite encoding and decoding window sizes and, unlike
previous work, do not rely on feedback for closing the sliding window. Our
implementation of two paths with LTE and WiFi characteristics shows that the
proposed sliding window scheme achieves better latency compared to the block
RLNC code. It is also shown that the proposed scheme achieves low latency
communication through multiple paths compared to the individual paths for
bursty traffic by translating the throughput on both the paths into latency
gain.
|
[
{
"version": "v1",
"created": "Fri, 2 Feb 2018 00:03:53 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Gabriel",
"Frank",
""
],
[
"Chorppath",
"Anil Kumar",
""
],
[
"Tsokalo",
"Ievgenii",
""
],
[
"Fitzek",
"Frank H. P.",
""
]
] |
new_dataset
| 0.998054 |
1802.00580
|
Carlo Condo
|
Gabriele Coppolino, Carlo Condo, Guido Masera, Warren J. Gross
|
A Multi-Kernel Multi-Code Polar Decoder Architecture
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes have received increasing attention in the past decade, and have
been selected for the next generation of wireless communication standard. Most
research on polar codes has focused on codes constructed from a $2\times2$
polarization matrix, called binary kernel: codes constructed from binary
kernels have code lengths that are bound to powers of $2$. A few recent works
have proposed construction methods based on multiple kernels of different
dimensions, not only binary ones, allowing code lengths different from powers
of $2$. In this work, we design and implement the first multi-kernel successive
cancellation polar code decoder in the literature. It can decode any code
constructed with binary and ternary kernels: the architecture, sized for a
maximum code length $N_{max}$, is fully flexible in terms of code length, code
rate and kernel sequence. The decoder can achieve a frequency of more than $1$
GHz in $65$ nm CMOS technology, and a throughput of $615$ Mb/s. The area
occupation ranges between $0.11$ mm$^2$ for $N_{max}=256$ and $2.01$ mm$^2$ for
$N_{max}=4096$. Implementation results show an unprecedented degree of
flexibility: with $N_{max}=4096$, up to $55$ code lengths can be decoded with
the same hardware, along with any kernel sequence and code rate.
|
[
{
"version": "v1",
"created": "Fri, 2 Feb 2018 07:02:19 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Coppolino",
"Gabriele",
""
],
[
"Condo",
"Carlo",
""
],
[
"Masera",
"Guido",
""
],
[
"Gross",
"Warren J.",
""
]
] |
new_dataset
| 0.999808 |
1802.00626
|
Jens Grubert
|
Jens Grubert and Lukas Witzani and Eyal Ofek and Michel Pahud and
Matthias Kranz and Per Ola Kristensson
|
Text Entry in Immersive Head-Mounted Display-based Virtual Reality using
Standard Keyboards
|
IEEE VR 2018. arXiv admin note: text overlap with arXiv:1802.00613
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the performance and user experience of two popular mainstream text
entry devices, desktop keyboards and touchscreen keyboards, for use in Virtual
Reality (VR) applications. We discuss the limitations arising from limited
visual feedback, and examine the efficiency of different strategies of use. We
analyze a total of 24 hours of typing data in VR from 24 participants and find
that novice users are able to retain about 60% of their typing speed on a
desktop keyboard and about 40-45% of their typing speed on a touchscreen
keyboard. We also find no significant learning effects, indicating that users
can transfer their typing skills quickly into VR. Besides investigating baseline
performances, we study the position in which keyboards and hands are rendered
in space. We find that this does not adversely affect performance for desktop
keyboard typing and results in a performance trade-off for touchscreen keyboard
typing.
|
[
{
"version": "v1",
"created": "Fri, 2 Feb 2018 10:28:44 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Grubert",
"Jens",
""
],
[
"Witzani",
"Lukas",
""
],
[
"Ofek",
"Eyal",
""
],
[
"Pahud",
"Michel",
""
],
[
"Kranz",
"Matthias",
""
],
[
"Kristensson",
"Per Ola",
""
]
] |
new_dataset
| 0.965835 |
1802.00671
|
Saikat Roy
|
Saikat Roy, Nibaran Das, Mahantapas Kundu, Mita Nasipuri
|
Handwritten Isolated Bangla Compound Character Recognition: a new
benchmark using a novel deep learning approach
| null |
Pattern Recognition Letters, Elsevier, Vol. 90, Pages 15-21, 2017
|
10.1016/j.patrec.2017.03.004
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, a novel deep learning technique for the recognition of
handwritten Bangla isolated compound characters is presented and a new benchmark
of recognition accuracy on the CMATERdb 3.1.3.3 dataset is reported. Greedy
layer-wise training of Deep Neural Networks has helped to make significant
strides in various pattern recognition problems. We apply layer-wise training
to Deep Convolutional Neural Networks (DCNNs) in a supervised fashion and
augment the training process with the RMSProp algorithm to achieve faster
convergence. We compare results with those obtained from standard shallow
learning methods with predefined features, as well as standard DCNNs.
Supervised layerwise trained DCNNs are found to outperform standard shallow
learning models such as Support Vector Machines as well as regular DCNNs of
similar architecture by achieving an error rate of 9.67%, thereby setting a new
benchmark on the CMATERdb 3.1.3.3 dataset with a recognition accuracy of 90.33%,
representing an improvement of nearly 10%.
|
[
{
"version": "v1",
"created": "Fri, 2 Feb 2018 13:06:43 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Roy",
"Saikat",
""
],
[
"Das",
"Nibaran",
""
],
[
"Kundu",
"Mahantapas",
""
],
[
"Nasipuri",
"Mita",
""
]
] |
new_dataset
| 0.99879 |
1802.00689
|
Max Dohse
|
Max Dohse
|
TikZ-FeynHand: Basic User Guide
|
12 pages, many figures
| null | null | null |
cs.OH hep-ph hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is a user guide for the LaTeX package TikZ-FeynHand at
https://ctan.org/pkg/tikz-feynhand which lets you draw Feynman diagrams using
TikZ. It contains many examples and a 5-minute introduction to TikZ.
The package is a low-end modification of the package TikZ-Feynman at
https://ctan.org/pkg/tikz-feynman, one of whose principal advantages is the
automatic generation of diagrams, for which it needs LuaTeX. FeynHand only
provides the manual mode and hence runs in LaTeX without any reference to
LuaTeX.
In addition, it provides some NEW STYLES for vertices and propagators,
alternative SHORTER KEYWORDS in addition to TikZ-Feynman's longer ones, some
shortcut commands for QUICKLY CUSTOMIZING the diagrams' look, and the new
feature to put one propagator "ON TOP" of another.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 19:19:03 GMT"
}
] | 2018-02-05T00:00:00 |
[
[
"Dohse",
"Max",
""
]
] |
new_dataset
| 0.999406 |
1706.03848
|
Olga Zagovora
|
Olga Zagovora (1), Fabian Fl\"ock (1), Claudia Wagner (1 and 2) ((1)
GESIS - Leibniz Institute for the Social Sciences, (2) University of
Koblenz-Landau)
|
"(Weitergeleitet von Journalistin)": The Gendered Presentation of
Professions on Wikipedia
|
In the 9th International ACM Web Science Conference 2017 (WebSci'17),
June 25-28, 2017, Troy, NY, USA. Based on the results of the thesis:
arXiv:1702.00829
| null |
10.1145/3091478.3091488
| null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous research has shown the existence of gender biases in the depiction
of professions and occupations in search engine results. Such an unbalanced
presentation might just as likely occur on Wikipedia, one of the most popular
knowledge resources on the Web, since the encyclopedia has already been found
to exhibit such tendencies in past studies. Under this premise, our work
assesses gender bias with respect to the content of German Wikipedia articles
about professions and occupations along three dimensions: used male vs. female
titles (and redirects), included images of persons, and names of professionals
mentioned in the articles. We further use German labor market data to assess
the potential misrepresentation of a gender for each specific profession. Our
findings in fact provide evidence for systematic over-representation of men on
all three dimensions. For instance, for professional fields dominated by
females, the respective articles on average still feature almost two times more
images of men; and in the mean, 83% of the mentioned names of professionals
were male and only 17% female.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2017 20:49:42 GMT"
}
] | 2018-02-04T00:00:00 |
[
[
"Zagovora",
"Olga",
"",
"1 and 2"
],
[
"Flöck",
"Fabian",
"",
"1 and 2"
],
[
"Wagner",
"Claudia",
"",
"1 and 2"
]
] |
new_dataset
| 0.958045 |
1707.06588
|
Yaniv Taigman
|
Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani
|
VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop
| null | null | null | null |
cs.LG cs.CL cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new neural text to speech (TTS) method that is able to transform
text to speech in voices that are sampled in the wild. Unlike other systems,
our solution is able to deal with unconstrained voice samples and without
requiring aligned phonemes or linguistic features. The network architecture is
simpler than those in the existing literature and is based on a novel shifting
buffer working memory. The same buffer is used for estimating the attention,
computing the output audio, and for updating the buffer itself. The input
sentence is encoded using a context-free lookup table that contains one entry
per character or phoneme. The speakers are similarly represented by a short
vector that can also be fitted to new identities, even with only a few samples.
Variability in the generated speech is achieved by priming the buffer prior to
generating the audio. Experimental results on several datasets demonstrate
convincing capabilities, making TTS accessible to a wider range of
applications. In order to promote reproducibility, we release our source code
and models.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2017 16:18:00 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Oct 2017 15:29:44 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Feb 2018 14:48:11 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Taigman",
"Yaniv",
""
],
[
"Wolf",
"Lior",
""
],
[
"Polyak",
"Adam",
""
],
[
"Nachmani",
"Eliya",
""
]
] |
new_dataset
| 0.953879 |
1802.00002
|
Abhishek Dubey
|
Fangzhou Sun and Abhishek Dubey and Jules White
|
DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic
Congestion
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-recurring traffic congestion is caused by temporary disruptions, such as
accidents, sports games, adverse weather, etc. We use data related to real-time
traffic speed, jam factors (a traffic congestion indicator), and events
collected over a year from Nashville, TN to train a multi-layered deep neural
network. The traffic dataset contains over 900 million data records. The
network is thereafter used to classify the real-time data and identify
anomalous operations. Compared with traditional approaches of using statistical
or machine learning techniques, our model reaches an accuracy of 98.73 percent
when identifying traffic congestion caused by football games. Our approach
first encodes the traffic across a region as a scaled image. After that the
image data from different timestamps is fused with event- and time-related
data. Then a crossover operator is used as a data augmentation method to
generate training datasets with more balanced classes. Finally, we use the
receiver operating characteristic (ROC) analysis to tune the sensitivity of the
classifier. We present the analysis of the training time and the inference time
separately.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 23:18:11 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Sun",
"Fangzhou",
""
],
[
"Dubey",
"Abhishek",
""
],
[
"White",
"Jules",
""
]
] |
new_dataset
| 0.999463 |
1802.00157
|
Alexander Barg
|
Oleg Kolosov, Alexander Barg, Itzhak Tamo, and Gala Yadgar
|
Optimal LRC codes for all lengths n <= q
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A family of distance-optimal LRC codes from certain subcodes of $q$-ary
Reed-Solomon codes, proposed by I.~Tamo and A.~Barg in 2014, assumes that the
code length $n$ is a multiple of $r+1.$ By shortening codes from this family,
we show that it is possible to lift this assumption, still obtaining
distance-optimal codes.
|
[
{
"version": "v1",
"created": "Thu, 1 Feb 2018 05:27:23 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Kolosov",
"Oleg",
""
],
[
"Barg",
"Alexander",
""
],
[
"Tamo",
"Itzhak",
""
],
[
"Yadgar",
"Gala",
""
]
] |
new_dataset
| 0.989077 |
1802.00254
|
Yu Wang
|
Yu Wang, Xie Chen, Mark Gales, Anton Ragni and Jeremy Wong
|
Phonetic and Graphemic Systems for Multi-Genre Broadcast Transcription
|
5 pages, 6 tables, to appear in 2018 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP 2018)
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art English automatic speech recognition systems typically use
phonetic rather than graphemic lexicons. Graphemic systems are known to perform
less well for English as the mapping from the written form to the spoken form
is complicated. However, in recent years the representational power of
deep-learning based acoustic models has improved, raising interest in graphemic
acoustic models for English, due to the simplicity of generating the lexicon.
In this paper, phonetic and graphemic models are compared for an English
Multi-Genre Broadcast transcription task. A range of acoustic models based on
lattice-free MMI training are constructed using phonetic and graphemic
lexicons. For this task, it is found that having a long-span temporal history
reduces the difference in performance between the two forms of models. In
addition, system combination is examined, using parameter smoothing and
hypothesis combination. As the combination approaches become more complicated
the difference between the phonetic and graphemic systems further decreases.
Finally, for all configurations examined the combination of phonetic and
graphemic systems yields consistent gains.
|
[
{
"version": "v1",
"created": "Thu, 1 Feb 2018 12:00:45 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Chen",
"Xie",
""
],
[
"Gales",
"Mark",
""
],
[
"Ragni",
"Anton",
""
],
[
"Wong",
"Jeremy",
""
]
] |
new_dataset
| 0.998226 |
1802.00264
|
Kang Li
|
Kang Li, Xiaoguang Zhao, Jiang Bian, and Min Tan
|
Automatic Safety Helmet Wearing Detection
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surveillance is essential for the safety of power substations. Detecting
whether perambulatory workers are wearing safety helmets is a key component of
the overall intelligent surveillance system in a power substation. In this
paper, a novel and practical safety helmet detection framework based on
computer vision, machine learning and image processing is proposed. In order to
identify moving objects in the power substation, the ViBe background modelling
algorithm is employed. Moreover, based on the result of moving-object
segmentation, the real-time human classification framework C4 is applied to
locate pedestrians in the power substation accurately and quickly. Finally,
according to
the result of pedestrian detection, the safety helmet wearing detection is
implemented using the head location, the color space transformation and the
color feature discrimination. Extensive compelling experimental results in
power substation illustrate the efficiency and effectiveness of the proposed
framework.
|
[
{
"version": "v1",
"created": "Thu, 1 Feb 2018 12:41:25 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Li",
"Kang",
""
],
[
"Zhao",
"Xiaoguang",
""
],
[
"Bian",
"Jiang",
""
],
[
"Tan",
"Min",
""
]
] |
new_dataset
| 0.961837 |
1802.00300
|
Konstantinos Drossos
|
Konstantinos Drossos and Stylianos Ioannis Mimilakis and Dmitriy
Serdyuk and Gerald Schuller and Tuomas Virtanen and Yoshua Bengio
|
MaD TwinNet: Masker-Denoiser Architecture with Twin Networks for
Monaural Sound Source Separation
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The monaural singing voice separation task focuses on the prediction of the
singing voice from a single channel music mixture signal. Current state of the
art (SOTA) results in monaural singing voice separation are obtained with deep
learning based methods. In this work we present a novel deep learning based
method that learns long-term temporal patterns and structures of a musical
piece. We build upon the recently proposed Masker-Denoiser (MaD) architecture
and we enhance it with the Twin Networks, a technique to regularize a recurrent
generative network using a backward running copy of the network. We evaluate
our method using the Demixing Secret Dataset and we obtain an increment to
signal-to-distortion ratio (SDR) of 0.37 dB and to signal-to-interference ratio
(SIR) of 0.23 dB, compared to previous SOTA results.
|
[
{
"version": "v1",
"created": "Thu, 1 Feb 2018 14:31:36 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Drossos",
"Konstantinos",
""
],
[
"Mimilakis",
"Stylianos Ioannis",
""
],
[
"Serdyuk",
"Dmitriy",
""
],
[
"Schuller",
"Gerald",
""
],
[
"Virtanen",
"Tuomas",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
new_dataset
| 0.996482 |
1802.00319
|
Shirin Saeedi Bidokhti
|
Shirin Saeedi Bidokhti and Mich\`ele Wigger and Aylin Yener and Abbas
El Gamal
|
State-Adaptive Coded Caching for Symmetric Broadcast Channels
|
The paper has appeared at the 52nd Annual Asilomar Conference on
Signals, Systems, and Computers
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coded-caching delivery is considered over a symmetric noisy broadcast channel
whose state is unknown at the transmitter during the cache placement phase. In
particular, the delivery phase is modeled by a state-dependent broadcast
channel where the state remains constant over each transmission block and is
learned by the transmitter (and the receivers) only at the beginning of each
block. A state-adaptive coded caching scheme is proposed that improves either
on rate or decoding latency over two baseline schemes that are based on
standard coded caching.
|
[
{
"version": "v1",
"created": "Thu, 1 Feb 2018 15:00:31 GMT"
}
] | 2018-02-02T00:00:00 |
[
[
"Bidokhti",
"Shirin Saeedi",
""
],
[
"Wigger",
"Michèle",
""
],
[
"Yener",
"Aylin",
""
],
[
"Gamal",
"Abbas El",
""
]
] |
new_dataset
| 0.973503 |
1511.07860
|
Ryan Williams
|
Daniel M. Kane and Ryan Williams
|
Super-Linear Gate and Super-Quadratic Wire Lower Bounds for Depth-Two
and Depth-Three Threshold Circuits
| null |
ACM Symposium on Theory of Computing (STOC), 2016
| null | null |
cs.CC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to formally understand the power of neural computing, we first need
to crack the frontier of threshold circuits with two and three layers, a regime
that has been surprisingly intractable to analyze. We prove the first
super-linear gate lower bounds and the first super-quadratic wire lower bounds
for depth-two linear threshold circuits with arbitrary weights, and depth-three
majority circuits computing an explicit function.
$\bullet$ We prove that for all $\epsilon\gg \sqrt{\log(n)/n}$, the
linear-time computable Andreev's function cannot be computed on a
$(1/2+\epsilon)$-fraction of $n$-bit inputs by depth-two linear threshold
circuits of $o(\epsilon^3 n^{3/2}/\log^3 n)$ gates, nor can it be computed with
$o(\epsilon^{3} n^{5/2}/\log^{7/2} n)$ wires. This establishes an average-case
``size hierarchy'' for threshold circuits, as Andreev's function is computable
by uniform depth-two circuits of $o(n^3)$ linear threshold gates, and by
uniform depth-three circuits of $O(n)$ majority gates.
$\bullet$ We present a new function in $P$ based on small-biased sets, which
we prove cannot be computed by a majority vote of depth-two linear threshold
circuits with $o(n^{3/2}/\log^3 n)$ gates, nor with $o(n^{5/2}/\log^{7/2}n)$
wires.
$\bullet$ We give tight average-case (gate and wire) complexity results for
computing PARITY with depth-two threshold circuits; the answer turns out to be
the same as for depth-two majority circuits.
The key is a new random restriction lemma for linear threshold functions. Our
main analytical tool is the Littlewood-Offord Lemma from additive
combinatorics.
|
[
{
"version": "v1",
"created": "Tue, 24 Nov 2015 20:45:51 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Kane",
"Daniel M.",
""
],
[
"Williams",
"Ryan",
""
]
] |
new_dataset
| 0.979288 |
1606.01946
|
Roy Fox
|
Roy Fox, Naftali Tishby
|
Minimum-Information LQG Control - Part I: Memoryless Controllers
| null |
55th IEEE Conference on Decision and Control (CDC 2016)
| null | null |
cs.SY cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increased demand for power efficiency in feedback-control systems,
communication is becoming a limiting factor, raising the need to trade off the
external cost that they incur with the capacity of the controller's
communication channels. With a proper design of the channels, this translates
into a sequential rate-distortion problem, where we minimize the rate of
information required for the controller's operation under a constraint on its
external cost. Memoryless controllers are of particular interest both for the
simplicity and frugality of their implementation and as a basis for studying
more complex controllers. In this paper we present the optimality principle for
memoryless linear controllers that utilize minimal information rates to achieve
a guaranteed external-cost level. We also study the interesting and useful
phenomenology of the optimal controller, such as the principled reduction of
its order.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2016 21:24:51 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jun 2016 10:19:50 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Sep 2016 17:43:29 GMT"
},
{
"version": "v4",
"created": "Thu, 30 Mar 2017 05:03:04 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Fox",
"Roy",
""
],
[
"Tishby",
"Naftali",
""
]
] |
new_dataset
| 0.998561 |
1708.01837
|
Xing Hu
|
Xing Hu, Yuhan Wei, Ge Li, Zhi Jin
|
CodeSum: Translate Program Language to Natural Language
|
We have some additional experiments on this work
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During software maintenance, programmers spend a lot of time on code
comprehension. Reading comments is an effective way for programmers to reduce
the reading and navigating time when comprehending source code. Therefore, as a
critical task in software engineering, code summarization aims to generate
brief natural language descriptions for source code. In this paper, we propose
a new code summarization model named CodeSum. CodeSum exploits the
attention-based sequence-to-sequence (Seq2Seq) neural network with
Structure-based Traversal (SBT) of Abstract Syntax Trees (AST). The AST
sequences generated by SBT can better represent the structure of ASTs and remain
unambiguous. We conduct experiments on three large-scale corpora in different
programming languages, i.e., Java, C#, and SQL, where the Java corpus is our
newly proposed industry code extracted from GitHub. Experimental results show that
our method CodeSum outperforms the state-of-the-art significantly.
|
[
{
"version": "v1",
"created": "Sun, 6 Aug 2017 02:53:55 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Jan 2018 07:18:56 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Hu",
"Xing",
""
],
[
"Wei",
"Yuhan",
""
],
[
"Li",
"Ge",
""
],
[
"Jin",
"Zhi",
""
]
] |
new_dataset
| 0.999769 |
1710.02365
|
Pavel Kral
|
Pavel Kr\'al, Ladislav Lenc
|
Czech Text Document Corpus v 2.0
|
Accepted for LREC 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces "Czech Text Document Corpus v 2.0", a collection of
text documents for automatic document classification in Czech language. It is
composed of the text documents provided by the Czech News Agency and is freely
available for research purposes at http://ctdc.kiv.zcu.cz/. This corpus was
created in order to facilitate a straightforward comparison of the document
classification approaches on Czech data. It is particularly dedicated to
evaluation of multi-label document classification approaches, because one
document is usually labelled with more than one label. Besides the information
about the document classes, the corpus is also annotated at the morphological
layer. This paper further shows the results of selected state-of-the-art
methods on this corpus to offer the possibility of an easy comparison with
these approaches.
|
[
{
"version": "v1",
"created": "Fri, 6 Oct 2017 12:22:44 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jan 2018 21:29:21 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Král",
"Pavel",
""
],
[
"Lenc",
"Ladislav",
""
]
] |
new_dataset
| 0.999722 |
1711.09251
|
Mitsuo Yoshida
|
Jinsei Shima, Mitsuo Yoshida, Kyoji Umemura
|
When Do Users Change Their Profile Information on Twitter?
|
IEEE BigData 2017 Workshop : The 2nd International Workshop on
Application of Big Data for Computational Social Science (accepted)
| null |
10.1109/BigData.2017.8258287
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On social media, we can view profile information such as a name, description
and location in order to learn about a user. However, this profile information
is not always fixed: when there is a change in the user's life, the profile
information is changed as well. In this study, we focus on users' profile
information changes and analyze the timing and reasons for these changes on
Twitter. The results indicate that the peak of profile information change
occurs in April among Japanese users, but there was no such trend observed for
English users throughout the year. Our analysis also shows that English users
most frequently change their names on their birthdays, while Japanese users
change their names as their Twitter engagement and activities decrease over
time.
|
[
{
"version": "v1",
"created": "Sat, 25 Nov 2017 15:26:14 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Shima",
"Jinsei",
""
],
[
"Yoshida",
"Mitsuo",
""
],
[
"Umemura",
"Kyoji",
""
]
] |
new_dataset
| 0.977675 |
1801.10186
|
Ardavan Salehi Nobandegani
|
Ardavan S. Nobandegani, Ioannis N. Psaromiligkos
|
A Rational Distributed Process-level Account of Independence Judgment
| null | null | null | null |
cs.AI q-bio.NC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is inconceivable how chaotic the world would look to humans, faced with
innumerable decisions a day to be made under uncertainty, had they been lacking
the capacity to distinguish the relevant from the irrelevant---a capacity which
computationally amounts to handling probabilistic independence relations. The
highly parallel and distributed computational machinery of the brain suggests
that a satisfying process-level account of human independence judgment should
also mimic these features. In this work, we present the first rational,
distributed, message-passing, process-level account of independence judgment,
called $\mathcal{D}^\ast$. Interestingly, $\mathcal{D}^\ast$ shows a curious,
but normatively-justified tendency for quick detection of dependencies,
whenever they hold. Furthermore, $\mathcal{D}^\ast$ outperforms all the
previously proposed algorithms in the AI literature in terms of worst-case
running time, and a salient aspect of it is supported by recent work in
neuroscience investigating possible implementations of Bayes nets at the neural
level. $\mathcal{D}^\ast$ nicely exemplifies how the pursuit of cognitive
plausibility can lead to the discovery of state-of-the-art algorithms with
appealing properties, and its simplicity makes $\mathcal{D}^\ast$ potentially a
good candidate for pedagogical purposes.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 19:42:45 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Nobandegani",
"Ardavan S.",
""
],
[
"Psaromiligkos",
"Ioannis N.",
""
]
] |
new_dataset
| 0.995164 |
1801.10214
|
Simon Brodeur
|
Simon Brodeur, Simon Carrier, Jean Rouat
|
CREATE: Multimodal Dataset for Unsupervised Learning, Generative
Modeling and Prediction of Sensory Data from a Mobile Robot in Indoor
Environments
|
The CREATE dataset is Open access and available on IEEE Dataport
(https://ieee-dataport.org/open-access/create-multimodal-dataset-unsupervised-learning-and-generative-modeling-sensory-data)
| null |
10.21227/H2M94J
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The CREATE database is composed of 14 hours of multimodal recordings from a
mobile robotic platform based on the iRobot Create. The various sensors cover
vision, audition, motors and proprioception. The dataset has been designed in
the context of a mobile robot that can learn multimodal representations of its
environment, thanks to its ability to navigate the environment. This ability
can also be used to learn the dependencies and relationships between the
different modalities of the robot (e.g. vision, audition), as they reflect both
the external environment and the internal state of the robot. The provided
multimodal dataset is expected to have multiple usages, such as multimodal
unsupervised object learning, multimodal prediction and egomotion/causality
detection.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 20:40:48 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Brodeur",
"Simon",
""
],
[
"Carrier",
"Simon",
""
],
[
"Rouat",
"Jean",
""
]
] |
new_dataset
| 0.99979 |
1801.10293
|
Avneesh Saluja
|
Avneesh Saluja, Chris Dyer, Jean-David Ruvini
|
Paraphrase-Supervised Models of Compositionality
|
This paper was originally submitted for review at NAACL 2015 and ACL
2015. This version maintains the original author affiliation "as-is" (as of
when the work was done)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compositional vector space models of meaning promise new solutions to
stubborn language understanding problems. This paper makes two contributions
toward this end: (i) it uses automatically-extracted paraphrase examples as a
source of supervision for training compositional models, replacing previous
work which relied on manual annotations used for the same purpose, and (ii)
develops a context-aware model for scoring phrasal compositionality.
Experimental results indicate that these multiple sources of information can be
used to learn partial semantic supervision that matches previous techniques in
intrinsic evaluation tasks. Our approaches are also evaluated for their impact
on a machine translation system where we show improvements in translation
quality, demonstrating that compositionality in interpretation correlates with
compositionality in translation.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 04:14:11 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Saluja",
"Avneesh",
""
],
[
"Dyer",
"Chris",
""
],
[
"Ruvini",
"Jean-David",
""
]
] |
new_dataset
| 0.961688 |
1801.10295
|
Yining Hu
|
Yining Hu, Ahsan Manzoor, Parinya Ekparinya, Madhusanka Liyanage,
Kanchana Thilakarathna, Guillaume Jourjon, Aruna Seneviratne, Mika E
Ylianttila
|
A Delay-Tolerant Payment Scheme Based on the Ethereum Blockchain
| null | null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Banking as an essential service can be hard to access in remote, rural
regions where the network connectivity is intermittent. Although micro-banking
has been made possible by SMS or USSD messages in some places, their security
flaws and session-based nature prevent them from wider adoption. Global-level
cryptocurrencies enable low-cost, secure, and pervasive money transfer among
distributed peers, but are still limited in their ability to reach more people
in remote communities.
We proposed to take advantage of the delay-tolerant nature of blockchains to
deliver banking services to remote communities that only connect to the broader
Internet intermittently. Using a base station that offers connectivity within
the local area, regular transaction processing is solely handled by blockchain
miners. The bank only joins to process currency exchange requests, reward
miners and track user balances when the connection is available. By
distributing the verification and storage tasks among peers, our system design
saves on the overall deployment and operational costs without sacrificing the
reliability and trustworthiness. Through theoretical and empirical analysis,
we provided insights into system design, tested its robustness against network
disturbances, and demonstrated the feasibility of implementation on
off-the-shelf computers and mobile devices.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 04:29:20 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Hu",
"Yining",
""
],
[
"Manzoor",
"Ahsan",
""
],
[
"Ekparinya",
"Parinya",
""
],
[
"Liyanage",
"Madhusanka",
""
],
[
"Thilakarathna",
"Kanchana",
""
],
[
"Jourjon",
"Guillaume",
""
],
[
"Seneviratne",
"Aruna",
""
],
[
"Ylianttila",
"Mika E",
""
]
] |
new_dataset
| 0.99897 |
1801.10300
|
Kuan-Ting Chen
|
Wen Hua Lin, Kuan-Ting Chen, Hung Yueh Chiang and Winston Hsu
|
Netizen-Style Commenting on Fashion Photos: Dataset and Diversity
Measures
|
The Web Conference (WWW) 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, deep neural network models have achieved promising results on the
image captioning task. Yet the "vanilla" sentences generated by current works,
which describe only shallow appearances (e.g., types, colors), do not satisfy
the netizen style and lack engagement, context, and user intention. To
tackle this problem, we propose Netizen Style Commenting (NSC), to
automatically generate characteristic comments to a user-contributed fashion
photo. We are devoted to modulating the comments in a vivid "netizen" style
that reflects the culture of a designated social community, hoping to
facilitate more engagement with users. In this work, we design a novel
framework that consists of three major components: (1) We construct a
large-scale clothing dataset named NetiLook, which contains 300K posts (photos)
with 5M comments to discover netizen-style comments. (2) We propose three
unique measures to estimate the diversity of comments. (3) We bring diversity
by marrying topic models with neural networks to compensate for the
shortcomings of conventional image captioning works. Experimenting on
Flickr30k and our
NetiLook datasets, we demonstrate our proposed approaches benefit fashion photo
commenting and improve image captioning tasks both in accuracy and diversity.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 05:08:58 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Lin",
"Wen Hua",
""
],
[
"Chen",
"Kuan-Ting",
""
],
[
"Chiang",
"Hung Yueh",
""
],
[
"Hsu",
"Winston",
""
]
] |
new_dataset
| 0.998116 |
1801.10319
|
Xi Cheng
|
Xi Cheng, Xiang Li, Ying Tai, Jian Yang
|
SESR: Single Image Super Resolution with Recursive Squeeze and
Excitation Networks
|
Preprint version with 6 pages for ICPR18
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Single image super resolution is a very important computer vision task with a
wide range of applications. In recent years, the depth of super-resolution
models has been constantly increasing, but the small gains in performance have
come at a huge cost in computation and memory consumption. In this work, in
order to make super resolution models more effective, we propose a novel
single image super resolution method based on recursive squeeze and excitation
networks (SESR). By introducing the squeeze-and-excitation module, SESR can
model the interdependencies and relationships between channels, which makes
the model more efficient. In addition, the recursive structure and progressive
reconstruction method in our model minimize the number of layers and
parameters and enable SESR to simultaneously train multi-scale super
resolution in a single model. Evaluated on four benchmark test sets, our model
outperforms state-of-the-art methods in terms of both speed and accuracy.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 06:50:49 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Cheng",
"Xi",
""
],
[
"Li",
"Xiang",
""
],
[
"Tai",
"Ying",
""
],
[
"Yang",
"Jian",
""
]
] |
new_dataset
| 0.999311 |
1801.10442
|
Arsha Nagrani
|
Arsha Nagrani, Andrew Zisserman
|
From Benedict Cumberbatch to Sherlock Holmes: Character Identification
in TV series without a Script
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this paper is the automatic identification of characters in TV
and feature film material. In contrast to standard approaches to this task,
which rely on the weak supervision afforded by transcripts and subtitles, we
propose a new method requiring only a cast list. This list is used to obtain
images of actors from freely available sources on the web, providing a form of
partial supervision for this task. In using images of actors to recognize
characters, we make the following three contributions: (i) We demonstrate that
an automated semi-supervised learning approach is able to adapt from the
actor's face to the character's face, including the face context of the hair;
(ii) By building voice models for every character, we provide a bridge between
frontal faces (for which there is plenty of actor-level supervision) and
profile (for which there is very little or none); and (iii) by combining face
context and speaker identification, we are able to identify characters with
partially occluded faces and extreme facial poses. Results are presented on the
TV series 'Sherlock' and the feature film 'Casablanca'. We achieve the
state-of-the-art on the Casablanca benchmark, surpassing previous methods that
have used the stronger supervision available from transcripts.
|
[
{
"version": "v1",
"created": "Wed, 31 Jan 2018 13:25:29 GMT"
}
] | 2018-02-01T00:00:00 |
[
[
"Nagrani",
"Arsha",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.980775 |
1608.01738
|
Joseph Connelly
|
Joseph Connelly, Kenneth Zeger
|
Linear Network Coding over Rings, Part I: Scalar Codes and Commutative
Alphabets
| null | null |
10.1109/TIT.2017.2697421
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fixed-size commutative rings are quasi-ordered such that all scalar linearly
solvable networks over any given ring are also scalar linearly solvable over
any higher-ordered ring. As consequences, if a network has a scalar linear
solution over some finite commutative ring, then (i) the network is also scalar
linearly solvable over a maximal commutative ring of the same size, and (ii)
the (unique) smallest size commutative ring over which the network has a scalar
linear solution is a field. We prove that a commutative ring is maximal with
respect to the quasi-order if and only if some network is scalar linearly
solvable over the ring but not over any other commutative ring of the same
size. Furthermore, we show that maximal commutative rings are direct products
of certain fields specified by the integer partitions of the prime factor
multiplicities of the maximal ring's size.
Finally, we prove that there is a unique maximal commutative ring of size $m$
if and only if each prime factor of $m$ has multiplicity in $\{1,2,3,4,6\}$. In
fact, whenever $p$ is prime and $k \in \{1,2,3,4,6\}$, the unique such maximal
ring of size $p^k$ is the field $GF(p^k)$. However, for every field $GF(p^k)$
with $k\not\in \{1,2,3,4,6\}$, there is always some network that is not scalar
linearly solvable over the field but is scalar linearly solvable over a
commutative ring of the same size. These results imply that for scalar linear
network coding over commutative rings, fields can always be used when the
alphabet size is flexible, but alternative rings may be needed when the
alphabet size is fixed.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2016 01:52:35 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2016 05:13:45 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Connelly",
"Joseph",
""
],
[
"Zeger",
"Kenneth",
""
]
] |
new_dataset
| 0.982887 |
1701.07299
|
Igor Malinovi\'c
|
Yuri Faenza and Igor Malinovic
|
A PTAS for the Time-Invariant Incremental Knapsack problem
|
17 pages, 2 figures
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Time-Invariant Incremental Knapsack problem (IIK) is a generalization of
Maximum Knapsack to a discrete multi-period setting. At each time, capacity
increases and items can be added, but not removed from the knapsack. The goal
is to maximize the sum of profits over all times. IIK models various
applications including specific financial markets and governmental decision
processes. IIK is strongly NP-hard and there has been work on giving
approximation algorithms for some special cases. In this paper, we settle the
complexity of IIK by designing a PTAS based on rounding a disjunctive
formulation, and provide several extensions of the technique.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2017 13:23:36 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Oct 2017 16:56:01 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Oct 2017 11:48:08 GMT"
},
{
"version": "v4",
"created": "Tue, 30 Jan 2018 16:31:44 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Faenza",
"Yuri",
""
],
[
"Malinovic",
"Igor",
""
]
] |
new_dataset
| 0.997851 |
1703.02326
|
Farbod Farshidian
|
Farbod Farshidian, Edo Jelavi\'c, Alexander W. Winkler, Jonas Buchli
|
Robust Whole-Body Motion Control of Legged Robots
|
8 Pages
| null |
10.1109/IROS.2017.8206328
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a robust control architecture for the whole-body motion control
of torque controlled robots with arms and legs. The method is based on the
robust control of contact forces in order to track a planned Center of Mass
trajectory. Its appeal lies in the ability to guarantee robust stability and
performance despite rigid body model mismatch, actuator dynamics, delays,
contact surface stiffness, and unobserved ground profiles. Furthermore, we
introduce a task space decomposition approach which removes the coupling
effects between contact force controller and the other non-contact controllers.
Finally, we verify our control performance on a quadruped robot and compare its
performance to a standard inverse dynamics approach on hardware.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2017 11:09:42 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Farshidian",
"Farbod",
""
],
[
"Jelavić",
"Edo",
""
],
[
"Winkler",
"Alexander W.",
""
],
[
"Buchli",
"Jonas",
""
]
] |
new_dataset
| 0.986787 |
1708.09046
|
Benjamin Moseley
|
Sungjin Im, Benjamin Moseley, Kirk Pruhs and Clifford Stein
|
An O(log log m)-competitive Algorithm for Online Machine Minimization
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the online machine minimization problem, a basic
real-time scheduling problem. The setting for this problem consists of n jobs that
arrive over time, where each job has a deadline by which it must be completed.
The goal is to design an online scheduler that feasibly schedules the jobs on a
nearly minimal number of machines. An algorithm is c-machine optimal if the
algorithm will feasibly schedule a collection of jobs on cm machines if there
exists a feasible schedule on m machines. For over two decades the best known
result was a O(log P)-machine optimal algorithm, where P is the ratio of the
maximum to minimum job size. In a recent breakthrough, a O(log m)-machine
optimal algorithm was given. In this paper, we exponentially improve on this
recent result by giving a O(log log m)-machine optimal algorithm.
|
[
{
"version": "v1",
"created": "Tue, 29 Aug 2017 21:57:22 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jan 2018 22:41:50 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Im",
"Sungjin",
""
],
[
"Moseley",
"Benjamin",
""
],
[
"Pruhs",
"Kirk",
""
],
[
"Stein",
"Clifford",
""
]
] |
new_dataset
| 0.976114 |
1710.07805
|
Cise Midoglu M.Sc.
|
Cise Midoglu and Leonhard Wimmer and Andra Lutu and Ozgu Alay and
Carsten Griwodz
|
MONROE-Nettest: A Configurable Tool for Dissecting Speed Measurements in
Mobile Broadband Networks
|
6 pages, 3 figures, submitted to INFOCOM CNERT Workshop 2018
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the demand for mobile connectivity continues to grow, there is a strong
need to evaluate the performance of Mobile Broadband (MBB) networks. In the
last years, mobile "speed", quantified most commonly by data rate, gained
popularity as the widely accepted metric to describe their performance.
However, there is a lack of consensus on how mobile speed should be measured.
In this paper, we design and implement MONROE-Nettest to dissect mobile speed
measurements, and investigate the effect of different factors on speed
measurements in the complex mobile ecosystem. MONROE-Nettest is built as an
Experiment as a Service (EaaS) on top of the MONROE platform, an open dedicated
platform for experimentation in operational MBB networks. Using MONROE-Nettest,
we conduct a large scale measurement campaign and quantify the effects of
measurement duration, number of TCP flows, and server location on measured
downlink data rate in 6 operational MBB networks in Europe. Our results
indicate that differences in parameter configuration can significantly affect
the measurement results. We provide the complete MONROE-Nettest toolset as open
source and our measurements as open data.
|
[
{
"version": "v1",
"created": "Sat, 21 Oct 2017 14:17:14 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jan 2018 12:43:11 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Midoglu",
"Cise",
""
],
[
"Wimmer",
"Leonhard",
""
],
[
"Lutu",
"Andra",
""
],
[
"Alay",
"Ozgu",
""
],
[
"Griwodz",
"Carsten",
""
]
] |
new_dataset
| 0.988318 |
1801.09718
|
Tomasz Kornuta
|
Mikyas T. Desta and Larry Chen and Tomasz Kornuta
|
Object-based reasoning in VQA
|
10 pages, 15 figures, published as a conference paper at 2018 IEEE
Winter Conf. on Applications of Computer Vision (WACV'2018)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Question Answering (VQA) is a novel problem domain where multi-modal
inputs must be processed in order to solve a task posed in natural language.
As the solutions inherently require combining visual and
natural language processing with abstract reasoning, the problem is considered
as AI-complete. Recent advances indicate that using high-level, abstract facts
extracted from the inputs might facilitate reasoning. Following that direction
we decided to develop a solution combining state-of-the-art object detection
and reasoning modules. The results, achieved on the well-balanced CLEVR
dataset, confirm the promise of the approach and show significant improvements
of a few percent in accuracy on the complex "counting" task.
|
[
{
"version": "v1",
"created": "Mon, 29 Jan 2018 19:24:51 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Desta",
"Mikyas T.",
""
],
[
"Chen",
"Larry",
""
],
[
"Kornuta",
"Tomasz",
""
]
] |
new_dataset
| 0.990473 |
1801.09797
|
{\L}ukasz Kaiser
|
{\L}ukasz Kaiser and Samy Bengio
|
Discrete Autoencoders for Sequence Models
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recurrent models for sequences have been recently successful at many tasks,
especially for language modeling and machine translation. Nevertheless, it
remains challenging to extract good representations from these models. For
instance, even though language has a clear hierarchical structure going from
characters through words to sentences, it is not apparent in current language
models. We propose to improve the representation in sequence models by
augmenting current approaches with an autoencoder that is forced to compress
the sequence through an intermediate discrete latent space. In order to
propagate gradients through this discrete representation, we introduce an
improved semantic hashing technique. We show that this technique performs well
on a newly proposed quantitative efficiency measure. We also analyze latent
codes produced by the model showing how they correspond to words and phrases.
Finally, we present an application of the autoencoder-augmented model to
generating diverse translations.
|
[
{
"version": "v1",
"created": "Mon, 29 Jan 2018 23:36:11 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Kaiser",
"Łukasz",
""
],
[
"Bengio",
"Samy",
""
]
] |
new_dataset
| 0.997186 |
1801.09847
|
Qian-Yi Zhou
|
Qian-Yi Zhou and Jaesik Park and Vladlen Koltun
|
Open3D: A Modern Library for 3D Data Processing
|
http://www.open3d.org
| null | null | null |
cs.CV cs.GR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open3D is an open-source library that supports rapid development of software
that deals with 3D data. The Open3D frontend exposes a set of carefully
selected data structures and algorithms in both C++ and Python. The backend is
highly optimized and is set up for parallelization. Open3D was developed from a
clean slate with a small and carefully considered set of dependencies. It can
be set up on different platforms and compiled from source with minimal effort.
The code is clean, consistently styled, and maintained via a clear code review
mechanism. Open3D has been used in a number of published research projects and
is actively deployed in the cloud. We welcome contributions from the
open-source community.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 04:33:20 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Zhou",
"Qian-Yi",
""
],
[
"Park",
"Jaesik",
""
],
[
"Koltun",
"Vladlen",
""
]
] |
new_dataset
| 0.987974 |
1801.09936
|
Mahsa S. Shahshahani
|
Mahsa Sadat Shahshahani, Mahdi Mohseni, Azadeh Shakery, Heshaam Faili
|
PEYMA: A Tagged Corpus for Persian Named Entities
|
2017, Signal and Data Processing Journal
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal in the NER task is to classify proper nouns of a text into classes
such as person, location, and organization. This is an important preprocessing
step in many NLP tasks such as question-answering and summarization. Although
many research studies have been conducted in this area in English and the
state-of-the-art NER systems have reached performances of higher than 90
percent in terms of F1 measure, there are very few research studies for this
task in Persian. One of the main causes of this may be the lack of a
standard Persian NER dataset for training and testing NER systems. In this
research we create a standard, sufficiently large tagged Persian NER dataset,
which will be distributed freely for research purposes. To construct such a
dataset, we studied the standard NER datasets built for English research and
found that almost all of them are constructed from news texts. We therefore
collected documents from ten news websites. Then, to provide annotators with
guidelines for tagging these documents, we studied the guidelines used to
construct the CoNLL and MUC standard English datasets and set our own
guidelines, taking Persian linguistic rules into account.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 11:30:38 GMT"
}
] | 2018-01-31T00:00:00 |
[
[
"Shahshahani",
"Mahsa Sadat",
""
],
[
"Mohseni",
"Mahdi",
""
],
[
"Shakery",
"Azadeh",
""
],
[
"Faili",
"Heshaam",
""
]
] |
new_dataset
| 0.998504 |
1508.03773
|
Herman Haverkort
|
Herman Haverkort
|
No acute tetrahedron is an 8-reptile
|
updated text, as in press with Discrete Mathematics, Discrete
Mathematics Available online 10 November 2017
| null | null | null |
cs.CG math.MG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An $r$-gentiling is a dissection of a shape into $r \geq 2$ parts which are
all similar to the original shape. An $r$-reptiling is an $r$-gentiling of
which all parts are mutually congruent. This article shows that no acute
tetrahedron is an $r$-gentile or $r$-reptile for any $r < 9$, by showing that
no acute spherical diangle can be dissected into fewer than nine acute spherical
triangles.
|
[
{
"version": "v1",
"created": "Sat, 15 Aug 2015 22:21:42 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jan 2018 10:30:25 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Haverkort",
"Herman",
""
]
] |
new_dataset
| 0.998162 |
1609.08265
|
Sudhir R. Ghorpade
|
Sudhir R. Ghorpade and Prasant Singh
|
Minimum Distance and the Minimum Weight Codewords of Schubert Codes
|
26 pages; Slightly revised version; to appear in Finite Fields Appl
|
Finite Fields Appl. 49 (2018), 1-28
|
10.1016/j.ffa.2017.08.014
| null |
cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider linear codes associated to Schubert varieties in Grassmannians. A
formula for the minimum distance of these codes was conjectured in 2000 and
after having been established in various special cases, it was proved in 2008
by Xiang. We give an alternative proof of this formula. Further, we propose a
characterization of the minimum weight codewords of Schubert codes by
introducing the notion of Schubert decomposable elements of certain exterior
powers. It is shown that codewords corresponding to Schubert decomposable
elements are of minimum weight and also that the converse is true in many
cases. A lower bound, and in some cases, an exact formula, for the number of
minimum weight codewords of Schubert codes is also given. From a geometric
point of view, these results correspond to determining the maximum number of
$\mathbb{F}_q$-rational points that can lie on a hyperplane section of a
Schubert variety in a Grassmannian with its nondegenerate embedding in a
projective subspace of the Pl\"ucker projective space, and also the number of
hyperplanes for which the maximum is attained.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 05:46:33 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2017 10:37:51 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Ghorpade",
"Sudhir R.",
""
],
[
"Singh",
"Prasant",
""
]
] |
new_dataset
| 0.999342 |
1704.06873
|
Keenan Crane
|
Rohan Sawhney and Keenan Crane
|
Boundary First Flattening
|
13 pages
|
ACM Trans. Graph. 37 (1), 2017
|
10.1145/3132705
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A conformal flattening maps a curved surface to the plane without distorting
angles---such maps have become a fundamental building block for problems in
geometry processing, numerical simulation, and computational design. Yet
existing methods provide little direct control over the shape of the flattened
domain, or else demand expensive nonlinear optimization. Boundary first
flattening (BFF) is a linear method for conformal parameterization which is
faster than traditional linear methods, yet provides control and quality
comparable to sophisticated nonlinear schemes. The key insight is that the
boundary data for many conformal mapping problems can be efficiently
constructed via the Cherrier formula together with a pair of Poincare-Steklov
operators; once the boundary is known, the map can be easily extended over the
rest of the domain. Since computation demands only a single factorization of
the real Laplace matrix, the amortized cost is about 50x less than any
previously published technique for boundary-controlled conformal flattening. As
a result, BFF opens the door to real-time editing or fast optimization of
high-resolution maps, with direct control over boundary length or angle. We
show how this method can be used to construct maps with sharp corners, cone
singularities, minimal area distortion, and uniformization over the unit disk;
we also demonstrate for the first time how a surface can be conformally
flattened directly onto any given target shape.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2017 02:43:33 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Jan 2018 15:39:25 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Sawhney",
"Rohan",
""
],
[
"Crane",
"Keenan",
""
]
] |
new_dataset
| 0.992805 |
1710.09918
|
Muhamed Turkanovi\'c
|
Muhamed Turkanovi\'c, Marko H\"olbl, Kristjan Ko\v{s}i\v{c}, Marjan
Heri\v{c}ko, Aida Kami\v{s}ali\'c
|
EduCTX: A blockchain-based higher education credit platform
|
20 pages, 6 figures
| null |
10.1109/ACCESS.2018.2789929
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain technology enables the creation of a decentralized environment
where transactions and data are not under the control of any third party
organization. Any transaction ever completed is recorded in a public ledger in
a verifiable and permanent way. Based on blockchain technology, we propose a
global higher education credit platform, named EduCTX. This platform is based
on the concept of the European Credit Transfer and Accumulation System (ECTS).
It constitutes a globally trusted, decentralized higher education credit and
grading system that can offer a globally unified viewpoint for students and
higher education institutions (HEIs), as well as for other potential
stakeholders such as companies, institutions, and organizations. As a proof of
concept, we present a prototype implementation of the environment, based on the
open-source Ark Blockchain Platform. Based on a globally distributed
peer-to-peer network, EduCTX will process, manage and control ECTX tokens,
which represent credits that students gain for completed courses, analogous to ECTS credits.
HEIs are the peers of the blockchain network. The platform is a first step
towards a more transparent and technologically advanced form of higher
education systems. The EduCTX platform represents the basis of the EduCTX
initiative, which anticipates that various HEIs will join forces to create a
globally efficient, simplified, and ubiquitous environment that avoids
language and administrative barriers. Therefore, we invite and encourage
HEIs to join the EduCTX initiative and the EduCTX blockchain network.
|
[
{
"version": "v1",
"created": "Thu, 26 Oct 2017 21:28:13 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Turkanović",
"Muhamed",
""
],
[
"Hölbl",
"Marko",
""
],
[
"Košič",
"Kristjan",
""
],
[
"Heričko",
"Marjan",
""
],
[
"Kamišalić",
"Aida",
""
]
] |
new_dataset
| 0.991373 |
1711.06922
|
Mikhail Pavlov
|
Mikhail Pavlov, Sergey Kolesnikov, Sergey M. Plis
|
Run, skeleton, run: skeletal model in a physics-based simulation
|
Corrected typos and spelling
| null | null | null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present our approach to solving the physics-based
reinforcement learning challenge "Learning to Run", whose objective is to
train a physiologically-based human model to navigate a complex obstacle
course as quickly as possible. The environment is computationally expensive,
has a high-dimensional continuous action space, and is stochastic. We
benchmark state-of-the-art policy-gradient methods and test several
improvements, such as layer
normalization, parameter noise, action and state reflecting, to stabilize
training and improve its sample-efficiency. We found that the Deep
Deterministic Policy Gradient method is the most efficient method for this
environment and the improvements we have introduced help to stabilize training.
Learned models are able to generalize to new physical scenarios, e.g. different
obstacle courses.
|
[
{
"version": "v1",
"created": "Sat, 18 Nov 2017 20:18:16 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Jan 2018 09:29:07 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Pavlov",
"Mikhail",
""
],
[
"Kolesnikov",
"Sergey",
""
],
[
"Plis",
"Sergey M.",
""
]
] |
new_dataset
| 0.983277 |
1801.06540
|
Santanu Bhattacharya
|
Kabir Rustogi, Santanu Bhattacharya, Margaret Church and Ramesh Raskar
|
What is the right addressing scheme for India?
|
6 pages, 5 figures and 3 tables. Published in the MIT Emerging Worlds
Site
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer-generated addresses are coming to your neighborhood because most
places in the world do not have an assigned meaningful street address. In
India, 80% of the addresses are written with respect to a landmark which
typically lies between 50-1500 meters of the actual address; such addresses
make geolocating very challenging. Accuracy in geolocation is critical for
emergency services to navigate quickly to reach you and for logistics
industries to improve on-time performance and efficient routing of the package
coming to your house. In this paper, we explore suggested addressing schemes
for India, to determine what use cases and potential technologies will have the
best adoption and therefore, greatest impact.
|
[
{
"version": "v1",
"created": "Sat, 20 Jan 2018 03:34:18 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Jan 2018 07:08:48 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Rustogi",
"Kabir",
""
],
[
"Bhattacharya",
"Santanu",
""
],
[
"Church",
"Margaret",
""
],
[
"Raskar",
"Ramesh",
""
]
] |
new_dataset
| 0.998898 |
1801.09042
|
Yipin Zhou
|
Yipin Zhou and Yale Song and Tamara L. Berg
|
Image2GIF: Generating Cinemagraphs using Recurrent Deep Q-Networks
|
WACV2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a still photograph, one can imagine how dynamic objects might move
against a static background. This idea has been actualized in the form of
cinemagraphs, where the motion of particular objects within a still image is
repeated, giving the viewer a sense of animation. In this paper, we learn
computational models that can generate cinemagraph sequences automatically
given a single image. To generate cinemagraphs, we explore combining generative
models with a recurrent neural network and deep Q-networks to enhance the power
of sequence generation. To enable and evaluate these models we make use of two
datasets, one synthetically generated and the other containing real video
generated cinemagraphs. Both qualitative and quantitative evaluations
demonstrate the effectiveness of our models on the synthetic and real datasets.
|
[
{
"version": "v1",
"created": "Sat, 27 Jan 2018 05:48:20 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Zhou",
"Yipin",
""
],
[
"Song",
"Yale",
""
],
[
"Berg",
"Tamara L.",
""
]
] |
new_dataset
| 0.999674 |
1801.09184
|
Yancheng Bai
|
Yancheng Bai, Huijuan Xu, Kate Saenko, Bernard Ghanem
|
Contextual Multi-Scale Region Convolutional 3D Network for Activity
Detection
|
10 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activity detection is a fundamental problem in computer vision. Detecting
activities of different temporal scales is particularly challenging. In this
paper, we propose the contextual multi-scale region convolutional 3D network
(CMS-RC3D) for activity detection. To deal with the inherent temporal scale
variability of activity instances, the temporal feature pyramid is used to
represent activities of different temporal scales. On each level of the
temporal feature pyramid, an activity proposal detector and an activity
classifier are learned to detect activities of specific temporal scales.
Temporal contextual information is fused into activity classifiers for better
recognition. More importantly, the entire model at all levels can be trained
end-to-end. Our CMS-RC3D detector can deal with activities at all temporal
scale ranges with only a single pass through the backbone network. We test our
detector on two public activity detection benchmarks, THUMOS14 and ActivityNet.
Extensive experiments show that the proposed CMS-RC3D detector outperforms
state-of-the-art methods on THUMOS14 by a substantial margin and achieves
comparable results on ActivityNet despite using a shallow feature extractor.
|
[
{
"version": "v1",
"created": "Sun, 28 Jan 2018 05:46:01 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Bai",
"Yancheng",
""
],
[
"Xu",
"Huijuan",
""
],
[
"Saenko",
"Kate",
""
],
[
"Ghanem",
"Bernard",
""
]
] |
new_dataset
| 0.978841 |
1801.09275
|
Zeyu Guo
|
Zeyu Guo, Nitin Saxena, Amit Sinhababu
|
Algebraic dependencies and PSPACE algorithms in approximative complexity
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Testing whether a set $\mathbf{f}$ of polynomials has an algebraic dependence
is a basic problem with several applications. The polynomials are given as
algebraic circuits. The algebraic independence testing question is wide open
over finite fields (Dvir, Gabizon, Wigderson, FOCS'07). The best complexity known is
NP$^{\#\rm P}$ (Mittmann, Saxena, Scheiblechner, Trans.AMS'14). In this work we
put the problem in AM $\cap$ coAM. In particular, dependence testing is
unlikely to be NP-hard and joins the league of problems of "intermediate"
complexity, e.g. graph isomorphism and integer factoring. Our proof method is
algebro-geometric-- estimating the size of the image/preimage of the polynomial
map $\mathbf{f}$ over the finite field. A gap in this size is utilized in the
AM protocols.
Next, we study the open question of testing whether every annihilator of
$\mathbf{f}$ has zero constant term (Kayal, CCC'09). We give a geometric
characterization using Zariski closure of the image of $\mathbf{f}$;
introducing a new problem called approximate polynomials satisfiability (APS).
We show that APS is NP-hard and, using projective algebraic-geometry ideas, we
put APS in PSPACE (prior best was EXPSPACE via Gröbner basis computation). As
an unexpected application of this to approximative complexity theory we get--
Over any field, hitting-set for $\overline{\rm VP}$ can be designed in PSPACE.
This solves an open problem posed in (Mulmuley, FOCS'12, J.AMS 2017); greatly
mitigating the GCT Chasm (exponentially in terms of space complexity).
|
[
{
"version": "v1",
"created": "Sun, 28 Jan 2018 19:56:40 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Guo",
"Zeyu",
""
],
[
"Saxena",
"Nitin",
""
],
[
"Sinhababu",
"Amit",
""
]
] |
new_dataset
| 0.995965 |
1801.09482
|
Jekan Thangavelautham
|
H. Kalita, S. Schwartz, E. Asphaug, J. Thangavelautham
|
Mobility and Science operations On An Asteroid Using a Hopping Small
Spacecraft on Stilts
|
13 pages, 9 figures, to Appear at AAS GNC 2018/Advances in
Astronautical Sciences 2018
| null | null | null |
cs.RO astro-ph.EP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are thousands of asteroids in near-Earth space and millions in the Main
Belt. They are diverse in physical properties and composition and are time
capsules of the early solar system. This makes them strategic locations for
planetary science, resource mining, planetary defense/security and as
interplanetary depots and communication relays. Landing on a small asteroid and
manipulating its surface materials remains a major unsolved challenge fraught
with high risk. The asteroid surface may contain everything from hard boulders
to soft regolith loosely held by cohesion and very low-gravity. Upcoming
missions Hayabusa II and OSIRIS-REx will perform touch and go operations to
mitigate the risks of landing on an asteroid. This limits the contact time and
requires fuel expenditure for hovering. An important unknown is the problem of
getting stuck or making a hard impact with the surface. The Spacecraft
Penetrator for Increasing Knowledge of NEOs (SPIKE) mission concept will
utilize a small-satellite bus that is propelled using a xenon-fueled ion engine
and will contain an extendable, low-mass, high-strength boom with a tip
containing force-moment sensors. SPIKE will enable contact with the asteroid
surface, where it will perform detailed regolith analysis and seismology as
well as penetrometry, while keeping the main spacecraft bus at a safe distance.
Using one or more long stilts frees the spacecraft from having to hover above
the asteroid and thus substantially reduces or eliminates fuel use when doing
science operations. This enables much longer missions that include a series of
hops to multiple locations on the small-body surface.
|
[
{
"version": "v1",
"created": "Mon, 29 Jan 2018 12:55:17 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Kalita",
"H.",
""
],
[
"Schwartz",
"S.",
""
],
[
"Asphaug",
"E.",
""
],
[
"Thangavelautham",
"J.",
""
]
] |
new_dataset
| 0.998015 |
1801.09485
|
Mohammad Shehab
|
Mohammad Shehab, Hirley Alves, and Matti Latva-aho
|
Ultra Reliable Communication via Opportunistic ARQ Transmission in
Cognitive Networks
|
accepted in WCNC 2018
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel opportunistic spectrum sharing scheme that
applies the ARQ protocol to achieve ultra reliability in the finite blocklength
regime. A primary user shares its licensed spectrum with a secondary user, where
both communicate with the same base station. The base station applies ARQ with
the secondary user, which possesses a limited number of trials to transmit each
packet. We resort to the interweave model in which the secondary user senses
the primary user activity and accesses the channel with access probabilities
which depend on the primary user arrival rate and the number of available
trials. We characterize the secondary user access probabilities and transmit
power in order to achieve target error constraints for both users. Furthermore,
we analyze the primary user performance in terms of outage probability and
delay. The results show that our proposed scheme outperforms the open loop and
non-opportunistic scenarios in terms of secondary user transmit power saving
and primary user reliability.
|
[
{
"version": "v1",
"created": "Mon, 29 Jan 2018 13:03:04 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Shehab",
"Mohammad",
""
],
[
"Alves",
"Hirley",
""
],
[
"Latva-aho",
"Matti",
""
]
] |
new_dataset
| 0.999301 |
1801.09556
|
Harsh Thakkar
|
Harsh Thakkar and Dharmen Punjani and Jens Lehmann and S\"oren Auer
|
Killing Two Birds with One Stone -- Querying Property Graphs using
SPARQL via GREMLINATOR
|
4 pages, 8 figures, DEMO paper submission. arXiv admin note: text
overlap with arXiv:1801.02911
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge graphs have become popular over the past decade and frequently rely
on the Resource Description Framework (RDF) or Property Graph (PG) databases as
data models. However, the query languages for these two data models -- SPARQL
for RDF and the PG traversal language Gremlin -- lack interoperability.
We present Gremlinator, the first translator from SPARQL -- the W3C
standardized language for RDF -- to Gremlin -- a popular property graph
traversal language. Gremlinator translates SPARQL queries to Gremlin path
traversals for executing graph pattern matching queries over graph databases.
This allows a user, who is well versed in SPARQL, to access and query a wide
variety of Graph Data Management Systems (DMSs) avoiding the steep learning
curve for adapting to a new Graph Query Language (GQL). Gremlin is a graph
computing system-agnostic traversal language (covering both OLTP graph database
or OLAP graph processors), making it a desirable choice for supporting
interoperability for querying Graph DMSs. Gremlinator currently supports the
translation of a subset of SPARQL 1.0, specifically the SPARQL SELECT queries.
|
[
{
"version": "v1",
"created": "Thu, 25 Jan 2018 23:15:36 GMT"
}
] | 2018-01-30T00:00:00 |
[
[
"Thakkar",
"Harsh",
""
],
[
"Punjani",
"Dharmen",
""
],
[
"Lehmann",
"Jens",
""
],
[
"Auer",
"Sören",
""
]
] |
new_dataset
| 0.979992 |
1709.09247
|
Chamika Liyanagedera
|
Chamika M. Liyanagedera, Abhronil Sengupta, Akhilesh Jaiswal, and
Kaushik Roy
|
Stochastic Spiking Neural Networks Enabled by Magnetic Tunnel Junctions:
From Nontelegraphic to Telegraphic Switching Regimes
|
12 pages, 14 Figures, 1 Table
|
Phys. Rev. Applied 8, 064017 (2017)
|
10.1103/PhysRevApplied.8.064017
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic spiking neural networks based on nanoelectronic spin devices can
be a possible pathway to achieving "brainlike" compact and energy-efficient
cognitive intelligence. These computational models attempt to exploit the
intrinsic device stochasticity of nanoelectronic synaptic or neural components
to perform learning or inference. However, there has been limited analysis of
the scaling effect of stochastic spin devices and its impact on the operation
of such stochastic networks at the system level. This work attempts to explore
the design space and analyze the performance of nanomagnet-based stochastic
neuromorphic computing architectures for magnets with different barrier
heights. We illustrate how the underlying network architecture must be modified
to account for the random telegraphic switching behavior displayed by magnets
with low barrier heights as they are scaled into the superparamagnetic regime.
We perform a device-to-system-level analysis on a deep neural-network
architecture for a digit-recognition problem on the MNIST data set.
|
[
{
"version": "v1",
"created": "Tue, 26 Sep 2017 20:20:25 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jan 2018 16:27:08 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Liyanagedera",
"Chamika M.",
""
],
[
"Sengupta",
"Abhronil",
""
],
[
"Jaiswal",
"Akhilesh",
""
],
[
"Roy",
"Kaushik",
""
]
] |
new_dataset
| 0.97375 |
1801.08565
|
Ahmad Biniaz
|
Therese Biedl, Ahmad Biniaz, Robert Cummings, Anna Lubiw, Florin
Manea, Dirk Nowotka, and Jeffrey Shallit
|
Rollercoasters and Caterpillars
|
17 pages
| null | null | null |
cs.CG cs.DM cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A rollercoaster is a sequence of real numbers for which every maximal
contiguous subsequence, that is increasing or decreasing, has length at least
three. By translating this sequence to a set of points in the plane, a
rollercoaster can be defined as a polygonal path for which every maximal
sub-path, with positive- or negative-slope edges, has at least three points.
Given a sequence of distinct real numbers, the rollercoaster problem asks for a
maximum-length subsequence that is a rollercoaster. It was conjectured that
every sequence of $n$ distinct real numbers contains a rollercoaster of length
at least $\lceil n/2\rceil$ for $n>7$, while the best known lower bound is
$\Omega(n/\log n)$. In this paper we prove this conjecture. Our proof is
constructive and implies a linear-time algorithm for computing a rollercoaster
of this length. Extending the $O(n\log n)$-time algorithm for computing a
longest increasing subsequence, we show how to compute a maximum-length
rollercoaster within the same time bound. A maximum-length rollercoaster in a
permutation of $\{1,\dots,n\}$ can be computed in $O(n \log \log n)$ time.
The search for rollercoasters was motivated by orthogeodesic point-set
embedding of caterpillars. A caterpillar is a tree such that deleting the
leaves gives a path, called the spine. A top-view caterpillar is one of degree
4 such that the two leaves adjacent to each vertex lie on opposite sides of the
spine. As an application of our result on rollercoasters, we are able to find a
planar drawing of every $n$-node top-view caterpillar on every set of
$\frac{25}{3}n$ points in the plane, such that each edge is an orthogonal path
with one bend. This improves the previous best known upper bound on the number
of required points, which is $O(n \log n)$. We also show that such a drawing
can be obtained in linear time, provided that the points are given in sorted
order.
|
[
{
"version": "v1",
"created": "Thu, 25 Jan 2018 19:14:37 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Biedl",
"Therese",
""
],
[
"Biniaz",
"Ahmad",
""
],
[
"Cummings",
"Robert",
""
],
[
"Lubiw",
"Anna",
""
],
[
"Manea",
"Florin",
""
],
[
"Nowotka",
"Dirk",
""
],
[
"Shallit",
"Jeffrey",
""
]
] |
new_dataset
| 0.999611 |
1801.08710
|
Sathya Chinnathambi
|
C Sathya, S Agilan, A G Aruna
|
Enhancing Byzantine fault tolerance using MD5 checksum and delay
variation in Cloud services
|
22 pages
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud computing management is beyond typical human narratives. However, if a
virtual system is not effectively designed to tolerate Byzantine faults, it
could lead to a faultily executed mission rather than a cloud crash. The cloud
could recover from the crash, but it could not recover from the loss of
credibility. Moreover, no amount of replication or fault-handling measures can
be helpful in facing a Byzantine fault unless the virtual system is designed to
detect, tolerate and eliminate such faults. However, research efforts that are
made to address Byzantine faults have not provided convincing solutions,
largely due to their limited capabilities in detecting such faults. As a
result, in this paper the Cloud system is modeled as a discrete system to
determine the virtual system behavior at varying time intervals. A delay
variation variable, as a measure of deviation from the expected processing delay
associated with the virtual nodes, takes values from the set {low, normal,
high, extreme}. Similarly, a checksum error variable, which is computed even
for intra nodes that have no attachment to the TCP/IP stack, takes values from
the set {no error, error}. These conditions are then represented by the
occurrence of faulty events that cause specific component mode transition from
fail safe to fail-stop or byzantine prone.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 08:25:22 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Sathya",
"C",
""
],
[
"Agilan",
"S",
""
],
[
"Aruna",
"A G",
""
]
] |
new_dataset
| 0.995574 |
1801.08754
|
Clare Llewellyn
|
Clare Llewellyn, Laura Cram, Adrian Favero, Robin L. Hill
|
For Whom the Bell Trolls: Troll Behaviour in the Twitter Brexit Debate
| null | null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a review into automated and malicious activity Twitter released a list of
accounts that they believed were connected to state sponsored manipulation of
the 2016 American Election. This list details 2,752 accounts Twitter believed
to be controlled by Russian operatives. In the absence of a similar list of
operatives active within the debate on the 2016 UK referendum on membership of
the European Union (Brexit) we investigated the behaviour of the same American
Election focused accounts in the production of content related to the UK-EU
referendum. We found that within our dataset we had Brexit-related content from
419 of these accounts, leading to 3,485 identified tweets gathered between the
29th August 2015 and 3rd October 2017. The behaviour of the accounts altered
radically on the day of the referendum, shifting from generalised disruptive
tweeting to retweeting each other in order to amplify content produced by other
troll accounts. We also demonstrate that, while these accounts are, in general,
designed to resemble American citizens, accounts created in 2016 often
contained German locations and terms in the user profiles.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 11:02:26 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Llewellyn",
"Clare",
""
],
[
"Cram",
"Laura",
""
],
[
"Favero",
"Adrian",
""
],
[
"Hill",
"Robin L.",
""
]
] |
new_dataset
| 0.999717 |
1801.08811
|
Guohua Zhang
|
Guohua Zhang, Yulin Hu and Qinwei He
|
Constructing LDPC Codes from Partition and Latin-Style Splicing
|
7 pages, 2 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel method guaranteeing nondecreasing girth is presented for constructing
longer low-density parity-check (LDPC) codes from shorter ones. The
parity-check matrix of a shorter base code is decomposed into N (N>=2)
non-overlapping components with the same size. Then, these components are
combined together to form the parity-check matrix of a longer code, according
to a given N*N Latin square. To illustrate this method, longer quasi-cyclic
(QC) LDPC codes are obtained with girth at least eight and satisfactory
performance, from shorter QC-LDPC codes with girth eight but poor performance.
The proposed method naturally includes several well-known methods as special
cases, but is much more general compared with these existing approaches.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 13:59:43 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Zhang",
"Guohua",
""
],
[
"Hu",
"Yulin",
""
],
[
"He",
"Qinwei",
""
]
] |
new_dataset
| 0.9964 |
1801.08823
|
Anoop Aroor
|
Anoop Aroor, Susan L. Epstein and Raj Korpan
|
MengeROS: a Crowd Simulation Tool for Autonomous Robot Navigation
|
In AAAI 2017 Fall symposium on AI for HRI
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While effective navigation in large, crowded environments is essential for an
autonomous robot, preliminary testing of algorithms to support it requires
simulation across a broad range of crowd scenarios. Most available simulation
tools provide either realistic crowds without robots or realistic robots
without realistic crowds. This paper introduces MengeROS, a 2-D simulator that
realistically integrates multiple robots and crowds. MengeROS provides a broad
range of settings in which to test the capabilities and performance of
navigation algorithms designed for large crowded environments.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 14:31:14 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Aroor",
"Anoop",
""
],
[
"Epstein",
"Susan L.",
""
],
[
"Korpan",
"Raj",
""
]
] |
new_dataset
| 0.961747 |
1801.08825
|
Arnim Bleier
|
Sebastian Stier and Arnim Bleier and Haiko Lietz and Markus Strohmaier
|
Election campaigning on social media: Politicians, audiences and the
mediation of political communication on Facebook and Twitter
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Although considerable research has concentrated on online campaigning, it is
still unclear how politicians use different social media platforms in political
communication. Focusing on the German federal election campaign 2013, this
article investigates whether election candidates address the topics most
important to the mass audience and to which extent their communication is
shaped by the characteristics of Facebook and Twitter. Based on open-ended
responses from a representative survey conducted during the election campaign,
we train a human-interpretable Bayesian language model to identify political
topics. Applying the model to social media messages of candidates and their
direct audiences, we find that both prioritize different topics than the mass
audience. The analysis also shows that politicians use Facebook and Twitter for
different purposes. We relate the various findings to the mediation of
political communication on social media induced by the particular
characteristics of audiences and sociotechnical environments.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 14:33:12 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Stier",
"Sebastian",
""
],
[
"Bleier",
"Arnim",
""
],
[
"Lietz",
"Haiko",
""
],
[
"Strohmaier",
"Markus",
""
]
] |
new_dataset
| 0.999272 |
1801.08841
|
Per-Arne Andersen
|
Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo
|
FlashRL: A Reinforcement Learning Platform for Flash Games
|
12 Pages, Proceedings of the 30th Norwegian Informatics Conference,
Oslo, Norway 2017
| null | null | null |
cs.AI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning (RL) is a research area that has blossomed
tremendously in recent years and has shown remarkable potential in, among other
things, successfully playing computer games. However, only a few game
platforms exist that provide the diversity in tasks and state space needed to
advance RL algorithms. The existing platforms offer RL access to Atari and a few
web-based games, but no platform fully exposes access to Flash games. This is
unfortunate because applying RL to Flash games has the potential to push the
research of RL algorithms forward.
This paper introduces the Flash Reinforcement Learning platform (FlashRL)
which attempts to fill this gap by providing an environment for thousands of
Flash games on a novel platform for Flash automation. It opens up easy
experimentation with RL algorithms for Flash games, which has previously been
challenging. The platform shows excellent performance with as little as 5% CPU
utilization on consumer hardware. It shows promising results for novel
reinforcement learning algorithms.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 15:12:31 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Andersen",
"Per-Arne",
""
],
[
"Goodwin",
"Morten",
""
],
[
"Granmo",
"Ole-Christoffer",
""
]
] |
new_dataset
| 0.97428 |
1801.08867
|
Marco Baldi
|
Marco Baldi, Alessandro Barenghi, Franco Chiaraluce, Gerardo Pelosi,
Paolo Santini
|
LEDAkem: a post-quantum key encapsulation mechanism based on QC-LDPC
codes
|
21 pages, 3 tables
| null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a new code-based key encapsulation mechanism (KEM) called
LEDAkem. It is built on the Niederreiter cryptosystem and relies on
quasi-cyclic low-density parity-check codes as secret codes, providing high
decoding speeds and compact keypairs. LEDAkem uses ephemeral keys to foil known
statistical attacks, and takes advantage of a new decoding algorithm that
provides faster decoding than the classical bit-flipping decoder commonly
adopted in this kind of system. The main attacks against LEDAkem are
investigated, taking into account quantum speedups. Some instances of LEDAkem
are designed to achieve different security levels against classical and quantum
computers. Some performance figures obtained through an efficient C99
implementation of LEDAkem are provided.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 15:56:15 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Baldi",
"Marco",
""
],
[
"Barenghi",
"Alessandro",
""
],
[
"Chiaraluce",
"Franco",
""
],
[
"Pelosi",
"Gerardo",
""
],
[
"Santini",
"Paolo",
""
]
] |
new_dataset
| 0.999658 |
1801.08938
|
Oliver Rutishauser
|
Oliver Rutishauser
|
Simulation for L3 Volumetric Attack Detection
|
8 pages with figures, code, and conclusions
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The detection of a volumetric attack involves collecting statistics on the
network traffic, and identifying suspicious activities. We assume that
available statistical information includes the number of packets and the number
of bytes passed per flow. We apply methods of machine learning to detect
malicious traffic. A prototype project is implemented as a module for the
Floodlight controller. The prototype was tested on the Mininet simulation
platform. The simulated topology includes a number of edge switches, a
connected graph of core switches, and a number of server and user hosts. The
server hosts run simple web servers. The user hosts simulate web clients. The
controller employs Dijkstra's algorithm to find the best flow in the graph. The
controller periodically polls the edge switches and provides current and
historical statistics on each active flow. The streaming analytics evaluates
the traffic volume and detects volumetric attacks.
|
[
{
"version": "v1",
"created": "Fri, 26 Jan 2018 18:59:51 GMT"
}
] | 2018-01-29T00:00:00 |
[
[
"Rutishauser",
"Oliver",
""
]
] |
new_dataset
| 0.998853 |
1705.07262
|
Daniel Hein
|
Daniel Hein, Steffen Udluft, Michel Tokic, Alexander Hentschel, Thomas
A. Runkler, Volkmar Sterzing
|
Batch Reinforcement Learning on the Industrial Benchmark: First
Experiences
| null |
2017 International Joint Conference on Neural Networks (IJCNN),
Anchorage, AK, 2017, pp. 4214-4221
|
10.1109/IJCNN.2017.7966389
| null |
cs.LG cs.AI cs.NE cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Particle Swarm Optimization Policy (PSO-P) has been recently introduced
and proven to produce remarkable results on interacting with academic
reinforcement learning benchmarks in an off-policy, batch-based setting. To
further investigate the properties and feasibility on real-world applications,
this paper investigates PSO-P on the so-called Industrial Benchmark (IB), a
novel reinforcement learning (RL) benchmark that aims at being realistic by
including a variety of aspects found in industrial applications, like
continuous state and action spaces, a high dimensional, partially observable
state space, delayed effects, and complex stochasticity. The experimental
results of PSO-P on IB are compared to results of closed-form control policies
derived from the model-based Recurrent Control Neural Network (RCNN) and the
model-free Neural Fitted Q-Iteration (NFQ). Experiments show that PSO-P is not
only of interest for academic benchmarks, but also for real-world industrial
applications, since it also yielded the best performing policy in our IB
setting. Compared to other well established RL techniques, PSO-P produced
outstanding results in performance and robustness, requiring only a relatively
low amount of effort in finding adequate parameters or making complex design
decisions.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2017 05:31:52 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jul 2017 15:34:21 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Hein",
"Daniel",
""
],
[
"Udluft",
"Steffen",
""
],
[
"Tokic",
"Michel",
""
],
[
"Hentschel",
"Alexander",
""
],
[
"Runkler",
"Thomas A.",
""
],
[
"Sterzing",
"Volkmar",
""
]
] |
new_dataset
| 0.981547 |
1801.07507
|
Yosi Mass
|
Yosi Mass, Lili Kotlerman, Shachar Mirkin, Elad Venezian, Gera
Witzling, Noam Slonim
|
What did you Mention? A Large Scale Mention Detection Benchmark for
Spoken and Written Text
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a large, high-quality benchmark for the evaluation of Mention
Detection tools. The benchmark contains annotations of both named entities as
well as other types of entities, annotated on different types of text, ranging
from clean text taken from Wikipedia, to noisy spoken data. The benchmark was
built through a highly controlled crowdsourcing process to ensure its quality.
We describe the benchmark, the process and the guidelines that were used to
build it. We then demonstrate the results of a state-of-the-art system running
on that benchmark.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 12:22:52 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jan 2018 11:35:27 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Jan 2018 10:14:28 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Mass",
"Yosi",
""
],
[
"Kotlerman",
"Lili",
""
],
[
"Mirkin",
"Shachar",
""
],
[
"Venezian",
"Elad",
""
],
[
"Witzling",
"Gera",
""
],
[
"Slonim",
"Noam",
""
]
] |
new_dataset
| 0.956631 |
1801.07746
|
Behzad Golshan
|
Akari Asai, Sara Evensen, Behzad Golshan, Alon Halevy, Vivian Li,
Andrei Lopatenko, Daniela Stepanov, Yoshihiko Suhara, Wang-Chiew Tan, Yinzhan
Xu
|
HappyDB: A Corpus of 100,000 Crowdsourced Happy Moments
|
Typos fixed
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The science of happiness is an area of positive psychology concerned with
understanding what behaviors make people happy in a sustainable fashion.
Recently, there has been interest in developing technologies that help
incorporate the findings of the science of happiness into users' daily lives by
steering them towards behaviors that increase happiness. With the goal of
building technology that can understand how people express their happy moments
in text, we crowd-sourced HappyDB, a corpus of 100,000 happy moments that we
make publicly available. This paper describes HappyDB and its properties, and
outlines several important NLP problems that can be studied with the help of
the corpus. We also apply several state-of-the-art analysis techniques to
analyze HappyDB. Our results demonstrate the need for deeper NLP techniques to
be developed which makes HappyDB an exciting resource for follow-on research.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 19:49:58 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Jan 2018 18:56:35 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Asai",
"Akari",
""
],
[
"Evensen",
"Sara",
""
],
[
"Golshan",
"Behzad",
""
],
[
"Halevy",
"Alon",
""
],
[
"Li",
"Vivian",
""
],
[
"Lopatenko",
"Andrei",
""
],
[
"Stepanov",
"Daniela",
""
],
[
"Suhara",
"Yoshihiko",
""
],
[
"Tan",
"Wang-Chiew",
""
],
[
"Xu",
"Yinzhan",
""
]
] |
new_dataset
| 0.99956 |
1801.07962
|
Florent Altche
|
Florent Altch\'e, Arnaud de La Fortelle
|
An LSTM Network for Highway Trajectory Prediction
|
Presented at IEEE ITSC 2017
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to drive safely and efficiently on public roads, autonomous vehicles
will have to understand the intentions of surrounding vehicles, and adapt their
own behavior accordingly. While experienced human drivers are generally good at
inferring other vehicles' motion up to a few seconds into the future, most
current Advanced Driving Assistance Systems (ADAS) are unable to perform such
medium-term forecasts, and are usually limited to high-likelihood situations
such as emergency braking. In this article, we present a first step towards
consistent trajectory prediction by introducing a long short-term memory (LSTM)
neural network, which is capable of accurately predicting future longitudinal
and lateral trajectories for vehicles on highways. Unlike previous work focusing
on a low number of trajectories collected from a few drivers, our network was
trained and validated on the NGSIM US-101 dataset, which contains a total of
800 hours of recorded trajectories in various traffic densities, representing
more than 6000 individual drivers.
|
[
{
"version": "v1",
"created": "Wed, 24 Jan 2018 12:45:01 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Altché",
"Florent",
""
],
[
"de La Fortelle",
"Arnaud",
""
]
] |
new_dataset
| 0.998676 |
1801.08234
|
Akshay Rangesh
|
Akshay Rangesh and Mohan M. Trivedi
|
When Vehicles See Pedestrians with Phones:A Multi-Cue Framework for
Recognizing Phone-based Activities of Pedestrians
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The intelligent vehicle community has devoted considerable efforts to model
driver behavior, and in particular to detect and overcome driver distraction in
an effort to reduce accidents caused by driver negligence. However, as the
domain increasingly shifts towards autonomous and semi-autonomous solutions,
the driver is no longer integral to the decision making process, indicating a
need to refocus efforts elsewhere. To this end, we propose to study pedestrian
distraction instead. In particular, we focus on detecting pedestrians who are
engaged in secondary activities involving their cellphones and similar handheld
multimedia devices from a purely vision-based standpoint. To achieve this
objective, we propose a pipeline incorporating articulated human pose
estimation, followed by a soft object label transfer from an ensemble of
exemplar SVMs trained on the nearest neighbors in pose feature space. We
additionally incorporate head gaze features and prior pose information to carry
out cellphone related pedestrian activity recognition. Finally, we offer a
method to reliably track the articulated pose of a pedestrian through a
sequence of images using a particle filter with a Gaussian Process Dynamical
Model (GPDM), which can then be used to estimate sequentially varying activity
scores at a very low computational cost. The entire framework is fast
(especially for sequential data) and accurate, and easily extensible to include
other secondary activities and sources of distraction.
|
[
{
"version": "v1",
"created": "Wed, 24 Jan 2018 23:14:58 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Rangesh",
"Akshay",
""
],
[
"Trivedi",
"Mohan M.",
""
]
] |
new_dataset
| 0.999559 |
1801.08354
|
Mustafa A. Mustafa
|
Aysajan Abidin, Abdelrahaman Aly, Sara Cleemput, Mustafa A. Mustafa
|
Secure and Privacy-Friendly Local Electricity Trading and Billing in
Smart Grid
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes two decentralised, secure and privacy-friendly protocols
for local electricity trading and billing, respectively. The trading protocol
employs a bidding algorithm based upon secure multiparty computations and
allows users to trade their excess electricity among themselves. The bid
selection and calculation of the trading price are performed in a decentralised
and oblivious manner. The billing protocol is based on a simple
privacy-friendly aggregation technique that allows suppliers to compute their
customers' monthly bills without learning their fine-grained electricity
consumption data. We also implemented and tested the performance of the trading
protocol with realistic data. Our results show that it can be performed for
2500 bids in less than five minutes in the on-line phase, showing its
feasibility for a typical electricity trading period of 30 minutes.
|
[
{
"version": "v1",
"created": "Thu, 25 Jan 2018 11:18:37 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Abidin",
"Aysajan",
""
],
[
"Aly",
"Abdelrahaman",
""
],
[
"Cleemput",
"Sara",
""
],
[
"Mustafa",
"Mustafa A.",
""
]
] |
new_dataset
| 0.993205 |
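The billing protocol in the abstract above rests on a privacy-friendly aggregation: the supplier learns only the total consumption, never the fine-grained readings. A minimal sketch of that idea using additive masking follows; the masking scheme and all names here are illustrative assumptions, not the paper's actual protocol.

```python
import random

def make_masks(n_users, modulus=2**32, seed=42):
    """Generate per-user masks that sum to zero mod `modulus`."""
    rng = random.Random(seed)
    masks = [rng.randrange(modulus) for _ in range(n_users - 1)]
    masks.append((-sum(masks)) % modulus)  # last mask cancels the rest
    return masks

def aggregate(readings, masks, modulus=2**32):
    """Supplier sums masked readings; the masks cancel, revealing only the total."""
    masked = [(r + m) % modulus for r, m in zip(readings, masks)]
    return sum(masked) % modulus

readings = [230, 512, 148, 305]   # fine-grained consumption values (toy data)
masks = make_masks(len(readings))
total = aggregate(readings, masks)
print(total)  # 1195 == sum(readings)
```

Each masked reading individually looks uniformly random, yet the sum recovers the exact total; a real deployment would also need a way to distribute the correlated masks securely.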
1801.08405
|
Wacharin Wichiramala
|
Wacharin Wichiramala
|
A smaller cover for closed unit curves
|
In the appendix, the computer code for numerical optimization is
provided together with explanation. The link to the actual file, a
Mathematica notebook, is at
www.math.sc.chula.ac.th/~wacharin/optimization/closed%20arcs
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Forty years ago Schaer and Wetzel showed that a $\frac{1}{\pi}\times\frac
{1}{2\pi}\sqrt{\pi^{2}-4}$ rectangle, whose area is about $0.122\,74,$ is the
smallest rectangle that is a cover for the family of all closed unit arcs. More
recently F\"{u}redi and Wetzel showed that one corner of this rectangle can be
clipped to form a pentagonal cover having area $0.11224$ for this family of
curves. Here we show that then the opposite corner can be clipped to form a
hexagonal cover of area less than $0.11023$ for this same family. This
irregular hexagon is the smallest cover currently known for this family of
arcs.
|
[
{
"version": "v1",
"created": "Sat, 20 Jan 2018 07:26:12 GMT"
}
] | 2018-01-26T00:00:00 |
[
[
"Wichiramala",
"Wacharin",
""
]
] |
new_dataset
| 0.995292 |
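The rectangle dimensions quoted in the abstract above can be checked numerically: a $\frac{1}{\pi}\times\frac{1}{2\pi}\sqrt{\pi^{2}-4}$ rectangle does have area about 0.12274.

```python
import math

# Schaer-Wetzel rectangle: width 1/pi, height sqrt(pi^2 - 4) / (2*pi)
width = 1 / math.pi
height = math.sqrt(math.pi**2 - 4) / (2 * math.pi)
area = width * height
print(round(area, 5))  # 0.12274
```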
1702.05265
|
Franz J. Brandenburg
|
Franz J. Brandenburg
|
T-Shape Visibility Representations of 1-Planar Graphs
|
26 pages, 8 figures
| null |
10.1016/j.comgeo.2017.10.007
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A shape visibility representation displays a graph so that each vertex is
represented by an orthogonal polygon of a particular shape and for each edge
there is a horizontal or vertical line of sight between the polygons assigned
to its endvertices. Special shapes are rectangles, L, T, E and H-shapes, and
caterpillars. A flat rectangle is a horizontal bar of height $\epsilon>0$. A
graph is 1-planar if there is a drawing in the plane such that each edge is
crossed at most once and is IC-planar if in addition no two crossing edges
share a vertex.
We show that every IC-planar graph has a flat rectangle visibility
representation and that every 1-planar graph has a T-shape visibility
representation. The representations use quadratic area and can be computed in
linear time from a given embedding.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2017 09:19:02 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Brandenburg",
"Franz J.",
""
]
] |
new_dataset
| 0.992583 |
1801.07741
|
Kyong-Tak Cho
|
Kyong-Tak Cho, Yuseung Kim, Kang G. Shin
|
Who Killed My Parked Car?
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We find that the conventional belief of vehicle cyber attacks and their
defenses---attacks are feasible and thus defenses are required only when the
vehicle's ignition is turned on---does not hold. We verify this fact by
discovering and applying two new practical and important attacks: battery-drain
and Denial-of-Body-control (DoB). The former can drain the vehicle battery
while the latter can prevent the owner from starting or even opening/entering
his car, when either or both attacks are mounted with the ignition off. We
first analyze how operation (e.g., normal, sleep, listen) modes of ECUs are
defined in various in-vehicle network standards and how they are implemented in
the real world. From this analysis, we discover that an adversary can exploit
the wakeup function of in-vehicle networks---which was originally designed for
enhanced user experience/convenience (e.g., remote diagnosis, remote
temperature control)---as an attack vector. Ironically, a core battery-saving
feature in in-vehicle networks makes it easier for an attacker to wake up ECUs
and, therefore, mount and succeed in battery-drain and/or DoB attacks. Via
extensive experimental evaluations on various real vehicles, we show that by
mounting the battery-drain attack, the adversary can increase the average
battery consumption by at least 12.57x, drain the car battery within a few
hours or days, and therefore immobilize/cripple the vehicle. We also
demonstrate the proposed DoB attack on a real vehicle, showing that the
attacker can cut off communications between the vehicle and the driver's key
fob by indefinitely shutting down an ECU, thus making the driver unable to
start and/or even enter the car.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 19:30:27 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Cho",
"Kyong-Tak",
""
],
[
"Kim",
"Yuseung",
""
],
[
"Shin",
"Kang G.",
""
]
] |
new_dataset
| 0.996023 |
1801.07759
|
Jukka Ruohonen
|
Jukka Ruohonen and Ville Lepp\"anen
|
Whose Hands Are in the Finnish Cookie Jar?
|
Proceedings of the European Intelligence and Security Informatics
Conference (EISIC 2017)
| null |
10.1109/EISIC.2017.25
| null |
cs.CR cs.CY cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Web cookies are ubiquitously used to track and profile the behavior of users.
Although there is a solid empirical foundation for understanding the use of
cookies in the global world wide web, thus far, limited attention has been
devoted for country-specific and company-level analysis of cookies. To patch
this limitation in the literature, this paper investigates persistent
third-party cookies used in the Finnish web. The exploratory results reveal
some similarities and interesting differences between the Finnish and the
global web---in particular, popular Finnish web sites are mostly owned by media
companies, which have established their distinct partnerships with online
advertisement companies. The results reported can be also reflected against
current and future privacy regulation in the European Union.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 20:33:00 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Ruohonen",
"Jukka",
""
],
[
"Leppänen",
"Ville",
""
]
] |
new_dataset
| 0.99921 |
1801.07779
|
Martin Thoma
|
Martin Thoma
|
The WiLI benchmark dataset for written language identification
|
{"pages": 12, "figures": 4, "language": "English", "author-ORCiD":
["https://orcid.org/0000-0002-6517-1690"]}
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper describes the WiLI-2018 benchmark dataset for monolingual written
natural language identification. WiLI-2018 is a publicly available, free of
charge dataset of short text extracts from Wikipedia. It contains 1000
paragraphs of 235 languages, totaling 23500 paragraphs. WiLI is a
classification dataset: Given an unknown paragraph written in one dominant
language, it has to be decided which language it is.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 21:40:53 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Thoma",
"Martin",
""
]
] |
new_dataset
| 0.999853 |
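The classification task the WiLI abstract describes — decide the dominant language of a paragraph — is commonly approached with character n-gram profiles. The following is a toy baseline of my own, not the paper's method, on made-up two-language training data.

```python
from collections import Counter

def ngrams(text, n=3):
    """Character trigram counts of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def train(samples):
    """samples: dict lang -> training text; returns per-language n-gram profiles."""
    return {lang: ngrams(text) for lang, text in samples.items()}

def identify(text, profiles):
    """Pick the language whose profile shares the most n-gram mass with `text`."""
    q = ngrams(text)
    def overlap(profile):
        return sum(min(count, profile[gram]) for gram, count in q.items())
    return max(profiles, key=lambda lang: overlap(profiles[lang]))

profiles = train({
    "en": "the quick brown fox jumps over the lazy dog and the cat",
    "de": "der schnelle braune fuchs springt ueber den faulen hund",
})
print(identify("the dog and the fox", profiles))  # en
```

Real systems trained on WiLI-scale data would normalize the counts and smooth the profiles; the overlap score above is the simplest workable choice.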
1801.07800
|
Andreas Bender
|
Andreas O. Bender
|
qrypt0 - encrypted short messages exchanged between offline computers
|
6 pages, 1 figure
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A system is described for exchanging encrypted short messages between
computers which remain permanently isolated from any network accessible to the
attacker. The main advantage is effective protection of these computers from
malware which could circumvent the encryption. For transmission, the ciphertext
is passed between isolated and connected computers in the form of a QR code,
which is displayed on and scanned from a screen. The security of qrypt0
therefore rests on the cryptography and the computer's physical isolation
rather than on the computer security of the encrypting device.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 22:42:59 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Bender",
"Andreas O.",
""
]
] |
new_dataset
| 0.999348 |
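The qrypt0 pipeline above — encrypt on the isolated machine, then carry the ciphertext across the air gap as a QR code — can be sketched as encrypt, serialize to a text payload, decode on the other side. The XOR one-time pad below is a stand-in for whatever cipher the system actually uses, and rendering the base64 payload as a QR image is left to any QR library; all of this is illustrative, not the qrypt0 implementation.

```python
import base64
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time-pad XOR as a stand-in for a real cipher (illustration only)."""
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def to_qr_payload(ciphertext: bytes) -> str:
    """Text payload that a QR library would render on the offline screen."""
    return base64.b64encode(ciphertext).decode("ascii")

def from_qr_payload(payload: str, key: bytes) -> bytes:
    """Reverse the pipeline on the receiving isolated computer."""
    return encrypt(base64.b64decode(payload), key)  # XOR is its own inverse

key = secrets.token_bytes(64)          # pre-shared between the two machines
msg = b"meet at noon"
payload = to_qr_payload(encrypt(msg, key))
print(from_qr_payload(payload, key))   # b'meet at noon'
```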
1801.07880
|
Ying Ye
|
Ying Ye, Zhuoqun Cheng, Soham Sinha, Richard West
|
vLibOS: Babysitting OS Evolution with a Virtualized Library OS
| null | null | null | null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many applications have service requirements that are not easily met by
existing operating systems. Real-time and security-critical tasks, for example,
often require custom OSes to meet their needs. However, development of special
purpose OSes is a time-consuming and difficult exercise. Drivers, libraries and
applications have to be written from scratch or ported from existing sources.
Many researchers have tackled this problem by developing ways to extend
existing systems with application-specific services. However, it is often
difficult to ensure an adequate degree of separation between legacy and new
services, especially when security and timing requirements are at stake.
Virtualization, for example, supports logical isolation of separate guest
services, but suffers from inadequate temporal isolation of time-critical code
required for real-time systems. This paper presents vLibOS, a master-slave
paradigm for new systems, whose services are built on legacy code that is
temporally and spatially isolated in separate VM domains. Existing OSes are
treated as sandboxed libraries, providing legacy services that are requested by
inter-VM calls, which execute with the time budget of the caller. We evaluate a
real-time implementation of vLibOS. Empirical results show that vLibOS achieves
as much as a 50\% reduction in performance slowdown for real-time threads, when
competing for a shared memory bus with a Linux VM.
|
[
{
"version": "v1",
"created": "Wed, 24 Jan 2018 07:11:22 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Ye",
"Ying",
""
],
[
"Cheng",
"Zhuoqun",
""
],
[
"Sinha",
"Soham",
""
],
[
"West",
"Richard",
""
]
] |
new_dataset
| 0.985316 |
1801.08107
|
Hugo Torres Vieira
|
Marco Carbone (1), Fabrizio Montesi (2), Hugo Torres Vieira (3) ((1)
IT University of Copenhagen, (2) University of Southern Denmark, (3) IMT
School for Advanced Studies Lucca)
|
Choreographies for Reactive Programming
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modular programming is a cornerstone of software development, as it allows
complex systems to be built from the assembly of simpler components, and supports
reusability and substitution principles. In a distributed setting, component
assembly is supported by communication that is often required to follow a
prescribed protocol of interaction. In this paper, we present a language for
the modular development of distributed systems, where the assembly of
components is supported by a choreography that specifies the communication
protocol. Our language allows to separate component behaviour, given in terms
of reactive data ports, and choreographies, specified as first class entities.
This allows us to consider reusability and substitution principles for both
components and choreographies. We show how our model can be compiled into a
more operational perspective in a provably-correct way, and we present a typing
discipline that addresses communication safety and progress of systems, where a
notion of substitutability naturally arises.
|
[
{
"version": "v1",
"created": "Wed, 24 Jan 2018 18:13:06 GMT"
}
] | 2018-01-25T00:00:00 |
[
[
"Carbone",
"Marco",
""
],
[
"Montesi",
"Fabrizio",
""
],
[
"Vieira",
"Hugo Torres",
""
]
] |
new_dataset
| 0.981381 |
1705.05322
|
Helio M. de Oliveira
|
H.M. de Oliveira and R.C. de Oliveira
|
Understanding MIDI: A Painless Tutorial on Midi Format
|
7 pages, 7 figures
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A short overview demystifying the MIDI audio format is presented. The goal is
to explain the file structure and how the instructions are used to produce a
music signal, both for monophonic and for polyphonic signals.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2017 16:34:20 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"de Oliveira",
"H. M.",
""
],
[
"de Oliveira",
"R. C.",
""
]
] |
new_dataset
| 0.969573 |
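The file structure mentioned in the MIDI tutorial abstract starts with a fixed 14-byte header chunk: the ASCII tag `MThd`, a 4-byte length (always 6), then format, track count, and time division as big-endian 16-bit fields. A minimal parser of just that chunk:

```python
import struct

def parse_midi_header(data: bytes):
    """Parse the 14-byte MThd chunk of a Standard MIDI File."""
    tag, length = struct.unpack(">4sI", data[:8])
    if tag != b"MThd" or length != 6:
        raise ValueError("not a Standard MIDI File header")
    fmt, ntrks, division = struct.unpack(">HHH", data[8:14])
    return fmt, ntrks, division

# Synthesize a header: format-1 file, 2 tracks, 480 ticks per quarter note.
header = b"MThd" + struct.pack(">IHHH", 6, 1, 2, 480)
print(parse_midi_header(header))  # (1, 2, 480)
```

Track chunks (`MTrk`) with their delta-time-prefixed events follow the header; parsing those variable-length quantities is where the real work of a MIDI reader lies.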
1708.00894
|
Arno Solin
|
Arno Solin, Santiago Cortes, Esa Rahtu, Juho Kannala
|
PIVO: Probabilistic Inertial-Visual Odometry for Occlusion-Robust
Navigation
|
10 pages, 4 figures. Paper to be published in WACV 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel method for visual-inertial odometry. The method
is based on an information fusion framework employing low-cost IMU sensors and
the monocular camera in a standard smartphone. We formulate a sequential
inference scheme, where the IMU drives the dynamical model and the camera
frames are used in coupling trailing sequences of augmented poses. The novelty
in the model is in taking into account all the cross-terms in the updates, thus
propagating the inter-connected uncertainties throughout the model. Stronger
coupling between the inertial and visual data sources leads to robustness
against occlusion and feature-poor environments. We demonstrate results on data
collected with an iPhone and provide comparisons against the Tango device and
using the EuRoC data set.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2017 18:58:38 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jan 2018 14:12:24 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"Solin",
"Arno",
""
],
[
"Cortes",
"Santiago",
""
],
[
"Rahtu",
"Esa",
""
],
[
"Kannala",
"Juho",
""
]
] |
new_dataset
| 0.982092 |
1709.06663
|
Helio M. de Oliveira
|
H.M. de Oliveira and R.C. de Oliveira
|
Linear Computer-Music through Sequences over Galois Fields
|
5 pages, 2 figures, 5 tables
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is shown how binary sequences can be associated with automatic composition
of monophonic pieces. We are concerned with the composition of e-music from
finite field structures. The information at the input may be either random or
information from a black-and-white, grayscale or color picture. New
e-compositions and music score are made available, including a new piece from
the famous Lenna picture: the score of the e-music <<Between Lenna's eyes in C
major.>> The corresponding stretch of music score are presented. Some
particular structures, including clock arithmetic (mod 12), GF(7), GF(8),
GF(13) and GF(17) are addressed. Further, multilevel block-codes are also used
in a new approach of e-music composition, engendering a particular style as an
e-composer. As an example, Pascal multilevel block codes recently introduced
are handled to generate a new style of electronic music over GF(13).
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2017 22:36:16 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"de Oliveira",
"H. M.",
""
],
[
"de Oliveira",
"R. C.",
""
]
] |
new_dataset
| 0.981521 |
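The clock-arithmetic (mod 12) structure mentioned in the abstract above maps naturally onto the twelve pitch classes. A toy version of the idea — the concrete byte-to-note mapping is my own illustration, not the paper's composition scheme — turns arbitrary input bytes (random data, or pixels of a picture such as Lenna) into a monophonic note sequence:

```python
# Twelve pitch classes of the chromatic scale, indexed 0..11.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def bytes_to_melody(data: bytes) -> list:
    """Map each input byte to a pitch class via clock arithmetic (mod 12)."""
    return [NOTES[b % 12] for b in data]

print(bytes_to_melody(b"Lenna"))  # ['E', 'F', 'D', 'D', 'C#']
```

Working over GF(7), GF(13), or GF(17) instead would restrict or extend the alphabet of symbols before mapping them to notes, which is where the field structure shapes the resulting style.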
1801.06613
|
Xuancheng Ren
|
Xuancheng Ren, Xu Sun, Ji Wen, Bingzhen Wei, Weidong Zhan, Zhiyuan
Zhang
|
Building an Ellipsis-aware Chinese Dependency Treebank for Web Text
|
The treebank is available at
https://github.com/lancopku/Chinese-Dependency-Treebank-with-Ellipsis
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Web 2.0 has brought with it numerous user-produced data revealing one's
thoughts, experiences, and knowledge, which are a great source for many tasks,
such as information extraction, and knowledge base construction. However, the
colloquial nature of the texts poses new challenges for current natural
language processing techniques, which are better adapted to the formal form of the
language. Ellipsis is a common linguistic phenomenon in which some words are left
out because they are understood from the context, especially in oral utterances,
hindering the improvement of dependency parsing, which is of great importance
for tasks that rely on the meaning of the sentence. In order to promote research
in this area, we are releasing a Chinese dependency treebank of 319 weibos,
containing 572 sentences with omissions restored and contexts reserved.
|
[
{
"version": "v1",
"created": "Sat, 20 Jan 2018 01:59:21 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Jan 2018 04:35:30 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"Ren",
"Xuancheng",
""
],
[
"Sun",
"Xu",
""
],
[
"Wen",
"Ji",
""
],
[
"Wei",
"Bingzhen",
""
],
[
"Zhan",
"Weidong",
""
],
[
"Zhang",
"Zhiyuan",
""
]
] |
new_dataset
| 0.998821 |
1801.07356
|
Dongyang Xu
|
Dongyang Xu, Pinyi Ren, James A. Ritcey, and Yichen Wang
|
Code-Frequency Block Group Coding for Anti-Spoofing Pilot Authentication
in Multi-Antenna OFDM Systems
|
accepted to IEEE Transactions on Information Forensics and Security,
Jan. 2018
| null | null | null |
cs.IT cs.CR eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A pilot spoofer can paralyze the channel estimation in multi-user orthogonal
frequency-division multiplexing (OFD- M) systems by using the same
publicly-known pilot tones as legitimate nodes. This causes the problem of
pilot authentication (PA). To solve this, we propose, for a two-user
multi-antenna OFDM system, a code-frequency block group (CFBG) coding based PA
mechanism. Here multi-user pilot information, after being randomized
independently to avoid being spoofed, are converted into activation patterns of
subcarrier-block groups on code-frequency domain. Those patterns, though
overlapped and interfered mutually in the wireless transmission environment,
are qualified to be separated and identified as the original pilots with high
accuracy, by exploiting CFBG coding theory and channel characteristic.
Particularly, we develop the CFBG code through two steps, i.e., 1) devising an
ordered signal detection technique to recognize the number of signals
coexisting on each subcarrier block, and encoding each subcarrier block with
the detected number; 2) constructing a zero-false-drop (ZFD) code and block
detection based (BD) code via k-dimensional Latin hypercubes and integrating
those two codes into the CFBG code. This code can bring a desirable pilot
separation error probability (SEP), inversely proportional to the number of
occupied subcarriers and antennas with a power of k. To apply the code to PA, a
scheme of pilot conveying, separation and identification is proposed. Based on
this novel PA, a joint channel estimation and identification mechanism is
proposed to achieve high-precision channel recovery and simultaneously enhance
PA without occupying extra resources. Simulation results verify the
effectiveness of our proposed mechanism.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 00:14:46 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"Xu",
"Dongyang",
""
],
[
"Ren",
"Pinyi",
""
],
[
"Ritcey",
"James A.",
""
],
[
"Wang",
"Yichen",
""
]
] |
new_dataset
| 0.992106 |
1801.07400
|
Hongyun Chu
|
Hongyun Chu, Le Zheng, and Xiaodong Wang
|
Super-Resolution mmWave Channel Estimation using Atomic Norm
Minimization
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose super-resolution MIMO channel estimators for millimeter-wave
(mmWave) systems that employ hybrid analog and digital beamforming and
generalized spatial modulation, respectively. Exploiting the inherent sparsity
of mmWave channels, the channel estimation problem is formulated as an atomic
norm minimization that enhances sparsity in the continuous angles of departure
and arrival. Both pilot-assisted and data-aided channel estimators are
developed, with the former one formulated as a convex problem and the latter as
a non-convex problem. To solve these formulated channel estimation problems, we
develop a computationally efficient conjugate gradient descent method based on
non-convex factorization which restricts the search space to low-rank matrices.
Simulation results are presented to illustrate the superior channel estimation
performance of the proposed algorithms for both types of mmWave systems
compared to the existing compressed-sensing-based estimators with finely
quantized angle grids.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 05:49:52 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"Chu",
"Hongyun",
""
],
[
"Zheng",
"Le",
""
],
[
"Wang",
"Xiaodong",
""
]
] |
new_dataset
| 0.99582 |
1801.07423
|
Parisa Nouri
|
Parisa Nouri, Hirley Alves, Richard Demo Souza, and Matti Latva-aho
|
Ultra-Reliable Short Message Cooperative Relaying Protocols under
Nakagami-m Fading
| null | null |
10.1109/ISWCS.2017.8108126
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the next few years, the development of wireless communication systems
propels the world into a fully connected society, where Machine-type
Communications (MTC) plays a substantial role as a key enabler in future
cellular systems. MTC is categorized into mMTC and uMTC, where mMTC provides
the connectivity to massive number of devices while uMTC is related to low
latency and ultra-high reliability of the wireless communications. This paper
studies uMTC with incremental relaying technique, where the source and relay
collaborate to transfer the message to a destination. In this paper, we compare
the performance of two distinct cooperative relaying protocols with the direct
transmission under the finite blocklength (FB) regime. We define the overall
outage probability in each relaying scenario, supposing Nakagami-m fading. We
show that cooperative communication outperforms direct transmission under the
FB regime. In addition, we examine the impact of fading severity and power
allocation factor on the outage probability and the minimum delay required to
meet the ultra-reliable communication requirements. Moreover, we provide the
outage probability in closed form.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 07:59:07 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"Nouri",
"Parisa",
""
],
[
"Alves",
"Hirley",
""
],
[
"Souza",
"Richard Demo",
""
],
[
"Latva-aho",
"Matti",
""
]
] |
new_dataset
| 0.981292 |
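The Nakagami-m outage comparison in the abstract above can be reproduced in miniature by Monte Carlo: the power gain of a Nakagami-m link is Gamma-distributed with shape m and unit mean, and an outage occurs when the instantaneous capacity falls below the target rate. The cooperative case below is an idealized selection of the better of two independent links — a toy model under stated assumptions, not the paper's finite-blocklength analysis.

```python
import math
import random

def outage(snr_db, rate, m=2, n=100_000, cooperative=False, seed=1):
    """Monte Carlo outage probability under Nakagami-m fading.

    Power gain g ~ Gamma(shape=m, scale=1/m), i.e. unit mean. With
    cooperation, take the best of two independent links (idealized
    selection combining).
    """
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    fails = 0
    for _ in range(n):
        g = rng.gammavariate(m, 1 / m)
        if cooperative:
            g = max(g, rng.gammavariate(m, 1 / m))
        if math.log2(1 + snr * g) < rate:  # capacity below target rate
            fails += 1
    return fails / n

p_direct = outage(snr_db=5, rate=1.0)
p_coop = outage(snr_db=5, rate=1.0, cooperative=True)
print(p_direct, p_coop)  # cooperation lowers the outage probability
```

Raising the fading-severity parameter m (milder fading) or the power split between source and relay shifts both curves, which is the trade-off the paper studies.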
1801.07447
|
Rhys Bowden
|
R. Bowden, H.P. Keeler, A.E. Krzesinski, P.G. Taylor
|
Block arrivals in the Bitcoin blockchain
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bitcoin is an electronic payment system in which payment transactions are
verified and stored in a data structure called the blockchain. Bitcoin miners
work individually to solve a computationally intensive problem, and with each
solution a Bitcoin block is generated, resulting in a new arrival to the
blockchain. The difficulty of the computational problem is updated every 2,016
blocks in order to control the rate at which blocks are generated. In the
original Bitcoin paper, it was suggested that the blockchain arrivals occur
according to a homogeneous Poisson process. Based on blockchain block arrival
data and stochastic analysis of the block arrival process, we demonstrate that
this is not the case. We present a refined mathematical model for block
arrivals, focusing on both the block arrivals during a period of constant
difficulty and how the difficulty level evolves over time.
|
[
{
"version": "v1",
"created": "Tue, 23 Jan 2018 09:20:48 GMT"
}
] | 2018-01-24T00:00:00 |
[
[
"Bowden",
"R.",
""
],
[
"Keeler",
"H. P.",
""
],
[
"Krzesinski",
"A. E.",
""
],
[
"Taylor",
"P. G.",
""
]
] |
new_dataset
| 0.996479 |
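The block-arrival model discussed in the abstract above can be sketched as a simulation: exponential inter-arrival times (the homogeneous-Poisson assumption the paper refutes), a difficulty retarget every 2016 blocks toward the 600-second target, and a steadily growing hash rate that makes each epoch run fast. The growth factor and retarget rule here are simplified assumptions, not the paper's fitted model.

```python
import random

TARGET = 600.0   # seconds: one block every ten minutes
EPOCH = 2016     # blocks between difficulty updates

def simulate(n_blocks, hashrate_growth=1.001, seed=7):
    """Simulate block inter-arrival times with periodic difficulty retargets."""
    rng = random.Random(seed)
    rate = 1 / TARGET            # blocks per second at current difficulty
    hashrate = 1.0
    intervals, epoch_times = [], []
    for i in range(n_blocks):
        dt = rng.expovariate(rate * hashrate)
        intervals.append(dt)
        epoch_times.append(dt)
        hashrate *= hashrate_growth      # network hash rate keeps rising
        if (i + 1) % EPOCH == 0:
            # Retarget: rescale the base rate so the epoch mean returns
            # to TARGET (a fast epoch means difficulty must rise).
            mean = sum(epoch_times) / EPOCH
            rate *= mean / TARGET
            epoch_times.clear()
    return intervals

intervals = simulate(3 * EPOCH)
print(len(intervals))  # 6048
```

Because the hash rate grows between retargets, each epoch finishes in slightly less than 2016 × 600 seconds — exactly the kind of departure from a homogeneous Poisson process the paper measures on real blockchain data.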