Column schema (from the dataset viewer; "⌀" marks nullable columns): id string (9–10 chars); submitter string (2–52 chars, nullable); authors string (4–6.51k chars); title string (4–246 chars); comments string (1–523 chars, nullable); journal-ref string (4–345 chars, nullable); doi string (11–120 chars, nullable); report-no string (2–243 chars, nullable); categories string (5–98 chars); license string (9 classes); abstract string (33–3.33k chars); versions list; update_date timestamp[s]; authors_parsed list; prediction string (1 class); probability float64 (0.95–1).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1804.00987
|
Kyle Richardson
|
Kyle Richardson
|
A Language for Function Signature Representations
|
short note
| null | null | null |
cs.CL cs.AI cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work by (Richardson and Kuhn, 2017a,b; Richardson et al., 2018) looks
at semantic parser induction and question answering in the domain of source
code libraries and APIs. In this brief note, we formalize the representations
being learned in these studies and introduce a simple domain specific language
and a systematic translation from this language to first-order logic. By
recasting the target representations in terms of classical logic, we aim to
broaden the applicability of existing code datasets for investigating more
complex natural language understanding and reasoning problems in the software
domain.
|
[
{
"version": "v1",
"created": "Sat, 31 Mar 2018 13:01:29 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Apr 2018 13:23:03 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Richardson",
"Kyle",
""
]
] |
new_dataset
| 0.993739 |
1804.06438
|
Karthik Muthuraman
|
Karthik Muthuraman, Pranav Joshi, Suraj Kiran Raman
|
Vision Based Dynamic Offside Line Marker for Soccer Games
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Offside detection in soccer has emerged as one of the most important
refereeing decisions, with an average of 50 offside calls per game. False detections
and rash calls adversely affect game conditions and in many cases drastically
change the outcome of the game. The human eye has finite precision and can only
discern a limited amount of detail in a given instance. Current offside
decisions are made manually by sideline referees and tend to remain
controversial in many games. This calls for automated offside detection
techniques in order to assist accurate refereeing. In this work, we have
explicitly used computer vision and image processing techniques like Hough
transform, color similarity (quantization), graph connected components, and
vanishing point ideas to identify the probable offside regions.
Keywords: Hough transform, connected components, KLT tracking, color
similarity.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 19:00:01 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Muthuraman",
"Karthik",
""
],
[
"Joshi",
"Pranav",
""
],
[
"Raman",
"Suraj Kiran",
""
]
] |
new_dataset
| 0.996551 |
1804.06489
|
Mehmet Aktas
|
Mehmet Fatih Aktas, Elie Najm, Emina Soljanin
|
Simplex Queues for Hot-Data Download
| null | null | null | null |
cs.IT cs.PF math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In cloud storage systems, hot data is usually replicated over multiple nodes
in order to accommodate simultaneous access by multiple users as well as
increase the fault tolerance of the system. Recent cloud storage research has
proposed using availability codes, which is a special class of erasure codes,
as a more storage efficient way to store hot data. These codes enable data
recovery from multiple, small disjoint groups of servers. The number of the
recovery groups is referred to as the availability and the size of each group
as the locality of the code. Until now, very little has been known about how
code locality and availability affect data access time. Data download from
these systems involves multiple fork-join queues operating in-parallel, making
the analysis of access time a very challenging problem. In this paper, we
present an approximate analysis of data access time in storage systems that
employ simplex codes, which are an important and in certain sense optimal class
of availability codes. We consider and compare three strategies in assigning
download requests to servers: the first aggressively exploits the storage
availability for faster download, the second implements only load balancing,
and the last employs storage availability only for hot-data download without
incurring any negative impact on cold-data download.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 22:26:48 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Aktas",
"Mehmet Fatih",
""
],
[
"Najm",
"Elie",
""
],
[
"Soljanin",
"Emina",
""
]
] |
new_dataset
| 0.992212 |
1804.06511
|
Thomas Keller
|
T. Anderson Keller, Sharath Nittur Sridhar, Xin Wang
|
Fast Weight Long Short-Term Memory
| null | null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Associative memory using fast weights is a short-term memory mechanism that
substantially improves the memory capacity and time scale of recurrent neural
networks (RNNs). As recent studies introduced fast weights only to regular
RNNs, it is unknown whether fast weight memory is beneficial to gated RNNs. In
this work, we report a significant synergy between long short-term memory
(LSTM) networks and fast weight associative memories. We show that this
combination, in learning associative retrieval tasks, results in much faster
training and lower test error, a performance boost most prominent at high
memory task difficulties.
|
[
{
"version": "v1",
"created": "Wed, 18 Apr 2018 00:20:28 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Keller",
"T. Anderson",
""
],
[
"Sridhar",
"Sharath Nittur",
""
],
[
"Wang",
"Xin",
""
]
] |
new_dataset
| 0.975949 |
1804.06657
|
Christos Baziotis
|
Christos Baziotis, Nikos Athanasiou, Georgios Paraskevopoulos,
Nikolaos Ellinas, Athanasia Kolovou, Alexandros Potamianos
|
NTUA-SLP at SemEval-2018 Task 2: Predicting Emojis using RNNs with
Context-aware Attention
|
SemEval-2018, Task 2 "Multilingual Emoji Prediction"
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a deep-learning model that competed at SemEval-2018
Task 2 "Multilingual Emoji Prediction". We participated in subtask A, in which
we are called to predict the most likely associated emoji in English tweets.
The proposed architecture relies on a Long Short-Term Memory network, augmented
with an attention mechanism that conditions the weight of each word on a
"context vector", which is taken as the aggregation of a tweet's meaning.
Moreover, we initialize the embedding layer of our model, with word2vec word
embeddings, pretrained on a dataset of 550 million English tweets. Finally, our
model does not rely on hand-crafted features or lexicons and is trained
end-to-end with back-propagation. We ranked 2nd out of 48 teams.
|
[
{
"version": "v1",
"created": "Wed, 18 Apr 2018 11:30:57 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Baziotis",
"Christos",
""
],
[
"Athanasiou",
"Nikos",
""
],
[
"Paraskevopoulos",
"Georgios",
""
],
[
"Ellinas",
"Nikolaos",
""
],
[
"Kolovou",
"Athanasia",
""
],
[
"Potamianos",
"Alexandros",
""
]
] |
new_dataset
| 0.985535 |
1804.06659
|
Christos Baziotis
|
Christos Baziotis, Nikos Athanasiou, Pinelopi Papalampidi, Athanasia
Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Alexandros Potamianos
|
NTUA-SLP at SemEval-2018 Task 3: Tracking Ironic Tweets using Ensembles
of Word and Character Level Attentive RNNs
|
SemEval-2018, Task 3 "Irony detection in English tweets"
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present two deep-learning systems that competed at
SemEval-2018 Task 3 "Irony detection in English tweets". We design and ensemble
two independent models, based on recurrent neural networks (Bi-LSTM), which
operate at the word and character level, in order to capture both the semantic
and syntactic information in tweets. Our models are augmented with a
self-attention mechanism, in order to identify the most informative words. The
embedding layer of our word-level model is initialized with word2vec word
embeddings, pretrained on a collection of 550 million English tweets. We did
not utilize any handcrafted features, lexicons or external datasets as prior
information and our models are trained end-to-end using back propagation on
constrained data. Furthermore, we provide visualizations of tweets with
annotations for the salient tokens of the attention layer that can help to
interpret the inner workings of the proposed models. We ranked 2nd out of 42
teams in Subtask A and 2nd out of 31 teams in Subtask B. However,
post-task-completion enhancements of our models achieve state-of-the-art
results ranking 1st for both subtasks.
|
[
{
"version": "v1",
"created": "Wed, 18 Apr 2018 11:35:56 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Baziotis",
"Christos",
""
],
[
"Athanasiou",
"Nikos",
""
],
[
"Papalampidi",
"Pinelopi",
""
],
[
"Kolovou",
"Athanasia",
""
],
[
"Paraskevopoulos",
"Georgios",
""
],
[
"Ellinas",
"Nikolaos",
""
],
[
"Potamianos",
"Alexandros",
""
]
] |
new_dataset
| 0.992 |
1804.06701
|
Rens Wouter van der Heijden
|
Rens W. van der Heijden and Thomas Lukaseder and Frank Kargl
|
VeReMi: A Dataset for Comparable Evaluation of Misbehavior Detection in
VANETs
|
20 pages, 5 figures, Accepted for publication at SecureComm 2018
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicular networks are networks of communicating vehicles, a major enabling
technology for future cooperative and autonomous driving technologies. The most
important messages in these networks are broadcast-authenticated periodic
one-hop beacons, used for safety and traffic efficiency applications such as
collision avoidance and traffic jam detection. However, broadcast authenticity
is not sufficient to guarantee message correctness. The goal of misbehavior
detection is to analyze application data and knowledge about physical processes
in these cyber-physical systems to detect incorrect messages, enabling local
revocation of vehicles transmitting malicious messages. Comparative studies
between detection mechanisms are rare due to the lack of a reference dataset.
We take the first steps to address this challenge by introducing the Vehicular
Reference Misbehavior Dataset (VeReMi) and a discussion of valid metrics for
such an assessment. VeReMi is the first public extensible dataset, allowing
anyone to reproduce the generation process, as well as contribute attacks and
use the data to compare new detection mechanisms against existing ones. The
result of our analysis shows that the acceptance range threshold and the simple
speed check are complementary mechanisms that detect different attacks. This
supports the intuitive notion that fusion can lead to better results with data,
and we suggest that future work should focus on effective fusion with VeReMi as
an evaluation baseline.
|
[
{
"version": "v1",
"created": "Wed, 18 Apr 2018 13:10:36 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"van der Heijden",
"Rens W.",
""
],
[
"Lukaseder",
"Thomas",
""
],
[
"Kargl",
"Frank",
""
]
] |
new_dataset
| 0.999806 |
1804.06716
|
Atul Kr. Ojha Mr.
|
Rajneesh Pandey, Atul Kr. Ojha, Girish Nath Jha
|
Demo of Sanskrit-Hindi SMT System
|
Proceedings of the 4th Workshop on Indian Language Data: Resources
and Evaluation (under the 11th LREC2018, May 07-12, 2018)
|
http://lrec-conf.org/workshops/lrec2018/W11/summaries/20_W11.html
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The demo proposal presents a Phrase-based Sanskrit-Hindi (SaHiT) Statistical
Machine Translation system. The system has been developed on Moses. 43k
sentences of a Sanskrit-Hindi parallel corpus and 56k sentences of a
monolingual corpus in the target language (Hindi) have been used. The system
achieves a BLEU score of 57.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 19:44:56 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Pandey",
"Rajneesh",
""
],
[
"Ojha",
"Atul Kr.",
""
],
[
"Jha",
"Girish Nath",
""
]
] |
new_dataset
| 0.99967 |
1804.06750
|
Thomas Lukaseder Mr
|
Thomas Lukaseder and Lisa Maile and Benjamin Erb and Frank Kargl
|
SDN-Assisted Network-Based Mitigation of Slow DDoS Attacks
|
20 pages, 3 figures, accepted to SecureComm'18
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Slow-running attacks against network applications are often not easy to
detect, as the attackers behave according to the specification. The servers of
many network applications are not prepared for such attacks, either due to
missing countermeasures or because their default configurations ignore such
attacks. The pressure to secure network services against such attacks is
shifting more and more from the service operators to the network operators of
the servers under attack. Recent technologies such as software-defined
networking offer the flexibility and extensibility to analyze and influence
network flows without the assistance of the target operator. Based on our
previous work on a network-based mitigation, we have extended a framework to
detect and mitigate slow-running DDoS attacks within the network
infrastructure, but without requiring access to servers under attack. We
developed and evaluated several identification schemes to identify attackers in
the network solely based on network traffic information. We showed that by
measuring the packet rate and the uniformity of the packet distances, a
reliable identifier can be built, given a training period of the deployment
network.
|
[
{
"version": "v1",
"created": "Wed, 18 Apr 2018 14:14:03 GMT"
}
] | 2018-04-19T00:00:00 |
[
[
"Lukaseder",
"Thomas",
""
],
[
"Maile",
"Lisa",
""
],
[
"Erb",
"Benjamin",
""
],
[
"Kargl",
"Frank",
""
]
] |
new_dataset
| 0.990632 |
1509.02479
|
Pierre Letouzey
|
Pierre Letouzey
|
Hofstadter's problem for curious readers
| null | null | null | null |
cs.LO math.HO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This document summarizes the proofs made during a Coq development in Summer
2015. This development investigates the function G introduced by Hofstadter
in his famous "Gödel, Escher, Bach" book, as well as a related infinite tree.
The left/right flipped variant of this G tree has also been studied here,
following Hofstadter's "problem for the curious reader". The initial G
function is referred to as sequence A005206 in the OEIS, while the flipped
version is sequence A123070.
|
[
{
"version": "v1",
"created": "Tue, 8 Sep 2015 18:11:31 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Oct 2015 10:21:56 GMT"
},
{
"version": "v3",
"created": "Tue, 17 Apr 2018 12:17:01 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Letouzey",
"Pierre",
""
]
] |
new_dataset
| 0.975042 |
1612.05005
|
Ingmar Steiner
|
Alexander Hewer, Stefanie Wuhrer, Ingmar Steiner, Korin Richmond
|
A Multilinear Tongue Model Derived from Speech Related MRI Data of the
Human Vocal Tract
| null |
Computer Speech & Language 51 (2018) 68-92
|
10.1016/j.csl.2018.02.001
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a multilinear statistical model of the human tongue that captures
anatomical and tongue pose related shape variations separately. The model is
derived from 3D magnetic resonance imaging data of 11 speakers sustaining
speech related vocal tract configurations. The extraction is performed by using
a minimally supervised method that uses as its basis an image segmentation approach
and a template fitting technique. Furthermore, it uses image denoising to deal
with possibly corrupt data, palate surface information reconstruction to handle
palatal tongue contacts, and a bootstrap strategy to refine the obtained
shapes. Our evaluation concludes that limiting the degrees of freedom for the
anatomical and speech related variations to 5 and 4, respectively, produces a
model that can reliably register unknown data while avoiding overfitting
effects. Furthermore, we show that it can be used to generate a plausible
tongue animation by tracking sparse motion capture data.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2016 10:31:40 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 08:51:42 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Dec 2017 16:00:02 GMT"
},
{
"version": "v4",
"created": "Fri, 13 Apr 2018 09:27:33 GMT"
},
{
"version": "v5",
"created": "Tue, 17 Apr 2018 08:16:54 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Hewer",
"Alexander",
""
],
[
"Wuhrer",
"Stefanie",
""
],
[
"Steiner",
"Ingmar",
""
],
[
"Richmond",
"Korin",
""
]
] |
new_dataset
| 0.9973 |
1703.02361
|
Aleksandr Maksimenko
|
Alexander Maksimenko
|
On the family of 0/1-polytopes with NP-complete non-adjacency relation
|
8 pages, 1 figure
| null |
10.4213/dm1427
| null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1995 T. Matsui considered a special family of 0/1-polytopes for which the
problem of recognizing the non-adjacency of two arbitrary vertices is
NP-complete. In 2012 the author of this paper established that all the
polytopes of this family are present as faces in the polytopes associated with
the following NP-complete problems: the traveling salesman problem, the
3-satisfiability problem, the knapsack problem, the set covering problem, the
partial ordering problem, the cube subgraph problem, and some others. In
particular, it follows that for these families the non-adjacency relation is
also NP-complete. On the other hand, it is known that the vertex adjacency
criterion is polynomial for polytopes of the following NP-complete problems:
the maximum independent set problem, the set packing and the set partitioning
problem, the three-index assignment problem. It is shown that none of the
polytopes of the above-mentioned special family (with the exception of a
one-dimensional segment) can be the face of polytopes associated with the
problems of the maximum independent set, of a set packing and partitioning, and
of 3-assignments.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2017 12:45:26 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2017 06:57:29 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Maksimenko",
"Alexander",
""
]
] |
new_dataset
| 0.997492 |
1705.08738
|
Il-Young Son
|
Birsen Yazici and Il-Young Son and H. Cagri Yanik
|
Doppler Synthetic Aperture Radar Interferometry: A Novel SAR
Interferometry for Height Mapping using Ultra-Narrowband Waveforms
|
Submitted to Inverse Problems
| null |
10.1088/1361-6420/aab24c
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a new and novel radar interferometry based on Doppler
synthetic aperture radar (Doppler-SAR) paradigm. Conventional SAR
interferometry relies on wideband transmitted waveforms to obtain high range
resolution. Topography of a surface is directly related to the range difference
between two antennas configured at different positions. Doppler-SAR is a novel
imaging modality that uses ultra-narrowband continuous waves (UNCW). It takes
advantage of high resolution Doppler information provided by UNCWs to form high
resolution SAR images.
We introduce the theory of Doppler-SAR interferometry. We derive the
interferometric phase model and develop the equations of height mapping. Unlike
conventional SAR interferometry, we show that the topography of a scene is
related to the difference in Doppler between two antennas configured at
different velocities. While the conventional SAR interferometry uses range,
Doppler and Doppler due to interferometric phase in height mapping, Doppler-SAR
interferometry uses Doppler, Doppler-rate and Doppler-rate due to
interferometric phase in height mapping. We demonstrate our theory in numerical
simulations.
Doppler-SAR interferometry offers the advantages of long-range, robust,
environmentally friendly operations; low-power, low-cost, lightweight systems
suitable for low-payload platforms, such as micro-satellites; and passive
applications using sources of opportunity transmitting UNCW.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2017 13:09:55 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Yazici",
"Birsen",
""
],
[
"Son",
"Il-Young",
""
],
[
"Yanik",
"H. Cagri",
""
]
] |
new_dataset
| 0.999179 |
1711.07277
|
Mudasar Bacha
|
Mudasar Bacha and Bruno Clerckx
|
Backscatter Communications for the Internet of Things: A Stochastic
Geometry Approach
|
This work has been submitted for a possible journal publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the recent advances in the Internet of Things (IoT) and in
Wireless Power Transfer (WPT), we study a network architecture that consists of
power beacons (PBs) and passive backscatter nodes (BNs). The PBs transmit a
sinusoidal continuous wave (CW) and the BNs reflect back a portion of this
signal while harvesting the remaining part. A BN harvests energy from multiple
nearby PBs and modulates its information bits on the composite CW through
backscatter modulation. The analysis poses real challenges due to the double
fading channel, and its dependence on the Poisson point processes (PPPs) of both the BNs and PBs.
However, with the help of stochastic geometry, we derive the coverage
probability and the capacity of the network in tractable and easily computable
expressions, which depend on different system parameters. We observe that the
coverage probability decreases with an increase in the density of the BNs,
while the capacity of the network improves. We further compare the performance
of this network with a regular powered network in which the BNs have a reliable
power source and show that for a very high density of the PBs, the coverage
probability of the former network approaches that of the regular powered
network.
|
[
{
"version": "v1",
"created": "Mon, 20 Nov 2017 12:12:45 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2018 10:56:09 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Bacha",
"Mudasar",
""
],
[
"Clerckx",
"Bruno",
""
]
] |
new_dataset
| 0.99621 |
1801.01466
|
Rahul Mitra
|
Rahul Mitra and Nehal Doiphode and Utkarsh Gautam and Sanath Narayan
and Shuaib Ahmed and Sharat Chandran and Arjun Jain
|
A Large Dataset for Improving Patch Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new dataset for learning local image descriptors which can be
used for significantly improved patch matching. Our proposed dataset consists
of an order of magnitude more scenes, images, and positive and
negative correspondences compared to the currently available Multi-View Stereo
(MVS) dataset from Brown et al. The new dataset also has better coverage of the
overall viewpoint, scale, and lighting changes in comparison to the MVS
dataset. Our dataset also provides supplementary information like RGB patches
with scale and rotation values, and intrinsic and extrinsic camera parameters,
which, as shown later, can be used to customize training data as per application.
We train an existing state-of-the-art model on our dataset and evaluate on
publicly available benchmarks such as HPatches dataset and Strecha et
al.\cite{strecha} to quantify the image descriptor performance. Experimental
evaluations show that the descriptors trained using our proposed dataset
outperform the current state-of-the-art descriptors trained on MVS by 8%, 4%
and 10% on matching, verification and retrieval tasks respectively on the
HPatches dataset. Similarly on the Strecha dataset, we see an improvement of
3-5% for the matching task in non-planar scenes.
|
[
{
"version": "v1",
"created": "Thu, 4 Jan 2018 17:37:45 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2018 05:53:21 GMT"
},
{
"version": "v3",
"created": "Tue, 17 Apr 2018 14:31:04 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Mitra",
"Rahul",
""
],
[
"Doiphode",
"Nehal",
""
],
[
"Gautam",
"Utkarsh",
""
],
[
"Narayan",
"Sanath",
""
],
[
"Ahmed",
"Shuaib",
""
],
[
"Chandran",
"Sharat",
""
],
[
"Jain",
"Arjun",
""
]
] |
new_dataset
| 0.999833 |
1801.10228
|
Christian Cachin
|
Elli Androulaki, Artem Barger, Vita Bortnikov, Christian Cachin,
Konstantinos Christidis, Angelo De Caro, David Enyeart, Christopher Ferris,
Gennady Laventman, Yacov Manevich, Srinivasan Muralidharan, Chet Murthy, Binh
Nguyen, Manish Sethi, Gari Singh, Keith Smith, Alessandro Sorniotti,
Chrysoula Stathakopoulou, Marko Vukolić, Sharon Weed Cocco, Jason Yellick
|
Hyperledger Fabric: A Distributed Operating System for Permissioned
Blockchains
|
Appears in proceedings of EuroSys 2018 conference
| null |
10.1145/3190508.3190538
| null |
cs.DC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fabric is a modular and extensible open-source system for deploying and
operating permissioned blockchains and one of the Hyperledger projects hosted
by the Linux Foundation (www.hyperledger.org).
Fabric is the first truly extensible blockchain system for running
distributed applications. It supports modular consensus protocols, which allows
the system to be tailored to particular use cases and trust models. Fabric is
also the first blockchain system that runs distributed applications written in
standard, general-purpose programming languages, without systemic dependency on
a native cryptocurrency. This stands in sharp contrast to existing blockchain
platforms that require "smart-contracts" to be written in domain-specific
languages or rely on a cryptocurrency. Fabric realizes the permissioned model
using a portable notion of membership, which may be integrated with
industry-standard identity management. To support such flexibility, Fabric
introduces an entirely novel blockchain design and revamps the way blockchains
cope with non-determinism, resource exhaustion, and performance attacks.
This paper describes Fabric, its architecture, the rationale behind various
design decisions, its most prominent implementation aspects, as well as its
distributed application programming model. We further evaluate Fabric by
implementing and benchmarking a Bitcoin-inspired digital currency. We show that
Fabric achieves end-to-end throughput of more than 3500 transactions per second
in certain popular deployment configurations, with sub-second latency, scaling
well to over 100 peers.
|
[
{
"version": "v1",
"created": "Tue, 30 Jan 2018 21:22:06 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2018 09:34:27 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Androulaki",
"Elli",
""
],
[
"Barger",
"Artem",
""
],
[
"Bortnikov",
"Vita",
""
],
[
"Cachin",
"Christian",
""
],
[
"Christidis",
"Konstantinos",
""
],
[
"De Caro",
"Angelo",
""
],
[
"Enyeart",
"David",
""
],
[
"Ferris",
"Christopher",
""
],
[
"Laventman",
"Gennady",
""
],
[
"Manevich",
"Yacov",
""
],
[
"Muralidharan",
"Srinivasan",
""
],
[
"Murthy",
"Chet",
""
],
[
"Nguyen",
"Binh",
""
],
[
"Sethi",
"Manish",
""
],
[
"Singh",
"Gari",
""
],
[
"Smith",
"Keith",
""
],
[
"Sorniotti",
"Alessandro",
""
],
[
"Stathakopoulou",
"Chrysoula",
""
],
[
"Vukolić",
"Marko",
""
],
[
"Cocco",
"Sharon Weed",
""
],
[
"Yellick",
"Jason",
""
]
] |
new_dataset
| 0.999508 |
1802.06527
|
Pingping Zhang Mr
|
Pingping Zhang, Wei Liu, Huchuan Lu, Chunhua Shen
|
Salient Object Detection by Lossless Feature Reflection
|
Accepted by IJCAI-2018, 7 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient object detection, which aims to identify and locate the most salient
pixels or regions in images, has been attracting more and more interest due to
its various real-world applications. However, this vision task is quite
challenging, especially under complex image scenes. Inspired by the intrinsic
reflection of natural images, in this paper we propose a novel feature learning
framework for large-scale salient object detection. Specifically, we design a
symmetrical fully convolutional network (SFCN) to learn complementary saliency
features under the guidance of lossless feature reflection. The location
information, together with contextual and semantic information, of salient
objects are jointly utilized to supervise the proposed network for more
accurate saliency predictions. In addition, to overcome the blurry boundary
problem, we propose a new structural loss function to learn clear object
boundaries and spatially consistent saliency. The coarse prediction results are
effectively refined by this structural information for performance
improvements. Extensive experiments on seven saliency detection datasets
demonstrate that our approach achieves consistently superior performance and
outperforms the very recent state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 19 Feb 2018 05:59:08 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2018 03:19:49 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Zhang",
"Pingping",
""
],
[
"Liu",
"Wei",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Shen",
"Chunhua",
""
]
] |
new_dataset
| 0.998006 |
1803.06315
|
Nathalie Cauchi
|
Nathalie Cauchi and Alessandro Abate
|
Benchmarks for cyber-physical systems: A modular model library for
building automation systems (Extended version)
|
Extension of ADHS conference paper
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building Automation Systems (BAS) are exemplars of Cyber-Physical Systems
(CPS), incorporating digital control architectures over underlying continuous
physical processes. We provide a modular model library for BAS drawn from
expertise developed on a real BAS setup. The library allows building models
comprising either physical quantities or digital control modules. The
structure, operation, and dynamics of the model can be complex,
incorporating (i) stochasticity, (ii) non-linearities, (iii) numerous
continuous variables or discrete states, (iv) various input and output signals,
and (v) a large number of possible discrete configurations. The modular
composition of BAS components can generate useful CPS benchmarks. We display
this use by means of three realistic case studies, where corresponding models
are built and engaged with different analysis goals. The benchmarks, the model
library and data collected from the BAS setup at the University of Oxford, are
kept on-line at https://github.com/natchi92/BASBenchmarks.
|
[
{
"version": "v1",
"created": "Fri, 16 Mar 2018 17:09:32 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2018 10:22:30 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Cauchi",
"Nathalie",
""
],
[
"Abate",
"Alessandro",
""
]
] |
new_dataset
| 0.999242 |
1804.04637
|
Hyrum Anderson
|
Hyrum S. Anderson and Phil Roth
|
EMBER: An Open Dataset for Training Static PE Malware Machine Learning
Models
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes EMBER: a labeled benchmark dataset for training machine
learning models to statically detect malicious Windows portable executable
files. The dataset includes features extracted from 1.1M binary files: 900K
training samples (300K malicious, 300K benign, 300K unlabeled) and 200K test
samples (100K malicious, 100K benign). To accompany the dataset, we also
release open source code for extracting features from additional binaries so
that additional sample features can be appended to the dataset. This dataset
fills a void in the information security machine learning community: a
benign/malicious dataset that is large, open and general enough to cover
several interesting use cases. We enumerate several use cases that we
considered when structuring the dataset. Additionally, we demonstrate one use
case wherein we compare a baseline gradient boosted decision tree model trained
using LightGBM with default settings to MalConv, a recently published
end-to-end (featureless) deep learning model for malware detection. Results
show that even without hyper-parameter optimization, the baseline EMBER model
outperforms MalConv. The authors hope that the dataset, code and baseline model
provided by EMBER will help invigorate machine learning research for malware
detection, in much the same way that benchmark datasets have advanced computer
vision research.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 17:23:56 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Apr 2018 20:43:33 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Anderson",
"Hyrum S.",
""
],
[
"Roth",
"Phil",
""
]
] |
new_dataset
| 0.999863 |
1804.05831
|
Alexander Panchenko
|
Nikita Muravyev, Alexander Panchenko, Sergei Obiedkov
|
Neologisms on Facebook
|
in Russian
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a study of neologisms and loan words frequently
occurring in Facebook user posts. We have analyzed a dataset of several million
publicly available posts written during 2006-2013 by Russian-speaking
Facebook users. From these, we have built a vocabulary of the most frequent
lemmatized words missing from the OpenCorpora dictionary, the assumption being
that many such words have entered common use only recently. This assumption is
certainly not true for all the words extracted in this way; for that reason, we
manually filtered the automatically obtained list in order to exclude
non-Russian or incorrectly lemmatized words, as well as words recorded by other
dictionaries or those occurring in texts from the Russian National Corpus. The
result is a list of 168 words that can potentially be considered neologisms. We
present an attempt at an etymological classification of these neologisms
(unsurprisingly, most of them have recently been borrowed from English, but
there are also quite a few new words composed of previously borrowed stems) and
identify various derivational patterns. We also classify words into several
large thematic areas, "internet", "marketing", and "multimedia" being among
those with the largest number of words. We believe that, together with the word
base collected in the process, they can serve as a starting point in further
studies of neologisms and lexical processes that lead to their acceptance into
the mainstream language.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 16:57:59 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Muravyev",
"Nikita",
""
],
[
"Panchenko",
"Alexander",
""
],
[
"Obiedkov",
"Sergei",
""
]
] |
new_dataset
| 0.987129 |
1804.05870
|
Rohit Pandey
|
Rohit Pandey, Pavel Pidlypenskyi, Shuoran Yang, Christine Kaeser-Chen
|
Egocentric 6-DoF Tracking of Small Handheld Objects
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Virtual and augmented reality technologies have seen significant growth in
the past few years. A key component of such systems is the ability to track the
pose of head mounted displays and controllers in 3D space. We tackle the
problem of efficient 6-DoF tracking of a handheld controller from egocentric
camera perspectives. We collected the HMD Controller dataset, which consists of
over 540,000 stereo image pairs labelled with the full 6-DoF pose of the
handheld controller. Our proposed SSD-AF-Stereo3D model achieves a mean average
error of 33.5 millimeters in 3D keypoint prediction and is used in conjunction
with an IMU sensor on the controller to enable 6-DoF tracking. We also present
results on approaches for model based full 6-DoF tracking. All our models
operate under the strict constraints of real time mobile CPU inference.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 18:08:51 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Pandey",
"Rohit",
""
],
[
"Pidlypenskyi",
"Pavel",
""
],
[
"Yang",
"Shuoran",
""
],
[
"Kaeser-Chen",
"Christine",
""
]
] |
new_dataset
| 0.999407 |
1804.05926
|
Jonni Virtema
|
Flavio Ferrarotti, Jan Van den Bussche, and Jonni Virtema
|
Expressivity within second-order transitive-closure logic
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Second-order transitive-closure logic, SO(TC), is an expressive declarative
language that captures the complexity class PSPACE. Already its monadic
fragment, MSO(TC), allows the expression of various NP-hard and even
PSPACE-hard problems in a natural and elegant manner. As SO(TC) offers an
attractive framework for expressing properties in terms of declaratively
specified computations, it is interesting to understand the expressivity of
different features of the language. This paper focuses on the fragment MSO(TC),
as well as on the purely existential fragment SO(2TC)(E); in 2TC, the TC operator
binds only tuples of relation variables. We establish that, with respect to
expressive power, SO(2TC)(E) collapses to existential first-order logic. In
addition we study the relationship of MSO(TC) to an extension of MSO(TC) with
counting features (CMSO(TC)) as well as to order-invariant MSO. We show that
the expressive powers of CMSO(TC) and MSO(TC) coincide. Moreover we establish
that, over unary vocabularies, MSO(TC) strictly subsumes order-invariant MSO.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 20:35:26 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Ferrarotti",
"Flavio",
""
],
[
"Bussche",
"Jan Van den",
""
],
[
"Virtema",
"Jonni",
""
]
] |
new_dataset
| 0.978691 |
1804.06000
|
Tadashi Wadayama
|
Kazuya Hirata and Tadashi Wadayama
|
Asymptotic Achievable Rate of Two-Dimensional Constraint Codes based on
Column by Column Encoding
|
5 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a column by column encoding scheme suitable for
two-dimensional (2D) constraint codes and derive a lower bound of its maximum
achievable rate. It is shown that the maximum achievable rate is equal to the
largest minimum degree of a subgraph of the maximal valid pair graph. A graph
theoretical analysis to provide a lower bound of the maximum achievable rate is
presented. For several 2D-constraints such as the asymmetric and symmetric
non-isolated bit constraints, the values of the lower bound are evaluated.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 01:07:59 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Hirata",
"Kazuya",
""
],
[
"Wadayama",
"Tadashi",
""
]
] |
new_dataset
| 0.957273 |
1804.06003
|
Cunsheng Ding
|
Ziling Heng and Cunsheng Ding
|
The Subfield Codes of Hyperoval and Conic codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperovals in $\PG(2,\gf(q))$ with even $q$ are maximal arcs and an
interesting research topic in finite geometries and combinatorics. Hyperovals
in $\PG(2,\gf(q))$ are equivalent to $[q+2,3,q]$ MDS codes over $\gf(q)$,
called hyperoval codes, in the sense that one can be constructed from the
other. Ovals in $\PG(2,\gf(q))$ for odd $q$ are equivalent to $[q+1,3,q-1]$ MDS
codes over $\gf(q)$, which are called oval codes. In this paper, we investigate
the binary subfield codes of two families of hyperoval codes and the $p$-ary
subfield codes of the conic codes. The weight distributions of these subfield
codes and the parameters of their duals are determined. As a byproduct, we
generalize one family of the binary subfield codes to the $p$-ary case and
obtain its weight distribution. The codes presented in this paper are optimal
or almost optimal in many cases. In addition, the parameters of these binary
codes and $p$-ary codes seem new.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 01:20:59 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Heng",
"Ziling",
""
],
[
"Ding",
"Cunsheng",
""
]
] |
new_dataset
| 0.999082 |
1804.06011
|
Konstantinos Georgiou
|
Jurek Czyzowicz, Konstantinos Georgiou, Ryan Killick, Evangelos
Kranakis, Danny Krizanc, Lata Narayanan, Jaroslav Opatrny and Sunil Shende
|
God Save the Queen
|
33 pages, 8 Figures. This is the full version of the paper with the
same title which will appear in the proceedings of the 9th International
Conference on Fun with Algorithms, (FUN'18), June 13--15, 2018, La Maddalena,
Maddalena Islands, Italy
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Queen Daniela of Sardinia is asleep at the center of a round room at the top
of the tower in her castle. She is accompanied by her faithful servant, Eva.
Suddenly, they are awakened by cries of "Fire". The room is pitch black and
they are disoriented. There is exactly one exit from the room somewhere along
its boundary. They must find it as quickly as possible in order to save the
life of the queen. It is known that with two people searching while moving at
maximum speed 1 anywhere in the room, the room can be evacuated (i.e., with
both people exiting) in $1 + \frac{2\pi}{3} + \sqrt{3} \approx 4.8264$ time
units and this is optimal [Czyzowicz et al., DISC'14], assuming that the first
person to find the exit can directly guide the other person to the exit using
her voice. Somewhat surprisingly, in this paper we show that if the goal is to
save the queen (possibly leaving Eva behind to die in the fire) there is a
slightly better strategy. We prove that this "priority" version of evacuation
can be solved in time at most $4.81854$. Furthermore, we show that any strategy
for saving the queen requires time at least $3 + \pi/6 + \sqrt{3}/2 \approx
4.3896$ in the worst case. If one or both of the queen's other servants (Biddy
and/or Lili) are with her, we show that the time bounds can be improved to
$3.8327$ for two servants, and $3.3738$ for three servants. Finally we show
lower bounds for these cases of $3.6307$ (two servants) and $3.2017$ (three
servants). The case of $n\geq 4$ is the subject of an independent study by
Queen Daniela's Royal Scientific Team.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 01:42:44 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Czyzowicz",
"Jurek",
""
],
[
"Georgiou",
"Konstantinos",
""
],
[
"Killick",
"Ryan",
""
],
[
"Kranakis",
"Evangelos",
""
],
[
"Krizanc",
"Danny",
""
],
[
"Narayanan",
"Lata",
""
],
[
"Opatrny",
"Jaroslav",
""
],
[
"Shende",
"Sunil",
""
]
] |
new_dataset
| 0.998243 |
1804.06025
|
Vahid Rasouli Disfani
|
Changfu Li, Vahid R. Disfani, Zachary K. Pecenak, Saeed Mohajeryami,
Jan Kleissl
|
Optimal OLTC Voltage Control Scheme to Enable High Solar Penetrations
| null |
Li, Changfu, Vahid R. Disfani, Zachary K. Pecenak, Saeed
Mohajeryami, and Jan Kleissl. "Optimal OLTC voltage control scheme to enable
high solar penetrations." Electric Power Systems Research 160 (2018): 318-326
|
10.1016/j.epsr.2018.02.016
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High solar Photovoltaic (PV) penetration on distribution systems can cause
over-voltage problems. To this end, an Optimal Tap Control (OTC) method is
proposed to regulate On-Load Tap Changers (OLTCs) by minimizing the maximum
deviation of the voltage profile from 1 p.u. on the entire feeder. A secondary
objective is to reduce the number of tap operations (TOs), which is implemented
for the optimization horizon based on voltage forecasts derived from high
resolution PV generation forecasts. A linearization technique is applied to
make the optimization problem convex and able to be solved at operational
timescales. Simulations on a PC show the solution time for one time step is
only 1.1 s for a large feeder with 4 OLTCs and 1623 buses. OTC results are
compared against existing methods through simulations on two feeders in the
Californian network. OTC is firstly compared against an advanced rule-based
Voltage Level Control (VLC) method. OTC and VLC achieve the same reduction of
voltage violations, but unlike VLC, OTC is capable of coordinating multiple
OLTCs. Scalability to multiple OLTCs is therefore demonstrated against a basic
conventional rule-based control method called Autonomous Tap Control (ATC).
Compared to ATC, the test feeder under control of OTC can accommodate around
67% more PV without over-voltage issues. Though a side effect of OTC is an
increase in tap operations, the secondary objective functionally balances
operations between all OLTCs such that impacts on their lifetime and
maintenance are minimized.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 03:13:08 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Li",
"Changfu",
""
],
[
"Disfani",
"Vahid R.",
""
],
[
"Pecenak",
"Zachary K.",
""
],
[
"Mohajeryami",
"Saeed",
""
],
[
"Kleissl",
"Jan",
""
]
] |
new_dataset
| 0.995936 |
1804.06028
|
Nikita Nangia
|
Nikita Nangia and Samuel R. Bowman
|
ListOps: A Diagnostic Dataset for Latent Tree Learning
|
8 pages, 4 figures, 3 tables, NAACL-SRW (2018)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Latent tree learning models learn to parse a sentence without syntactic
supervision, and use that parse to build the sentence representation. Existing
work on such models has shown that, while they perform well on tasks like
sentence classification, they do not learn grammars that conform to any
plausible semantic or syntactic formalism (Williams et al., 2018a). Studying
the parsing ability of such models in natural language can be challenging due
to the inherent complexities of natural language, like having several valid
parses for a single sentence. In this paper we introduce ListOps, a toy dataset
created to study the parsing ability of latent tree models. ListOps sequences
are in the style of prefix arithmetic. The dataset is designed to have a single
correct parsing strategy that a system needs to learn to succeed at the task.
We show that the current leading latent tree models are unable to learn to
parse and succeed at ListOps. These models achieve accuracies worse than purely
sequential RNNs.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 03:26:28 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Nangia",
"Nikita",
""
],
[
"Bowman",
"Samuel R.",
""
]
] |
new_dataset
| 0.999264 |
1804.06078
|
Haodi Hou
|
Haodi Hou, Jing Huo, Yang Gao
|
Cross-Domain Adversarial Auto-Encoder
|
Under review as a conference paper of KDD 2018
| null | null | null |
cs.CV cs.AI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose the Cross-Domain Adversarial Auto-Encoder (CDAAE)
to address the problem of cross-domain image inference, generation and
transformation. We make the assumption that images from different domains share
the same latent code space for content, while having separate latent code space
for style. The proposed framework can map cross-domain data to a latent code
vector consisting of a content part and a style part. The latent code vector is
matched with a prior distribution so that we can generate meaningful samples
from any part of the prior space. Consequently, given a sample of one domain,
our framework can generate various samples of the other domain with the same
content of the input. This makes the proposed framework different from the
current work of cross-domain transformation. Besides, the proposed framework
can be trained with both labeled and unlabeled data, which makes it also
suitable for domain adaptation. Experimental results on data sets SVHN, MNIST
and CASIA show the proposed framework achieved visually appealing performance
for image generation task. Besides, we also demonstrate the proposed method
achieved superior results for domain adaptation. Code of our experiments is
available in https://github.com/luckycallor/CDAAE.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 07:12:58 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Hou",
"Haodi",
""
],
[
"Huo",
"Jing",
""
],
[
"Gao",
"Yang",
""
]
] |
new_dataset
| 0.984761 |
1804.06112
|
Xiaowei Zhou
|
Xiaowei Zhou, Sikang Liu, Georgios Pavlakos, Vijay Kumar, Kostas
Daniilidis
|
Human Motion Capture Using a Drone
|
In International Conference on Robotics and Automation (ICRA) 2018
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current motion capture (MoCap) systems generally require markers and multiple
calibrated cameras, which can be used only in constrained environments. In this
work we introduce a drone-based system for 3D human MoCap. The system only
needs an autonomously flying drone with an on-board RGB camera and is usable in
various indoor and outdoor environments. A reconstruction algorithm is
developed to recover full-body motion from the video recorded by the drone. We
argue that, besides the capability of tracking a moving subject, a flying drone
also provides fast varying viewpoints, which is beneficial for motion
reconstruction. We evaluate the accuracy of the proposed system using our new
DroCap dataset and also demonstrate its applicability for MoCap in the wild
using a consumer drone.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 08:57:40 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Zhou",
"Xiaowei",
""
],
[
"Liu",
"Sikang",
""
],
[
"Pavlakos",
"Georgios",
""
],
[
"Kumar",
"Vijay",
""
],
[
"Daniilidis",
"Kostas",
""
]
] |
new_dataset
| 0.999429 |
1804.06137
|
Venkatesh Duppada
|
Venkatesh Duppada, Royal Jain, Sushant Hiray
|
SeerNet at SemEval-2018 Task 1: Domain Adaptation for Affect in Tweets
|
SemEval-2018 Task 1: Affect in Tweets
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper describes the best performing system for the SemEval-2018 Affect in
Tweets (English) sub-tasks. The system focuses on the ordinal classification
and regression sub-tasks for valence and emotion. For ordinal classification
valence is classified into 7 different classes ranging from -3 to 3 whereas
emotion is classified into 4 different classes 0 to 3 separately for each
emotion namely anger, fear, joy and sadness. The regression sub-tasks estimate
the intensity of valence and each emotion. The system performs domain
adaptation of 4 different models and creates an ensemble to give the final
prediction. The proposed system achieved 1st position out of the 75 teams that
participated in the aforementioned sub-tasks. We outperform the baseline model
by margins ranging from 49.2% to 76.4%, thus, pushing the state-of-the-art
significantly.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 09:50:01 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Duppada",
"Venkatesh",
""
],
[
"Jain",
"Royal",
""
],
[
"Hiray",
"Sushant",
""
]
] |
new_dataset
| 0.982989 |
1804.06236
|
Isaak Kavasidis
|
I. Kavasidis, S. Palazzo, C. Spampinato, C. Pino, D. Giordano, D.
Giuffrida, P. Messina
|
A Saliency-based Convolutional Neural Network for Table and Chart
Detection in Digitized Documents
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Convolutional Neural Networks (DCNNs) have recently been applied
successfully to a variety of vision and multimedia tasks, thus driving
development of novel solutions in several application domains. Document
analysis is a particularly promising area for DCNNs: indeed, the number of
available digital documents has reached unprecedented levels, and humans are no
longer able to discover and retrieve all the information contained in these
documents without the help of automation. Under this scenario, DCNNs offer a
viable solution to automate the information extraction process from digital
documents. Within the realm of information extraction from documents, detection
of tables and charts is particularly needed as they contain a visual summary of
the most valuable information contained in a document. For a complete
automation of visual information extraction process from tables and charts, it
is necessary to develop techniques that localize them and identify precisely
their boundaries. In this paper we aim at solving the table/chart detection
task through an approach that combines deep convolutional neural networks,
graphical models and saliency concepts. In particular, we propose a
saliency-based fully-convolutional neural network performing multi-scale
reasoning on visual cues followed by a fully-connected conditional random field
(CRF) for localizing tables and charts in digital/digitized documents.
Performance analysis carried out on an extended version of ICDAR 2013 (with
annotated charts as well as tables) shows that our approach yields promising
results, outperforming existing models.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 13:39:29 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Kavasidis",
"I.",
""
],
[
"Palazzo",
"S.",
""
],
[
"Spampinato",
"C.",
""
],
[
"Pino",
"C.",
""
],
[
"Giordano",
"D.",
""
],
[
"Giuffrida",
"D.",
""
],
[
"Messina",
"P.",
""
]
] |
new_dataset
| 0.997831 |
1804.06278
|
Chen Liu
|
Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, Yasutaka Furukawa
|
PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image
|
CVPR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a deep neural network (DNN) for piece-wise planar
depthmap reconstruction from a single RGB image. While DNNs have brought
remarkable progress to single-image depth prediction, piece-wise planar
depthmap reconstruction requires a structured geometry representation, and has
been a difficult task to master even for DNNs. The proposed end-to-end DNN
learns to directly infer a set of plane parameters and corresponding plane
segmentation masks from a single RGB image. We have generated more than 50,000
piece-wise planar depthmaps for training and testing from ScanNet, a
large-scale RGBD video database. Our qualitative and quantitative evaluations
demonstrate that the proposed approach outperforms baseline methods in terms of
both plane segmentation and depth estimation accuracy. To the best of our
knowledge, this paper presents the first end-to-end neural architecture for
piece-wise planar reconstruction from a single RGB image. Code and data are
available at https://github.com/art-programmer/PlaneNet.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 14:18:33 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Liu",
"Chen",
""
],
[
"Yang",
"Jimei",
""
],
[
"Ceylan",
"Duygu",
""
],
[
"Yumer",
"Ersin",
""
],
[
"Furukawa",
"Yasutaka",
""
]
] |
new_dataset
| 0.963744 |
1804.06313
|
Abdul Basit
|
N Chaitanya Kumar, Abdul Basit, Priyadarshi Singh, and V. Ch. Venkaiah
|
Lightweight Cryptography for Distributed PKI Based MANETS
| null |
International Journal of Computer Networks & Communications
(IJCNC) Vol.10, No.2, March 2018
|
10.5121/ijcnc.2018.10207
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because of lack of infrastructure and Central Authority(CA), secure
communication is a challenging job in MANETs. A lightweight security solution
is needed in a MANET to balance its nodes' resource constraints and mobility
features. The role of the CA should be decentralized in a MANET because the network is
managed by the nodes themselves without any fixed infrastructure and
centralized authority. In this paper, we created a distributed Public Key
Infrastructure (PKI) using Shamir secret sharing mechanism which allows the
nodes of the MANET to have a share of its private key. The traditional PKI
protocols require centralized authority and heavy computing power to manage
public and private keys, thus making them not suitable for MANETs. To establish
a secure communication for the MANET nodes, we proposed a lightweight crypto
protocol which requires limited resources, making it suitable for MANETs.
|
[
{
"version": "v1",
"created": "Mon, 9 Apr 2018 12:36:07 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Kumar",
"N Chaitanya",
""
],
[
"Basit",
"Abdul",
""
],
[
"Singh",
"Priyadarshi",
""
],
[
"Venkaiah",
"V. Ch.",
""
]
] |
new_dataset
| 0.979362 |
1804.06375
|
Yongbin Sun
|
Yongbin Sun, Ziwei Liu, Yue Wang, Sanjay E. Sarma
|
Im2Avatar: Colorful 3D Reconstruction from a Single Image
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing works on single-image 3D reconstruction mainly focus on shape
recovery. In this work, we study a new problem, that is, simultaneously
recovering 3D shape and surface color from a single image, namely "colorful 3D
reconstruction". This problem is both challenging and intriguing because the
ability to infer textured 3D model from a single image is at the core of visual
understanding. Here, we propose an end-to-end trainable framework, Colorful
Voxel Network (CVN), to tackle this problem. Conditioned on a single 2D input,
CVN learns to decompose shape and surface color information of a 3D object into
a 3D shape branch and a surface color branch, respectively. Specifically, for
the shape recovery, we generate a shape volume with the state of its voxels
indicating occupancy. For the surface color recovery, we combine the strength
of appearance hallucination and geometric projection by concurrently learning a
regressed color volume and a 2D-to-3D flow volume, which are then fused into a
blended color volume. The final textured 3D model is obtained by sampling color
from the blended color volume at the positions of occupied voxels in the shape
volume. To handle the severe sparse volume representations, a novel loss
function, Mean Squared False Cross-Entropy Loss (MSFCEL), is designed.
Extensive experiments demonstrate that our approach achieves significant
improvement over baselines, and shows great generalization across diverse
object categories and arbitrary viewpoints.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 17:02:20 GMT"
}
] | 2018-04-18T00:00:00 |
[
[
"Sun",
"Yongbin",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Wang",
"Yue",
""
],
[
"Sarma",
"Sanjay E.",
""
]
] |
new_dataset
| 0.999662 |
1612.09352
|
Ingmar Steiner
|
Ingmar Steiner, S\'ebastien Le Maguer and Alexander Hewer
|
Synthesis of Tongue Motion and Acoustics from Text using a Multimodal
Articulatory Database
| null |
IEEE/ACM Transactions on Audio, Speech, and Language Processing 25
(2017) 2351 - 2361
|
10.1109/TASLP.2017.2756818
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an end-to-end text-to-speech (TTS) synthesis system that generates
audio and synchronized tongue motion directly from text. This is achieved by
adapting a 3D model of the tongue surface to an articulatory dataset and
training a statistical parametric speech synthesis system directly on the
tongue model parameters. We evaluate the model at every step by comparing the
spatial coordinates of predicted articulatory movements against the reference
data. The results indicate a global mean Euclidean distance of less than 2.8
mm, and our approach can be adapted to add an articulatory modality to
conventional TTS applications without the need for extra data.
|
[
{
"version": "v1",
"created": "Fri, 30 Dec 2016 00:05:03 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2017 15:35:43 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Dec 2017 15:28:14 GMT"
},
{
"version": "v4",
"created": "Fri, 13 Apr 2018 14:36:28 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Steiner",
"Ingmar",
""
],
[
"Maguer",
"Sébastien Le",
""
],
[
"Hewer",
"Alexander",
""
]
] |
new_dataset
| 0.999322 |
1705.06936
|
Tomasz Grel
|
Robert Adamski, Tomasz Grel, Maciej Klimek and Henryk Michalewski
|
Atari games and Intel processors
| null | null |
10.1007/978-3-319-75931-9_1
| null |
cs.DC cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The asynchronous nature of the state-of-the-art reinforcement learning
algorithms, such as the Asynchronous Advantage Actor-Critic algorithm, makes
them exceptionally suitable for CPU computations. However, given the fact that
deep reinforcement learning often deals with interpreting visual information, a
large part of the train and inference time is spent performing convolutions. In
this work we present our results on learning strategies in Atari games using a
Convolutional Neural Network, the Math Kernel Library and TensorFlow 0.11rc0
machine learning framework. We also analyze effects of asynchronous
computations on the convergence of reinforcement learning algorithms.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2017 11:19:45 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Adamski",
"Robert",
""
],
[
"Grel",
"Tomasz",
""
],
[
"Klimek",
"Maciej",
""
],
[
"Michalewski",
"Henryk",
""
]
] |
new_dataset
| 0.98156 |
1705.06979
|
Matthias Dorfer
|
Matthias Dorfer and Jan Schl\"uter and Andreu Vall and Filip
Korzeniowski and Gerhard Widmer
|
End-to-End Cross-Modality Retrieval with CCA Projections and Pairwise
Ranking Loss
|
Preliminary version of a paper published in the International Journal
of Multimedia Information Retrieval
| null |
10.1007/s13735-018-0151-5
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-modality retrieval encompasses retrieval tasks where the fetched items
are of a different type than the search query, e.g., retrieving pictures
relevant to a given text query. The state-of-the-art approach to cross-modality
retrieval relies on learning a joint embedding space of the two modalities,
where items from either modality are retrieved using nearest-neighbor search.
In this work, we introduce a neural network layer based on Canonical
Correlation Analysis (CCA) that learns better embedding spaces by analytically
computing projections that maximize correlation. In contrast to previous
approaches, the CCA Layer (CCAL) allows us to combine existing objectives for
embedding space learning, such as pairwise ranking losses, with the optimal
projections of CCA. We show the effectiveness of our approach for
cross-modality retrieval on three different scenarios (text-to-image,
audio-sheet-music and zero-shot retrieval), surpassing both Deep CCA and a
multi-view network using freely learned projections optimized by a pairwise
ranking loss, especially when little training data is available (the code for
all three methods is released at: https://github.com/CPJKU/cca_layer).
|
[
{
"version": "v1",
"created": "Fri, 19 May 2017 13:23:46 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Apr 2018 14:03:05 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Dorfer",
"Matthias",
""
],
[
"Schlüter",
"Jan",
""
],
[
"Vall",
"Andreu",
""
],
[
"Korzeniowski",
"Filip",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
new_dataset
| 0.975507 |
1706.02823
|
Wenqi Xian
|
Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu,
Chen Fang, Fisher Yu, James Hays
|
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
|
CVPR 2018 spotlight
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate deep image synthesis guided by sketch, color,
and texture. Previous image synthesis methods can be controlled by sketch and
color strokes but we are the first to examine texture control. We allow a user
to place a texture patch on a sketch at arbitrary locations and scales to
control the desired output texture. Our generative network learns to synthesize
objects consistent with these texture suggestions. To achieve this, we develop
a local texture loss in addition to adversarial and content loss to train the
generative network. We conduct experiments using sketches generated from real
images and textures sampled from a separate texture database and results show
that our proposed algorithm is able to generate plausible images that are
faithful to user controls. Ablation studies show that our proposed pipeline can
generate more realistic images than adapting existing methods directly.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2017 03:35:08 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Dec 2017 08:19:15 GMT"
},
{
"version": "v3",
"created": "Sat, 14 Apr 2018 20:11:56 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Xian",
"Wenqi",
""
],
[
"Sangkloy",
"Patsorn",
""
],
[
"Agrawal",
"Varun",
""
],
[
"Raj",
"Amit",
""
],
[
"Lu",
"Jingwan",
""
],
[
"Fang",
"Chen",
""
],
[
"Yu",
"Fisher",
""
],
[
"Hays",
"James",
""
]
] |
new_dataset
| 0.99831 |
1710.08259
|
Bal\'azs T\'oth
|
Balazs Toth
|
Nauticle: a general-purpose particle-based simulation tool
|
Submitted manuscript
| null | null | null |
cs.MS physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nauticle is a general-purpose simulation tool for the flexible and highly
configurable application of particle-based methods of either discrete or
continuum phenomena. It is presented that Nauticle has three distinct layers
for users and developers, then the top two layers are discussed in detail. The
paper introduces the Symbolic Form Language (SFL) of Nauticle, which
facilitates the formulation of user-defined numerical models at the top level
in text-based configuration files and provides simple application examples of
use. On the other hand, at the intermediate level, it is shown that the SFL can
be intuitively extended with new particle methods without tedious recoding or
even the knowledge of the bottom level. Finally, the efficiency of the code is
also tested through a performance benchmark.
|
[
{
"version": "v1",
"created": "Mon, 23 Oct 2017 13:27:36 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Apr 2018 16:41:41 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Toth",
"Balazs",
""
]
] |
new_dataset
| 0.99942 |
1711.07846
|
Gordon Christie
|
Gordon Christie, Neil Fendley, James Wilson, Ryan Mukherjee
|
Functional Map of the World
|
CVPR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new dataset, Functional Map of the World (fMoW), which aims to
inspire the development of machine learning models capable of predicting the
functional purpose of buildings and land use from temporal sequences of
satellite images and a rich set of metadata features. The metadata provided
with each image enables reasoning about location, time, sun angles, physical
sizes, and other features when making predictions about objects in the image.
Our dataset consists of over 1 million images from over 200 countries. For each
image, we provide at least one bounding box annotation containing one of 63
categories, including a "false detection" category. We present an analysis of
the dataset along with baseline approaches that reason about metadata and
temporal views. Our data, code, and pretrained models have been made publicly
available.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 15:28:00 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Nov 2017 04:55:20 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Apr 2018 19:03:50 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Christie",
"Gordon",
""
],
[
"Fendley",
"Neil",
""
],
[
"Wilson",
"James",
""
],
[
"Mukherjee",
"Ryan",
""
]
] |
new_dataset
| 0.99972 |
1711.07950
|
Jason Weston
|
Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H.
Miller, Arthur Szlam, Douwe Kiela, Jason Weston
|
Mastering the Dungeon: Grounded Language Learning by Mechanical Turker
Descent
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrary to most natural language processing research, which makes use of
static datasets, humans learn language interactively, grounded in an
environment. In this work we propose an interactive learning procedure called
Mechanical Turker Descent (MTD) and use it to train agents to execute natural
language commands grounded in a fantasy text adventure game. In MTD, Turkers
compete to train better agents in the short term, and collaborate by sharing
their agents' skills in the long term. This results in a gamified, engaging
experience for the Turkers and a better quality teaching signal for the agents
compared to static datasets, as the Turkers naturally adapt the training data
to the agent's abilities.
|
[
{
"version": "v1",
"created": "Tue, 21 Nov 2017 18:21:16 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Nov 2017 02:08:18 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Apr 2018 15:48:58 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Yang",
"Zhilin",
""
],
[
"Zhang",
"Saizheng",
""
],
[
"Urbanek",
"Jack",
""
],
[
"Feng",
"Will",
""
],
[
"Miller",
"Alexander H.",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Weston",
"Jason",
""
]
] |
new_dataset
| 0.955119 |
1712.06761
|
Paul Vicol
|
Paul Vicol, Makarand Tapaswi, Lluis Castrejon, Sanja Fidler
|
MovieGraphs: Towards Understanding Human-Centric Situations from Videos
|
Spotlight at CVPR 2018. Webpage: http://moviegraphs.cs.toronto.edu
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is growing interest in artificial intelligence to build socially
intelligent robots. This requires machines to have the ability to "read"
people's emotions, motivations, and other factors that affect behavior. Towards
this goal, we introduce a novel dataset called MovieGraphs which provides
detailed, graph-based annotations of social situations depicted in movie clips.
Each graph consists of several types of nodes, to capture who is present in the
clip, their emotional and physical attributes, their relationships (i.e.,
parent/child), and the interactions between them. Most interactions are
associated with topics that provide additional details, and reasons that give
motivations for actions. In addition, most interactions and many attributes are
grounded in the video with time stamps. We provide a thorough analysis of our
dataset, showing interesting common-sense correlations between different social
aspects of scenes, as well as across scenes over time. We propose a method for
querying videos and text with graphs, and show that: 1) our graphs contain rich
and sufficient information to summarize and localize each scene; and 2)
subgraphs allow us to describe situations at an abstract level and retrieve
multiple semantically relevant situations. We also propose methods for
interaction understanding via ordering, and reason understanding. MovieGraphs
is the first benchmark to focus on inferred properties of human-centric
situations, and opens up an exciting avenue towards socially-intelligent AI
agents.
|
[
{
"version": "v1",
"created": "Tue, 19 Dec 2017 03:08:25 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Apr 2018 18:59:49 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Vicol",
"Paul",
""
],
[
"Tapaswi",
"Makarand",
""
],
[
"Castrejon",
"Lluis",
""
],
[
"Fidler",
"Sanja",
""
]
] |
new_dataset
| 0.999083 |
1803.02471
|
Zhenyu Ning
|
Zhenyu Ning and Fengwei Zhang
|
DexLego: Reassembleable Bytecode Extraction for Aiding Static Analysis
|
12 pages, 6 figures, to appear in the 48th IEEE/IFIP International
Conference on Dependable Systems and Networks (DSN'18)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scale of Android applications in the market is growing rapidly. To
efficiently detect the malicious behavior in these applications, an array of
static analysis tools are proposed. However, static analysis tools suffer from
code hiding techniques like packing, dynamic loading, self modifying, and
reflection. In this paper, we thus present DexLego, a novel system that
performs a reassembleable bytecode extraction for aiding static analysis tools
to reveal the malicious behavior of Android applications. DexLego leverages
just-in-time collection to extract data and bytecode from an application at
runtime, and reassembles them to a new Dalvik Executable (DEX) file offline.
The experiments on DroidBench and real-world applications show that DexLego
correctly reconstructs the behavior of an application in the reassembled DEX
file, and significantly improves analysis result of the existing static
analysis systems.
|
[
{
"version": "v1",
"created": "Tue, 6 Mar 2018 23:29:19 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Mar 2018 20:32:56 GMT"
},
{
"version": "v3",
"created": "Sun, 15 Apr 2018 03:24:30 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Ning",
"Zhenyu",
""
],
[
"Zhang",
"Fengwei",
""
]
] |
new_dataset
| 0.97829 |
1804.04326
|
Yuya Yoshikawa
|
Yuya Yoshikawa, Jiaqing Lin, Akikazu Takeuchi
|
STAIR Actions: A Video Dataset of Everyday Home Actions
|
STAIR Actions dataset can be downloaded from
http://actions.stair.center
| null | null | null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new large-scale video dataset for human action recognition, called STAIR
Actions is introduced. STAIR Actions contains 100 categories of action labels
representing fine-grained everyday home actions so that it can be applied to
research in various home tasks such as nursing, caring, and security. In STAIR
Actions, each video has a single action label. Moreover, for each action
category, there are around 1,000 videos that were obtained from YouTube or
produced by crowdsource workers. The duration of each video is mostly five to
six seconds. The total number of videos is 102,462. We explain how we
constructed STAIR Actions and show the characteristics of STAIR Actions
compared to existing datasets for human action recognition. Experiments with
three major models for action recognition show that STAIR Actions can train
large models and achieve good performance. STAIR Actions can be downloaded from
http://actions.stair.center
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 05:48:06 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Apr 2018 03:26:54 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Apr 2018 05:40:42 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Yoshikawa",
"Yuya",
""
],
[
"Lin",
"Jiaqing",
""
],
[
"Takeuchi",
"Akikazu",
""
]
] |
new_dataset
| 0.999905 |
1804.05088
|
Yuval Pinter
|
Ian Stewart, Yuval Pinter, Jacob Eisenstein
|
S\'i o no, qu\`e penses? Catalonian Independence and Linguistic Identity
on Social Media
|
NAACL 2018
| null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Political identity is often manifested in language variation, but the
relationship between the two is still relatively unexplored from a quantitative
perspective. This study examines the use of Catalan, a language local to the
semi-autonomous region of Catalonia in Spain, on Twitter in discourse related
to the 2017 independence referendum. We corroborate prior findings that
pro-independence tweets are more likely to include the local language than
anti-independence tweets. We also find that Catalan is used more often in
referendum-related discourse than in other contexts, contrary to prior findings
on language variation. This suggests a strong role for the Catalan language in
the expression of Catalonian political identity.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 18:52:14 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Stewart",
"Ian",
""
],
[
"Pinter",
"Yuval",
""
],
[
"Eisenstein",
"Jacob",
""
]
] |
new_dataset
| 0.999309 |
1804.05091
|
Cosmin Ancuti
|
Codruta O. Ancuti, Cosmin Ancuti, Radu Timofte and Christophe De
Vleeschouwer
|
I-HAZE: a dehazing benchmark with real hazy and haze-free indoor images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image dehazing has become an important computational imaging topic in the
recent years. However, due to the lack of ground truth images, the comparison
of dehazing methods is not straightforward, nor objective. To overcome this
issue we introduce a new dataset, named I-HAZE, that contains 35 image pairs of
hazy and corresponding haze-free (ground-truth) indoor images. Different from
most of the existing dehazing databases, hazy images have been generated using
real haze produced by a professional haze machine. For easy color calibration
and improved assessment of dehazing algorithms, each scene includes a MacBeth
color checker. Moreover, since the images are captured in a controlled
environment, both haze-free and hazy images are captured under the same
illumination conditions. This represents an important advantage of the I-HAZE
dataset that allows us to objectively compare the existing image dehazing
techniques using traditional image quality metrics such as PSNR and SSIM.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 19:01:39 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Ancuti",
"Codruta O.",
""
],
[
"Ancuti",
"Cosmin",
""
],
[
"Timofte",
"Radu",
""
],
[
"De Vleeschouwer",
"Christophe",
""
]
] |
new_dataset
| 0.999118 |
1804.05250
|
Akshay R
|
Akshay Raman, Kimberly Chou
|
Porting nTorrent to ndnSIM
|
3 pages, 6 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
BitTorrent is a popular communication protocol for peer-to-peer file sharing.
It uses a data-centric approach, wherein the data is decentralized and peers
request each other for pieces of the file(s). Aspects of this process is
similar to the Named Data Networking (NDN) architecture, but is realized
completely at the application level on top of TCP/IP networking. nTorrent is a
peer-to-peer file sharing application that is based on NDN. The goal of this
project is to port the application onto ndnSIM to allow for simulation and
testing.
|
[
{
"version": "v1",
"created": "Sat, 14 Apr 2018 17:13:10 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Raman",
"Akshay",
""
],
[
"Chou",
"Kimberly",
""
]
] |
new_dataset
| 0.997097 |
1804.05253
|
Debanjan Ghosh
|
Debanjan Ghosh and Smaranda Muresan
|
"With 1 follower I must be AWESOME :P". Exploring the role of irony
markers in irony recognition
|
ICWSM 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversations in social media often contain the use of irony or sarcasm, when
the users say the opposite of what they really mean. Irony markers are the
meta-communicative clues that inform the reader that an utterance is ironic. We
propose a thorough analysis of theoretically grounded irony markers in two
social media platforms: $Twitter$ and $Reddit$. Classification and frequency
analysis show that for $Twitter$, typographic markers such as emoticons and
emojis are the most discriminative markers to recognize ironic utterances,
while for $Reddit$ the morphological markers (e.g., interjections, tag
questions) are the most discriminative.
|
[
{
"version": "v1",
"created": "Sat, 14 Apr 2018 17:39:45 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Ghosh",
"Debanjan",
""
],
[
"Muresan",
"Smaranda",
""
]
] |
new_dataset
| 0.954187 |
1804.05294
|
Antonio San Mart\'in
|
P. Le\'on-Ara\'uz, A. San Mart\'in
|
The EcoLexicon Semantic Sketch Grammar: from Knowledge Patterns to Word
Sketches
|
Proceedings of the LREC 2018 Workshop Globalex 2018 Lexicography &
WordNets, edited by Kerneman, I. & Krek, S., pages 94-99. Miyazaki: Globalex
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many projects have applied knowledge patterns (KPs) to the retrieval of
specialized information. Yet terminologists still rely on manual analysis of
concordance lines to extract semantic information, since there are no
user-friendly publicly available applications enabling them to find knowledge
rich contexts (KRCs). To fill this void, we have created the KP-based
EcoLexicon Semantic Sketch Grammar (ESSG) in the well-known corpus query system
Sketch Engine. For the first time, the ESSG is now publicly available in Sketch
Engine to query the EcoLexicon English Corpus. Additionally, reusing the ESSG
in any English corpus uploaded by the user enables Sketch Engine to extract
KRCs codifying generic-specific, part-whole, location, cause and function
relations, because most of the KPs are domain-independent. The information is
displayed in the form of summary lists (word sketches) containing the pairs of
terms linked by a given semantic relation. This paper describes the process of
building a KP-based sketch grammar with special focus on the last stage,
namely, the evaluation with refinement purposes. We conducted an initial
shallow precision and recall evaluation of the 64 English sketch grammar rules
created so far for hyponymy, meronymy and causality. Precision was measured
based on a random sample of concordances extracted from each word sketch type.
Recall was assessed based on a random sample of concordances where known term
pairs are found. The results are necessary for the improvement and refinement
of the ESSG. The noise of false positives helped to further specify the rules,
whereas the silence of false negatives allows us to find useful new patterns.
|
[
{
"version": "v1",
"created": "Sun, 15 Apr 2018 02:21:28 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"León-Araúz",
"P.",
""
],
[
"Martín",
"A. San",
""
]
] |
new_dataset
| 0.998698 |
1804.05338
|
Jo Schlemper
|
Jo Schlemper, Ozan Oktay, Liang Chen, Jacqueline Matthew, Caroline
Knight, Bernhard Kainz, Ben Glocker, Daniel Rueckert
|
Attention-Gated Networks for Improving Ultrasound Scan Plane Detection
|
Submitted to MIDL2018 (OpenReview:
https://openreview.net/forum?id=BJtn7-3sM)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we apply an attention-gated network to real-time automated scan
plane detection for fetal ultrasound screening. Scan plane detection in fetal
ultrasound is a challenging problem due to the poor image quality, resulting in low
interpretability for both clinicians and automated algorithms. To solve this,
we propose incorporating self-gated soft-attention mechanisms. A soft-attention
mechanism generates a gating signal that is end-to-end trainable, which allows
the network to contextualise local information useful for prediction. The
proposed attention mechanism is generic and it can be easily incorporated into
any existing classification architectures, while only requiring a few
additional parameters. We show that, when the base network has a high capacity,
the incorporated attention mechanism can provide efficient object localisation
while improving the overall performance. When the base network has a low
capacity, the method greatly outperforms the baseline approach and
significantly reduces false positives. Lastly, the generated attention maps
allow us to understand the model's reasoning process, which can also be used
for weakly supervised object localisation.
|
[
{
"version": "v1",
"created": "Sun, 15 Apr 2018 11:15:28 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Schlemper",
"Jo",
""
],
[
"Oktay",
"Ozan",
""
],
[
"Chen",
"Liang",
""
],
[
"Matthew",
"Jacqueline",
""
],
[
"Knight",
"Caroline",
""
],
[
"Kainz",
"Bernhard",
""
],
[
"Glocker",
"Ben",
""
],
[
"Rueckert",
"Daniel",
""
]
] |
new_dataset
| 0.994476 |
1804.05371
|
Maya Levy
|
Maya Levy and Eitan Yaakobi
|
Mutually Uncorrelated Codes for DNA Storage
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mutually Uncorrelated (MU) codes are a class of codes in which no proper
prefix of one codeword is a suffix of another codeword. These codes were
originally studied for synchronization purposes and recently, Yazdi et al.
showed their applicability to enable random access in DNA storage. In this work
we follow the research of Yazdi et al. and study MU codes along with their
extensions to correct errors and balanced codes. We first review a well known
construction of MU codes and study the asymptotic behavior of its cardinality.
This task is accomplished by studying a special class of run-length limited
codes that impose the longest run of zeros to be at most some function of the
codewords length. We also present an efficient algorithm for this class of
constrained codes and show how to use this analysis for MU codes. Next, we
extend the results on the run-length limited codes in order to study
$(d_h,d_m)$-MU codes that impose a minimum Hamming distance of $d_h$ between
different codewords and $d_m$ between prefixes and suffixes. In particular, we
show an efficient construction of these codes with nearly optimal redundancy.
We also provide similar results for the edit distance and balanced MU codes.
Lastly, we draw connections to the problems of comma-free and prefix
synchronized codes.
|
[
{
"version": "v1",
"created": "Sun, 15 Apr 2018 15:40:37 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Levy",
"Maya",
""
],
[
"Yaakobi",
"Eitan",
""
]
] |
new_dataset
| 0.98237 |
1804.05398
|
Radhika Mamidi Dr
|
Radhika Mamidi
|
Context and Humor: Understanding Amul advertisements of India
|
Presented at Workshop in Designing Humour in Human-Computer
Interaction (HUMIC 2017). September 26th 2017, Mumbai, India. In conjunction
with INTERACT 2017
| null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Contextual knowledge is the most important element in understanding language.
By contextual knowledge we mean both general knowledge and discourse knowledge
i.e. knowledge of the situational context, background knowledge and the
co-textual context [10]. In this paper, we will discuss the importance of
contextual knowledge in understanding the humor present in the cartoon based
Amul advertisements in India. In the process, we will analyze these
advertisements and also see if humor is an effective tool for advertising and
thereby, for marketing. These bilingual advertisements also expect the audience
to have the appropriate linguistic knowledge which includes knowledge of
English and Hindi vocabulary, morphology and syntax. Different techniques like
punning, portmanteaus and parodies of popular proverbs, expressions, acronyms,
famous dialogues, songs etc are employed to convey the message in a humorous
way. The present study will concentrate on these linguistic cues and the
required context for understanding wit and humor.
|
[
{
"version": "v1",
"created": "Sun, 15 Apr 2018 18:00:53 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Mamidi",
"Radhika",
""
]
] |
new_dataset
| 0.999466 |
1804.05409
|
Philip Feldman
|
Philip Feldman, Aaron Dant, Wayne Lutters
|
Simon's Anthill: Mapping and Navigating Belief Spaces
|
Collective Intelligence 2018
| null | null | null |
cs.MA cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the parable of Simon's Ant, an ant follows a complex path along a beach
to reach its goal. The story shows how the interaction of simple rules and a
complex environment result in complex behavior. But this relationship can be
looked at in another way: given the path and the rules, we can infer the environment.
With a large population of agents - human or animal - it should be possible to
build a detailed map of a population's social and physical environment. In this
abstract, we describe the development of a framework to create such maps of
human belief space. These maps are built from the combined trajectories of a
large number of agents. Currently, these maps are built using multidimensional
agent-based simulation, but the framework is designed to work using data from
computer-mediated human communication. Maps incorporating human data should
support visualization and navigation of the "plains of research", "fashionable
foothills" and "conspiracy cliffs" of human belief spaces.
|
[
{
"version": "v1",
"created": "Sun, 15 Apr 2018 19:03:17 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Feldman",
"Philip",
""
],
[
"Dant",
"Aaron",
""
],
[
"Lutters",
"Wayne",
""
]
] |
new_dataset
| 0.973753 |
1804.05469
|
Kai Xu
|
Chengjie Niu, Jun Li and Kai Xu
|
Im2Struct: Recovering 3D Shape Structure from a Single RGB Image
|
CVPR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose to recover 3D shape structures from single RGB images, where
structure refers to shape parts represented by cuboids and part relations
encompassing connectivity and symmetry. Given a single 2D image with an object
depicted, our goal is to automatically recover a cuboid structure of the object
parts as well as their mutual relations. We develop a convolutional-recursive
auto-encoder comprised of structure parsing of a 2D image followed by structure
recovering of a cuboid hierarchy. The encoder is achieved by a multi-scale
convolutional network trained with the task of shape contour estimation,
thereby learning to discern object structures in various forms and scales. The
decoder fuses the features of the structure parsing network and the original
image, and recursively decodes a hierarchy of cuboids. Since the decoder
network is learned to recover part relations including connectivity and
symmetry explicitly, the plausibility and generality of part structure recovery
can be ensured. The two networks are jointly trained using the training data of
contour-mask and cuboid structure pairs. Such pairs are generated by rendering
stock 3D CAD models coming with part segmentation. Our method achieves
unprecedentedly faithful and detailed recovery of diverse 3D part structures
from single-view 2D images. We demonstrate two applications of our method
including structure-guided completion of 3D volumes reconstructed from
single-view images and structure-aware interactive editing of 2D images.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 01:32:30 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Niu",
"Chengjie",
""
],
[
"Li",
"Jun",
""
],
[
"Xu",
"Kai",
""
]
] |
new_dataset
| 0.995809 |
1804.05492
|
Bernadette Boscoe
|
Bernadette M. Boscoe (Randles), Irene V. Pasquetto, Milena S. Golshan,
Christine L. Borgman
|
Using the Jupyter Notebook as a Tool for Open Science: An Empirical
Study
| null |
2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL) (2017).
Toronto, ON, Canada. June 19, 2017 to June 23, 2017, ISBN: 978-1-5386-3862-0
pp: 1-2
| null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As scientific work becomes more computational and data intensive, research
processes and results become more difficult to interpret and reproduce. In this
poster, we show how the Jupyter notebook, a tool originally designed as a free
version of Mathematica notebooks, has evolved to become a robust tool for
scientists to share code, associated computation, and documentation.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 03:40:10 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Boscoe",
"Bernadette M.",
"",
"Randles"
],
[
"Pasquetto",
"Irene V.",
""
],
[
"Golshan",
"Milena S.",
""
],
[
"Borgman",
"Christine L.",
""
]
] |
new_dataset
| 0.970459 |
1804.05514
|
Mayank Singh
|
Mayank Singh, Pradeep Dogga, Sohan Patro, Dhiraj Barnwal, Ritam Dutt,
Rajarshi Haldar, Pawan Goyal and Animesh Mukherjee
|
CL Scholar: The ACL Anthology Knowledge Graph Miner
|
5 pages
| null | null | null |
cs.DL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CL Scholar, the ACL Anthology knowledge graph miner to facilitate
high-quality search and exploration of current research progress in the
computational linguistics community. In contrast to previous works,
periodically crawling, indexing and processing of new incoming articles is
completely automated in the current system. CL Scholar utilizes both textual
and network information for knowledge graph construction. As an additional
novel initiative, CL Scholar supports more than 1200 scholarly natural language
queries along with standard keyword-based search on the constructed knowledge
graph. It answers binary, statistical and list based natural language queries.
The current system is deployed at http://cnerg.iitkgp.ac.in/aclakg. We also
provide REST API support along with bulk download facility. Our code and data
are available at https://github.com/CLScholar.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 06:15:06 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Singh",
"Mayank",
""
],
[
"Dogga",
"Pradeep",
""
],
[
"Patro",
"Sohan",
""
],
[
"Barnwal",
"Dhiraj",
""
],
[
"Dutt",
"Ritam",
""
],
[
"Haldar",
"Rajarshi",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.987755 |
1804.05516
|
Cunsheng Ding
|
Cunsheng Ding and Ziling Heng
|
The Subfield Codes of Ovoid Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ovoids in $\PG(3, \gf(q))$ have been an interesting topic in coding theory,
combinatorics, and finite geometry for a long time. So far only two families of
ovoids are known. The first is the elliptic quadratics and the second is the
Tits ovoids. It is known that an ovoid in $\PG(3, \gf(q))$ corresponds to a
$[q^2+1, 4, q^2-q]$ code over $\gf(q)$, which is called an ovoid code. The
objective of this paper is to study the subfield codes of the two families of
ovoid codes. The dimensions, minimum weights, and the weight distributions of
the subfield codes of the elliptic quadric codes and Tits ovoid codes are
settled. The parameters of the duals of these subfield codes are also studied.
Some of the codes presented in this paper are optimal, and some are
distance-optimal. The parameters of the subfield codes are new.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 06:22:25 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Ding",
"Cunsheng",
""
],
[
"Heng",
"Ziling",
""
]
] |
new_dataset
| 0.999169 |
1804.05554
|
Bert Moons
|
Bert Moons, Daniel Bankman, Lita Yang, Boris Murmann, Marian Verhelst
|
BinarEye: An Always-On Energy-Accuracy-Scalable Binary CNN Processor
With All Memory On Chip in 28nm CMOS
|
Presented at the 2018 IEEE Custom Integrated Circuits Conference
(CICC). Presentation is available here:
https://www.researchgate.net/publication/324452819_Presentation_on_Binareye_at_CICC
| null | null | null |
cs.DC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces BinarEye: a digital processor for always-on Binary
Convolutional Neural Networks. The chip maximizes data reuse through a Neuron
Array exploiting local weight Flip-Flops. It stores full network models and
feature maps and hence requires no off-chip bandwidth, which leads to a 230
1b-TOPS/W peak efficiency. Its 3 levels of flexibility - (a) weight
reconfiguration, (b) a programmable network depth and (c) a programmable
network width - allow trading energy for accuracy depending on the task's
requirements. BinarEye's full system input-to-label energy consumption ranges
from 14.4uJ/f for 86% CIFAR-10 and 98% owner recognition down to 0.92uJ/f for
94% face detection at up to 1700 frames per second. This is 3-12-70x more
efficient than the state-of-the-art at on-par accuracy.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 08:51:29 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Moons",
"Bert",
""
],
[
"Bankman",
"Daniel",
""
],
[
"Yang",
"Lita",
""
],
[
"Murmann",
"Boris",
""
],
[
"Verhelst",
"Marian",
""
]
] |
new_dataset
| 0.993405 |
1804.05790
|
Zhengqin Li
|
Zhengqin Li, Kalyan Sunkavalli, Manmohan Chandraker
|
Materials for Masses: SVBRDF Acquisition with a Single Mobile Phone
Image
|
submitted to European Conference on Computer Vision
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a material acquisition approach to recover the spatially-varying
BRDF and normal map of a near-planar surface from a single image captured by a
handheld mobile phone camera. Our method images the surface under arbitrary
environment lighting with the flash turned on, thereby avoiding shadows while
simultaneously capturing high-frequency specular highlights. We train a CNN to
regress an SVBRDF and surface normals from this image. Our network is trained
using a large-scale SVBRDF dataset and designed to incorporate physical
insights for material estimation, including an in-network rendering layer to
model appearance and a material classifier to provide additional supervision
during training. We refine the results from the network using a dense CRF
module whose terms are designed specifically for our task. The framework is
trained end-to-end and produces high quality results for a variety of
materials. We provide extensive ablation studies to evaluate our network on
both synthetic and real data, while demonstrating significant improvements in
comparisons with prior works.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 16:59:38 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Li",
"Zhengqin",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Chandraker",
"Manmohan",
""
]
] |
new_dataset
| 0.989592 |
1804.05804
|
Lucas Janson
|
Lucas Janson, Tommy Hu, Marco Pavone
|
Safe Motion Planning in Unknown Environments: Optimality Benchmarks and
Tractable Policies
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the problem of planning a safe (i.e., collision-free)
trajectory from an initial state to a goal region when the obstacle space is
a-priori unknown and is incrementally revealed online, e.g., through
line-of-sight perception. Despite its ubiquitous nature, this formulation of
motion planning has received relatively little theoretical investigation, as
opposed to the setup where the environment is assumed known. A fundamental
challenge is that, unlike motion planning with known obstacles, it is not even
clear what an optimal policy to strive for is. Our contribution is threefold.
First, we present a notion of optimality for safe planning in unknown
environments in the spirit of comparative (as opposed to competitive) analysis,
with the goal of obtaining a benchmark that is, at least conceptually,
attainable. Second, by leveraging this theoretical benchmark, we derive a
pseudo-optimal class of policies that can seamlessly incorporate any amount of
prior or learned information while still guaranteeing the robot never collides.
Finally, we demonstrate the practicality of our algorithmic approach in
numerical experiments using a range of environment types and dynamics,
including a comparison with a state of the art method. A key aspect of our
framework is that it automatically and implicitly weighs exploration versus
exploitation in a way that is optimal with respect to the information
available.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 17:24:26 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Janson",
"Lucas",
""
],
[
"Hu",
"Tommy",
""
],
[
"Pavone",
"Marco",
""
]
] |
new_dataset
| 0.996767 |
1804.05827
|
Zuxuan Wu
|
Zuxuan Wu, Xintong Han, Yen-Liang Lin, Mustafa Gökhan Uzunbas, Tom
Goldstein, Ser Nam Lim, Larry S. Davis
|
DCAN: Dual Channel-wise Alignment Networks for Unsupervised Scene
Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Harvesting dense pixel-level annotations to train deep neural networks for
semantic segmentation is extremely expensive and unwieldy at scale. While
learning from synthetic data where labels are readily available sounds
promising, performance degrades significantly when testing on novel realistic
data due to domain discrepancies. We present Dual Channel-wise Alignment
Networks (DCAN), a simple yet effective approach to reduce domain shift at both
pixel-level and feature-level. Exploring statistics in each channel of CNN
feature maps, our framework performs channel-wise feature alignment, which
preserves spatial structures and semantic information, in both an image
generator and a segmentation network. In particular, given an image from the
source domain and unlabeled samples from the target domain, the generator
synthesizes new images on-the-fly to resemble samples from the target domain in
appearance and the segmentation network further refines high-level features
before predicting semantic maps, both of which leverage feature statistics of
sampled images from the target domain. Unlike much recent and concurrent work
relying on adversarial training, our framework is lightweight and easy to
train. Extensive experiments on adapting models trained on synthetic
segmentation benchmarks to real urban scenes demonstrate the effectiveness of
the proposed framework.
|
[
{
"version": "v1",
"created": "Mon, 16 Apr 2018 17:54:08 GMT"
}
] | 2018-04-17T00:00:00 |
[
[
"Wu",
"Zuxuan",
""
],
[
"Han",
"Xintong",
""
],
[
"Lin",
"Yen-Liang",
""
],
[
"Uzunbas",
"Mustafa Gökhan",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Lim",
"Ser Nam",
""
],
[
"Davis",
"Larry S.",
""
]
] |
new_dataset
| 0.968521 |
1706.06982
|
Matthew Tesfaldet
|
Matthew Tesfaldet, Marcus A. Brubaker, Konstantinos G. Derpanis
|
Two-Stream Convolutional Networks for Dynamic Texture Synthesis
|
In proc. CVPR 2018. Full results available at
https://ryersonvisionlab.github.io/two-stream-projpage/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a two-stream model for dynamic texture synthesis. Our model is
based on pre-trained convolutional networks (ConvNets) that target two
independent tasks: (i) object recognition, and (ii) optical flow prediction.
Given an input dynamic texture, statistics of filter responses from the object
recognition ConvNet encapsulate the per-frame appearance of the input texture,
while statistics of filter responses from the optical flow ConvNet model its
dynamics. To generate a novel texture, a randomly initialized input sequence is
optimized to match the feature statistics from each stream of an example
texture. Inspired by recent work on image style transfer and enabled by the
two-stream model, we also apply the synthesis approach to combine the texture
appearance from one texture with the dynamics of another to generate entirely
novel dynamic textures. We show that our approach generates novel, high quality
samples that match both the framewise appearance and temporal evolution of the
input texture. Finally, we quantitatively evaluate our texture synthesis
approach with a thorough user study.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2017 16:09:28 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Nov 2017 18:42:02 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Apr 2018 23:47:29 GMT"
},
{
"version": "v4",
"created": "Thu, 12 Apr 2018 21:39:51 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Tesfaldet",
"Matthew",
""
],
[
"Brubaker",
"Marcus A.",
""
],
[
"Derpanis",
"Konstantinos G.",
""
]
] |
new_dataset
| 0.989642 |
1711.08488
|
Charles Ruizhongtai Qi
|
Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, Leonidas J. Guibas
|
Frustum PointNets for 3D Object Detection from RGB-D Data
|
15 pages, 12 figures, 14 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we study 3D object detection from RGB-D data in both indoor and
outdoor scenes. While previous methods focus on images or 3D voxels, often
obscuring natural 3D patterns and invariances of 3D data, we directly operate
on raw point clouds by popping up RGB-D scans. However, a key challenge of this
approach is how to efficiently localize objects in point clouds of large-scale
scenes (region proposal). Instead of solely relying on 3D proposals, our method
leverages both mature 2D object detectors and advanced 3D deep learning for
object localization, achieving efficiency as well as high recall for even small
objects. Benefiting from learning directly in raw point clouds, our method is
also able to precisely estimate 3D bounding boxes even under strong occlusion
or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection
benchmarks, our method outperforms the state of the art by remarkable margins
while having real-time capability.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 19:52:18 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Apr 2018 00:30:24 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Qi",
"Charles R.",
""
],
[
"Liu",
"Wei",
""
],
[
"Wu",
"Chenxia",
""
],
[
"Su",
"Hao",
""
],
[
"Guibas",
"Leonidas J.",
""
]
] |
new_dataset
| 0.997849 |
1712.00649
|
Nitish Mital
|
Nitish Mital, Deniz Gunduz and Cong Ling
|
Coded Caching in a Multi-Server System with Random Topology
|
Published in WCNC, 2018
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cache-aided content delivery is studied in a multi-server system with $P$
servers and $K$ users, each equipped with a local cache memory. In the delivery
phase, each user connects randomly to any $\rho$ out of $P$ servers. Thanks to
the availability of multiple servers, which model small base stations with
limited storage capacity, user demands can be satisfied with reduced storage
capacity at each server and reduced delivery rate per server; however, this
also leads to reduced multicasting opportunities compared to a single server
serving all the users simultaneously. A joint storage and proactive caching
scheme is proposed, which exploits coded storage across the servers, uncoded
cache placement at the users, and coded delivery. The delivery \textit{latency}
is studied for both \textit{successive} and \textit{simultaneous} transmission
from the servers. It is shown that, with successive transmission, the achievable
average delivery latency is comparable to that achieved by a single server,
while the gap between the two depends on $\rho$, the available redundancy
across servers, and can be reduced by increasing the storage capacity at the
SBSs.
|
[
{
"version": "v1",
"created": "Sat, 2 Dec 2017 18:06:57 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Apr 2018 15:52:20 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Mital",
"Nitish",
""
],
[
"Gunduz",
"Deniz",
""
],
[
"Ling",
"Cong",
""
]
] |
new_dataset
| 0.981117 |
1801.06345
|
Luojun Lin
|
Lingyu Liang, Luojun Lin, Lianwen Jin, Duorui Xie and Mengru Li
|
SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial
Beauty Prediction
|
6 pages, 14 figures, conference paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial beauty prediction (FBP) is a significant visual recognition problem to
make assessments of facial attractiveness that are consistent with human
perception. To tackle this problem, various data-driven models, especially
state-of-the-art deep learning techniques, were introduced, and benchmark
datasets have become one of the essential elements to achieve FBP. Previous works
have formulated the recognition of facial beauty as a specific supervised
learning problem of classification, regression or ranking, which indicates that
FBP is intrinsically a computation problem with multiple paradigms. However,
most FBP benchmark datasets were built under specific computation
constraints, which limits the performance and flexibility of the computational
model trained on the dataset. In this paper, we argue that FBP is a
multi-paradigm computation problem, and propose a new diverse benchmark
dataset, called SCUT-FBP5500, to achieve multi-paradigm facial beauty
prediction. The SCUT-FBP5500 dataset contains a total of 5500 frontal faces with
diverse properties (male/female, Asian/Caucasian, ages) and diverse labels
(face landmarks, beauty scores within [1,~5], beauty score distribution), which
allows different computational models with different FBP paradigms, such as
appearance-based/shape-based facial beauty classification/regression model for
male/female of Asian/Caucasian. We evaluated the SCUT-FBP5500 dataset for FBP
using different combinations of feature and predictor, and various deep
learning methods. The results indicate the improvement of FBP and the
potential applications based on the SCUT-FBP5500.
|
[
{
"version": "v1",
"created": "Fri, 19 Jan 2018 09:53:19 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Liang",
"Lingyu",
""
],
[
"Lin",
"Luojun",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Xie",
"Duorui",
""
],
[
"Li",
"Mengru",
""
]
] |
new_dataset
| 0.999776 |
1804.04701
|
Dragos Strugar
|
Dragos Strugar, Rasheed Hussain, JooYoung Lee, Manuel Mazzara, Victor
Rivera
|
Reputation in M2M Economy
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Triggered by modern technologies, our possibilities may now expand beyond the
unthinkable. Cars externally may look similar to decades ago, but a dramatic
revolution happened inside the cabin as a result of their computation,
communications, and storage capabilities. With the advent of Electric
Autonomous Vehicles (EAVs), Artificial Intelligence and ecological technologies
found the best synergy. Several transportation problems may be solved
(accidents, emissions, and congestion, among others), and the foundation of a
Machine-to-Machine (M2M) economy could be established, in addition to
value-added services such as infotainment (information and entertainment).
In the world where intelligent technologies are pervading everyday life,
software and algorithms play a major role. Software has been lately introduced
in virtually every technological product available on the market, from phones
to television sets to cars and even housing. Artificial Intelligence is one of
the consequences of this pervasive presence of algorithms. The role of software
is becoming dominant, and technology is at times a pervasive part of our existence.
Concerns, such as privacy and security, demand high attention and have been
already explored to some level of detail. However, intelligent agents and
actors are often considered as perfect entities that will overcome human
error-prone nature. This may not always be the case and we advocate that the
notion of reputation is also applicable to intelligent artificial agents, in
particular to EAVs.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 19:28:59 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Strugar",
"Dragos",
""
],
[
"Hussain",
"Rasheed",
""
],
[
"Lee",
"JooYoung",
""
],
[
"Mazzara",
"Manuel",
""
],
[
"Rivera",
"Victor",
""
]
] |
new_dataset
| 0.999014 |
1804.04758
|
Takuma Oda
|
Takuma Oda and Carlee Joe-Wong
|
MOVI: A Model-Free Approach to Dynamic Fleet Management
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies,
can reduce passengers' waiting times by proactively dispatching vehicles to
locations where pickup requests are anticipated in the future. Yet it is
unclear how to best do this: optimal dispatching requires optimizing over
several sources of uncertainty, including vehicles' travel times to their
dispatched locations, as well as coordinating between vehicles so that they do
not attempt to pick up the same passenger. While prior works have developed
models for this uncertainty and used them to optimize dispatch policies, in
this work we introduce a model-free approach. Specifically, we propose MOVI, a
Deep Q-network (DQN)-based framework that directly learns the optimal vehicle
dispatch policy. Since DQNs scale poorly with a large number of possible
dispatches, we streamline our DQN training and suppose that each individual
vehicle independently learns its own optimal policy, ensuring scalability at
the cost of less coordination between vehicles. We then formulate a centralized
receding-horizon control (RHC) policy to compare with our DQN policies. To
compare these policies, we design and build MOVI as a large-scale realistic
simulator based on 15 million taxi trip records that simulates policy-agnostic
responses to dispatch decisions. We show that the DQN dispatch policy reduces
the number of unserviced requests by 76% compared to no dispatch and 20%
compared to the RHC approach, emphasizing the benefits of a model-free approach
and suggesting that there is limited value to coordinating vehicle actions.
This finding may help to explain the success of ridesharing platforms, for
which drivers make individual decisions.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 00:54:22 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Oda",
"Takuma",
""
],
[
"Joe-Wong",
"Carlee",
""
]
] |
new_dataset
| 0.993167 |
1804.04760
|
Joobin Gharibshah
|
Joobin Gharibshah, Evangelos E. Papalexakis, and Michalis Faloutsos
|
RIPEx: Extracting malicious IP addresses from security forums using
cross-forum learning
|
12 pages, Accepted in n 22nd Pacific-Asia Conference on Knowledge
Discovery and Data Mining (PAKDD), 2018
| null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Is it possible to extract malicious IP addresses reported in security forums
in an automatic way? This is the question at the heart of our work. We focus on
security forums, where security professionals and hackers share knowledge and
information, and often report misbehaving IP addresses. So far, there have only
been a few efforts to extract information from such security forums. We propose
RIPEx, a systematic approach to identify and label IP addresses in security
forums by utilizing a cross-forum learning method. In more detail, the
challenge is twofold: (a) identifying IP addresses from other numerical
entities, such as software version numbers, and (b) classifying the IP address
as benign or malicious. We propose an integrated solution that tackles both
these problems. A novelty of our approach is that it does not require training
data for each new forum. Our approach does knowledge transfer across forums: we
use a classifier from our source forums to identify seed information for
training a classifier on the target forum. We evaluate our method using data
collected from five security forums with a total of 31K users and 542K posts.
First, RIPEx can distinguish IP addresses from other numeric expressions with 95%
precision and above 93% recall on average. Second, RIPEx identifies malicious
IP addresses with an average precision of 88% and over 78% recall, using our
cross-forum learning. Our work is a first step towards harnessing the wealth of
useful information that can be found in security forums.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 01:08:42 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Gharibshah",
"Joobin",
""
],
[
"Papalexakis",
"Evangelos E.",
""
],
[
"Faloutsos",
"Michalis",
""
]
] |
new_dataset
| 0.999052 |
1804.04785
|
Dacheng Tao
|
Xiaoqing Yin, Xiyang Dai, Xinchao Wang, Maojun Zhang, Dacheng Tao,
Larry Davis
|
Deep Motion Boundary Detection
|
17 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion boundary detection is a crucial yet challenging problem. Prior methods
focus on analyzing the gradients and distributions of optical flow fields, or
use hand-crafted features for motion boundary learning. In this paper, we
propose the first dedicated end-to-end deep learning approach for motion
boundary detection, which we term as MoBoNet. We introduce a refinement network
structure which takes source input images, initial forward and backward optical
flows as well as corresponding warping errors as inputs and produces
high-resolution motion boundaries. Furthermore, we show that the obtained
motion boundaries, through a fusion sub-network we design, can in turn guide
the optical flows for removing the artifacts. The proposed MoBoNet is generic
and works with any optical flows. Our motion boundary detection and the refined
optical flow estimation achieve results superior to the state of the art.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 04:19:06 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Yin",
"Xiaoqing",
""
],
[
"Dai",
"Xiyang",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Zhang",
"Maojun",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Davis",
"Larry",
""
]
] |
new_dataset
| 0.997861 |
1804.04800
|
Joobin Gharibshah
|
Joobin Gharibshah, Tai Ching Li, Andre Castro, Konstantinos
Pelechrinis, Evangelos E. Papalexakis, Michalis Faloutsos
|
Mining actionable information from security forums: the case of
malicious IP addresses
|
10 pages
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The goal of this work is to systematically extract information from hacker
forums, whose information would in general be described as unstructured: the
text of a post does not necessarily follow any writing rules. By contrast,
many security initiatives and commercial entities are harnessing the readily
public information, but they seem to focus on structured sources of
information. Here, we focus on the problem of identifying malicious IP
addresses, among the IP addresses which are reported in the forums. We develop
a method to automate the identification of malicious IP addresses with the
design goal of being independent of external sources. A key novelty is that we
use a matrix decomposition method to extract latent features of the behavioral
information of the users, which we combine with textual information from the
related posts. A key design feature of our technique is that it can be readily
applied to different language forums, since it does not require a sophisticated
Natural Language Processing approach. In particular, our solution only needs a
small number of keywords in the new language plus the users' behavior captured
by specific features. We also develop a tool to automate the data collection
from security forums. Using our tool, we collect approximately 600K posts from
3 different forums. Our method exhibits high classification accuracy, while the
precision of identifying malicious IPs in posts is greater than 88% in all three
forums. We argue that our method can provide significantly more information: we
find up to 3 times more potentially malicious IP addresses compared to the
reference blacklist VirusTotal. As the cyber-wars are becoming more intense,
having early access to useful information becomes more imperative to remove
the hackers' first-move advantage, and our work is a solid step towards this
direction.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 07:01:08 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Gharibshah",
"Joobin",
""
],
[
"Li",
"Tai Ching",
""
],
[
"Castro",
"Andre",
""
],
[
"Pelechrinis",
"Konstantinos",
""
],
[
"Papalexakis",
"Evangelos E.",
""
],
[
"Faloutsos",
"Michalis",
""
]
] |
new_dataset
| 0.969867 |
1804.04833
|
Andr\'es Lucero
|
Andr\'es Lucero
|
Living Without a Mobile Phone: An Autoethnography
|
12 pages
| null |
10.1145/3196709.3196731
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an autoethnography of my experiences living without a
mobile phone. What started as an experiment motivated by a personal need to
reduce stress, has resulted in two voluntary mobile phone breaks spread over
nine years (i.e., 2002-2008 and 2014-2017). Conducting this autoethnography is
the means to assess if the lack of having a phone has had any real impact in my
life. Based on formative and summative analyses, four meaningful units or
themes were identified (i.e., social relationships, everyday work, research
career, and location and security), and judged using seven criteria for
successful ethnography from existing literature. Furthermore, I discuss factors
that allow me to make the choice of not having a mobile phone, as well as the
relevance that the lessons gained from not having a mobile phone have on the
lives of people who are involuntarily disconnected from communication
infrastructures.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 08:31:13 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Lucero",
"Andrés",
""
]
] |
new_dataset
| 0.997969 |
1804.04835
|
Takayuki Nozaki
|
Tomokazu Emoto, Takayuki Nozaki
|
Shifted Coded Slotted ALOHA
|
5 pages, 7 figures, submitted to ISITA 2018
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The random access scheme is a fundamental scenario in which users transmit
through a shared channel and cannot coordinate with each other. In recent years,
successive interference cancellation (SIC) was introduced into the random
access scheme. It is possible to decode transmitted packets using collided
packets by the SIC. The coded slotted ALOHA (CSA) is a random access scheme
using the SIC. The CSA encodes each packet using a local code prior to
transmission. It is known that the CSA achieves excellent throughput. On the
other hand, it is reported that, in coding theory, a time shift improves the
decoding performance for packet-oriented erasure correcting codes. In this
paper, we propose a random access scheme which applies the time shift to the
CSA in order to achieve better throughput. Numerical examples show that our
proposed random access scheme achieves higher throughput and a lower packet
loss rate than the CSA.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 08:32:59 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Emoto",
"Tomokazu",
""
],
[
"Nozaki",
"Takayuki",
""
]
] |
new_dataset
| 0.998442 |
1804.04866
|
Luca Rossetto M.Sc.
|
Silvan Heller, Luca Rossetto, Heiko Schuldt
|
The PS-Battles Dataset - an Image Collection for Image Manipulation
Detection
|
The dataset introduced in this paper can be found on
https://github.com/dbisUnibas/PS-Battles
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The boost of available digital media has led to a significant increase in
derivative work. With tools for manipulating objects becoming more and more
mature, it can be very difficult to determine whether one piece of media was
derived from another one or tampered with. As derivations can be done with
malicious intent, there is an urgent need for reliable and easily usable
tampering detection methods. However, even media considered semantically
untampered by humans might have already undergone compression steps or light
post-processing, making automated detection of tampering susceptible to false
positives. In this paper, we present the PS-Battles dataset which is gathered
from a large community of image manipulation enthusiasts and provides a basis
for media derivation and manipulation detection in the visual domain. The
dataset consists of 102'028 images grouped into 11'142 subsets, each containing
the original image as well as a varying number of manipulated derivatives.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 09:59:54 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Heller",
"Silvan",
""
],
[
"Rossetto",
"Luca",
""
],
[
"Schuldt",
"Heiko",
""
]
] |
new_dataset
| 0.993708 |
1804.04925
|
Benoit Rosa
|
Mustafa Suphi Erden, Beno\^it Rosa (ISIR), J\'erome Szewczyk (ISIR),
Guillaume Morel (LRP)
|
Mechanical design of a distal scanner for confocal microlaparoscope: A
conic solution
| null |
2013 IEEE International Conference on Robotics and Automation
(ICRA), May 2013, Karlsruhe, Germany. IEEE
|
10.1109/ICRA.2013.6630725
| null |
cs.RO physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the mechanical design of a distal scanner to perform a
spiral scan for mosaic-imaging with a confocal microlaparoscope. First, it is
demonstrated with ex vivo experiments that a spiral scan performs better than a
raster scan on soft tissue. Then a mechanical design is developed in order to
perform the spiral scan. The design in this paper is based on a conic structure
with a particular curved surface. The mechanism is simple to implement and to
drive; therefore, it is a low-cost solution. A 5:1 scale prototype is
implemented by rapid prototyping and the requirements are validated by
experiments. The experiments include manual and motor drive of the system. The
manual drive demonstrates the resulting spiral motion by drawing the tip
trajectory with an attached pencil. The motor drive demonstrates the speed
control of the system with an analysis of a video thread capturing the trajectory
of a laser beam emitted from the tip.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 12:58:02 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Erden",
"Mustafa Suphi",
"",
"ISIR"
],
[
"Rosa",
"Benoît",
"",
"ISIR"
],
[
"Szewczyk",
"Jérome",
"",
"ISIR"
],
[
"Morel",
"Guillaume",
"",
"LRP"
]
] |
new_dataset
| 0.99876 |
1804.04963
|
Julia Noothout
|
Julia M. H. Noothout, Bob D. de Vos, Jelmer M. Wolterink, Tim Leiner,
Ivana I\v{s}gum
|
CNN-based Landmark Detection in Cardiac CTA Scans
|
This work was submitted to MIDL 2018 Conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fast and accurate anatomical landmark detection can benefit many medical
image analysis methods. Here, we propose a method to automatically detect
anatomical landmarks in medical images. Automatic landmark detection is
performed with a patch-based fully convolutional neural network (FCNN) that
combines regression and classification. For any given image patch, regression
is used to predict the 3D displacement vector from the image patch to the
landmark. Simultaneously, classification is used to identify patches that
contain the landmark. Under the assumption that patches close to a landmark can
determine the landmark location more precisely than patches farther from it,
only those patches that contain the landmark according to classification are
used to determine the landmark location. The landmark location is obtained by
calculating the average landmark location using the computed 3D displacement
vectors. The method is evaluated using detection of six clinically relevant
landmarks in coronary CT angiography (CCTA) scans: the right and left ostium,
the bifurcation of the left main coronary artery (LM) into the left anterior
descending and the left circumflex artery, and the origin of the right,
non-coronary, and left aortic valve commissure. The proposed method achieved an
average Euclidean distance error of 2.19 mm and 2.88 mm for the right and left
ostium respectively, 3.78 mm for the bifurcation of the LM, and 1.82 mm, 2.10
mm and 1.89 mm for the origin of the right, non-coronary, and left aortic valve
commissure respectively, demonstrating accurate performance. The proposed
combination of regression and classification can be used to accurately detect
landmarks in CCTA scans.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 14:32:42 GMT"
}
] | 2018-04-16T00:00:00 |
[
[
"Noothout",
"Julia M. H.",
""
],
[
"de Vos",
"Bob D.",
""
],
[
"Wolterink",
"Jelmer M.",
""
],
[
"Leiner",
"Tim",
""
],
[
"Išgum",
"Ivana",
""
]
] |
new_dataset
| 0.986282 |
1804.04512
|
Baptiste Wicht
|
Baptiste Wicht and Jean Hennebert and Andreas Fischer
|
DLL: A Blazing Fast Deep Neural Network Library
|
6 pages
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Learning Library (DLL) is a new library for machine learning with deep
neural networks that focuses on speed. It supports feed-forward neural networks
such as fully-connected Artificial Neural Networks (ANNs) and Convolutional
Neural Networks (CNNs). It also has very comprehensive support for Restricted
Boltzmann Machines (RBMs) and Convolutional RBMs. Our main motivation for this
work was to propose and evaluate novel software engineering strategies with
potential to accelerate runtime for training and inference. Such strategies are
mostly independent of the underlying deep learning algorithms. On three
different datasets and for four different neural network models, we compared
DLL to five popular deep learning frameworks. Experimentally, it is shown that
the proposed framework is systematically and significantly faster on CPU and
GPU. In terms of classification performance, accuracies similar to those of the
other frameworks are reported.
|
[
{
"version": "v1",
"created": "Wed, 11 Apr 2018 13:56:07 GMT"
}
] | 2018-04-15T00:00:00 |
[
[
"Wicht",
"Baptiste",
""
],
[
"Hennebert",
"Jean",
""
],
[
"Fischer",
"Andreas",
""
]
] |
new_dataset
| 0.9965 |
1611.06403
|
Yannick Hold-Geoffroy
|
Yannick Hold-Geoffroy, Kalyan Sunkavalli, Sunil Hadap, Emiliano
Gambaretto, Jean-Fran\c{c}ois Lalonde
|
Deep Outdoor Illumination Estimation
|
CVPR'17 preprint, 8 pages + 2 pages of citations, 12 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a CNN-based technique to estimate high-dynamic range outdoor
illumination from a single low dynamic range image. To train the CNN, we
leverage a large dataset of outdoor panoramas. We fit a low-dimensional
physically-based outdoor illumination model to the skies in these panoramas
giving us a compact set of parameters (including sun position, atmospheric
conditions, and camera parameters). We extract limited field-of-view images
from the panoramas, and train a CNN with this large set of input image--output
lighting parameter pairs. Given a test image, this network can be used to infer
illumination parameters that can, in turn, be used to reconstruct an outdoor
illumination environment map. We demonstrate that our approach allows the
recovery of plausible illumination conditions and enables photorealistic
virtual object insertion from a single image. An extensive evaluation on both
the panorama dataset and captured HDR environment maps shows that our technique
significantly outperforms previous solutions to this problem.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2016 17:23:15 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2017 21:38:27 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Apr 2018 15:47:14 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Hold-Geoffroy",
"Yannick",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Hadap",
"Sunil",
""
],
[
"Gambaretto",
"Emiliano",
""
],
[
"Lalonde",
"Jean-François",
""
]
] |
new_dataset
| 0.999254 |
1702.08122
|
Yuyang Wang
|
Yuyang Wang, Kiran Venugopal, Andreas F. Molisch, Robert W. Heath Jr
|
MmWave vehicle-to-infrastructure communication: Analysis of urban
microcellular networks
|
Accepted to IEEE Transactions on Vehicular Technology
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle-to-infrastructure (V2I) communication may provide high data rates to
vehicles via millimeter-wave (mmWave) microcellular networks. This paper uses
stochastic geometry to analyze the coverage of urban mmWave microcellular
networks. Prior work used a pathloss model with a line-of-sight probability
function based on randomly oriented buildings, to determine whether a link was
line-of-sight or non-line-of-sight. In this paper, we use a pathloss model
inspired by measurements, which uses a Manhattan distance pathloss model and
accounts for differences in pathloss exponents and losses when turning corners.
In our model, streets are randomly located as a Manhattan Poisson line process
(MPLP) and the base stations (BSs) are distributed according to a Poisson point
process. Our model is well suited for urban microcellular networks where the
BSs are deployed at street level. Based on this new approach, we derive the
coverage probability under certain BS association rules to obtain closed-form
solutions without much complexity. In addition, we draw two main conclusions
from our work. First, non-line-of-sight BSs are not a major benefit for
association or source of interference most of the time. Second, there is an
ultra-dense regime where deploying active BSs does not enhance coverage.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2017 01:12:37 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Feb 2018 22:40:26 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Apr 2018 17:11:41 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Wang",
"Yuyang",
""
],
[
"Venugopal",
"Kiran",
""
],
[
"Molisch",
"Andreas F.",
""
],
[
"Heath",
"Robert W.",
"Jr"
]
] |
new_dataset
| 0.996119 |
1704.07699
|
Lucia Ballerini
|
Lucia Ballerini, Ruggiero Lovreglio, Maria del C. Valdes-Hernandez,
Joel Ramirez, Bradley J. MacIntosh, Sandra E. Black and Joanna M. Wardlaw
|
Perivascular Spaces Segmentation in Brain MRI Using Optimal 3D Filtering
| null | null |
10.1038/s41598-018-19781-5
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perivascular Spaces (PVS) are a recently recognised feature of Small Vessel
Disease (SVD), also indicating neuroinflammation, and are an important part of
the brain's circulation and glymphatic drainage system. Quantitative analysis
of PVS on Magnetic Resonance Images (MRI) is important for understanding their
relationship with neurological diseases. In this work, we propose a
segmentation technique based on the 3D Frangi filtering for extraction of PVS
from MRI. Based on prior knowledge from neuroradiological ratings of PVS, we
used ordered logit models to optimise Frangi filter parameters in response to
the variability in the scanner's parameters and study protocols. We optimized
and validated our proposed models on two independent cohorts, a dementia sample
(N=20) and patients who previously had mild to moderate stroke (N=48). Results
demonstrate the robustness and generalisability of our segmentation method.
Segmentation-based PVS burden estimates correlated with neuroradiological
assessments (Spearman's $\rho$ = 0.74, p $<$ 0.001), suggesting the great
potential of our proposed method.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2017 14:02:06 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Ballerini",
"Lucia",
""
],
[
"Lovreglio",
"Ruggiero",
""
],
[
"Valdes-Hernandez",
"Maria del C.",
""
],
[
"Ramirez",
"Joel",
""
],
[
"MacIntosh",
"Bradley J.",
""
],
[
"Black",
"Sandra E.",
""
],
[
"Wardlaw",
"Joanna M.",
""
]
] |
new_dataset
| 0.998848 |
1711.05938
|
Yang Zhang
|
Zehui Xiong, Yang Zhang, Dusit Niyato, Ping Wang and Zhu Han
|
When Mobile Blockchain Meets Edge Computing
|
Accepted by IEEE Communications Magazine
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain, as the backbone technology of the current popular Bitcoin digital
currency, has become a promising decentralized data management framework.
Although blockchain has been widely adopted in many applications, e.g.,
finance, healthcare, and logistics, its application in mobile services is still
limited. This is due to the fact that blockchain users need to solve preset
proof-of-work puzzles to add new data, i.e., a block, to the blockchain.
Solving the proof-of-work, however, consumes substantial resources in terms of
CPU time and energy, which is not suitable for resource-limited mobile devices.
To facilitate blockchain applications in future mobile Internet of Things
systems, multiple access mobile edge computing appears to be an auspicious
solution to solve the proof-of-work puzzles for mobile users. We first
introduce a novel concept of edge computing for mobile blockchain. Then, we
introduce an economic approach for edge computing resource management.
Moreover, a prototype of mobile edge computing enabled blockchain systems is
presented with experimental results to justify the proposed concept.
|
[
{
"version": "v1",
"created": "Thu, 16 Nov 2017 05:53:57 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Apr 2018 23:14:28 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Xiong",
"Zehui",
""
],
[
"Zhang",
"Yang",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Wang",
"Ping",
""
],
[
"Han",
"Zhu",
""
]
] |
new_dataset
| 0.999799 |
1804.04257
|
Vivek Kulkarni
|
Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang,
Elizabeth Belding
|
Hate Lingo: A Target-based Linguistic Analysis of Hate Speech in Social
Media
|
10 pages, 7 figures. ICWSM-2018 accepted
| null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While social media empowers freedom of expression and individual voices, it
also enables anti-social behavior, online harassment, cyberbullying, and hate
speech. In this paper, we deepen our understanding of online hate speech by
focusing on a largely neglected but crucial aspect of hate speech -- its
target: either "directed" towards a specific person or entity, or "generalized"
towards a group of people sharing a common protected characteristic. We perform
the first linguistic and psycholinguistic analysis of these two forms of hate
speech and reveal the presence of interesting markers that distinguish these
types of hate speech. Our analysis reveals that Directed hate speech, in
addition to being more personal and directed, is more informal, angrier, and
often explicitly attacks the target (via name calling) with fewer analytic
words and more words suggesting authority and influence. Generalized hate
speech, on the other hand, is dominated by religious hate, is characterized by
the use of lethal words such as murder, exterminate, and kill; and quantity
words such as million and many. Altogether, our work provides a data-driven
analysis of the nuances of online hate speech that enables not only a deepened
understanding of hate speech and its social implications but also its
detection.
|
[
{
"version": "v1",
"created": "Wed, 11 Apr 2018 23:39:49 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"ElSherief",
"Mai",
""
],
[
"Kulkarni",
"Vivek",
""
],
[
"Nguyen",
"Dana",
""
],
[
"Wang",
"William Yang",
""
],
[
"Belding",
"Elizabeth",
""
]
] |
new_dataset
| 0.997905 |
1804.04300
|
Hassan El-Arsh
|
Alaa Eldin Rohiem Shehata, Hassan Yakout El-Arsh
|
Lightweight Joint Compression-Encryption-Authentication-Integrity
Framework Based on Arithmetic Coding
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Arithmetic Coding is an efficient lossless compression scheme applied for
many multimedia standards such as JPEG, JPEG2000, H.263, H.264 and H.265. Due
to nonlinearity, high error propagation and high error sensitivity of
arithmetic coders, many techniques have been developed for extending the usage
of arithmetic coders for security as a lightweight joint compression and
encryption solution for systems with limited resources. Through this paper, we
will describe how to upgrade these techniques to achieve an additional low cost
authentication and integrity capabilities with arithmetic coders. Consequently,
the new proposed technique can produce a secure and lightweight framework of
compression, encryption, authentication and integrity for limited resources
environments such as Internet of Things (IoT) and embedded systems. Although
the proposed technique can be used alongside with any arithmetic coder based
system, we will focus on the implementations for JPEG and JPEG2000 standards.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 03:35:26 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Shehata",
"Alaa Eldin Rohiem",
""
],
[
"El-Arsh",
"Hassan Yakout",
""
]
] |
new_dataset
| 0.96936 |
1804.04338
|
Christoph Baur
|
Christoph Baur, Shadi Albarqouni, Nassir Navab
|
MelanoGANs: High Resolution Skin Lesion Synthesis with GANs
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative Adversarial Networks (GANs) have been successfully used to
synthesize realistic-looking images of faces, scenery, and even medical
images. Unfortunately, they usually require large training datasets, which are
often scarce in the medical field, and to the best of our knowledge GANs have
been only applied for medical image synthesis at fairly low resolution.
However, many state-of-the-art machine learning models operate on high
resolution data as such data carries indispensable, valuable information. In
this work, we try to generate realistic-looking high-resolution images of
skin lesions with GANs, using only a small training dataset of 2000 samples.
The nature of the data allows us to do a direct comparison between the image
statistics of the generated samples and the real dataset. We both
quantitatively and qualitatively compare state-of-the-art GAN architectures
such as DCGAN and LAPGAN against a modification of the latter for the task of
image generation at a resolution of 256x256px. Our investigation shows that we
can approximate the real data distribution with all of the models, but we
notice major differences when visually rating sample realism, diversity and
artifacts. In a set of use-case experiments on skin lesion classification, we
further show that we can successfully tackle the problem of heavy class
imbalance with the help of synthesized high resolution melanoma samples.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 06:18:31 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Baur",
"Christoph",
""
],
[
"Albarqouni",
"Shadi",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.977229 |
1804.04343
|
Amit Saha
|
Ramdoot Pydipaty and Amit Saha
|
On Using Non-Volatile Memory in Apache Lucene
|
4 pages
| null | null | null |
cs.IR cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Apache Lucene is a widely popular information retrieval library used to
provide search functionality in an extremely wide variety of applications.
Naturally, it has to efficiently index and search large number of documents.
With non-volatile memory in DIMM form factor (NVDIMM), software now has access
to durable, byte-addressable memory with write latency within an order of
magnitude of DRAM write latency.
In this preliminary article, we present the first reported work on the impact
of using NVDIMM on the performance of committing, searching, and near-real time
searching in Apache Lucene. We show modest improvements from using NVM, but our
empirical study suggests that a bigger impact requires redesigning Lucene to
access NVM as byte-addressable memory using loads and stores, instead of
accessing NVM via the file system.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 06:39:28 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Pydipaty",
"Ramdoot",
""
],
[
"Saha",
"Amit",
""
]
] |
new_dataset
| 0.997926 |
1804.04347
|
EPTCS
|
Rahul Kumar Bhadani (The University of Arizona), Jonathan Sprinkle
(The University of Arizona), Matthew Bunting (The University of Arizona)
|
The CAT Vehicle Testbed: A Simulator with Hardware in the Loop for
Autonomous Vehicle Applications
|
In Proceedings SCAV 2018, arXiv:1804.03406
|
EPTCS 269, 2018, pp. 32-47
|
10.4204/EPTCS.269.4
| null |
cs.RO cs.SE cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the CAT Vehicle (Cognitive and Autonomous Test Vehicle)
Testbed: a research testbed comprised of a distributed simulation-based
autonomous vehicle, with straightforward transition to hardware in the loop
testing and execution, to support research in autonomous driving technology.
The evolution of autonomous driving technology from active safety features and
advanced driving assistance systems to full sensor-guided autonomous driving
requires testing of every possible scenario. However, researchers who want to
demonstrate new results on a physical platform face difficult challenges, if
they do not have access to a robotic platform in their own labs. Thus, there is
a need for a research testbed where simulation-based results can be rapidly
validated through hardware in the loop simulation, in order to test the
software on board the physical platform. The CAT Vehicle Testbed offers such a
testbed that can mimic dynamics of a real vehicle in simulation and then
seamlessly transition to reproduction of use cases with hardware. The simulator
utilizes the Robot Operating System (ROS) with a physics-based vehicle model,
including simulated sensors and actuators with configurable parameters. The
testbed allows multi-vehicle simulation to support vehicle to vehicle
interaction. Our testbed also facilitates logging and capturing of data in
real time, which can be played back to examine particular scenarios or use
cases, and for regression testing. As part of the demonstration of feasibility,
we present a brief description of the CAT Vehicle Challenge, in which student
researchers from all over the globe were able to reproduce their simulation
results with fewer than 2 days of interfacing with the physical platform.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 06:53:23 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Bhadani",
"Rahul Kumar",
"",
"The University of Arizona"
],
[
"Sprinkle",
"Jonathan",
"",
"The University of Arizona"
],
[
"Bunting",
"Matthew",
"",
"The University of Arizona"
]
] |
new_dataset
| 0.9996 |
1804.04361
|
Emmanouil Tsardoulias
|
Panagiotis Doxopoulos, Konstantinos L. Panayiotou, Emmanouil G.
Tsardoulias, Andreas L. Symeonidis
|
Creating an extrovert robotic assistant via IoT networking devices
|
Accepted in ICCR17
| null | null | null |
cs.CY cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The communication and collaboration of Cyber-Physical Systems, including
machines and robots, among themselves and with humans, is expected to attract
researchers' interest for the years to come. A key element of the new
revolution is the Internet of Things (IoT). IoT infrastructures enable
communication between different connected devices using internet protocols. The
integration of robots in an IoT platform can improve robot capabilities by
providing access to other devices and resources. In this paper we present an
IoT-enabled application including a NAO robot which can communicate through an
IoT platform with a reflex measurement system and a hardware node that provides
robotics-oriented services in the form of RESTful web services. An activity
reminder application is also included, illustrating the extension capabilities
of the system.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 07:46:08 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Doxopoulos",
"Panagiotis",
""
],
[
"Panayiotou",
"Konstantinos L.",
""
],
[
"Tsardoulias",
"Emmanouil G.",
""
],
[
"Symeonidis",
"Andreas L.",
""
]
] |
new_dataset
| 0.973345 |
1804.04362
|
Emmanouil Tsardoulias
|
Vasilis N. Remmas, Konstantinos L. Panayiotou, Emmanouil G.
Tsardoulias, Andreas L. Symeonidis
|
SRCA - The Scalable Robotic Cloud Agents Architecture
|
Accepted in ICCR17
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an effort to penetrate the market at an affordable cost, consumer robots
tend to provide limited processing capabilities, just enough to serve the
purpose they have been designed for. However, a robot, in principle, should be
able to interact and process the constantly increasing information streams
generated from sensors or other devices. This would require the implementation
of algorithms and mathematical models for the accurate processing of data
volumes and significant computational resources. It is clear that as the data
deluge continues to grow exponentially, deploying such algorithms on consumer
robots will not be easy. The present work introduces a cloud-based architecture
aims to offload computational resources from robots to a remote infrastructure,
by utilizing and implementing cloud technologies. In this way, robots can
enjoy functionality offered by complex algorithms that are executed on the
cloud. The proposed system architecture allows developers and engineers not
specialised in robotic implementation environments to utilize generic robotic
algorithms and services off-the-shelf.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 07:48:07 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Remmas",
"Vasilis N.",
""
],
[
"Panayiotou",
"Konstantinos L.",
""
],
[
"Tsardoulias",
"Emmanouil G.",
""
],
[
"Symeonidis",
"Andreas L.",
""
]
] |
new_dataset
| 0.998819 |
1804.04395
|
Dimitri Block
|
Sergej Grunau, Dimitri Block, Uwe Meier
|
Multi-Label Wireless Interference Identification with Convolutional
Neural Networks
|
Submitted to the 16th International Conference on Industrial
Informatics (INDIN 2018)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The steadily growing use of license-free frequency bands require reliable
coexistence management and therefore proper wireless interference
identification (WII). In this work, we propose a WII approach based upon a deep
convolutional neural network (CNN) which classifies multiple IEEE 802.15.1,
IEEE 802.11 b/g and IEEE 802.15.4 interfering signals in the presence of a
utilized signal. The generated multi-label dataset contains frequency- and
time-limited sensing snapshots with the bandwidth of 10 MHz and duration of
12.8 $\mu$s, respectively. Each snapshot combines one utilized signal with up
to multiple interfering signals. The approach shows promising results for
same-technology interference with a classification accuracy of approximately
100 % for IEEE 802.15.1 and IEEE 802.15.4 signals. For IEEE 802.11 b/g signals,
the accuracy increases to at least 90 % for cross-technology interference.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 09:31:32 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Grunau",
"Sergej",
""
],
[
"Block",
"Dimitri",
""
],
[
"Meier",
"Uwe",
""
]
] |
new_dataset
| 0.999337 |
1804.04426
|
Ahmed Taha
|
Ahmed Taha, Spyros Boukoros, Jesus Luna, Stefan Katzenbeisser, Neeraj
Suri
|
QRES: Quantitative Reasoning on Encrypted Security SLAs
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While regulators advocate for higher cloud transparency, many Cloud Service
Providers (CSPs) often do not provide detailed information regarding their
security implementations in their Service Level Agreements (SLAs). In practice,
CSPs are hesitant to release detailed information regarding their security
posture for security and proprietary reasons. This lack of transparency hinders
the adoption of cloud computing by enterprises and individuals. Unless CSPs
share information regarding the technical details of their security proceedings
and standards, customers cannot verify which cloud provider matched their needs
in terms of security and privacy guarantees. To address this problem, we
propose QRES, the first system that enables (a) CSPs to disclose detailed
information about their offered security services in an encrypted form to
ensure data confidentiality, and (b) customers to assess the CSPs' offered
security services and find those satisfying their security requirements. Our
system preserves each party's privacy by leveraging a novel evaluation method
based on Secure Two Party Computation (2PC) and Searchable Encryption
techniques. We implement QRES and highlight its usefulness by applying it to
existing standardized SLAs. Real-world tests illustrate that the system
runs in acceptable time for practical application even when used with a
multitude of CSPs. We formally prove the security requirements of the proposed
system against a strong realistic adversarial model, using an automated
cryptographic protocol verifier.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 11:05:00 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Taha",
"Ahmed",
""
],
[
"Boukoros",
"Spyros",
""
],
[
"Luna",
"Jesus",
""
],
[
"Katzenbeisser",
"Stefan",
""
],
[
"Suri",
"Neeraj",
""
]
] |
new_dataset
| 0.982269 |
1804.04487
|
Bernd Finkbeiner
|
Florian-Michael Adolf, Peter Faymonville, Bernd Finkbeiner, Sebastian
Schirmer, Christoph Torens
|
Stream Runtime Monitoring on UAS
| null | null | null | null |
cs.SE cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aircraft Systems (UAS) with autonomous decision-making capabilities
are of increasing interest for a wide area of applications such as logistics
and disaster recovery. In order to ensure the correct behavior of the system
and to recognize hazardous situations or system faults, we applied stream
runtime monitoring techniques within the DLR ARTIS (Autonomous Research Testbed
for Intelligent System) family of unmanned aircraft. We present our experience
from specification elicitation, instrumentation, offline log-file analysis, and
online monitoring on the flight computer on a test rig. The debugging and
health management support through stream runtime monitoring techniques have
proven highly beneficial for system design and development. At the same time,
the project has identified usability improvements to the specification
language, and has influenced the design of the language.
|
[
{
"version": "v1",
"created": "Thu, 29 Mar 2018 16:55:28 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Adolf",
"Florian-Michael",
""
],
[
"Faymonville",
"Peter",
""
],
[
"Finkbeiner",
"Bernd",
""
],
[
"Schirmer",
"Sebastian",
""
],
[
"Torens",
"Christoph",
""
]
] |
new_dataset
| 0.993573 |
1804.04526
|
Simon Gottschalk
|
Simon Gottschalk, Elena Demidova
|
EventKG: A Multilingual Event-Centric Temporal Knowledge Graph
| null | null | null | null |
cs.CL cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the key requirements to facilitate semantic analytics of information
regarding contemporary and historical events on the Web, in the news and in
social media is the availability of reference knowledge repositories containing
comprehensive representations of events and temporal relations. Existing
knowledge graphs, with popular examples including DBpedia, YAGO and Wikidata,
focus mostly on entity-centric information and are insufficient in terms of
their coverage and completeness with respect to events and temporal relations.
EventKG presented in this paper is a multilingual event-centric temporal
knowledge graph that addresses this gap. EventKG incorporates over 690 thousand
contemporary and historical events and over 2.3 million temporal relations
extracted from several large-scale knowledge graphs and semi-structured sources
and makes them available through a canonical representation.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 14:12:48 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Gottschalk",
"Simon",
""
],
[
"Demidova",
"Elena",
""
]
] |
new_dataset
| 0.989143 |
1804.04549
|
James Kapaldo
|
James Kapaldo
|
Seed-Point Based Geometric Partitioning of Nuclei Clumps
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When applying automatic analysis of fluorescence or histopathological images
of cells, it is necessary to partition, or de-clump, partially overlapping cell
nuclei. In this work, I describe a method of partitioning partially overlapping
cell nuclei using a seed-point based geometric partitioning. The geometric
partitioning creates two different types of cuts, cuts between two boundary
vertices and cuts between one boundary vertex and a new vertex introduced to
the boundary interior. The cuts are then ranked according to a scoring metric,
and the highest scoring cuts are used. This method was tested on a set of 2420
clumps of nuclei and was found to produce better results than current popular
analysis software.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 14:46:24 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Kapaldo",
"James",
""
]
] |
new_dataset
| 0.999174 |
1804.04555
|
Cong Ma
|
Cong Ma, Changshui Yang, Fan Yang, Yueqing Zhuang, Ziwei Zhang, Huizhu
Jia, Xiaodong Xie
|
Trajectory Factory: Tracklet Cleaving and Re-connection by Deep Siamese
Bi-GRU for Multiple Object Tracking
|
6 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Object Tracking (MOT) is a challenging task in complex scenes such
as surveillance and autonomous driving. In this paper, we propose a novel
tracklet processing method to cleave and re-connect tracklets on crowd or
long-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet
generation utilizes object features extracted by CNN and RNN to create the
high-confidence tracklet candidates in sparse scenarios. Due to mis-tracking in
the generation process, the tracklets from different objects are split into
several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based
tracklet re-connection method is applied to link the sub-tracklets which belong
to the same object to form a whole trajectory. In addition, we extract the
tracklet images from existing MOT datasets and propose a novel dataset to train
our networks. The proposed dataset contains more than 95,160 pedestrian images
of 793 different persons, with on average 120 images per person, annotated
with positions and sizes. Experimental results demonstrate the
advantages of our model over the state-of-the-art methods on MOT16.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 15:05:55 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Ma",
"Cong",
""
],
[
"Yang",
"Changshui",
""
],
[
"Yang",
"Fan",
""
],
[
"Zhuang",
"Yueqing",
""
],
[
"Zhang",
"Ziwei",
""
],
[
"Jia",
"Huizhu",
""
],
[
"Xie",
"Xiaodong",
""
]
] |
new_dataset
| 0.96419 |
1804.04610
|
Jiajun Wu
|
Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Chengkai
Zhang, Tianfan Xue, Joshua B. Tenenbaum, William T. Freeman
|
Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
|
CVPR 2018. The first two authors contributed equally to this work.
Project page: http://pix3d.csail.mit.edu
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study 3D shape modeling from a single image and make contributions to it
in three aspects. First, we present Pix3D, a large-scale benchmark of diverse
image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications
in shape-related tasks including reconstruction, retrieval, viewpoint
estimation, etc. Building such a large-scale dataset, however, is highly
challenging; existing datasets either contain only synthetic data, or lack
precise alignment between 2D images and 3D shapes, or only have a small number
of images. Second, we calibrate the evaluation criteria for 3D shape
reconstruction through behavioral studies, and use them to objectively and
systematically benchmark cutting-edge reconstruction algorithms on Pix3D.
Third, we design a novel model that simultaneously performs 3D reconstruction
and pose estimation; our multi-task learning approach achieves state-of-the-art
performance on both tasks.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 16:30:39 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Sun",
"Xingyuan",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Zhang",
"Xiuming",
""
],
[
"Zhang",
"Zhoutong",
""
],
[
"Zhang",
"Chengkai",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Freeman",
"William T.",
""
]
] |
new_dataset
| 0.999893 |
1804.04619
|
Seungjae Lee
|
Seungjae Lee, Youngjin Jo, Dongheon Yoo, Jaebum Cho, Dukho Lee, and
Byoungho Lee
|
TomoReal: Tomographic Displays
|
10 pages, 5 figures
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the history of display technologies began, people have dreamed of an
ultimate 3D display system. In order to get close to the dream, 3D displays
should provide both psychological and physiological cues for the recognition of
depth information. However, it is challenging to satisfy these essential
features without sacrificing conventional technical metrics such as resolution, frame
rate, and eye-box. Here, we present a new type of 3D displays: tomographic
displays. We claim that tomographic displays may support extremely wide depth
of field, quasi-continuous accommodation, omni-directional motion parallax,
preserved resolution, full frame, and moderate field of view within enough
eye-box. Tomographic displays consist of focus-tunable optics, 2D display
panel, and fast spatially adjustable backlight. The synchronization of the
focus-tunable optics and the backlight enables the 2D display panel to express
the depth information. Tomographic displays have various applications including
tabletop 3D displays, head-up displays, and near-eye stereoscopes. In this
study, we implement a near-eye display named TomoReal, which is one of the most
promising applications of tomographic displays. We conclude with a detailed
analysis and thorough discussion for tomographic displays, which would open a
new research field.
|
[
{
"version": "v1",
"created": "Thu, 22 Mar 2018 05:46:27 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Lee",
"Seungjae",
""
],
[
"Jo",
"Youngjin",
""
],
[
"Yoo",
"Dongheon",
""
],
[
"Cho",
"Jaebum",
""
],
[
"Lee",
"Dukho",
""
],
[
"Lee",
"Byoungho",
""
]
] |
new_dataset
| 0.999569 |
1804.04632
|
Francesco Rampazzo
|
Francesco Rampazzo, Emilio Zagheni, Ingmar Weber, Maria Rita Testa,
Francesco Billari
|
Mater certa est, pater numquam: What can Facebook Advertising Data Tell
Us about Male Fertility Rates?
|
Please cite the version from Proceedings of the Twelfth International
Conference on Web and Social Media (ICWSM-2018)
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many developing countries, timely and accurate information about birth
rates and other demographic indicators is still lacking, especially for male
fertility rates. Using anonymous and aggregate data from Facebook's Advertising
Platform, we produce global estimates of the Mean Age at Childbearing (MAC), a
key indicator of fertility postponement. Our analysis indicates that fertility
measures based on Facebook data are highly correlated with conventional
indicators based on traditional data, for those countries for which we have
statistics. For instance, the correlation of the MAC computed using Facebook
and United Nations data is 0.47 (p = 4.02e-08) and 0.79 (p = 2.2e-15) for
female and male, respectively. Out-of-sample validation for a simple regression
model indicates that the mean absolute percentage error is 2.3%. We use the
linear model and Facebook data to produce estimates of the male MAC for
countries for which we do not have data.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 17:03:36 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"Rampazzo",
"Francesco",
""
],
[
"Zagheni",
"Emilio",
""
],
[
"Weber",
"Ingmar",
""
],
[
"Testa",
"Maria Rita",
""
],
[
"Billari",
"Francesco",
""
]
] |
new_dataset
| 0.999417 |
1804.04649
|
Shirin Nilizadeh
|
Mai ElSherief, Shirin Nilizadeh, Dana Nguyen, Giovanni Vigna,
Elizabeth Belding
|
Peer to Peer Hate: Hate Speech Instigators and Their Targets
| null |
ICWSM 2018
| null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While social media has become an empowering agent to individual voices and
freedom of expression, it also facilitates anti-social behaviors including
online harassment, cyberbullying, and hate speech. In this paper, we present
the first comparative study of hate speech instigators and target users on
Twitter. Through a multi-step classification process, we curate a comprehensive
hate speech dataset capturing various types of hate. We study the distinctive
characteristics of hate instigators and targets in terms of their profile
self-presentation, activities, and online visibility. We find that hate
instigators target more popular and high profile Twitter users, and that
participating in hate speech can result in greater online visibility. We
conduct a personality analysis of hate instigators and targets and show that
both groups have eccentric personality facets that differ from the general
Twitter population. Our results advance the state of the art of understanding
online hate speech engagement.
|
[
{
"version": "v1",
"created": "Thu, 12 Apr 2018 17:55:29 GMT"
}
] | 2018-04-13T00:00:00 |
[
[
"ElSherief",
"Mai",
""
],
[
"Nilizadeh",
"Shirin",
""
],
[
"Nguyen",
"Dana",
""
],
[
"Vigna",
"Giovanni",
""
],
[
"Belding",
"Elizabeth",
""
]
] |
new_dataset
| 0.998986 |
1706.03091
|
Panos Alevizos
|
Panos N. Alevizos, Konstantinos Tountas, Aggelos Bletsas
|
Multistatic Scatter Radio Sensor Networks for Extended Coverage
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scatter radio, i.e., communication by means of reflection, has been recently
proposed as a viable ultra-low power solution for wireless sensor networks
(WSNs). This work offers a detailed comparison between monostatic and
multistatic scatter radio architectures. In monostatic architecture, the reader
consists of both the illuminating transmitter and the receiver of signals
scattered back from the sensors. The multistatic architecture includes several
ultra-low cost illuminating carrier emitters and a single reader.
Maximum-likelihood coherent and noncoherent bit error rate (BER), diversity
order, average information and energy outage probability comparison is
performed, under dyadic Nakagami fading, filling a gap in the literature. It is
found that: (i) diversity order, BER, and tag location-independent performance
bounds of multistatic architecture outperform monostatic, (ii) energy outage
due to radio frequency (RF) harvesting for passive tags, is less frequent in
multistatic than monostatic architecture, and (iii) multistatic coverage is
higher than monostatic. Furthermore, a proof-of-concept digital multistatic,
scatter radio WSN with a single receiver, four low-cost emitters and multiple
ambiently-powered, low-bitrate tags, perhaps the first of its kind, is
experimentally demonstrated (at $13$ dBm transmission power), covering an area
of $3500$ m$^2$. Research findings are applicable in the industries of WSNs,
radio frequency identification (RFID), and emerging Internet-of-Things.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2017 18:56:42 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Feb 2018 15:05:02 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Apr 2018 14:10:38 GMT"
}
] | 2018-04-12T00:00:00 |
[
[
"Alevizos",
"Panos N.",
""
],
[
"Tountas",
"Konstantinos",
""
],
[
"Bletsas",
"Aggelos",
""
]
] |
new_dataset
| 0.959853 |
1707.06642
|
Oisin Mac Aodha
|
Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex
Shepard, Hartwig Adam, Pietro Perona, Serge Belongie
|
The iNaturalist Species Classification and Detection Dataset
|
CVPR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing image classification datasets used in computer vision tend to have a
uniform distribution of images across object categories. In contrast, the
natural world is heavily imbalanced, as some species are more abundant and
easier to photograph than others. To encourage further progress in challenging
real world conditions we present the iNaturalist species classification and
detection dataset, consisting of 859,000 images from over 5,000 different
species of plants and animals. It features visually similar species, captured
in a wide variety of situations, from all over the world. Images were collected
with different camera types, have varying image quality, feature a large class
imbalance, and have been verified by multiple citizen scientists. We discuss
the collection of the dataset and present extensive baseline experiments using
state-of-the-art computer vision classification and detection models. Results
show that current non-ensemble based methods achieve only 67% top one
classification accuracy, illustrating the difficulty of the dataset.
Specifically, we observe poor results for classes with small numbers of
training examples suggesting more attention is needed in low-shot learning.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2017 17:59:55 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Apr 2018 20:22:13 GMT"
}
] | 2018-04-12T00:00:00 |
[
[
"Van Horn",
"Grant",
""
],
[
"Mac Aodha",
"Oisin",
""
],
[
"Song",
"Yang",
""
],
[
"Cui",
"Yin",
""
],
[
"Sun",
"Chen",
""
],
[
"Shepard",
"Alex",
""
],
[
"Adam",
"Hartwig",
""
],
[
"Perona",
"Pietro",
""
],
[
"Belongie",
"Serge",
""
]
] |
new_dataset
| 0.999681 |
1710.10000
|
Umar Iqbal
|
Mykhaylo Andriluka, Umar Iqbal, Eldar Insafutdinov, Leonid Pishchulin,
Anton Milan, Juergen Gall and Bernt Schiele
|
PoseTrack: A Benchmark for Human Pose Estimation and Tracking
|
www.posetrack.net
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human poses and motions are important cues for analysis of videos with people
and there is strong evidence that representations based on body pose are highly
effective for a variety of tasks such as activity recognition, content
retrieval and social signal processing. In this work, we aim to further advance
the state of the art by establishing "PoseTrack", a new large-scale benchmark
for video-based human pose estimation and articulated tracking, and bringing
together the community of researchers working on visual human analysis. The
benchmark encompasses three competition tracks focusing on i) single-frame
multi-person pose estimation, ii) multi-person pose estimation in videos, and
iii) multi-person articulated tracking. To facilitate the benchmark and
challenge we collect, annotate and release a new large-scale benchmark dataset
that features videos with multiple people labeled with person tracks and
articulated pose. A centralized evaluation server is provided to allow
participants to evaluate on a held-out test set. We envision that the proposed
benchmark will stimulate productive research both by providing a large and
representative training dataset as well as providing a platform to objectively
evaluate and compare the proposed methods. The benchmark is freely accessible
at https://posetrack.net.
|
[
{
"version": "v1",
"created": "Fri, 27 Oct 2017 06:20:30 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Apr 2018 18:20:56 GMT"
}
] | 2018-04-12T00:00:00 |
[
[
"Andriluka",
"Mykhaylo",
""
],
[
"Iqbal",
"Umar",
""
],
[
"Insafutdinov",
"Eldar",
""
],
[
"Pishchulin",
"Leonid",
""
],
[
"Milan",
"Anton",
""
],
[
"Gall",
"Juergen",
""
],
[
"Schiele",
"Bernt",
""
]
] |
new_dataset
| 0.99983 |