id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1606.02055
|
St\'ephane Lens
|
St\'ephane Lens, Bernard Boigelot
|
From Constrained Delaunay Triangulations to Roadmap Graphs with
Arbitrary Clearance
| null | null | null | null |
cs.CG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work studies path planning in two-dimensional space, in the presence of
polygonal obstacles. We specifically address the problem of building a roadmap
graph, that is, an abstract representation of all the paths that can
potentially be followed around a given set of obstacles. Our solution consists
in an original refinement algorithm for constrained Delaunay triangulations,
aimed at generating a roadmap graph suited for planning paths with arbitrary
clearance. In other words, a minimum distance to the obstacles can be
specified, and the graph does not have to be recomputed if this distance is
modified. Compared to other solutions, our approach has the advantage of being
simpler, as well as significantly more efficient.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2016 08:04:43 GMT"
}
] | 2016-06-08T00:00:00 |
[
[
"Lens",
"Stéphane",
""
],
[
"Boigelot",
"Bernard",
""
]
] |
new_dataset
| 0.956589 |
1606.02162
|
Fabrizio Montecchiani
|
Alessio Arleo, Walter Didimo, Giuseppe Liotta, Fabrizio Montecchiani
|
A Distributed Force-Directed Algorithm on Giraph: Design and Experiments
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study the problem of designing a distributed graph
visualization algorithm for large graphs. The algorithm must be simple to
implement and the computing infrastructure must not require major hardware or
software investments. We design, implement, and experimentally evaluate a
force-directed algorithm in Giraph, a popular open source framework for
distributed computing based on a vertex-centric design paradigm. The algorithm
is tested on both real and artificial graphs with up to one million edges, using a rather inexpensive
PaaS (Platform as a Service) infrastructure of Amazon. The experiments show the
scalability and effectiveness of our technique when compared to a centralized
implementation of the same force-directed model. We show that graphs with about
one million edges can be drawn in less than 8 minutes, spending about $1
per drawing in the cloud computing infrastructure.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2016 14:56:59 GMT"
}
] | 2016-06-08T00:00:00 |
[
[
"Arleo",
"Alessio",
""
],
[
"Didimo",
"Walter",
""
],
[
"Liotta",
"Giuseppe",
""
],
[
"Montecchiani",
"Fabrizio",
""
]
] |
new_dataset
| 0.994298 |
1502.00054
|
Bengi Aygun
|
Bengi Aygun, Mate Boban, and Alexander M. Wyglinski
|
ECPR: Environment- and Context-aware Combined Power and Rate Distributed
Congestion Control for Vehicular Communications
|
37 Pages, 12 Figures, 5 Tables, Elsevier Computer Communications, May
2016
| null |
10.1016/j.comcom.2016.05.015
|
COMCOM5336
|
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety and efficiency applications in vehicular networks rely on the exchange
of periodic messages between vehicles. These messages contain position, speed,
heading, and other vital information that makes the vehicles aware of their
surroundings. The drawback of exchanging periodic cooperative messages is that
they generate significant channel load. Decentralized Congestion Control (DCC)
algorithms have been proposed to minimize the channel load. However, while the
rationale for periodic message exchange is to improve awareness, existing DCC
algorithms do not use awareness as a metric for deciding when, at what power,
and at what rate the periodic messages need to be sent in order to make sure
all vehicles are informed. We propose ECPR, an environment- and context-aware
DCC algorithm that combines power and rate control in order to improve cooperative
awareness by adapting to both specific propagation environments (e.g., urban
intersections, open highways, suburban roads) as well as application
requirements (e.g., different target cooperative awareness range). Studying
various operational conditions (e.g., speed, direction, and application
requirement), ECPR adjusts the transmit power of the messages in order to reach
the desired awareness ratio at the target distance while at the same time
controlling the channel load using an adaptive rate control algorithm. By
performing extensive simulations, including realistic propagation as well as
environment modeling and realistic vehicle operational environments (varying
demand on both awareness range and rate), we show that ECPR can increase
awareness by 20% while keeping the channel load and interference at almost the
same level. When permitted by the awareness requirements, ECPR can improve the
average message rate by 18% compared to algorithms that perform rate adaptation
only.
|
[
{
"version": "v1",
"created": "Sat, 31 Jan 2015 01:34:02 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jun 2016 20:03:33 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Aygun",
"Bengi",
""
],
[
"Boban",
"Mate",
""
],
[
"Wyglinski",
"Alexander M.",
""
]
] |
new_dataset
| 0.999383 |
1604.05689
|
Mohammad Ashraful Hoque
|
Mohammad A. Hoque, Matti Siekkinen, Jonghoe Koo, and Sasu Tarkoma
|
Accurate Online Full Charge Capacity Modeling of Smartphone Batteries
| null | null | null | null |
cs.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Full charge capacity (FCC) refers to the amount of energy a battery can hold.
It is the fundamental property of smartphone batteries that diminishes as the
battery ages and is charged/discharged. We investigate the behavior of
smartphone batteries while charging and demonstrate that the battery voltage
and charging rate information can together characterize the FCC of a battery.
We propose a new method for accurately estimating FCC without exposing
low-level system details or introducing new hardware or system modules. We also
propose and implement a collaborative FCC estimation technique that builds on
crowdsourced battery data. The method finds the reference voltage curve and
charging rate of a particular smartphone model from the data and then compares
the curve and rate of an individual user with the model reference curve. After
analyzing a large data set, we report that 55% of all devices and at least one
device in 330 out of 357 unique device models lost some of their FCC. For some
models, the median capacity loss exceeded 20% with the inter-quartile range
being over 20 pp. The models enable debugging the performance of smartphone
batteries, more accurate power modeling, and energy-aware system or application
optimization.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 18:42:39 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jun 2016 17:54:37 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Hoque",
"Mohammad A.",
""
],
[
"Siekkinen",
"Matti",
""
],
[
"Koo",
"Jonghoe",
""
],
[
"Tarkoma",
"Sasu",
""
]
] |
new_dataset
| 0.962474 |
1605.09046
|
Krzysztof Choromanski
|
Krzysztof Choromanski, Francois Fagan, Cedric Gouy-Pailler, Anne
Morvan, Tamas Sarlos, Jamal Atif
|
TripleSpin - a generic compact paradigm for fast machine learning
computations
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a generic compact computational framework relying on structured
random matrices that can be applied to speed up several machine learning
algorithms with almost no loss of accuracy. The applications include new fast
LSH-based algorithms, efficient kernel computations via random feature maps,
convex optimization algorithms, quantization techniques and many more. Certain
models of the presented paradigm are even more compressible since they apply
only bit matrices. This makes them suitable for deploying on mobile devices.
All our findings come with strong theoretical guarantees. In particular, as a
byproduct of the presented techniques and by using a relatively new
Berry-Esseen-type CLT for random vectors, we give the first theoretical
guarantees for one of the most efficient existing LSH algorithms based on the
$\textbf{HD}_{3}\textbf{HD}_{2}\textbf{HD}_{1}$ structured matrix ("Practical
and Optimal LSH for Angular Distance"). These guarantees as well as theoretical
results for other aforementioned applications follow from the same general
theoretical principle that we present in the paper. Our structured family
contains as special cases all previously considered structured schemes,
including the recently introduced $P$-model. Experimental evaluation confirms
the accuracy and efficiency of TripleSpin matrices.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2016 19:07:09 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2016 15:05:31 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Choromanski",
"Krzysztof",
""
],
[
"Fagan",
"Francois",
""
],
[
"Gouy-Pailler",
"Cedric",
""
],
[
"Morvan",
"Anne",
""
],
[
"Sarlos",
"Tamas",
""
],
[
"Atif",
"Jamal",
""
]
] |
new_dataset
| 0.979469 |
1606.01356
|
Alan Litchfield
|
Alan Litchfield and Abid Shahzad
|
Virtualization Technology: Cross-VM Cache Side Channel Attacks make it
Vulnerable
|
ISBN# 978-0-646-95337-3 Presented at the Australasian Conference on
Information Systems 2015 (arXiv:1605.01032)
| null | null |
ACIS/2015/111
|
cs.CY cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cloud computing provides an effective business model for the deployment of IT
infrastructure, platform, and software services. Often, facilities are
outsourced to cloud providers and this offers the service consumer
virtualization technologies without the added cost burden of development.
However, virtualization introduces serious threats to service delivery such as
Denial of Service (DoS) attacks, Cross-VM Cache Side Channel attacks,
Hypervisor Escape and Hyper-jacking. One of the most sophisticated forms of
attack is the cross-VM cache side channel attack that exploits shared cache
memory between VMs. A cache side channel attack results in side channel data
leakage, such as cryptographic keys. Various techniques used by the attackers
to launch cache side channel attacks are presented, as is a critical analysis of
countermeasures against cache side channel attacks.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2016 09:31:29 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Litchfield",
"Alan",
""
],
[
"Shahzad",
"Abid",
""
]
] |
new_dataset
| 0.977912 |
1606.01426
|
Aziz Mohaisen
|
Ah Reum Kang and Seong Hoon Jeong and Aziz Mohaisen and Huy Kang Kim
|
Multimodal Game Bot Detection using User Behavioral Characteristics
| null |
Springerplus. 2016; 5: 523
| null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the online service industry has continued to grow, illegal activities in
the online world have drastically increased and become more diverse. Most
illegal activities occur continuously because cyber assets, such as game items
and cyber money in online games, can be monetized into real currency. The aim
of this study is to detect game bots in a Massively Multiplayer Online Role
Playing Game (MMORPG). We observed the behavioral characteristics of game bots
and found that they execute repetitive tasks associated with gold farming and
real money trading. We propose a game bot detection methodology based on user
behavioral characteristics. The methodology of this paper was applied to real
data provided by a major MMORPG company. The detection accuracy rate increased to
96.06% on the banned account list.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2016 22:47:16 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Kang",
"Ah Reum",
""
],
[
"Jeong",
"Seong Hoon",
""
],
[
"Mohaisen",
"Aziz",
""
],
[
"Kim",
"Huy Kang",
""
]
] |
new_dataset
| 0.984527 |
1606.01479
|
Christian Claudel
|
Kapil Sharma, Christian Claudel
|
Short range networks of wearables for safer mobility in smart cities
|
Workshop on System and Control Perspectives for Smart City, IEEE CDC
2015
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ensuring safe and efficient mobility is a critical issue for smart city
operators. Increasing safety not only reduces the likelihood of road injuries
and fatalities, but also reduces traffic congestion and disruptions caused by
accidents, increasing efficiency. While new vehicles are increasingly equipped
with semi-automation, the added costs will initially limit the penetration rate
of these systems. An inexpensive way to replace or augment these systems is to
create networks of wearables (smart glasses, watches) that exchange positional
and path data at a very fast rate between all users, identify collision risks
and feed back collision resolution information to all users in an intuitive way
through their smart glasses.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2016 08:39:21 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Sharma",
"Kapil",
""
],
[
"Claudel",
"Christian",
""
]
] |
new_dataset
| 0.982771 |
1606.01540
|
John Schulman
|
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John
Schulman, Jie Tang, Wojciech Zaremba
|
OpenAI Gym
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2016 17:54:48 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Brockman",
"Greg",
""
],
[
"Cheung",
"Vicki",
""
],
[
"Pettersson",
"Ludwig",
""
],
[
"Schneider",
"Jonas",
""
],
[
"Schulman",
"John",
""
],
[
"Tang",
"Jie",
""
],
[
"Zaremba",
"Wojciech",
""
]
] |
new_dataset
| 0.997753 |
1606.01601
|
Jiaping Zhao
|
Jiaping Zhao and Laurent Itti
|
shapeDTW: shape Dynamic Time Warping
|
14 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic Time Warping (DTW) is an algorithm to align temporal sequences with
possible local non-linear distortions, and has been widely applied to audio,
video and graphics data alignments. DTW is essentially a point-to-point
matching method under some boundary and temporal consistency constraints.
Although DTW obtains a global optimal solution, it does not necessarily achieve
locally sensible matchings. Concretely, two temporal points with entirely
dissimilar local structures may be matched by DTW. To address this problem, we
propose an improved alignment algorithm, named shape Dynamic Time Warping
(shapeDTW), which enhances DTW by taking point-wise local structural
information into consideration. shapeDTW is inherently a DTW algorithm, but
additionally attempts to pair locally similar structures and to avoid matching
points with distinct neighborhood structures. We apply shapeDTW to align audio
signal pairs having ground-truth alignments, as well as artificially simulated
pairs of aligned sequences, and obtain quantitatively much lower alignment
errors than DTW and its two variants. When shapeDTW is used as a distance
measure in a nearest neighbor classifier (NN-shapeDTW) to classify time series,
it beats DTW on 64 out of 84 UCR time series datasets, with significantly
improved classification accuracies. By using a properly designed local
structure descriptor, shapeDTW improves accuracies by more than 10% on 18
datasets. To the best of our knowledge, shapeDTW is the first distance measure
under the nearest neighbor classifier scheme to significantly outperform DTW,
which had been widely recognized as the best distance measure to date. Our code
is publicly accessible at: https://github.com/jiapingz/shapeDTW.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2016 02:38:01 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Zhao",
"Jiaping",
""
],
[
"Itti",
"Laurent",
""
]
] |
new_dataset
| 0.996791 |
1606.01607
|
Milad Mohammadi
|
Milad Mohammadi, Tor M. Aamodt, William J. Dally
|
CG-OoO: Energy-Efficient Coarse-Grain Out-of-Order Execution
|
11 pages
| null | null | null |
cs.AR cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Coarse-Grain Out-of-Order (CG-OoO) general purpose
processor designed to achieve close to In-Order processor energy while
maintaining Out-of-Order (OoO) performance. CG-OoO is an energy-performance
proportional general purpose architecture that scales according to the program
load. Block-level code processing is at the heart of this architecture;
CG-OoO speculates, fetches, schedules, and commits code at block-level
granularity. It eliminates unnecessary accesses to energy consuming tables, and
turns large tables into smaller and distributed tables that are cheaper to
access. CG-OoO leverages compiler-level code optimizations to deliver efficient
static code, and exploits dynamic instruction-level parallelism and block-level
parallelism. CG-OoO introduces Skipahead issue, a complexity effective, limited
out-of-order instruction scheduling model. Through the energy efficiency
techniques applied to the compiler and processor pipeline stages, CG-OoO closes
64% of the average energy gap between the In-Order and Out-of-Order baseline
processors at the performance of the OoO baseline. This makes CG-OoO 1.9x more
efficient than the OoO on the energy-delay product inverse metric.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2016 03:44:52 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Mohammadi",
"Milad",
""
],
[
"Aamodt",
"Tor M.",
""
],
[
"Dally",
"William J.",
""
]
] |
new_dataset
| 0.991645 |
1606.01719
|
Kasim Sinan Yildirim
|
Kas{\i}m Sinan Y{\i}ld{\i}r{\i}m, Henko Aantjes, Amjad Yousef Majid,
Przemys{\l}aw Pawe{\l}czak
|
On the Synchronization of Intermittently Powered Wireless Embedded
Systems
|
Accepted for HLPC 2016 - Hillariously Low-Power Computing - Pushing
the Boundaries of Intermittently Powered Devices
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Battery-free computational RFID platforms, such as WISP (Wireless
Identification and Sensing Platform), are emerging intermittently powered
devices designed for replacing existing battery-powered sensor networks. As
their applications become increasingly complex, we anticipate that
synchronization (among others) will appear as one of the crucial building blocks for
collaborative and coordinated actions. With this paper we aim at providing
initial observations regarding the synchronization of intermittently powered
systems. In particular, we design and implement the first and very initial
synchronization protocol for the WISP platform that provides explicit
synchronization among individual WISPs that reside inside the communication
range of a common RFID reader. Evaluations in our testbed showed that with our
mechanism a synchronization error of approximately 1.5 milliseconds can be
ensured between the RFID reader and a WISP tag.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2016 12:51:51 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Yıldırım",
"Kasım Sinan",
""
],
[
"Aantjes",
"Henko",
""
],
[
"Majid",
"Amjad Yousef",
""
],
[
"Pawełczak",
"Przemysław",
""
]
] |
new_dataset
| 0.955686 |
1606.01833
|
Michael Mitzenmacher
|
Michael Mitzenmacher
|
Analyzing Distributed Join-Idle-Queue: A Fluid Limit Approach
|
11 pages, draft paper, likely to be at Allerton 2016, possibly
improved before then
| null | null | null |
cs.DC cs.DS cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of load balancing, Lu et al. introduced the distributed
Join-Idle-Queue algorithm, where a group of dispatchers distribute jobs to a
cluster of parallel servers. Each dispatcher maintains a queue of idle servers;
when a job arrives to a dispatcher, it sends it to a server on its queue, or to
a random server if the queue is empty. In turn, when a server has no jobs, it
requests to be placed on the idle queue of a randomly chosen dispatcher.
Although this algorithm was shown to be quite effective, the original
asymptotic analysis makes simplifying assumptions that become increasingly
inaccurate as the system load increases. Further, the analysis does not
naturally generalize to interesting variations, such as having a server request
to be placed on the idle queue of a dispatcher before it has completed all
jobs, which can be beneficial under high loads.
We provide a new asymptotic analysis of Join-Idle-Queue systems based on mean
field fluid limit methods, deriving families of differential equations that
describe these systems. Our analysis avoids previous simplifying assumptions,
is empirically more accurate, and generalizes naturally to the variation
described above, as well as other simple variations. Our theoretical and
empirical analyses shed further light on the performance of Join-Idle-Queue,
including potential performance pitfalls under high load.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2016 17:15:50 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Mitzenmacher",
"Michael",
""
]
] |
new_dataset
| 0.973878 |
cs/0701185
|
Frank Gurski
|
Frank Gurski
|
Graph Operations on Clique-Width Bounded Graphs
|
30 pages, to appear in "Theory of Computing Systems"
| null |
10.1007/s00224-016-9685-1
| null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clique-width is a well-known graph parameter. Many NP-hard graph problems
admit polynomial-time solutions when restricted to graphs of bounded
clique-width. The same holds for NLC-width. In this paper we study the behavior
of clique-width and NLC-width under various graph operations and graph
transformations. We give upper and lower bounds for the clique-width and
NLC-width of the modified graphs in terms of the clique-width and NLC-width of
the involved graphs.
|
[
{
"version": "v1",
"created": "Mon, 29 Jan 2007 14:36:25 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Nov 2014 14:02:50 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Jun 2016 16:45:38 GMT"
}
] | 2016-06-07T00:00:00 |
[
[
"Gurski",
"Frank",
""
]
] |
new_dataset
| 0.959916 |
1602.01366
|
Andreas Pieris
|
Pablo Barcelo and Georg Gottlob and Andreas Pieris
|
Semantic Acyclicity Under Constraints
| null | null | null | null |
cs.DB cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A conjunctive query (CQ) is semantically acyclic if it is equivalent to an
acyclic one. Semantic acyclicity has been studied in the constraint-free case,
and deciding whether a query enjoys this property is NP-complete. However, in
case the database is subject to constraints such as tuple-generating
dependencies (tgds) that can express, e.g., inclusion dependencies, or
equality-generating dependencies (egds) that capture, e.g., functional
dependencies, a CQ may turn out to be semantically acyclic under the
constraints while not semantically acyclic in general. This opens avenues to
new query optimization techniques. In this paper we initiate and develop the
theory of semantic acyclicity under constraints. More precisely, we study the
following natural problem: Given a CQ and a set of constraints, is the query
semantically acyclic under the constraints, or, in other words, is the query
equivalent to an acyclic one over all those databases that satisfy the set of
constraints? We show that, contrary to what one might expect, decidability of
CQ containment is a necessary but not sufficient condition for the decidability
of semantic acyclicity. In particular, we show that semantic acyclicity is
undecidable in the presence of full tgds (i.e., Datalog rules). In view of this
fact, we focus on the main classes of tgds for which CQ containment is
decidable and which do not capture the class of full tgds, namely guarded,
non-recursive and sticky tgds. For these classes we show that semantic
acyclicity is decidable, and its complexity coincides with the complexity of CQ
containment. In the case of egds, we show that for keys over unary and binary
predicates semantic acyclicity is decidable (NP-complete). We finally consider
the problem of evaluating a semantically acyclic query over a database that
satisfies a set of constraints; for guarded tgds and functional dependencies
this problem is tractable.
|
[
{
"version": "v1",
"created": "Wed, 3 Feb 2016 17:01:36 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 16:39:28 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Barcelo",
"Pablo",
""
],
[
"Gottlob",
"Georg",
""
],
[
"Pieris",
"Andreas",
""
]
] |
new_dataset
| 0.994155 |
1606.00994
|
Mathieu Bouet
|
Mathieu Bouet, Vania Conan, Hicham Khalife, Kevin Phemius, Jawad
Seddar
|
MUREN: delivering edge services in joint SDN-SDR multi-radio nodes
|
7 pages, 9 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To meet the growing local and distributed computing needs, the cloud is now
descending to the network edge and sometimes to user equipments. This approach
aims at distributing computing, data processing, and networking services closer
to the end users. Instead of concentrating data and computation in a small
number of large clouds, many edge systems are envisioned to be deployed close
to the end users or where computing and intelligent networking can best meet
user needs. In this paper, we go further by converging such massively distributed
computing systems with multiple radio accesses. We propose an architecture
called MUREN (Multi-Radio Edge Node) for managing traffic in future mobile edge
networks. Our solution is based on the Mobile Edge Cloud (MEC) architecture and
its close interaction with Software Defined Networking (SDN), the whole jointly
interacting with Software-Defined Radios (SDR). We have implemented our
architecture in a proof of concept and tested it with two edge scenarios. Our
experiments show that centralizing the intelligence in the MEC makes it possible to
guarantee the requirements of the edge services, either by adapting the waveform
parameters, by changing the radio interface, or even by reconfiguring
the applications. More generally, the best decision can be seen as the optimal
reaction to wireless link variations.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 07:46:59 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Bouet",
"Mathieu",
""
],
[
"Conan",
"Vania",
""
],
[
"Khalife",
"Hicham",
""
],
[
"Phemius",
"Kevin",
""
],
[
"Seddar",
"Jawad",
""
]
] |
new_dataset
| 0.974336 |
1606.01010
|
Vinh Thong Ta
|
Vinh Thong Ta
|
Automated Road Traffic Congestion Detection and Alarm Systems:
Incorporating V2I communications into ATCSs
|
31 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this position paper, we address the problems of automated road congestion
detection and alerting systems and their security properties. We review
different theoretical adaptive road traffic control approaches, and three
widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT
and InSync. We then discuss some related research questions, and the
corresponding possible approaches, as well as the adversary model and potential
attack scenarios. Two theoretical concepts of automated road congestion alarm
systems (including system architecture, communication protocol, and algorithms)
are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating
secure wireless vehicle-to-infrastructure (V2I) communications. Finally, the
security properties of the proposed system are discussed and analysed
using the ProVerif protocol verification tool.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 09:07:40 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Ta",
"Vinh Thong",
""
]
] |
new_dataset
| 0.969295 |
1606.01037
|
Jan Gray
|
Jan Gray
|
GRVI Phalanx: A Massively Parallel RISC-V FPGA Accelerator Accelerator
|
Presented at 2nd International Workshop on Overlay Architectures for
FPGAs (OLAF 2016) arXiv:1605.08149
| null | null |
OLAF/2016/05
|
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GRVI is an FPGA-efficient RISC-V RV32I soft processor. Phalanx is a parallel
processor and accelerator array framework. Groups of processors and
accelerators form shared memory clusters. Clusters are interconnected with each
other and with extreme bandwidth I/O and memory devices by a 300-bit-wide
Hoplite NOC. An example Kintex UltraScale KU040 system has 400 RISC-V cores,
peak throughput of 100,000 MIPS, peak shared memory bandwidth of 600 GB/s, NOC
bisection bandwidth of 700 Gbps, and uses 13 W.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 10:37:41 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Gray",
"Jan",
""
]
] |
new_dataset
| 0.999528 |
1606.01045
|
Jean-Guillaume Dumas
|
Xavier Bultel (LIMOS), Jannik Dreier (CASSIS), Jean-Guillaume Dumas
(CASYS), Pascal Lafourcade (LIMOS)
|
Physical Zero-Knowledge Proofs for Akari, Takuzu, Kakuro and KenKen
|
FUN with algorithms 2016, Jun 2016, La Maddalena, Italy
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Akari, Takuzu, Kakuro and KenKen are logic games similar to Sudoku. In Akari,
a labyrinth on a grid has to be lit by placing lanterns, respecting various
constraints. In Takuzu a grid has to be filled with 0's and 1's, while
respecting certain constraints. In Kakuro a grid has to be filled with numbers
such that the sums per row and column match given values; similarly in KenKen a
grid has to be filled with numbers such that in given areas the product, sum,
difference or quotient equals a given value. We give physical algorithms to
realize zero-knowledge proofs for these games which allow a player to show that
he knows a solution without revealing it. These interactive proofs can be
realized with simple office material as they only rely on cards and envelopes.
Moreover, we formalize our algorithms and prove their security.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 11:09:14 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Bultel",
"Xavier",
"",
"LIMOS"
],
[
"Dreier",
"Jannik",
"",
"CASSIS"
],
[
"Dumas",
"Jean-Guillaume",
"",
"CASYS"
],
[
"Lafourcade",
"Pascal",
"",
"LIMOS"
]
] |
new_dataset
| 0.995161 |
1606.01120
|
Kleanthis Thramboulidis
|
Kleanthis Thramboulidis, Theodoros Foradis
|
From Mechatronic Components to Industrial Automation Things - An IoT
model for cyber-physical manufacturing systems
|
9 pages, 10 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IoT is considered as one of the key enabling technologies for the fourth
industrial revolution, that is known as Industry 4.0. In this paper, we
consider the mechatronic component as the lowest level in the system
composition hierarchy that tightly integrates mechanics with the electronics
and software required to convert the mechanics into an intelligent (smart) object
offering well-defined services to its environment. For this mechatronic
component to be integrated in the IoT-based industrial automation environment,
a software layer is required on top of it to convert its conventional interface
to an IoT compliant one. This layer, that we call IoTwrapper, transforms the
conventional mechatronic component to an Industrial Automation Thing (IAT). The
IAT is the key element of an IoT model specifically developed in the context of
this work for the manufacturing domain. The model is compared to existing IoT
models and its main differences are discussed. A model-to-model transformer is
presented to automatically transform the legacy mechatronic component to an IAT
ready to be integrated in the IoT-based industrial automation environment. The
UML4IoT profile is used in the form of a Domain Specific Modeling Language to
automate this transformation. A prototype implementation of an Industrial
Automation Thing using C and the Contiki operating system demonstrates the
effectiveness of the proposed approach.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 14:52:45 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Thramboulidis",
"Kleanthis",
""
],
[
"Foradis",
"Theodoros",
""
]
] |
new_dataset
| 0.960693 |
1606.01177
|
Jos Vermaseren A
|
John C. Collins and J.A.M. Vermaseren
|
Axodraw Version 2
|
Files can be found at
www.nikhef.nl/~form/maindir/others/axodraw2/axodraw2.html
| null | null | null |
cs.OH hep-ph hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present version two of the LaTeX graphical style file Axodraw. It has a
number of new drawing primitives and many extra options, and it can now work
with pdflatex to directly produce output in PDF file format (but with
the aid of an auxiliary program).
|
[
{
"version": "v1",
"created": "Fri, 27 May 2016 14:28:38 GMT"
}
] | 2016-06-06T00:00:00 |
[
[
"Collins",
"John C.",
""
],
[
"Vermaseren",
"J. A. M.",
""
]
] |
new_dataset
| 0.995183 |
1508.05977
|
Kenza Guenda
|
K. Guenda, G.G. La Guardia and T.A. Gulliver
|
Algebraic Quantum Synchronizable Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we construct quantum synchronizable codes (QSCs) based on the
sum and intersection of cyclic codes. Further, infinite families of QSCs are
obtained from BCH and duadic codes. Moreover, we show that the work of
Fujiwara~\cite{fujiwara1} can be generalized to repeated root cyclic codes
(RRCCs) such that QSCs are always obtained, which is not the case with simple
root cyclic codes. The usefulness of this extension is illustrated via examples
of infinite families of QSCs from repeated root duadic codes. Finally, QSCs are
constructed from the product of cyclic codes.
|
[
{
"version": "v1",
"created": "Mon, 24 Aug 2015 21:21:41 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2016 02:59:28 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Jun 2016 19:50:52 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Guenda",
"K.",
""
],
[
"La Guardia",
"G. G.",
""
],
[
"Gulliver",
"T. A.",
""
]
] |
new_dataset
| 0.999142 |
1602.00860
|
Simon Blackburn
|
Simon R. Blackburn and M.J.B. Robshaw
|
On the security of the Algebraic Eraser tag authentication protocol
|
21 pages. Minor changes. Final version accepted for ACNS 2016
| null | null | null |
cs.CR math.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Algebraic Eraser has been gaining prominence as SecureRF, the company
commercializing the algorithm, increases its marketing reach. The scheme is
claimed to be well-suited to IoT applications but a lack of detail in available
documentation has hampered peer-review. Recently more details of the system
have emerged after a tag authentication protocol built using the Algebraic
Eraser was proposed for standardization in ISO/IEC SC31 and SecureRF provided
an open public description of the protocol. In this paper we describe a range
of attacks on this protocol that include very efficient and practical tag
impersonation as well as partial, and total, tag secret key recovery. Most of
these results have been practically verified; they contrast with the 80-bit
security that is claimed for the protocol, and they emphasize the importance of
independent public review for any cryptographic proposal.
|
[
{
"version": "v1",
"created": "Tue, 2 Feb 2016 10:02:53 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2016 13:24:39 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Blackburn",
"Simon R.",
""
],
[
"Robshaw",
"M. J. B.",
""
]
] |
new_dataset
| 0.996304 |
1606.00541
|
Hui Liu Mr
|
Zhangxin Chen, Hui Liu, Bo Yang
|
Parallel Triangular Solvers on GPU
| null | null | null | null |
cs.MS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate GPU based parallel triangular solvers
systematically. The parallel triangular solvers are fundamental to incomplete
LU factorization family preconditioners and algebraic multigrid solvers. We
develop a new matrix format suitable for GPU devices. Parallel lower triangular
solvers and upper triangular solvers are developed for this new data structure.
With these solvers, ILU preconditioners and domain decomposition
preconditioners are developed. Numerical results show that the triangular
solvers achieve speedups of around seven.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2016 05:54:09 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Chen",
"Zhangxin",
""
],
[
"Liu",
"Hui",
""
],
[
"Yang",
"Bo",
""
]
] |
new_dataset
| 0.996806 |
1606.00581
|
Rahul Vaze
|
Rahul Vaze, Marceau Coupechoux
|
Online Budgeted Truthful Matching
|
To appear in NetEcon 2016 and Performance Evaluation Review
| null | null | null |
cs.DS cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An online truthful budgeted matching problem is considered for a bipartite
graph, where the right vertices are available ahead of time, and individual
left vertices arrive sequentially. On arrival of a left vertex, its edge
utilities (or weights) to all the right vertices and a corresponding cost (or
bid) are revealed. If a left vertex is matched to any of the right vertices,
then it has to be paid at least as much as its cost. The problem is to match
each left vertex instantaneously and irrevocably to any one of the right
vertices, if at all, to find the maximum weight matching that is truthful,
under a payment budget constraint. Truthfulness condition requires that no left
vertex has any incentive of misreporting its cost. Assuming that the vertices
arrive in an uniformly random order (secretary model) with arbitrary utilities,
a truthful algorithm is proposed that is $24\beta$-competitive (where $\beta$
is the ratio of the maximum and the minimum utility) and satisfies the payment
budget constraint. Direct applications of this problem include crowdsourcing
auctions, and matching wireless users to cooperative relays in device-to-device
enabled cellular network.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2016 08:32:08 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Vaze",
"Rahul",
""
],
[
"Coupechoux",
"Marceau",
""
]
] |
new_dataset
| 0.956664 |
1606.00726
|
Michael Otte
|
Michael Otte
|
On Solving Floating Point SSSP Using an Integer Priority Queue
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the single source shortest path planning problem (SSSP) in the
case of floating point edge weights. We show how any integer based Dijkstra
solution that relies on a monotone integer priority queue to create a full
ordering over path lengths in order to solve integer SSSP can be used as an
oracle to solve floating point SSSP with positive edge weights (floating point
P-SSSP). Floating point P-SSSP is of particular interest to the robotics
community. This immediately yields a handful of faster runtimes for floating
point P-SSSP; for example, ${O({m + n\log \log \frac{C}{\delta}})}$, where $C$
is the largest weight and $\delta$ is the minimum edge weight in the graph. It
also ensures that many future advances for integer SSSP will be transferable to
floating point P-SSSP.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2016 15:43:29 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Otte",
"Michael",
""
]
] |
new_dataset
| 0.994463 |
1606.00750
|
Adrian Shatte
|
Adrian Shatte, Jason Holdsworth and Ickjai Lee
|
Multi-synchronous collaboration between desktop and mobile users: A case
study of report writing for emergency management
|
ISBN# 978-0-646-95337-3 Presented at the Australasian Conference on
Information Systems 2015 (arXiv:1605.01032)
| null | null |
ACIS/2015/86
|
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The development of multi-synchronous decision support systems to facilitate
collaboration between diverse users is an emerging field in emergency
management. Traditionally, information management for emergency response has
been a centralised effort. However, modern devices such as smartphones provide
new methods for gaining real-time information about a disaster from users in
the field. In this paper, we present a framework for multi-synchronous
collaborative report writing in the scope of emergency management. This
framework supports desktop-based users as information providers and consumers,
alongside mobile users as information providers to facilitate multi-synchronous
collaboration. We consider the benefits of our framework for writing
collaborative Situation Reports and discuss future directions for research.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2016 05:31:11 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Shatte",
"Adrian",
""
],
[
"Holdsworth",
"Jason",
""
],
[
"Lee",
"Ickjai",
""
]
] |
new_dataset
| 0.986157 |
1606.00752
|
Dale MacKrell
|
Purva Koparkar and Dale MacKrell
|
How Fluffy is the Cloud?: Cloud Intelligence for a Not-For-Profit
|
ISBN# 978-0-646-95337-3 Presented at the Australasian Conference on
Information Systems 2015 (arXiv:1605.01032)
| null | null |
ACIS/2015/84
|
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Business Intelligence (BI) is becoming more accessible and less expensive
with fewer risks through various deployment options available in the Cloud.
Cloud computing facilitates the acquisition of custom solutions for
not-for-profit (NFP) organisations at affordable and scalable costs on a
flexible pay-as-you-go basis. In this paper, we explore the key technical and
organisational aspects of BI in the Cloud (Cloud Intelligence) deployment in an
Australian NFP whose BI maturity is rising although still low. This
organisation aspires to Cloud Intelligence for improved managerial decision
making yet the issues surrounding the adoption of Cloud Intelligence are
complex, especially where corporate and Cloud governance is concerned. From the
findings of the case study, a conceptual framework has been developed and
presented which offers a view of how governance could be deployed so that NFPs
gain maximum leverage through their adoption of the Cloud.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2016 05:24:57 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Koparkar",
"Purva",
""
],
[
"MacKrell",
"Dale",
""
]
] |
new_dataset
| 0.995111 |
1606.00815
|
Patrick Sol\'e
|
Adel Alahmadi, Hatoon Shohaib, Patrick Sol\'e
|
On self-dual double negacirculant codes
|
10 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Double negacirculant (DN) codes are the analogues in odd characteristic of
double circulant codes. Self-dual DN codes of odd dimension are shown to be
consta-dihedral. Exact counting formulae are derived for DN codes. The special
class of length a power of two is studied by means of Dickson polynomials, and
is shown to contain families of codes with relative distances satisfying a
modified Gilbert-Varshamov bound.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2016 19:26:54 GMT"
}
] | 2016-06-03T00:00:00 |
[
[
"Alahmadi",
"Adel",
""
],
[
"Shohaib",
"Hatoon",
""
],
[
"Solé",
"Patrick",
""
]
] |
new_dataset
| 0.999524 |
1601.03874
|
Laurent Chuat
|
Pawel Szalachowski, Laurent Chuat, Adrian Perrig
|
PKI Safety Net (PKISN): Addressing the Too-Big-to-Be-Revoked Problem of
the TLS Ecosystem
|
IEEE EuroS&P 2016
| null |
10.1109/EuroSP.2016.38
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a public-key infrastructure (PKI), clients must have an efficient and
secure way to determine whether a certificate was revoked (by an entity
considered as legitimate to do so), while preserving user privacy. A few
certification authorities (CAs) are currently responsible for the issuance of
the large majority of TLS certificates. These certificates are considered valid
only if the certificate of the issuing CA is also valid. The certificates of
these important CAs are effectively too big to be revoked, as revoking them
would result in massive collateral damage. To solve this problem, we redesign
the current revocation system with a novel approach that we call PKI Safety Net
(PKISN), which uses publicly accessible logs to store certificates (in the
spirit of Certificate Transparency) and revocations. The proposed system
extends existing mechanisms, which enables simple deployment. Moreover, we
present a complete implementation and evaluation of our scheme.
|
[
{
"version": "v1",
"created": "Fri, 15 Jan 2016 11:00:22 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2016 13:43:16 GMT"
}
] | 2016-06-02T00:00:00 |
[
[
"Szalachowski",
"Pawel",
""
],
[
"Chuat",
"Laurent",
""
],
[
"Perrig",
"Adrian",
""
]
] |
new_dataset
| 0.969646 |
1602.03146
|
Sumeet Katariya
|
Sumeet Katariya, Branislav Kveton, Csaba Szepesv\'ari, Zheng Wen
|
DCM Bandits: Learning to Rank with Multiple Clicks
|
Proceedings of the 33rd International Conference on Machine Learning
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A search engine recommends to the user a list of web pages. The user examines
this list, from the first page to the last, and clicks on all attractive pages
until the user is satisfied. This behavior of the user can be described by the
dependent click model (DCM). We propose DCM bandits, an online learning variant
of the DCM where the goal is to maximize the probability of recommending
satisfactory items, such as web pages. The main challenge of our learning
problem is that we do not observe which attractive item is satisfactory. We
propose a computationally-efficient learning algorithm for solving our problem,
dcmKL-UCB; derive gap-dependent upper bounds on its regret under reasonable
assumptions; and also prove a matching lower bound up to logarithmic factors.
We evaluate our algorithm on synthetic and real-world problems, and show that
it performs well even when our model is misspecified. This work presents the
first practical and regret-optimal online algorithm for learning to rank with
multiple clicks in a cascade-like click model.
|
[
{
"version": "v1",
"created": "Tue, 9 Feb 2016 20:03:30 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 20:52:17 GMT"
}
] | 2016-06-02T00:00:00 |
[
[
"Katariya",
"Sumeet",
""
],
[
"Kveton",
"Branislav",
""
],
[
"Szepesvári",
"Csaba",
""
],
[
"Wen",
"Zheng",
""
]
] |
new_dataset
| 0.989645 |
1605.07732
|
Wenxue Cheng
|
Wenxue Cheng, Fengyuan Ren, Wanchun Jiang, Kun Qian, Tong Zhang, Ran
Shu
|
Isolating Mice and Elephant in Data Centers
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data center traffic is composed of numerous latency-sensitive "mice" flows,
which consist of only a few packets, and a few throughput-sensitive
"elephant" flows, which occupy more than 80% of the overall load. Generally, the
short-lived "mice" flows induce transient congestion and the long-lived
"elephant" flows cause persistent congestion. Network congestion is a major
performance inhibitor. Conventionally, hop-by-hop and end-to-end flow
control mechanisms are employed to relieve transient and persistent congestion,
respectively. However, in the face of a mixture of elephants and mice, we find that
the hybrid congestion control scheme including hop-by-hop and end-to-end flow
control mechanisms suffers from serious performance impairments. Going a step
further, our in-depth analysis reveals that the hybrid scheme performs poorly on
the latency of mice and the throughput of elephants. Motivated by this understanding, we
argue for isolating mice and elephants in different queues, such that the
hop-by-hop and end-to-end flow control mechanisms are independently imposed on
short-lived and long-lived flows, respectively. Our solution is
readily deployable and compatible with current commodity network devices and
can leverage various congestion control mechanisms. Extensive simulations show
that our proposal of isolation can simultaneously improve the latency of mice
by at least 30% and the link utilization to almost 100%.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2016 04:52:14 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jun 2016 02:17:21 GMT"
}
] | 2016-06-02T00:00:00 |
[
[
"Cheng",
"Wenxue",
""
],
[
"Ren",
"Fengyuan",
""
],
[
"Jiang",
"Wanchun",
""
],
[
"Qian",
"Kun",
""
],
[
"Zhang",
"Tong",
""
],
[
"Shu",
"Ran",
""
]
] |
new_dataset
| 0.966841 |
1606.00110
|
Chris Thomas
|
Christopher Thomas
|
OpenSalicon: An Open Source Implementation of the Salicon Saliency Model
|
Github Repository: https://github.com/CLT29/OpenSALICON
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technical report, we present our publicly downloadable implementation
of the SALICON saliency model. At the time of this writing, SALICON is one of
the top performing saliency models on the MIT 300 fixation prediction dataset
which evaluates how well an algorithm is able to predict where humans would
look in a given image. Recently, numerous models have achieved state-of-the-art
performance on this benchmark, but none of the top 5 performing models
(including SALICON) are available for download. To address this issue, we have
created a publicly downloadable implementation of the SALICON model. It is our
hope that our model will engender further research in visual attention modeling
by providing a baseline for comparison of other algorithms and a platform for
extending this implementation. The model we provide supports both training and
testing, enabling researchers to quickly fine-tune the model on their own
dataset. We also provide a pre-trained model and code for those users who only
need to generate saliency maps for images without training their own model.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2016 04:28:10 GMT"
}
] | 2016-06-02T00:00:00 |
[
[
"Thomas",
"Christopher",
""
]
] |
new_dataset
| 0.999593 |
1606.00134
|
Somphong Jitman
|
Kenza Guenda, Somphong Jitman and T. Aaron Gulliver
|
Constructions of Good Entanglement-Assisted Quantum Error Correcting
Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entanglement-assisted quantum error correcting codes (EAQECCs) are a simple
and fundamental class of codes. They allow for the construction of quantum
codes from classical codes by relaxing the duality condition and using
pre-shared entanglement between the sender and receiver. However, in general it
is not easy to determine the number of shared pairs required to construct an
EAQECC. In this paper, we show that this number is related to the hull of the
classical code. Using this fact, we give methods to construct EAQECCs requiring
desirable amount of entanglement. This leads to design families of EAQECCs with
good error performance. Moreover, we construct maximal entanglement EAQECCs
from LCD codes. Finally, we prove the existence of asymptotically good EAQECCs
in the odd characteristic case.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2016 06:51:43 GMT"
}
] | 2016-06-02T00:00:00 |
[
[
"Guenda",
"Kenza",
""
],
[
"Jitman",
"Somphong",
""
],
[
"Gulliver",
"T. Aaron",
""
]
] |
new_dataset
| 0.970976 |
1606.00425
|
Fidel Barrera-Cruz
|
Soroush Alamdari, Patrizio Angelini, Fidel Barrera-Cruz, Timothy M.
Chan, Giordano Da Lozzo, Giuseppe Di Battista, Fabrizio Frati, Penny Haxell,
Anna Lubiw, Maurizio Patrignani, Vincenzo Roselli, Sahil Singla, Bryan T.
Wilkinson
|
How to morph planar graph drawings
|
31 pages, 18 figures
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an $n$-vertex graph and two straight-line planar drawings of the graph
that have the same faces and the same outer face, we show that there is a morph
(i.e., a continuous transformation) between the two drawings that preserves
straight-line planarity and consists of $O(n)$ steps, which we prove is optimal
in the worst case. Each step is a unidirectional linear morph, which means that
every vertex moves at constant speed along a straight line, and the lines are
parallel although the vertex speeds may differ. Thus we provide an efficient
version of Cairns' 1944 proof of the existence of straight-line
planarity-preserving morphs for triangulated graphs, which required an
exponential number of steps.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2016 19:53:45 GMT"
}
] | 2016-06-02T00:00:00 |
[
[
"Alamdari",
"Soroush",
""
],
[
"Angelini",
"Patrizio",
""
],
[
"Barrera-Cruz",
"Fidel",
""
],
[
"Chan",
"Timothy M.",
""
],
[
"Da Lozzo",
"Giordano",
""
],
[
"Di Battista",
"Giuseppe",
""
],
[
"Frati",
"Fabrizio",
""
],
[
"Haxell",
"Penny",
""
],
[
"Lubiw",
"Anna",
""
],
[
"Patrignani",
"Maurizio",
""
],
[
"Roselli",
"Vincenzo",
""
],
[
"Singla",
"Sahil",
""
],
[
"Wilkinson",
"Bryan T.",
""
]
] |
new_dataset
| 0.997338 |
1602.06291
|
Shalini Ghosh
|
Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, Larry
Heck
|
Contextual LSTM (CLSTM) models for Large scale NLP tasks
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Documents exhibit sequential structure at multiple levels of abstraction
(e.g., sentences, paragraphs, sections). These abstractions constitute a
natural hierarchy for representing the context in which to infer the meaning of
words and larger fragments of text. In this paper, we present CLSTM (Contextual
LSTM), an extension of the recurrent neural network LSTM (Long Short-Term
Memory) model, where we incorporate contextual features (e.g., topics) into the
model. We evaluate CLSTM on three specific NLP tasks: word prediction, next
sentence selection, and sentence topic prediction. Results from experiments run
on two corpora, English documents in Wikipedia and a subset of articles from a
recent snapshot of English Google News, indicate that using both words and
topics as features improves performance of the CLSTM models over baseline LSTM
models for these tasks. For example on the next sentence selection task, we get
relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the
Google News dataset. This clearly demonstrates the significant benefit of using
context appropriately in natural language (NL) tasks. This has implications for
a wide variety of NL applications like question answering, sentence completion,
paraphrase generation, and next utterance prediction in dialog systems.
|
[
{
"version": "v1",
"created": "Fri, 19 Feb 2016 20:52:08 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 17:19:09 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Ghosh",
"Shalini",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Strope",
"Brian",
""
],
[
"Roy",
"Scott",
""
],
[
"Dean",
"Tom",
""
],
[
"Heck",
"Larry",
""
]
] |
new_dataset
| 0.998083 |
1605.05894
|
Muhammad Imran
|
Muhammad Imran, Prasenjit Mitra, Carlos Castillo
|
Twitter as a Lifeline: Human-annotated Twitter Corpora for NLP of
Crisis-related Messages
|
Accepted at the 10th Language Resources and Evaluation Conference
(LREC), 6 pages
| null | null | null |
cs.CL cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Microblogging platforms such as Twitter provide active communication channels
during mass convergence and emergency events such as earthquakes and typhoons.
During the sudden onset of a crisis situation, affected people post useful
information on Twitter that can be used for situational awareness and other
humanitarian disaster response efforts, if processed in a timely and effective manner.
Processing social media information poses multiple challenges, such as parsing
noisy, brief, and informal messages, learning information categories from the
incoming stream of messages, and classifying them into different classes, among
others. One of the basic necessities of many of these tasks is the availability
of data, in particular human-annotated data. In this paper, we present
human-annotated Twitter corpora collected during 19 different crises that took
place between 2013 and 2015. To demonstrate the utility of the annotations, we
train machine learning classifiers. Moreover, we publish the first and largest word2vec
word embeddings trained on 52 million crisis-related tweets. To deal with the
language issues in tweets, we present human-annotated normalized lexical resources
for different lexical variations.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 11:32:29 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 17:30:05 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Imran",
"Muhammad",
""
],
[
"Mitra",
"Prasenjit",
""
],
[
"Castillo",
"Carlos",
""
]
] |
new_dataset
| 0.999249 |
1605.09473
|
Yu Wang
|
Yu Wang and Yang Feng and Xiyang Zhang and Richard Niemi and Jiebo Luo
|
Will Sanders Supporters Jump Ship for Trump? Fine-grained Analysis of
Twitter Followers
|
Election-series, 4 pages, 6 figures, under review for CIKM 2016.
arXiv admin note: substantial text overlap with arXiv:1605.05401
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the likelihood of Bernie Sanders supporters voting
for Donald Trump instead of Hillary Clinton. Building from a unique time-series
dataset of the three candidates' Twitter followers, which we make public here,
we first study the proportion of Sanders followers who simultaneously follow
Trump (but not Clinton) and how this evolves over time. Then we train a
convolutional neural network to classify the gender of Sanders followers, and
study whether men are more likely to jump ship for Trump than women. Our study
shows that between March and May an increasing proportion of Sanders followers
are following Trump (but not Clinton). The proportion of Sanders followers who
follow Clinton but not Trump has actually decreased. Equally important, our
study suggests that the jumping ship behavior will be affected by gender and
that men are more likely to switch to Trump than women.
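The headline quantity, the share of Sanders followers who follow Trump but not
Clinton, reduces to simple set operations over follower-ID sets; the sketch
below uses invented IDs rather than the released time-series data.

```python
# Hypothetical follower-ID sets (the real data are the released Twitter follower snapshots).
sanders = {1, 2, 3, 4, 5, 6}
trump = {2, 4, 7, 8}
clinton = {3, 5, 9}

trump_only = (sanders & trump) - clinton      # follow Sanders and Trump, but not Clinton
clinton_only = (sanders & clinton) - trump    # follow Sanders and Clinton, but not Trump
print(len(trump_only) / len(sanders))         # proportion potentially "jumping ship"
print(len(clinton_only) / len(sanders))
```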
|
[
{
"version": "v1",
"created": "Tue, 31 May 2016 02:51:15 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Feng",
"Yang",
""
],
[
"Zhang",
"Xiyang",
""
],
[
"Niemi",
"Richard",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.998056 |
1605.09497
|
Nicholas Mattei
|
Andres Abeliuk, Haris Aziz, Gerardo Berbeglia, Serge Gaspers, Petr
Kalina, Nicholas Mattei, Dominik Peters, Paul Stursberg, Pascal Van
Hentenryck, Toby Walsh
|
Interdependent Scheduling Games
|
Accepted to IJCAI 2016
| null | null | null |
cs.GT cs.AI cs.MA cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a model of interdependent scheduling games in which each player
controls a set of services that they schedule independently. A player is free
to schedule his own services at any time; however, each of these services only
begins to accrue reward for the player when all predecessor services, which may
or may not be controlled by the same player, have been activated. This model,
where players have interdependent services, is motivated by the problems faced
in planning and coordinating large-scale infrastructures, e.g., restoring
electricity and gas to residents after a natural disaster or providing medical
care in a crisis when different agencies are responsible for the delivery of
staff, equipment, and medicine. We undertake a game-theoretic analysis of this
setting and in particular consider the issues of welfare maximization,
computing best responses, Nash dynamics, and existence and computation of Nash
equilibria.
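A simplified reading of the reward-accrual rule (illustrative only, not the
authors' formal definition) is that a service becomes productive at the maximum
of its own scheduled time and the activation times of all its predecessors:

```python
# Toy instance: start times chosen by the players and a predecessor relation between services.
schedule = {"power": 1, "water": 2, "clinic": 0}
predecessors = {"power": [], "water": ["power"], "clinic": ["power", "water"]}

def activation_time(service):
    # A service only starts accruing reward once it and all its predecessors are active.
    return max([schedule[service]] + [activation_time(p) for p in predecessors[service]])

for s in schedule:
    print(s, activation_time(s))   # "clinic" is scheduled at 0 but only activates at time 2
```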
|
[
{
"version": "v1",
"created": "Tue, 31 May 2016 04:54:46 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Abeliuk",
"Andres",
""
],
[
"Aziz",
"Haris",
""
],
[
"Berbeglia",
"Gerardo",
""
],
[
"Gaspers",
"Serge",
""
],
[
"Kalina",
"Petr",
""
],
[
"Mattei",
"Nicholas",
""
],
[
"Peters",
"Dominik",
""
],
[
"Stursberg",
"Paul",
""
],
[
"Van Hentenryck",
"Pascal",
""
],
[
"Walsh",
"Toby",
""
]
] |
new_dataset
| 0.993448 |
1605.09626
|
Juhani Risku
|
Outi Alapekkala and Juhani Risku
|
Software startuppers took the media's paycheck: media's fightback happens
through startup culture and abstraction shifts
|
8 pages, 4 figures, ICE Conference, Trondheim Norway 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The collapse of old print media and journalism happened when the Internet,
its solutions, services and communities became mature and mobile devices
reached the market. The reader abandoned printed dailies for free and mobile
access to information. The business of the core industries of the early
Internet and mobile communication, namely the mobile network manufacturers and
operators, is also in stagnation and decline. Therefore, these industries may
have similar
interests to improve or even restructure their own businesses as well as to
establish totally new business models by going into media and journalism.
This paper analyses, first, the production flows and business models of the
old and present media species. Second, it analyses the current market
positioning of the network manufacturers and operators. Third, the paper
suggests two avenues for media and journalism and the network manufacturers and
operators, the Trio, to join their forces to update journalism and make all
three stagnating industries great again. Last, we propose further research,
development and discussion on the topic and envision possible futures for
journalism, if the three were to engage in cooperation. We see that the
discussion should cover ethical, societal and philosophical subjects, because
the development of Internet solutions is based on 'technology first' actions.
We find and outline a tremendous opportunity to create a new industry with
new actors through combining the interests of the network manufacturers,
network operators and journalism in a systemic solution through a strategic
alliance and collaboration (Fig. 1). Software startuppers with their applications
and communities will be the drivers for this abstraction shift in media and
journalism.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2016 13:37:09 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Alapekkala",
"Outi",
""
],
[
"Risku",
"Juhani",
""
]
] |
new_dataset
| 0.994971 |
1605.09716
|
Alireza Sadeghi
|
Michele Luvisotto, Alireza Sadeghi, Farshad Lahouti, Stefano Vitturi,
Michele Zorzi
|
RCFD: A Frequency Based Channel Access Scheme for Full Duplex Wireless
Networks
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, several working implementations of in-band full-duplex wireless
systems have been presented, where the same node can transmit and receive
simultaneously in the same frequency band. The introduction of such a
possibility at the physical layer could lead to improved performance but also
poses several challenges at the MAC layer. In this paper, an innovative
mechanism of channel contention in full-duplex OFDM wireless networks is
proposed. This strategy is able to ensure efficient transmission scheduling
with the result of avoiding collisions and effectively exploiting full-duplex
opportunities. As a consequence, considerable performance improvements are
observed with respect to standard and state-of-the-art MAC protocols for
wireless networks, as highlighted by extensive simulations performed in ad hoc
wireless networks with varying number of nodes.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2016 17:08:39 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Luvisotto",
"Michele",
""
],
[
"Sadeghi",
"Alireza",
""
],
[
"Lahouti",
"Farshad",
""
],
[
"Vitturi",
"Stefano",
""
],
[
"Zorzi",
"Michele",
""
]
] |
new_dataset
| 0.985781 |
1605.09772
|
Daniel Ciolek
|
Daniel Ciolek, Victor Braberman, Nicol\'as D'Ippolito and Sebasti\'an
Uchitel
|
Technical Report: Directed Controller Synthesis of Discrete Event
Systems
|
8 pages, submitted to the 55th IEEE Conference on Decision and
Control
| null | null | null |
cs.SY cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a Directed Controller Synthesis (DCS) technique for
discrete event systems. The DCS method explores the solution space for reactive
controllers guided by a domain-independent heuristic. The heuristic is derived
from an efficient abstraction of the environment based on the componentized way
in which complex environments are described. Then by building the composition
of the components on-the-fly DCS obtains a solution by exploring a reduced
portion of the state space. This work focuses on untimed discrete event systems
with safety and co-safety (i.e. reachability) goals. An evaluation for the
technique is presented comparing it to other well-known approaches to
controller synthesis (based on symbolic representation and compositional
analyses).
|
[
{
"version": "v1",
"created": "Tue, 31 May 2016 19:12:41 GMT"
}
] | 2016-06-01T00:00:00 |
[
[
"Ciolek",
"Daniel",
""
],
[
"Braberman",
"Victor",
""
],
[
"D'Ippolito",
"Nicolás",
""
],
[
"Uchitel",
"Sebastián",
""
]
] |
new_dataset
| 0.98194 |
1511.03552
|
Laszlo Kish
|
Bruce Zhang, Laszlo B. Kish, Claes-Goran Granqvist
|
Drawing from hats by noise-based logic
|
Accepted for Publication in the International Journal of Parallel,
Emergent and Distributed Systems. December 17, 2015
| null |
10.1080/17445760.2016.1140168
| null |
cs.ET cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
We utilize the asymmetric random telegraph wave-based instantaneous
noise-based logic scheme to represent the problem of drawing numbers from a hat,
and we consider two identical hats with the first 2^N integer numbers. In the
first problem, Alice secretly draws an arbitrary number from one of the hats,
and Bob must find out which hat is missing a number. In the second problem,
Alice removes a known number from one of the hats and another known number from
the other hat, and Bob must identify these hats. We show that, when the
preparation of the hats with the numbers is accounted for, the noise-based
logic scheme always provides an exponential speed-up and/or it requires
exponentially smaller computational complexity than deterministic alternatives.
Both the stochasticity and the ability to superpose numbers are essential
components of the exponential improvement.
|
[
{
"version": "v1",
"created": "Wed, 11 Nov 2015 16:12:07 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 15:35:37 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Jan 2016 15:04:24 GMT"
}
] | 2016-05-31T00:00:00 |
[
[
"Zhang",
"Bruce",
""
],
[
"Kish",
"Laszlo B.",
""
],
[
"Granqvist",
"Claes-Goran",
""
]
] |
new_dataset
| 0.991435 |
1604.05431
|
Murray Elder
|
Tara Brough and Laura Ciobanu and Murray Elder and Georg Zetzsche
|
Permutations of context-free, ET0L and indexed languages
|
11 pages, 1 figure. Improved proof of the main theorem from previous
version arXiv:1412.5512
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a language $L$, we consider its cyclic closure, and more generally the
language $C^k(L)$, which consists of all words obtained by partitioning words
from $L$ into $k$ factors and permuting them. We prove that the classes of ET0L
and EDT0L languages are closed under the operators $C^k$. This both sharpens
and generalises Brandst\"adt's result that if $L$ is context-free then $C^k(L)$
is context-sensitive and not context-free in general for $k\geq 3$. We also
show that the cyclic closure of an indexed language is indexed.
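For intuition, the operator $C^k$ can be enumerated directly on a finite
language; the sketch below is only an illustration of the definition and says
nothing about the ET0L/EDT0L closure results.

```python
from itertools import permutations

def cuts(s, parts):
    """All ways of splitting s into `parts` (possibly empty) factors."""
    if parts == 1:
        yield (s,)
        return
    for i in range(len(s) + 1):
        for rest in cuts(s[i:], parts - 1):
            yield (s[:i],) + rest

def ck(language, k):
    # C^k(L): split each word into k factors and take every permutation of the factors.
    return {"".join(p) for w in language for f in cuts(w, k) for p in permutations(f)}

print(sorted(ck({"aab"}, 2)))   # k = 2 gives the conjugates (cyclic closure): ['aab', 'aba', 'baa']
```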
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 05:07:56 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2016 22:05:13 GMT"
}
] | 2016-05-31T00:00:00 |
[
[
"Brough",
"Tara",
""
],
[
"Ciobanu",
"Laura",
""
],
[
"Elder",
"Murray",
""
],
[
"Zetzsche",
"Georg",
""
]
] |
new_dataset
| 0.987999 |
1605.08870
|
Dmitry Zaitsev
|
Dmitry A. Zaitsev
|
k-neighborhood for Cellular Automata
|
8 pages, 3 figures, 12 references, OEIS: A265014, A266213
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A neighborhood for d-dimensional cellular automata is introduced that spans
the range from von Neumann to Moore neighborhood using a parameter which
represents the dimension of hypercubes connecting neighboring cells. The
neighborhood is extended to include a concept of radius. The number of
neighbors is calculated. For diamond-shaped neighborhoods, a sequence is
obtained whose partial sums equal Delannoy numbers.
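As a side note for readers unfamiliar with Delannoy numbers, they satisfy the
standard three-term recurrence sketched below; this is generic background, not
the paper's derivation of the neighbor-count sequence.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(m, n):
    # D(m, n) = D(m-1, n) + D(m, n-1) + D(m-1, n-1), with D(m, 0) = D(0, n) = 1.
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

print([delannoy(d, d) for d in range(6)])   # central Delannoy numbers: 1, 3, 13, 63, 321, 1683
```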
|
[
{
"version": "v1",
"created": "Sat, 28 May 2016 10:08:32 GMT"
}
] | 2016-05-31T00:00:00 |
[
[
"Zaitsev",
"Dmitry A.",
""
]
] |
new_dataset
| 0.997903 |
1605.09089
|
Xun Wang
|
Xun Wang and Mary-Anne Williams
|
PyRIDE: An Interactive Development Environment for PR2 Robot
| null | null | null | null |
cs.RO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Python-based Robot Interactive Development Environment (PyRIDE) is a software
package that supports rapid \textit{interactive} programming of robot skills
and behaviours on the PR2/ROS (Robot Operating System) platform. One of the key
features of PyRIDE is its interactive remotely accessible Python console that
allows its users to program robots \textit{online} and in \textit{realtime} in
the same way as using the standard Python interactive interpreter. It allows
programs to be modified while they are running. PyRIDE is also a software
integration framework that abstracts and aggregates disparate low level ROS
software modules, e.g. arm joint motor controllers, and exposes their
functionalities through a unified Python programming interface. PR2 programmers
are able to experiment and develop robot behaviours without dealing with
specific details of accessing the underlying software and hardware. PyRIDE
provides a client-server mechanism that allows remote user access of the robot
functionalities, e.g. remote robot monitoring and control, and access to
real-time robot camera image data. This enables multi-modal human-robot
interactions
using different devices and user interfaces. All these features are seamlessly
integrated into one lightweight and portable middleware package. In this paper,
we use four real-life scenarios to demonstrate PyRIDE's key features and
illustrate the usefulness of the software.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2016 02:40:43 GMT"
}
] | 2016-05-31T00:00:00 |
[
[
"Wang",
"Xun",
""
],
[
"Williams",
"Mary-Anne",
""
]
] |
new_dataset
| 0.999127 |
1605.09153
|
Zolt\'an Kov\'acs
|
Francisco Botana and Zolt\'an Kov\'acs
|
New tools in GeoGebra offering novel opportunities to teach loci and
envelopes
|
21 pages, 19 figures
| null | null | null |
cs.CG math.HO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GeoGebra is an open source mathematics education software tool being used in
thousands of schools worldwide. Since version 4.2 (December 2012) it supports
symbolic computation of locus equations as a result of joint effort of
mathematicians and programmers helping the GeoGebra developer team. The joint
work, based on earlier research, started in 2010 and has continued to the
present day; it now enables fast locus and envelope computations even in a web
browser in
full HTML5 mode. Thus, classroom demonstrations and deeper investigations of
dynamic analytical geometry are ready to use on tablets or smartphones as well.
In our paper we consider some typical secondary school topics where
investigating loci is a natural way of defining mathematical objects. We
discuss the technical possibilities in GeoGebra by using the new commands
LocusEquation and Envelope, showing through different examples how these
commands can enrich the learning of mathematics. The covered school topics
include definition of a parabola and other conics in different situations like
synthetic definitions or points and curves associated with a triangle.
Although most secondary schools discuss no curves beyond the quadratic ones,
simple generalizations of some exercises, as well as everyday problems,
smoothly introduce higher-order algebraic curves. Thus our paper mentions the
cubic curve "strophoid" as the locus of the orthocenter of a triangle when one
of the vertices moves on a circle. The quartic "cardioid" and the sextic
"nephroid" can also be of everyday interest when investigating mathematics in,
say, a coffee cup.
We also focus on GeoGebra-specific tips and tricks for constructing a
geometric figure so that its locus equation can be obtained. Among others,
simplification and synthetization (via the intercept theorem) are mentioned.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2016 09:37:28 GMT"
}
] | 2016-05-31T00:00:00 |
[
[
"Botana",
"Francisco",
""
],
[
"Kovács",
"Zoltán",
""
]
] |
new_dataset
| 0.998758 |
1605.09185
|
Sebastian Brunner M. Sc.
|
Sebastian G. Brunner, Franz Steinmetz, Rico Belder, Andreas D\"omel
|
RAFCON: a Graphical Tool for Task Programming and Mission Control
|
8 pages, 5 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are many application fields for robotic systems including service
robotics, search and rescue missions, industry and space robotics. As the
scenarios in these areas grow more and more complex, there is a high demand for
powerful tools to efficiently program heterogeneous robotic systems. Therefore,
we created RAFCON, a graphical tool to develop robotic tasks and to be used for
mission control by remotely monitoring the execution of the tasks. To define
the tasks, we use state machines which support hierarchies and concurrency.
Together with a library concept, even complex scenarios can be handled
gracefully. RAFCON supports sophisticated debugging functionality and tightly
integrates error handling and recovery mechanisms. A GUI with a powerful state
machine editor makes intuitive, visual programming and fast prototyping
possible. We demonstrated the capabilities of our tool in the SpaceBotCamp
national robotic competition, in which our mobile robot solved all exploration
and assembly challenges fully autonomously. It is therefore also a promising
tool for various RoboCup leagues.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2016 11:40:49 GMT"
}
] | 2016-05-31T00:00:00 |
[
[
"Brunner",
"Sebastian G.",
""
],
[
"Steinmetz",
"Franz",
""
],
[
"Belder",
"Rico",
""
],
[
"Dömel",
"Andreas",
""
]
] |
new_dataset
| 0.99967 |
1604.06778
|
Yan Duan
|
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
|
Benchmarking Deep Reinforcement Learning for Continuous Control
|
14 pages, ICML 2016
| null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, researchers have made significant progress combining the advances
in deep learning for learning feature representations with reinforcement
learning. Some notable examples include training agents to play Atari games
based on raw pixel data and to acquire advanced manipulation skills using raw
sensory inputs. However, it has been difficult to quantify progress in the
domain of continuous control due to the lack of a commonly adopted benchmark.
In this work, we present a benchmark suite of continuous control tasks,
including classic tasks like cart-pole swing-up, tasks with very high state and
action dimensionality such as 3D humanoid locomotion, tasks with partial
observations, and tasks with hierarchical structure. We report novel findings
based on the systematic evaluation of a range of implemented reinforcement
learning algorithms. Both the benchmark and reference implementations are
released at https://github.com/rllab/rllab in order to facilitate experimental
reproducibility and to encourage adoption by other researchers.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2016 18:57:24 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2016 06:16:06 GMT"
},
{
"version": "v3",
"created": "Fri, 27 May 2016 19:25:59 GMT"
}
] | 2016-05-30T00:00:00 |
[
[
"Duan",
"Yan",
""
],
[
"Chen",
"Xi",
""
],
[
"Houthooft",
"Rein",
""
],
[
"Schulman",
"John",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.963858 |
1605.05989
|
Sharif Hossain
|
Anupam Chattopadhyay and Sharif Md Khairul Hossain
|
Ancilla-free Reversible Logic Synthesis via Sorting
|
12 pages, 13 figures
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reversible logic synthesis is emerging as a major research component for
post-CMOS computing devices, in particular Quantum computing. In this work, we
link the reversible logic synthesis problem to sorting algorithms. Based on our
analysis, an alternative derivation of the worst-case complexity of generated
reversible circuits is provided. Furthermore, a novel column-wise reversible
logic synthesis method, termed RevCol, is designed with inspiration from radix
sort. Extending the principles of RevCol, we present a hybrid reversible logic
synthesis framework. The theoretical and experimental results are presented.
The results are extensively benchmarked with state-of-the-art ancilla-free
reversible logic synthesis methods.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 15:10:18 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2016 23:44:57 GMT"
}
] | 2016-05-30T00:00:00 |
[
[
"Chattopadhyay",
"Anupam",
""
],
[
"Hossain",
"Sharif Md Khairul",
""
]
] |
new_dataset
| 0.996536 |
1605.08367
|
Rodrigo De Salvo Braz
|
Rodrigo de Salvo Braz, Ciaran O'Reilly, Vibhav Gogate, Rina Dechter
|
Probabilistic Inference Modulo Theories
|
Submitted to StarAI-16 workshop as closely revised version of
IJCAI-16 paper
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SGDPLL(T), an algorithm that solves (among many other problems)
probabilistic inference modulo theories, that is, inference problems over
probabilistic models defined via a logic theory provided as a parameter
(currently, propositional, equalities on discrete sorts, and inequalities, more
specifically difference arithmetic, on bounded integers). While many solutions
to probabilistic inference over logic representations have been proposed,
SGDPLL(T) is simultaneously (1) lifted, (2) exact and (3) modulo theories, that
is, parameterized by a background logic theory. This offers a foundation for
extending it to rich logic languages such as data structures and relational
data. By lifted, we mean algorithms with constant complexity in the domain size
(the number of values that variables can take). We also detail a solver for
summations with difference arithmetic and show experimental results from a
scenario in which SGDPLL(T) is much faster than a state-of-the-art
probabilistic solver.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2016 17:10:10 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2016 02:29:20 GMT"
}
] | 2016-05-30T00:00:00 |
[
[
"Braz",
"Rodrigo de Salvo",
""
],
[
"O'Reilly",
"Ciaran",
""
],
[
"Gogate",
"Vibhav",
""
],
[
"Dechter",
"Rina",
""
]
] |
new_dataset
| 0.956693 |
1605.08639
|
Craig Alan Feinstein
|
Craig Alan Feinstein
|
Dialogue Concerning The Two Chief World Views
|
5 pages
|
Progress in Physics, 2016 (vol. 12), issue 3, pp. 280-283
| null | null |
cs.GL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1632, Galileo Galilei wrote a book called \textit{Dialogue Concerning the
Two Chief World Systems} which compared the new Copernican model of the
universe with the old Ptolemaic model. His book took the form of a dialogue
between three philosophers, Salviati, a proponent of the Copernican model,
Simplicio, a proponent of the Ptolemaic model, and Sagredo, who was initially
open-minded and neutral. In this paper, I am going to use Galileo's idea to
present a dialogue between three modern philosophers, Mr. Spock, a proponent of
the view that $\mathsf{P} \neq \mathsf{NP}$, Professor Simpson, a proponent of
the view that $\mathsf{P} = \mathsf{NP}$, and Judge Wapner, who is initially
open-minded and neutral.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2016 19:36:40 GMT"
}
] | 2016-05-30T00:00:00 |
[
[
"Feinstein",
"Craig Alan",
""
]
] |
new_dataset
| 0.996899 |
1506.07933
|
Amir Gholami
|
Amir Gholami, Judith Hill, Dhairya Malhotra, George Biros
|
AccFFT: A library for distributed-memory FFT on CPU and GPU
architectures
|
Parallel FFT Library
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new library for parallel distributed Fast Fourier Transforms
(FFT). The importance of FFT in science and engineering and the advances in
high performance computing necessitate further improvements. AccFFT extends
existing FFT libraries for CUDA-enabled Graphics Processing Units (GPUs) to
distributed memory clusters. We use an overlapping communication method to reduce
the overhead of PCIe transfers from/to GPU. We present numerical results on the
Maverick platform at the Texas Advanced Computing Center (TACC) and on the
Titan system at the Oak Ridge National Laboratory (ORNL). We present the
scaling of the library up to 4,096 K20 GPUs of Titan.
|
[
{
"version": "v1",
"created": "Fri, 26 Jun 2015 01:19:31 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Sep 2015 19:58:27 GMT"
},
{
"version": "v3",
"created": "Wed, 25 May 2016 20:06:16 GMT"
}
] | 2016-05-27T00:00:00 |
[
[
"Gholami",
"Amir",
""
],
[
"Hill",
"Judith",
""
],
[
"Malhotra",
"Dhairya",
""
],
[
"Biros",
"George",
""
]
] |
new_dataset
| 0.984403 |
1605.08300
|
Siddhartha Kumar
|
Siddhartha Kumar and Eirik Rosnes and Alexandre Graell i Amat
|
Secure Repairable Fountain Codes
|
To appear in IEEE Communications Letters
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we provide the construction of repairable fountain codes
(RFCs) for distributed storage systems that are information-theoretically
secure against an eavesdropper that has access to the data stored in a subset
of the storage nodes and the data downloaded to repair an additional subset of
storage nodes. The security is achieved by adding random symbols to the
message, which is then encoded by the concatenation of a Gabidulin code and an
RFC. We compare the achievable code rates of the proposed codes with those of
secure minimum storage regenerating codes and secure locally repairable codes.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2016 14:36:32 GMT"
}
] | 2016-05-27T00:00:00 |
[
[
"Kumar",
"Siddhartha",
""
],
[
"Rosnes",
"Eirik",
""
],
[
"Amat",
"Alexandre Graell i",
""
]
] |
new_dataset
| 0.998486 |
1605.08374
|
Zelda Mariet
|
Zelda Mariet and Suvrit Sra
|
Kronecker Determinantal Point Processes
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Determinantal Point Processes (DPPs) are probabilistic models over all
subsets of a ground set of $N$ items. They have recently gained prominence in
several applications that rely on "diverse" subsets. However, their
applicability to large problems is still limited due to the $\mathcal O(N^3)$
complexity of core tasks such as sampling and learning. We enable efficient
sampling and learning for DPPs by introducing KronDPP, a DPP model whose kernel
matrix decomposes as a tensor product of multiple smaller kernel matrices. This
decomposition immediately enables fast exact sampling. But contrary to what one
may expect, leveraging the Kronecker product structure for speeding up DPP
learning turns out to be more difficult. We overcome this challenge, and derive
batch and stochastic optimization algorithms for efficiently learning the
parameters of a KronDPP.
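One reason the Kronecker structure pays off is that determinants, and hence DPP
normalizers, factor across the product; the NumPy check below verifies this
identity on small toy kernels (sizes are arbitrary, and this is not the KronDPP
learning code).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)      # well-conditioned positive-definite factor

A, B = random_psd(3), random_psd(4)     # small kernel factors
L = np.kron(A, B)                       # the full 12 x 12 DPP kernel

# det(A kron B) = det(A)^m * det(B)^n for A of size n x n and B of size m x m.
print(np.allclose(np.linalg.det(L), np.linalg.det(A) ** 4 * np.linalg.det(B) ** 3))
```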
|
[
{
"version": "v1",
"created": "Thu, 26 May 2016 17:33:31 GMT"
}
] | 2016-05-27T00:00:00 |
[
[
"Mariet",
"Zelda",
""
],
[
"Sra",
"Suvrit",
""
]
] |
new_dataset
| 0.993533 |
1605.08412
|
Tobias Strau{\ss}
|
Gundram Leifert and Tobias Strau{\ss} and Tobias Gr\"uning and Roger
Labahn
|
CITlab ARGUS for historical handwritten documents
|
Description of CITlab's System for the HTRtS 2015 Task : Handwritten
Text Recognition on the tranScriptorium Dataset
| null | null | null |
cs.CV cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe CITlab's recognition system for the HTRtS competition attached to
the 13th International Conference on Document Analysis and Recognition, ICDAR
2015. The task comprises the recognition of historical handwritten documents.
The core algorithms of our system are based on multi-dimensional recurrent
neural networks (MDRNN) and connectionist temporal classification (CTC). The
software modules behind that as well as the basic utility technologies are
essentially powered by PLANET's ARGUS framework for intelligent text
recognition and image processing.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2016 19:19:43 GMT"
}
] | 2016-05-27T00:00:00 |
[
[
"Leifert",
"Gundram",
""
],
[
"Strauß",
"Tobias",
""
],
[
"Grüning",
"Tobias",
""
],
[
"Labahn",
"Roger",
""
]
] |
new_dataset
| 0.996261 |
1511.00418
|
Mikhail Ivanov
|
Mikhail Ivanov, Fredrik Brannstrom, Alexandre Graell i Amat, and Petar
Popovski
|
Broadcast Coded Slotted ALOHA: A Finite Frame Length Analysis
|
arXiv admin note: text overlap with arXiv:1501.03389
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an uncoordinated medium access control (MAC) protocol, called
all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all
broadcast with strict latency constraints. In B-CSA, each user acts as both
transmitter and receiver in a half-duplex mode. The half-duplex mode gives rise
to a double unequal error protection (DUEP) phenomenon: the more a user repeats
its packet, the higher the probability that this packet is decoded by other
users, but the lower the probability for this user to decode packets from
others. We analyze the performance of B-CSA over the packet erasure channel for
a finite frame length. In particular, we provide a general analysis of stopping
sets for B-CSA and derive an analytical approximation of the performance in the
error floor (EF) region, which captures the DUEP feature of B-CSA. Simulation
results reveal that the proposed approximation predicts very well the
performance of B-CSA in the EF region. Finally, we consider the application of
B-CSA to vehicular communications and compare its performance with that of
carrier sense multiple access (CSMA), the current MAC protocol in vehicular
networks. The results show that B-CSA is able to support a much larger number
of users than CSMA with the same reliability.
|
[
{
"version": "v1",
"created": "Mon, 2 Nov 2015 09:22:53 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 15:24:49 GMT"
}
] | 2016-05-26T00:00:00 |
[
[
"Ivanov",
"Mikhail",
""
],
[
"Brannstrom",
"Fredrik",
""
],
[
"Amat",
"Alexandre Graell i",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.998751 |
1511.05644
|
Alireza Makhzani
|
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow,
Brendan Frey
|
Adversarial Autoencoders
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose the "adversarial autoencoder" (AAE), which is a
probabilistic autoencoder that uses the recently proposed generative
adversarial networks (GAN) to perform variational inference by matching the
aggregated posterior of the hidden code vector of the autoencoder with an
arbitrary prior distribution. Matching the aggregated posterior to the prior
ensures that generating from any part of prior space results in meaningful
samples. As a result, the decoder of the adversarial autoencoder learns a deep
generative model that maps the imposed prior to the data distribution. We show
how the adversarial autoencoder can be used in applications such as
semi-supervised classification, disentangling style and content of images,
unsupervised clustering, dimensionality reduction and data visualization. We
performed experiments on MNIST, Street View House Numbers and Toronto Face
datasets and show that adversarial autoencoders achieve competitive results in
generative modeling and semi-supervised classification tasks.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 02:32:39 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 00:17:45 GMT"
}
] | 2016-05-26T00:00:00 |
[
[
"Makhzani",
"Alireza",
""
],
[
"Shlens",
"Jonathon",
""
],
[
"Jaitly",
"Navdeep",
""
],
[
"Goodfellow",
"Ian",
""
],
[
"Frey",
"Brendan",
""
]
] |
new_dataset
| 0.996412 |
1601.01645
|
Luis Valente
|
Luis Valente (1), Esteban Clua (1), Alexandre Ribeiro Silva (2), Bruno
Feij\'o (3) ((1) Universidade Federal Fluminense, (2) Instituto Federal do
Tri\^angulo Mineiro, (3) PUC-Rio)
|
Live-action Virtual Reality Games
|
10 pages, technical report published at "Monografias em Ci\^encia da
Computa\c{c}\~ao, PUC-Rio" (ISSN 0103-9741), MCC03/15, July 2015
| null | null |
MCC03/15
|
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes the concept of "live-action virtual reality games" as a
new genre of digital games based on an innovative combination of live-action,
mixed-reality, context-awareness, and interaction paradigms that comprise
tangible objects, context-aware input devices, and embedded/embodied
interactions. Live-action virtual reality games are "live-action games" because
a player physically acts out (using his/her real body and senses) his/her
"avatar" (his/her virtual representation) in the game stage, which is the
mixed-reality environment where the game happens. The game stage is a kind of
"augmented virtuality"; a mixed-reality where the virtual world is augmented
with real-world information. In live-action virtual reality games, players wear
HMD devices and see a virtual world that is constructed using the physical
world architecture as the basic geometry and context information. Physical
objects that reside in the physical world are also mapped to virtual elements.
Live-action virtual reality games keep the virtual and real worlds
superimposed, requiring players to physically move in the environment and to
use different interaction paradigms (such as tangible and embodied interaction)
to complete game activities. This setup enables the players to touch physical
architectural elements (such as walls) and other objects, "feeling" the game
stage. Players have free movement and may interact with physical objects placed
in the game stage, implicitly and explicitly. Live-action virtual reality games
differ from similar game concepts because they sense and use contextual
information to create unpredictable game experiences, giving rise to emergent
gameplay.
|
[
{
"version": "v1",
"created": "Thu, 7 Jan 2016 19:30:37 GMT"
}
] | 2016-05-26T00:00:00 |
[
[
"Valente",
"Luis",
""
],
[
"Clua",
"Esteban",
""
],
[
"Silva",
"Alexandre Ribeiro",
""
],
[
"Feijó",
"Bruno",
""
]
] |
new_dataset
| 0.995672 |
1604.05921
|
Alexandre De Siqueira
|
Alexandre Fioravante de Siqueira and Fl\'avio Camargo Cabrera and
Wagner Massayuki Nakasuga and Aylton Pagamisse and Aldo Eloizo Job
|
Jansen-MIDAS: a multi-level photomicrograph segmentation software based
on isotropic undecimated wavelets
|
arXiv version: 25 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image segmentation, the process of separating the elements within an image,
is frequently used for obtaining information from photomicrographs. However,
segmentation methods should be used with reservations: incorrect segmentation
can mislead when interpreting regions of interest (ROI), thus decreasing the
success rate of additional procedures. Multi-Level Starlet Segmentation (MLSS)
and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed to address
the photomicrograph segmentation deficiency on general tools. These methods
gave rise to Jansen-MIDAS, an open-source software which a scientist can use to
obtain a multi-level threshold segmentation of his/her photomicrographs. This
software is presented in two versions: a text-based version, for GNU Octave,
and a graphical user interface (GUI) version, for MathWorks MATLAB. It can be
used to process several types of images, becoming a reliable alternative for
the scientist.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 12:40:50 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 10:21:28 GMT"
}
] | 2016-05-26T00:00:00 |
[
[
"de Siqueira",
"Alexandre Fioravante",
""
],
[
"Cabrera",
"Flávio Camargo",
""
],
[
"Nakasuga",
"Wagner Massayuki",
""
],
[
"Pagamisse",
"Aylton",
""
],
[
"Job",
"Aldo Eloizo",
""
]
] |
new_dataset
| 0.998066 |
1605.07512
|
Xiang Sun
|
Xiang Sun and Nirwan Ansari
|
Green Cloudlet Network: A Distributed Green Mobile Cloud Network
|
accepted for publication in IEEE Network on March 29, 2016
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article introduces a Green Cloudlet Network (GCN) architecture in the
context of mobile cloud computing. The proposed architecture is aimed at
providing seamless connectivity and low End-to-End (E2E) delay between a User Equipment (UE)
and its Avatar (its software clone) in the cloudlets to facilitate the
application workload offloading process. Furthermore, a Software Defined
Networking (SDN) based core network is introduced in the GCN architecture by
replacing the traditional Evolved Packet Core (EPC) in the LTE network in order
to provide efficient communications connections between different end points.
Cloudlet Network File System (CNFS) is designed based on the proposed
architecture in order to protect Avatars' dataset against hardware failure and
improve the Avatars' performance in terms of data access latency. Moreover,
green energy supplement is proposed in the architecture in order to reduce the
extra Operational Expenditure (OPEX) and CO2 footprint incurred by running the
distributed cloudlets. Owing to the temporal and spatial dynamics of both the
green energy generation and energy demands of Green Cloudlet Systems (GCSs),
designing an optimal green energy management strategy based on the
characteristics of the green energy generation and the energy demands of eNBs
and cloudlets to minimize the on-grid energy consumption is critical to the
cloudlet provider.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2016 15:51:27 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2016 02:50:17 GMT"
}
] | 2016-05-26T00:00:00 |
[
[
"Sun",
"Xiang",
""
],
[
"Ansari",
"Nirwan",
""
]
] |
new_dataset
| 0.998916 |
1605.07734
|
James McCauley
|
James McCauley, Zhi Liu, Aurojit Panda, Teemu Koponen, Barath
Raghavan, Jennifer Rexford and Scott Shenker
|
Recursive SDN for Carrier Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Control planes for global carrier networks should be programmable (so that
new functionality can be easily introduced) and scalable (so they can handle
the numerical scale and geographic scope of these networks). Neither
traditional control planes nor new SDN-based control planes meet both of these
goals. In this paper, we propose a framework for recursive routing computations
that combines the best of SDN (programmability) and traditional networks
(scalability through hierarchy) to achieve these two desired properties.
Through simulation on graphs of up to 10,000 nodes, we evaluate our design's
ability to support a variety of routing and traffic engineering solutions,
while incorporating a fast failure recovery mechanism.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2016 05:02:52 GMT"
}
] | 2016-05-26T00:00:00 |
[
[
"McCauley",
"James",
""
],
[
"Liu",
"Zhi",
""
],
[
"Panda",
"Aurojit",
""
],
[
"Koponen",
"Teemu",
""
],
[
"Raghavan",
"Barath",
""
],
[
"Rexford",
"Jennifer",
""
],
[
"Shenker",
"Scott",
""
]
] |
new_dataset
| 0.993589 |
1301.4981
|
Florian Deloup L
|
Guillaume Bonfante and Florian Deloup
|
The genus of regular languages
|
36 pages, about 30 pdf figures; table of contents and new references
added; pages numbered; other minor changes; email and addresses of authors
added; new example and figure added; improvement of the upper bound in the
main theorem
| null |
10.1017/S0960129516000037
| null |
cs.FL cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The article defines and studies the genus of finite state deterministic
automata (FSA) and regular languages. Indeed, a FSA can be seen as a graph for
which the notion of genus arises. At the same time, a FSA has a semantics via
its underlying language. It is then natural to make a connection between the
languages and the notion of genus. After we introduce and justify the
notion of the genus for regular languages, the following questions are
addressed. First, depending on the size of the alphabet, we provide upper and
lower bounds on the genus of regular languages: we show that under a
relatively generic condition on the alphabet and the geometry of the automata,
the genus grows at least linearly in terms of the size of the automata. Second,
we show that the topological cost of the powerset determinization procedure is
exponential. Third, we prove that the notion of minimization is orthogonal to
the notion of genus. Fourth, we build regular languages of arbitrary large
genus: the notion of genus defines a proper hierarchy of regular languages.
|
[
{
"version": "v1",
"created": "Mon, 21 Jan 2013 20:50:48 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Feb 2013 21:39:29 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Feb 2013 17:18:05 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Feb 2013 12:43:51 GMT"
},
{
"version": "v5",
"created": "Wed, 6 Nov 2013 16:34:55 GMT"
},
{
"version": "v6",
"created": "Wed, 23 Apr 2014 07:10:00 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Bonfante",
"Guillaume",
""
],
[
"Deloup",
"Florian",
""
]
] |
new_dataset
| 0.998239 |
1505.03065
|
Hamidreza Aghasi
|
Hamidreza Aghasi, Rouhollah Mousavi Iraei, Azad Naeemi and Ehsan
Afshari
|
Smart Detector Cell: A Scalable All-Spin Circuit for Low Power
Non-Boolean Pattern Recognition
|
This article is accepted to appear in IEEE Transactions on
Nanotechnology
| null |
10.1109/TNANO.2016.2530779
| null |
cs.ET cond-mat.mes-hall
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new circuit for non-Boolean recognition of binary images.
Employing all-spin logic (ASL) devices, we design logic comparators and
non-Boolean decision blocks for compact and efficient computation. By
manipulation of fan-in number in different stages of the circuit, the structure
can be extended for larger training sets or larger images. Operating based on
the mainly similarity idea, the system is capable of constructing a mean image
and compare it with a separate input image within a short decision time. Taking
advantage of the non-volatility of ASL devices, the proposed circuit is capable
of hybrid memory/logic operation. Compared with existing CMOS pattern
recognition circuits, this work achieves a smaller footprint, lower power
consumption, faster decision time and a lower operational voltage. To the best
of our knowledge, this is the first fully spin-based complete pattern
recognition circuit demonstrated using spintronic devices.
|
[
{
"version": "v1",
"created": "Sun, 10 May 2015 07:42:14 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Dec 2015 05:01:58 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Feb 2016 16:34:45 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Aghasi",
"Hamidreza",
""
],
[
"Iraei",
"Rouhollah Mousavi",
""
],
[
"Naeemi",
"Azad",
""
],
[
"Afshari",
"Ehsan",
""
]
] |
new_dataset
| 0.989048 |
1511.07053
|
Francesco Visin
|
Francesco Visin, Marco Ciccone, Adriana Romero, Kyle Kastner,
Kyunghyun Cho, Yoshua Bengio, Matteo Matteucci, Aaron Courville
|
ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation
|
In CVPR Deep Vision Workshop, 2016
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a structured prediction architecture, which exploits the local
generic features extracted by Convolutional Neural Networks and the capacity of
Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed
architecture, called ReSeg, is based on the recently introduced ReNet model for
image classification. We modify and extend it to perform the more challenging
task of semantic segmentation. Each ReNet layer is composed of four RNN that
sweep the image horizontally and vertically in both directions, encoding
patches or activations, and providing relevant global information. Moreover,
ReNet layers are stacked on top of pre-trained convolutional layers, benefiting
from generic local features. Upsampling layers follow ReNet layers to recover
the original image resolution in the final predictions. The proposed ReSeg
architecture is efficient, flexible and suitable for a variety of semantic
segmentation tasks. We evaluate ReSeg on several widely-used semantic
segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid; achieving
state-of-the-art performance. Results show that ReSeg can act as a suitable
architecture for semantic segmentation tasks, and may have further applications
in other structured prediction problems. The source code and model
hyperparameters are available on https://github.com/fvisin/reseg.
|
[
{
"version": "v1",
"created": "Sun, 22 Nov 2015 19:25:27 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jan 2016 14:41:56 GMT"
},
{
"version": "v3",
"created": "Tue, 24 May 2016 15:55:41 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Visin",
"Francesco",
""
],
[
"Ciccone",
"Marco",
""
],
[
"Romero",
"Adriana",
""
],
[
"Kastner",
"Kyle",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Matteucci",
"Matteo",
""
],
[
"Courville",
"Aaron",
""
]
] |
new_dataset
| 0.975223 |
1605.07008
|
Sebastian B\"ock
|
Sebastian B\"ock, Filip Korzeniowski, Jan Schl\"uter, Florian Krebs,
Gerhard Widmer
|
madmom: a new Python Audio and Music Signal Processing Library
| null | null | null | null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present madmom, an open-source audio processing and music
information retrieval (MIR) library written in Python. madmom features a
concise, NumPy-compatible, object oriented design with simple calling
conventions and sensible default values for all parameters, which facilitates
fast prototyping of MIR applications. Prototypes can be seamlessly converted
into callable processing pipelines through madmom's concept of Processors,
callable objects that run transparently on multiple cores. Processors can also
be serialised, saved, and re-run to allow results to be easily reproduced
anywhere. Apart from low-level audio processing, madmom puts emphasis on
musically meaningful high-level features. Many of these incorporate machine
learning techniques, and madmom provides a module that implements some methods
commonly used in MIR, such as hidden Markov models and neural networks.
Additionally, madmom comes with several state-of-the-art MIR algorithms for
onset detection, beat, downbeat and meter tracking, tempo estimation, and piano
transcription. These can easily be incorporated into bigger MIR systems or run
as stand-alone programs.
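A hedged usage sketch of the Processor concept, following madmom's documented
beat-tracking pipeline; module paths can differ between versions and
'audio.wav' is a placeholder input file.

```python
from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

activations = RNNBeatProcessor()('audio.wav')            # neural-network beat activation function
beats = DBNBeatTrackingProcessor(fps=100)(activations)   # decode beat times with a dynamic Bayesian network
print(beats)                                             # beat positions in seconds
```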
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 13:29:09 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Böck",
"Sebastian",
""
],
[
"Korzeniowski",
"Filip",
""
],
[
"Schlüter",
"Jan",
""
],
[
"Krebs",
"Florian",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
new_dataset
| 0.999772 |
1605.07083
|
Michele Ciavotta Dr.
|
Michele Ciavotta, Eugenio Gianniti, Danilo Ardagna
|
D-SPACE4Cloud: A Design Tool for Big Data Applications
| null | null | null | null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The last years have seen a steep rise in data generation worldwide, with the
development and widespread adoption of several software projects targeting the
Big Data paradigm. Many companies currently engage in Big Data analytics as
part of their core business activities, nonetheless there are no tools and
techniques to support the design of the underlying hardware configuration
backing such systems. In particular, the focus in this report is set on Cloud
deployed clusters, which represent a cost-effective alternative to on premises
installations. We propose a novel tool implementing a battery of optimization
and prediction techniques integrated so as to efficiently assess several
alternative resource configurations, in order to determine the minimum cost
cluster deployment satisfying QoS constraints. Further, the experimental
campaign conducted on real systems shows the validity and relevance of the
proposed method.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 16:37:54 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2016 17:07:58 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Ciavotta",
"Michele",
""
],
[
"Gianniti",
"Eugenio",
""
],
[
"Ardagna",
"Danilo",
""
]
] |
new_dataset
| 0.998485 |
1605.07167
|
Silvia Puglisi
|
Silvia Puglisi, David Rebollo-Monedero and Jordi Forn\'e
|
On Web User Tracking: How Third-Party Http Requests Track Users'
Browsing Patterns for Personalised Advertising
|
arXiv admin note: substantial text overlap with arXiv:1605.06537
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On today's Web, users trade access to their private data for content and
services. Advertising sustains the business model of many websites and
applications. Efficient and successful advertising relies on predicting users'
actions and tastes to suggest a range of products to buy. It follows that,
while surfing the Web users leave traces regarding their identity in the form
of activity patterns and unstructured data. We analyse how advertising networks
build user footprints and how the suggested advertising reacts to changes in
the user behaviour.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 21:22:48 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Puglisi",
"Silvia",
""
],
[
"Rebollo-Monedero",
"David",
""
],
[
"Forné",
"Jordi",
""
]
] |
new_dataset
| 0.9558 |
1605.07316
|
Jonathan Cacace Dr
|
Jonathan Cacace, Alberto Finzi and Vincenzo Lippiello
|
Multimodal Interaction with Multiple Co-located Drones in Search and
Rescue Missions
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a multimodal interaction framework suitable for a human rescuer
that operates in proximity with a set of co-located drones during search
missions. This work is framed in the context of the SHERPA project whose goal
is to develop a mixed ground and aerial robotic platform to support search and
rescue activities in a real-world alpine scenario. Differently from typical
human-drone interaction settings, here the operator is not fully dedicated to
the drones, but involved in search and rescue tasks, hence only able to provide
sparse, incomplete, although high-value, instructions to the robots. This
operative scenario requires a human-interaction framework that supports
multimodal communication along with an effective and natural mixed-initiative
interaction between the human and the robots. In this work, we illustrate the
domain and the proposed multimodal interaction framework discussing the system
at work in a simulated case study.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2016 07:00:39 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Cacace",
"Jonathan",
""
],
[
"Finzi",
"Alberto",
""
],
[
"Lippiello",
"Vincenzo",
""
]
] |
new_dataset
| 0.992633 |
1605.07343
|
Sergio Consoli
|
Diego Reforgiato Recupero, Mario Castronovo, Sergio Consoli, Tarcisio
Costanzo, Aldo Gangemi, Luigi Grasso, Giorgia Lodi, Gianluca Merendino,
Misael Mongiov\`i, Valentina Presutti, Salvatore Davide Rapisarda, Salvo
Rosa, Emanuele Spampinato
|
An Innovative, Open, Interoperable Citizen Engagement Cloud Platform for
Smart Government and Users' Interaction
|
23 pages, 7 figures, journal paper
|
Journal of the Knowledge Economy, 2016, 7(2):388-412
|
10.1007/s13132-016-0361-0
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces an open, interoperable, and cloud-computing-based
citizen engagement platform for the management of administrative processes of
public administrations, which also increases the engagement of citizens. The
citizen engagement platform is the outcome of a 3-year Italian national project
called PRISMA (Interoperable cloud platforms for smart government). The aim of
the project is to constitute a new model of digital ecosystem that can support
and enable new methods of interaction among public administrations, citizens,
companies, and other stakeholders surrounding cities. The platform has been
defined by the media as a flexible (enabling the addition of any kind of
application or service) and open (enabling access to open services) Italian
"cloud" that allows public administrations to access to a vast knowledge base
represented as linked open data to be reused by a stakeholder community with
the aim of developing new applications ("Cloud Apps") tailored to the specific
needs of citizens. The platform has been used by Catania and Syracuse
municipalities, two of the main cities of southern Italy, located in the
Sicilian region. The fully adoption of the platform is rapidly spreading around
the whole region (local developers have already used available application
programming interfaces (APIs) to create additional services for citizens and
administrations) to such an extent that other provinces of Sicily and Italy in
general expressed their interest for its usage. The platform is available
online and, as mentioned above, is open source and provides APIs for full
exploitation.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2016 09:11:27 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Recupero",
"Diego Reforgiato",
""
],
[
"Castronovo",
"Mario",
""
],
[
"Consoli",
"Sergio",
""
],
[
"Costanzo",
"Tarcisio",
""
],
[
"Gangemi",
"Aldo",
""
],
[
"Grasso",
"Luigi",
""
],
[
"Lodi",
"Giorgia",
""
],
[
"Merendino",
"Gianluca",
""
],
[
"Mongiovì",
"Misael",
""
],
[
"Presutti",
"Valentina",
""
],
[
"Rapisarda",
"Salvatore Davide",
""
],
[
"Rosa",
"Salvo",
""
],
[
"Spampinato",
"Emanuele",
""
]
] |
new_dataset
| 0.9979 |
1605.07363
|
Apratim Bhattacharyya
|
Apratim Bhattacharyya, Mateusz Malinowski, Mario Fritz
|
Spatio-Temporal Image Boundary Extrapolation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Boundary prediction in images as well as video has been a very active topic
of research and organizing visual information into boundaries and segments is
believed to be a corner stone of visual perception. While prior work has
focused on predicting boundaries for observed frames, our work aims at
predicting boundaries of future unobserved frames. This requires our model to
learn about the fate of boundaries and extrapolate motion patterns. We
experiment on an established real-world video segmentation dataset, which provides
a testbed for this new task. We show for the first time spatio-temporal
boundary extrapolation in this challenging scenario. Furthermore, we show
long-term prediction of boundaries in situations where the motion is governed
by the laws of physics. We successfully predict boundaries in a billiard
scenario without any assumptions of a strong parametric model or any object
notion. We argue that our model has, with minimalistic model assumptions, derived
a notion of 'intuitive physics' that can be applied to novel scenes.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2016 10:22:33 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Bhattacharyya",
"Apratim",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Fritz",
"Mario",
""
]
] |
new_dataset
| 0.999567 |
1605.07369
|
Ganesh Sundaramoorthi
|
Dong Lao and Ganesh Sundaramoorthi
|
Quickest Moving Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a general framework and method for simultaneous detection and
segmentation of an object in a video that moves (or comes into view of the
camera) at some unknown time in the video. The method is an online approach
based on motion segmentation, and it operates under dynamic backgrounds caused
by a moving camera or moving nuisances. The goal of the method is to detect and
segment the object as soon as it moves. Due to stochastic variability in the
video and unreliability of the motion signal, several frames are needed to
reliably detect the object. The method is designed to detect and segment with
minimum delay subject to a constraint on the false alarm rate. The method is
derived as a problem of Quickest Change Detection. Experiments on a dataset
show the effectiveness of our method in minimizing detection delay subject to
false alarm constraints.
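The flavour of the decision rule can be illustrated with a generic CUSUM
quickest-change-detection sketch on a scalar per-frame motion score; this
assumes simple Gaussian pre/post-change models and is not the authors'
segmentation-based statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic motion scores: the object starts moving at frame 60.
scores = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])

mu0, mu1, sigma = 0.0, 1.5, 1.0
# Log-likelihood ratio of post-change vs pre-change Gaussian models for each frame.
llr = (scores - mu0) ** 2 / (2 * sigma ** 2) - (scores - mu1) ** 2 / (2 * sigma ** 2)
threshold = 8.0                          # trades detection delay against false alarm rate

s, alarm = 0.0, None
for t, l in enumerate(llr):
    s = max(0.0, s + l)                  # CUSUM statistic
    if s > threshold:
        alarm = t
        break
print(alarm)                             # frame at which the change (motion onset) is declared
```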
|
[
{
"version": "v1",
"created": "Tue, 24 May 2016 10:40:13 GMT"
}
] | 2016-05-25T00:00:00 |
[
[
"Lao",
"Dong",
""
],
[
"Sundaramoorthi",
"Ganesh",
""
]
] |
new_dataset
| 0.970166 |
1405.5919
|
Szymon Grabowski
|
Szymon Grabowski, Marcin Raniszewski
|
Two simple full-text indexes based on the suffix array
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose two suffix array inspired full-text indexes. One, called SA-hash,
augments the suffix array with a hash table to speed up pattern searches due to
significantly narrowed search interval before the binary search phase. The
other, called FBCSA, is a compact data structure, similar to M{\"a}kinen's
compact suffix array, but working on fixed sized blocks. Experiments on the
Pizza~\&~Chili 200\,MB datasets show that SA-hash is about 2--3 times faster in
pattern searches (counts) than the standard suffix array, for the price of
requiring $0.2n-1.1n$ bytes of extra space, where $n$ is the text length, and
setting a minimum pattern length. FBCSA is relatively fast in single cell
accesses (a few times faster than related indexes at about the same or better
compression), but not competitive if many consecutive cells are to be
extracted. Still, for the task of extracting, e.g., 10 successive cells its
time-space relation remains attractive.
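The SA-hash idea — pairing a plain suffix array with a hash table over fixed-length pattern prefixes so that the binary search runs over a much narrower interval — can be sketched as follows; this is an illustrative reconstruction rather than the authors' implementation, and the prefix length k is an arbitrary choice.

```python
from bisect import bisect_left, bisect_right
from collections import defaultdict

def build_sa_hash(text, k):
    """Suffix array plus a hash table mapping each length-k suffix prefix
    to its (start, end) interval in the suffix array."""
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    buckets = defaultdict(lambda: [len(sa), 0])
    for rank, pos in enumerate(sa):
        lo_hi = buckets[text[pos:pos + k]]
        lo_hi[0] = min(lo_hi[0], rank)
        lo_hi[1] = max(lo_hi[1], rank + 1)
    return sa, dict(buckets)

def count_occurrences(text, sa, buckets, k, pattern):
    """Count pattern occurrences; patterns shorter than k are rejected,
    mirroring the minimum-pattern-length restriction mentioned above."""
    if len(pattern) < k:
        raise ValueError("pattern must be at least k characters long")
    lo, hi = buckets.get(pattern[:k], (0, 0))
    # Binary search only inside the narrowed interval [lo, hi).
    keys = [text[i:i + len(pattern)] for i in sa[lo:hi]]
    return bisect_right(keys, pattern) - bisect_left(keys, pattern)

text = "abracadabra"
sa, buckets = build_sa_hash(text, k=2)
print(count_occurrences(text, sa, buckets, 2, "abra"))  # -> 2
```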
|
[
{
"version": "v1",
"created": "Thu, 22 May 2014 21:55:00 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2016 17:04:14 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Grabowski",
"Szymon",
""
],
[
"Raniszewski",
"Marcin",
""
]
] |
new_dataset
| 0.99877 |
1508.05488
|
Gang Mei
|
Gang Mei
|
CudaChain: A Practical GPU-accelerated 2D Convex Hull Algorithm
| null |
SpringerPlus 2016:2284
|
10.1186/s40064-016-2284-4
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a practical GPU-accelerated convex hull algorithm and a
novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The
proposed algorithm consists of two stages: (1) two rounds of preprocessing
performed on the GPU and (2) the finalization of calculating the expected
convex hull on the CPU. We first discard the interior points that lie inside
a quadrilateral formed by four extreme points, and then distribute the
remaining points into several (typically four) sub-regions. For each subset of
points, we first sort them in parallel, then perform the second round of
discarding using SPA, and finally form a simple chain for the current remaining
points. A simple polygon can be easily generated by directly connecting all the
chains in the sub-regions. Finally, we obtain the expected convex hull of the input
points by calculating the convex hull of the simple polygon. We use the library
Thrust to realize the parallel sorting, reduction, and partitioning for better
efficiency and simplicity. Experimental results show that our algorithm
achieves 5x ~ 6x speedups over the Qhull implementation for 20M points. Thus,
this algorithm is competitive in practical applications owing to its simplicity and
satisfactory efficiency.
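The first round of discarding can be illustrated with a plain CPU/NumPy sketch of the interior-point test against the quadrilateral of extreme points; this shows only the preprocessing idea, not the GPU/Thrust implementation described above.

```python
import numpy as np

def discard_interior(points):
    """Drop every point strictly inside the quadrilateral spanned by the four
    extreme points (leftmost, bottommost, rightmost, topmost); such points
    cannot lie on the convex hull."""
    pts = np.asarray(points, dtype=float)
    quad = np.array([pts[pts[:, 0].argmin()],   # leftmost
                     pts[pts[:, 1].argmin()],   # bottommost
                     pts[pts[:, 0].argmax()],   # rightmost
                     pts[pts[:, 1].argmax()]])  # topmost (counter-clockwise)
    keep = np.zeros(len(pts), dtype=bool)
    for a, b in zip(quad, np.roll(quad, -1, axis=0)):
        # cross <= 0: point is on or to the right of edge a->b, i.e. not
        # strictly inside the counter-clockwise quadrilateral, so keep it.
        cross = (b[0] - a[0]) * (pts[:, 1] - a[1]) - (b[1] - a[1]) * (pts[:, 0] - a[0])
        keep |= cross <= 0
    return pts[keep]

rng = np.random.default_rng(1)
cloud = rng.random((10000, 2))
print(len(discard_interior(cloud)))  # typically far fewer than 10000 points
```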
|
[
{
"version": "v1",
"created": "Sat, 22 Aug 2015 09:32:11 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Mei",
"Gang",
""
]
] |
new_dataset
| 0.998821 |
1605.06536
|
Silvia Puglisi
|
Silvia Puglisi, Angel Torres Moreira, Gerard Marrugat Torregrosa,
Monica Aguilar Igartua, and Jordi Forn\'e
|
MobilitApp: Analysing mobility data of citizens in the metropolitan area
of Barcelona
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MobilitApp is a platform designed to provide smart mobility services in urban
areas. It is designed to help citizens and transport authorities alike.
Citizens will be able to access the MobilitApp mobile application and decide
their optimal transportation strategy by visualising their usual routes, their
carbon footprint, receiving tips, analytics and general mobility information,
such as traffic and incident alerts. Transport authorities and service
providers will be able to access information about the mobility pattern of
citizens to offer their best services, improve costs and planning. The
MobilitApp client runs on Android devices and records synchronously, while
running in the background, periodic location updates from its users. The
information obtained is processed and analysed to understand the mobility
patterns of our users in the city of Barcelona, Spain.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 20:46:02 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Puglisi",
"Silvia",
""
],
[
"Moreira",
"Angel Torres",
""
],
[
"Torregrosa",
"Gerard Marrugat",
""
],
[
"Igartua",
"Monica Aguilar",
""
],
[
"Forné",
"Jordi",
""
]
] |
new_dataset
| 0.999682 |
1605.06537
|
Silvia Puglisi
|
Silvia Puglisi, David Rebollo-Monedero, and Jordi Forn\'e
|
You never surf alone. Ubiquitous tracking of users' browsing habits
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the early age of the internet users enjoyed a large level of anonymity. At
the time web pages were just hypertext documents; almost no personalisation of
the user experience was offered. The Web today has evolved into a worldwide
distributed system following specific architectural paradigms. On the web now,
an enormous quantity of user-generated data is shared and consumed by a network
of applications and services, reasoning upon users' expressed preferences and
their social and physical connections. Advertising networks follow users'
browsing habits while they surf the web, continuously collecting their traces
and surfing patterns. We analyse how user tracking happens on the web by
measuring users' online footprint and estimating how quickly advertising
networks are able to profile users by their browsing habits.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 20:49:54 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Puglisi",
"Silvia",
""
],
[
"Rebollo-Monedero",
"David",
""
],
[
"Forné",
"Jordi",
""
]
] |
new_dataset
| 0.994005 |
1605.06710
|
Agostinho Rosa
|
Nuno Ramos, Sergio Salgado and Agostinho C Rosa
|
Chess Player by Co-Evolutionary Algorithm
|
8 pages, 11 figures and 12 tables
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A co-evolutionary algorithm (CA) based chess player is presented.
Implementation details of the algorithm, namely coding, population, and variation
operators, are described. The alpha-beta or minimax-like behaviour of the
player is achieved through two competitive or cooperative populations. Special
attention is given to the fitness function evaluation (the heart of the
solution). Test results for algorithm vs. algorithm and algorithm vs. human play
are provided.
|
[
{
"version": "v1",
"created": "Sat, 21 May 2016 23:45:38 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Ramos",
"Nuno",
""
],
[
"Salgado",
"Sergio",
""
],
[
"Rosa",
"Agostinho C",
""
]
] |
new_dataset
| 0.973999 |
1605.06778
|
Maximilian Schmitt
|
Maximilian Schmitt and Bj\"orn W. Schuller
|
openXBOW - Introducing the Passau Open-Source Crossmodal Bag-of-Words
Toolkit
|
9 pages, 1 figure, pre-print
| null | null | null |
cs.CV cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce openXBOW, an open-source toolkit for the generation of
bag-of-words (BoW) representations from multimodal input. In the BoW principle,
word histograms were first used as features in document classification, but the
idea can easily be adapted to, e.g., acoustic or visual low-level descriptors by
introducing a prior vector quantisation step. The openXBOW
toolkit supports arbitrary numeric input features and text input and
concatenates the computed sub-bags into a final bag. It provides a variety of
extensions and options. To our knowledge, openXBOW is the first publicly
available toolkit for the generation of crossmodal bags-of-words. The
capabilities of the tool are exemplified in two sample scenarios:
time-continuous speech-based emotion recognition and sentiment analysis in
tweets where improved results over other feature representation forms were
observed.
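The underlying bag-of-words step for numeric descriptors — vector quantisation against a codebook followed by a histogram of assignments — can be sketched with scikit-learn; this is not the openXBOW Java tool itself, and the codebook size and data below are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_words(descriptor_sets, codebook_size=16, random_state=0):
    """Vector-quantise frame-level numeric descriptors (e.g. acoustic LLDs)
    and represent each instance as a normalised histogram over codewords."""
    all_frames = np.vstack(descriptor_sets)
    codebook = KMeans(n_clusters=codebook_size, n_init=10,
                      random_state=random_state).fit(all_frames)
    bags = []
    for frames in descriptor_sets:
        assignments = codebook.predict(frames)
        hist = np.bincount(assignments, minlength=codebook_size).astype(float)
        bags.append(hist / hist.sum())
    return np.array(bags)

# Hypothetical data: 5 instances, each with 100 frames of 8-dim descriptors.
rng = np.random.default_rng(0)
instances = [rng.normal(size=(100, 8)) for _ in range(5)]
print(bag_of_words(instances).shape)  # -> (5, 16)
```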
|
[
{
"version": "v1",
"created": "Sun, 22 May 2016 12:14:55 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Schmitt",
"Maximilian",
""
],
[
"Schuller",
"Björn W.",
""
]
] |
new_dataset
| 0.987475 |
1605.06894
|
Chao Wang
|
Chao Wang, Qi Yu, Lei Gong, Xi Li, Yuan Xie, Xuehai Zhou
|
DLAU: A Scalable Deep Learning Accelerator Unit on FPGA
| null | null | null | null |
cs.LG cs.DC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As an emerging field of machine learning, deep learning shows excellent
ability in solving complex learning problems. However, network sizes are becoming
increasingly large due to the demands of practical applications, which poses a
significant challenge for constructing high-performance implementations of deep
learning neural networks. In order to improve performance while maintaining low
power cost, in this paper we design DLAU, a scalable accelerator architecture for
large-scale deep learning networks that uses FPGA as the hardware prototype. The
DLAU accelerator employs three pipelined processing units to improve throughput
and utilizes tiling techniques to exploit locality in deep learning applications.
Experimental results on a state-of-the-art Xilinx FPGA board demonstrate that
the DLAU accelerator achieves up to a 36.1x speedup compared to Intel Core2
processors, with a power consumption of 234 mW.
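The tiling idea can be illustrated in software with a tiled matrix-vector product; the actual accelerator implements this in FPGA pipelines, so the sketch below is purely conceptual and the tile size is arbitrary.

```python
import numpy as np

def tiled_matvec(weights, x, tile=64):
    """Tiled matrix-vector product: process `tile`-sized blocks of rows and
    columns so that each block of weights and inputs is reused while it is
    'on chip' (here, simply in cache)."""
    rows, cols = weights.shape
    y = np.zeros(rows)
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            block = weights[r:r + tile, c:c + tile]
            y[r:r + tile] += block @ x[c:c + tile]
    return y

rng = np.random.default_rng(0)
W, x = rng.normal(size=(256, 512)), rng.normal(size=512)
print(np.allclose(tiled_matvec(W, x), W @ x))  # -> True
```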
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 04:56:04 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Wang",
"Chao",
""
],
[
"Yu",
"Qi",
""
],
[
"Gong",
"Lei",
""
],
[
"Li",
"Xi",
""
],
[
"Xie",
"Yuan",
""
],
[
"Zhou",
"Xuehai",
""
]
] |
new_dataset
| 0.999466 |
1605.06903
|
Grigore Stamatescu
|
Grigore Stamatescu, Iulia Stamatescu, Nicoleta Arghira, Vasile
Calofir, Ioana Fagarasan
|
Building Cyber-Physical Energy Systems
|
4 pages, 6 figures
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The built environment, as a hallmark of modern society, has become one of the
key drivers of energy demand. This makes it a meaningful target for novel
paradigms, such as cyber-physical systems, with large-scale impact on both
primary energy consumption reduction and (micro-)grid stability. In a bottom-up
approach we analyze the drivers of CPS design,
deployment and adoption in smart buildings. This ranges from low-level embedded
and real time system challenges, instrumentation and control issues, up to ICT
security layers protecting information in a world of ubiquitous connectivity. A
modeling and predictive control framework is also discussed, with an outlook on
deployment for HVAC optimization in a new research facility on our campus.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 06:28:26 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Stamatescu",
"Grigore",
""
],
[
"Stamatescu",
"Iulia",
""
],
[
"Arghira",
"Nicoleta",
""
],
[
"Calofir",
"Vasile",
""
],
[
"Fagarasan",
"Ioana",
""
]
] |
new_dataset
| 0.99152 |
1605.06927
|
Mahdi Hajiaghayi
|
Mahdi Hajiaghayi, Hamid Jafarkhani
|
MDS Codes with Progressive Engagement Property for Cloud Storage Systems
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fast and efficient failure recovery is a new challenge for cloud storage
systems with a large number of storage nodes. A pivotal recovery metric upon
the failure of a storage node is repair bandwidth cost which refers to the
amount of data that must be downloaded for regenerating the lost data. Since
not all of the surviving nodes are always accessible, we introduce a
class of maximum distance separable (MDS) codes that can be re-used when the
number of selected nodes varies, yet yield close-to-optimal repair bandwidth.
Such codes provide flexibility in engaging more surviving nodes in favor of
reducing the repair bandwidth without redesigning the code structure and
changing the content of the existing nodes. We call this property of MDS codes
progressive engagement. This name comes from the fact that if a failure occurs,
it is shown that the best strategy is to incrementally engage the surviving
nodes according to their accessing cost (delay, number of hops, traffic load or
availability in general) until the repair-bandwidth or accessing cost
constraints are met. We argue that the existing MDS codes fail to satisfy the
progressive engagement property. We subsequently present a search algorithm to
find a new set of codes named rotation codes that has both progressive
engagement and MDS properties. Furthermore, we illustrate how the existing
permutation codes can provide progressive engagement by modifying the original
recovery scheme. Simulation results are presented to compare the repair
bandwidth performance of such codes when the number of participating nodes
varies as well as their speed of single failure recovery.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 08:16:28 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Hajiaghayi",
"Mahdi",
""
],
[
"Jafarkhani",
"Hamid",
""
]
] |
new_dataset
| 0.995062 |
1605.07026
|
Bappaditya Mandal
|
Bappaditya Mandal and Nizar Ouarti
|
Spontaneous vs. Posed smiles - can we tell the difference?
|
10 pages, 5 figures, 6 tables, International Conference on Computer
Vision and Image Processing (CVIP 2016)
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A smile is an irrefutable expression that shows the physical state of the mind
in both true and deceptive ways. Generally, it shows a happy state of mind;
however, `smiles' can be deceptive: for example, people can give a smile when
they feel happy and sometimes they might also give a smile (in a different way)
when they feel pity for others. This work aims to distinguish spontaneous
(felt) smile expressions from posed (deliberate) smiles by extracting and
analyzing both global (macro) motion of the face and subtle (micro) changes in
the facial expression features through both tracking a series of facial
fiducial markers as well as using dense optical flow. Specifically the eyes and
lips features are captured and used for analysis. It aims to automatically
classify all smiles into either `spontaneous' or `posed' categories, by using
support vector machines (SVM). Experimental results on a large database show
promising performance compared to other relevant methods.
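The final classification step — feeding per-clip motion features to a linear SVM — might look like the sketch below; the features are random placeholders for the eye and lip motion descriptors described above, so the resulting numbers are meaningless.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for eye/lip motion descriptors
# (e.g. optical-flow statistics per smile clip); labels: 1 = spontaneous.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)), rng.normal(0.7, 1.0, (50, 20))])
y = np.array([0] * 50 + [1] * 50)

clf = LinearSVC(C=1.0, max_iter=10000)
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy on toy data
```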
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 14:21:30 GMT"
}
] | 2016-05-24T00:00:00 |
[
[
"Mandal",
"Bappaditya",
""
],
[
"Ouarti",
"Nizar",
""
]
] |
new_dataset
| 0.997136 |
1506.07118
|
Moshe Sulamy
|
Yehuda Afek, Deborah M. Gordon, and Moshe Sulamy
|
Idle Ants Have a Role
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Using elementary distributed computing techniques we suggest an explanation
for two unexplained phenomena regarding ant colonies: (a) a substantial
number of ants in an ant colony are idle, and (b) the observed low
survivability of new ant colonies in nature. Ant colonies employ task
allocation, in which ants progress from one task to the other, to meet changing
demands introduced by the environment. Extending the biological task allocation
model given in [Pacala, Gordon and Godfray 1996] we present a distributed
algorithm which mimics the mechanism ants use to solve task allocation
efficiently in nature. Analyzing the time complexity of the algorithm reveals
an exponential gap on the time it takes an ant colony to satisfy a certain work
demand with and without idle ants. We provide an $O(\ln n)$ upper bound when a
constant fraction of the colony are idle ants, and a contrasting lower bound of
$\Omega(n)$ when there are no idle ants, where $n$ is the total number of ants
in the colony.
|
[
{
"version": "v1",
"created": "Tue, 23 Jun 2015 18:06:14 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 15:50:47 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Afek",
"Yehuda",
""
],
[
"Gordon",
"Deborah M.",
""
],
[
"Sulamy",
"Moshe",
""
]
] |
new_dataset
| 0.998978 |
1602.02261
|
Rodrigo Nogueira
|
Rodrigo Nogueira and Kyunghyun Cho
|
End-to-End Goal-Driven Web Navigation
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose goal-driven web navigation as a benchmark task for evaluating an
agent with abilities to understand natural language and plan on partially
observed environments. In this challenging task, an agent navigates through a
website, which is represented as a graph consisting of web pages as nodes and
hyperlinks as directed edges, to find a web page in which a query appears. The
agent is required to have sophisticated high-level reasoning based on natural
languages and efficient sequential decision-making capability to succeed. We
release a software tool, called WebNav, that automatically transforms a website
into this goal-driven web navigation task, and as an example, we make WikiNav,
a dataset constructed from the English Wikipedia. We extensively evaluate
different variants of neural net based artificial agents on WikiNav and observe
that the proposed goal-driven web navigation well reflects the advances in
models, making it a suitable benchmark for evaluating future progress.
Furthermore, we extend the WikiNav with question-answer pairs from Jeopardy!
and test the proposed agent based on recurrent neural networks against strong
inverted-index-based search engines. The artificial agents trained on WikiNav
outperform the engine-based approaches, demonstrating the capability of the
proposed goal-driven navigation as a good proxy for measuring the progress in
real-world tasks such as focused crawling and question-answering.
|
[
{
"version": "v1",
"created": "Sat, 6 Feb 2016 14:53:02 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 16:26:58 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Nogueira",
"Rodrigo",
""
],
[
"Cho",
"Kyunghyun",
""
]
] |
new_dataset
| 0.996519 |
1604.01595
|
Kazuyuki Asada
|
Kazuyuki Asada and Naoki Kobayashi
|
On Word and Frontier Languages of Unsafe Higher-Order Grammars
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Higher-order grammars are extensions of regular and context-free grammars,
where non-terminals may take parameters. They have been extensively studied in
the 1980s, and restudied recently in the context of model checking and program
verification. We show that the class of unsafe order-(n+1) word languages
coincides with the class of frontier languages of unsafe order-n tree
languages. We use intersection types for transforming an order-(n+1) word
grammar to a corresponding order-n tree grammar. The result has been proved for
safe languages by Damm in 1982, but it has been open for unsafe languages, to
our knowledge. Various known results on higher-order grammars can be obtained
as almost immediate corollaries of our result.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2016 12:47:52 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2016 11:49:15 GMT"
},
{
"version": "v3",
"created": "Fri, 20 May 2016 06:43:01 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Asada",
"Kazuyuki",
""
],
[
"Kobayashi",
"Naoki",
""
]
] |
new_dataset
| 0.96362 |
1605.05843
|
Zhixiong Niu
|
Zhixiong Niu, Hong Xu, Yongqiang Tian, Libin Liu, Peng Wang, Zhenhua
Li
|
Benchmarking NFV Software Dataplanes
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key enabling technology of NFV is software dataplane, which has attracted
much attention in both academia and industry recently. Yet, until now there has
been little understanding of its performance in practice. In this paper, we make
a benchmark measurement study of NFV software dataplanes in terms of packet
processing capability, one of the most fundamental and critical performance
metrics. Specifically, we compare two state-of-the-art open-source NFV
dataplanes, SoftNIC and ClickOS, using commodity 10GbE NICs under various
typical workloads. Our key observations are that (1) both dataplanes have
performance issues processing small (<=128B) packets; (2) it is not always best
to put all VMs of a service chain on one server due to the NUMA effect. We propose
resource allocation strategies to remedy the problems, including carefully
adding CPU cores and vNICs to VMs, and spreading VMs of a service chain to
separate servers. To fundamentally address these problems and scale their
performance, SoftNIC and ClickOS could improve the support for NIC queues and
multiple cores.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 08:21:15 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 03:25:39 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Niu",
"Zhixiong",
""
],
[
"Xu",
"Hong",
""
],
[
"Tian",
"Yongqiang",
""
],
[
"Liu",
"Libin",
""
],
[
"Wang",
"Peng",
""
],
[
"Li",
"Zhenhua",
""
]
] |
new_dataset
| 0.97195 |
1605.06177
|
David Hall
|
David Hall and Pietro Perona
|
Fine-Grained Classification of Pedestrians in Video: Benchmark and State
of the Art
|
CVPR 2015
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A video dataset that is designed to study fine-grained categorisation of
pedestrians is introduced. Pedestrians were recorded "in-the-wild" from a
moving vehicle. Annotations include bounding boxes, tracks, 14 keypoints with
occlusion information and the fine-grained categories of age (5 classes), sex
(2 classes), weight (3 classes) and clothing style (4 classes). There are a
total of 27,454 bounding box and pose labels across 4222 tracks. This dataset
is designed to train and test algorithms for fine-grained categorisation of
people; it is also useful for benchmarking tracking, detection and pose
estimation of pedestrians. State-of-the-art algorithms for fine-grained
classification and pose estimation were tested using the dataset and the
results are reported as a useful performance baseline.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 00:03:42 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Hall",
"David",
""
],
[
"Perona",
"Pietro",
""
]
] |
new_dataset
| 0.999758 |
1605.06216
|
Longjiang Qu
|
Kangquan Li, Longjiang Qu, Chao Li and Shaojing Fu
|
New Permutation Trinomials Constructed from Fractional Polynomials
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Permutation trinomials over finite fields constitute an active area of research due
to their simple algebraic form, additional extraordinary properties and their
wide applications in many areas of science and engineering. In the present
paper, six new classes of permutation trinomials over finite fields of even
characteristic are constructed from six fractional polynomials. Further, three
classes of permutation trinomials over finite fields of characteristic three
are presented. Unlike most of the known permutation trinomials, which have
fixed exponents, our results are general classes of permutation
trinomials with one parameter in the exponents. Finally, we propose a few
conjectures.
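Whether a candidate trinomial permutes a small finite field can be checked by brute force; the Python sketch below does this over GF(2^m), with exponents chosen purely for illustration and not taken from the paper.

```python
def gf2m_mul(a, b, m, poly):
    """Multiply in GF(2^m); `poly` is the reducing polynomial as a bitmask
    including the x^m term (e.g. 0b10011 for x^4 + x + 1)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> m:        # reduce as soon as the x^m bit appears
            a ^= poly
    return result

def gf2m_pow(a, e, m, poly):
    result = 1
    while e:
        if e & 1:
            result = gf2m_mul(result, a, m, poly)
        a = gf2m_mul(a, a, m, poly)
        e >>= 1
    return result

def is_permutation_trinomial(exps, m, poly):
    """Brute-force check that f(x) = x^a + x^b + x^c permutes GF(2^m)."""
    a, b, c = exps
    images = {gf2m_pow(x, a, m, poly) ^ gf2m_pow(x, b, m, poly)
              ^ gf2m_pow(x, c, m, poly) for x in range(2 ** m)}
    return len(images) == 2 ** m

# GF(2^4) with x^4 + x + 1; exponents (1, 2, 4) are an arbitrary example.
# x + x^2 + x^4 is a linearised polynomial with trivial kernel here, so True.
print(is_permutation_trinomial((1, 2, 4), 4, 0b10011))
```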
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 05:54:32 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Li",
"Kangquan",
""
],
[
"Qu",
"Longjiang",
""
],
[
"Li",
"Chao",
""
],
[
"Fu",
"Shaojing",
""
]
] |
new_dataset
| 0.995007 |
1605.06285
|
Stefano Salsano
|
Giuseppe Siracusano, Roberto Bifulco, Simon Kuenzer, Stefano Salsano,
Nicola Blefari Melazzi, Felipe Huici
|
On-the-Fly TCP Acceleration with Miniproxy
|
Extended version of paper accepted for ACM HotMiddlebox 2016
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
TCP proxies are basic building blocks for many advanced middleboxes. In this
paper we present Miniproxy, a TCP proxy built on top of a specialized
minimalistic cloud operating system. Miniproxy's connection handling
performance is comparable to that of full-fledged GNU/Linux TCP proxy
implementations, but its minimalistic footprint enables new use cases.
Specifically, Miniproxy requires as little as 6 MB to run and boots in tens of
milliseconds, enabling massive consolidation, on-the-fly instantiation and edge
cloud computing scenarios. We demonstrate the benefits of Miniproxy by
implementing and evaluating a TCP acceleration use case.
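A minimal TCP proxy loop of the kind Miniproxy specialises can be sketched in ordinary socket code; this is not the unikernel implementation described above, and the listening and backend addresses are placeholders.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def run_proxy(listen_addr, backend_addr):
    server = socket.create_server(listen_addr)
    while True:
        client, _ = server.accept()
        backend = socket.create_connection(backend_addr)
        # One thread per direction; a real proxy would use an event loop.
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    run_proxy(("0.0.0.0", 8080), ("127.0.0.1", 80))  # placeholder addresses
```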
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 10:55:40 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Siracusano",
"Giuseppe",
""
],
[
"Bifulco",
"Roberto",
""
],
[
"Kuenzer",
"Simon",
""
],
[
"Salsano",
"Stefano",
""
],
[
"Melazzi",
"Nicola Blefari",
""
],
[
"Huici",
"Felipe",
""
]
] |
new_dataset
| 0.970631 |
1605.06319
|
Nikola Milo\v{s}evi\'c MSc
|
Nikola Milosevic and Goran Nenadic
|
As Cool as a Cucumber: Towards a Corpus of Contemporary Similes in
Serbian
|
Phrase modelling, simile extraction, language resource building,
crowdsourcing
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Similes are natural language expressions used to compare unlikely things,
where the comparison is not taken literally. They are often used in everyday
communication and are an important part of cultural heritage. Having an
up-to-date corpus of similes is challenging, as they are constantly coined
and/or adapted to the contemporary times. In this paper we present a
methodology for semi-automated collection of similes from the world wide web
using text mining techniques. We expanded an existing corpus of traditional
similes (containing 333 similes) by collecting 446 additional expressions. We
also explore how crowdsourcing can be used to extract and curate new similes.
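The candidate-extraction step can be sketched with a regular expression over the canonical "as X as (a) Y" pattern; the pattern below is for English rather than Serbian and is purely illustrative.

```python
import re
from collections import Counter

SIMILE_PATTERN = re.compile(r"\bas\s+(\w+)\s+as\s+(?:an?\s+)?(\w+)", re.IGNORECASE)

def extract_similes(text):
    """Return candidate (adjective, vehicle) pairs of the form 'as X as (a) Y'."""
    return Counter((adj.lower(), noun.lower())
                   for adj, noun in SIMILE_PATTERN.findall(text))

sample = ("He stayed as cool as a cucumber, while his brother was "
          "as busy as a bee and as stubborn as a mule.")
print(extract_similes(sample))
```

In a real pipeline, the counted candidates would then be filtered and curated, e.g. by the crowdsourcing step mentioned above.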
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 12:20:27 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Milosevic",
"Nikola",
""
],
[
"Nenadic",
"Goran",
""
]
] |
new_dataset
| 0.999429 |
1605.06325
|
Xing Wei
|
Xing Wei, Qingxiong Yang, Yihong Gong, Ming-Hsuan Yang, Narendra Ahuja
|
Superpixel Hierarchy
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Superpixel segmentation is becoming ubiquitous in computer vision. In
practice, an object can either be represented by a number of segments in finer
levels of detail or included in a surrounding region at coarser levels of
detail, and thus a superpixel segmentation hierarchy is useful for applications
that require different levels of image segmentation detail depending on the
particular image objects segmented. Unfortunately, there is no method that can
generate all scales of superpixels accurately in real-time. As a result, a
simple yet effective algorithm named Super Hierarchy (SH) is proposed in this
paper. It is as accurate as the state-of-the-art but 1-2 orders of magnitude
faster. The proposed method can be directly integrated with recent efficient
edge detectors, such as structured forest edges, to significantly outperform
the state of the art in terms of segmentation accuracy. Quantitative and
qualitative evaluation on a number of computer vision applications was
conducted, demonstrating that the proposed method is the top performer.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 12:38:24 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Wei",
"Xing",
""
],
[
"Yang",
"Qingxiong",
""
],
[
"Gong",
"Yihong",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Ahuja",
"Narendra",
""
]
] |
new_dataset
| 0.991242 |
1605.06417
|
Yuan Jiang
|
Wei Shen, Yuan Jiang, Wenjing Gao, Dan Zeng, Xinggang Wang
|
Shape Recognition by Bag of Skeleton-associated Contour Parts
|
10 pages. Has been Accepted by Pattern Recognition Letters 2016
| null |
10.1007/978-3-662-45646-0_40
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Contour and skeleton are two complementary representations for shape
recognition. However, combining them in a principled way is nontrivial, as they
are generally abstracted by different structures (closed string vs graph),
respectively. This paper aims at addressing the shape recognition problem by
combining contour and skeleton according to the correspondence between them.
The correspondence provides a straightforward way to associate skeletal
information with a shape contour. More specifically, we propose a new shape
descriptor, named Skeleton-associated Shape Context (SSC), which captures the
features of a contour fragment associated with skeletal information. Benefiting
from this association, the proposed shape descriptor provides complementary
geometric information from both contour and skeleton parts, including the
spatial distribution and the thickness change along the shape part. To form a
meaningful shape feature vector for an overall shape, the Bag of Features
framework is applied to the SSC descriptors extracted from it. Finally, the
shape feature vector is fed into a linear SVM classifier to recognize the
shape. The encouraging experimental results demonstrate that the proposed way
to combine contour and skeleton is effective for shape recognition, achieving
state-of-the-art performance on several standard shape benchmarks.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 16:07:41 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Shen",
"Wei",
""
],
[
"Jiang",
"Yuan",
""
],
[
"Gao",
"Wenjing",
""
],
[
"Zeng",
"Dan",
""
],
[
"Wang",
"Xinggang",
""
]
] |
new_dataset
| 0.999637 |
1605.06424
|
Zeeshan Lakhani
|
Russell Brown, Zeeshan Lakhani, and Paul Place
|
Big(ger) Sets: decomposed delta CRDT Sets in Riak
|
PaPoC '16 Proceedings of the 2nd Workshop on the Principles and
Practice of Consistency for Distributed Data, Article No. 5, Publication
Date: 2016-04-18
| null |
10.1145/2911151.2911156
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CRDT[24] Sets as implemented in Riak[6] perform poorly for writes, both as
cardinality grows, and for sets larger than 500KB[25]. Riak users wish to
create high cardinality CRDT sets, and expect better than O(n) performance for
individual insert and remove operations. By decomposing a CRDT set on disk, and
employing delta-replication[2], we can achieve far better performance than just
delta replication alone: relative to the size of causal metadata, not the
cardinality of the set, and we can support sets that are 100s times the size of
Riak sets, while still providing the same level of consistency. There is a
trade-off in read performance but we expect it is mitigated by enabling queries
on sets.
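The delta-replication idea — shipping only the latest mutation rather than full set state — can be illustrated with a toy grow-only set; this is not Riak's decomposed big-set implementation, just the underlying CRDT principle.

```python
class DeltaGSet:
    """Toy grow-only set CRDT with delta mutations: `add` returns only the
    delta, which another replica can `merge` instead of exchanging full state."""

    def __init__(self, elements=None):
        self.elements = set(elements or ())

    def add(self, element):
        delta = DeltaGSet([element])
        self.elements |= delta.elements
        return delta                      # ship this, not the whole set

    def merge(self, other):
        self.elements |= other.elements   # join is set union: commutative,
                                          # associative and idempotent

replica_a, replica_b = DeltaGSet(), DeltaGSet()
delta = replica_a.add("x")
replica_b.merge(delta)                    # only the delta crosses the network
print(replica_b.elements)                 # -> {'x'}
```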
|
[
{
"version": "v1",
"created": "Fri, 20 May 2016 16:27:13 GMT"
}
] | 2016-05-23T00:00:00 |
[
[
"Brown",
"Russell",
""
],
[
"Lakhani",
"Zeeshan",
""
],
[
"Place",
"Paul",
""
]
] |
new_dataset
| 0.998045 |
1602.06456
|
Junil Choi
|
Junil Choi and Vutha Va and Nuria Gonzalez-Prelcic and Robert Daniels
and Chandra R. Bhat and Robert W. Heath Jr
|
Millimeter Wave Vehicular Communication to Support Massive Automotive
Sensing
|
7 pages, 5 figures, 1 table, submitted to IEEE Communications
Magazine
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As driving becomes more automated, vehicles are being equipped with more
sensors generating even higher data rates. Radars (RAdio Detection and Ranging)
are used for object detection, visual cameras as virtual mirrors, and LIDARs
(LIght Detection and Ranging) for generating high resolution depth associated
range maps, all to enhance the safety and efficiency of driving. Connected
vehicles can use wireless communication to exchange sensor data, allowing them
to enlarge their sensing range and improve automated driving functions.
Unfortunately, conventional technologies, such as dedicated short-range
communication (DSRC) and 4G cellular communication, do not support the
gigabit-per-second data rates that would be required for raw sensor data
exchange between vehicles. This paper makes the case that millimeter wave
(mmWave) communication is the only viable approach for high bandwidth connected
vehicles. The motivations and challenges associated with using mmWave for
vehicle-to-vehicle and vehicle-to-infrastructure applications are highlighted.
A high-level solution to one key challenge - the overhead of mmWave beam
training - is proposed. The critical feature of this solution is to leverage
information derived from the sensors or DSRC as side information for the mmWave
communication link configuration. Examples and simulation results show that the
beam alignment overhead can be reduced by using position information obtained
from DSRC.
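In the simplest reading, using position as side information for beam alignment means pointing the beam along the bearing between the two vehicles and searching only a few candidate beams around it; the sketch below is a geometric toy with a made-up codebook, not the protocol proposed above.

```python
import math

def candidate_beams(own_pos, peer_pos, codebook_size=64, window=2):
    """Given GPS-like positions (x, y) of two vehicles, return the indices of
    the few beams (out of a uniform azimuth codebook) around the line-of-sight
    bearing, instead of sweeping the whole codebook."""
    dx, dy = peer_pos[0] - own_pos[0], peer_pos[1] - own_pos[1]
    bearing = math.atan2(dy, dx) % (2 * math.pi)
    center = round(bearing / (2 * math.pi) * codebook_size) % codebook_size
    return [(center + k) % codebook_size for k in range(-window, window + 1)]

print(candidate_beams((0.0, 0.0), (30.0, 40.0)))  # bearing of roughly 53 degrees
```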
|
[
{
"version": "v1",
"created": "Sat, 20 Feb 2016 20:42:38 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2016 21:31:27 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"Choi",
"Junil",
""
],
[
"Va",
"Vutha",
""
],
[
"Gonzalez-Prelcic",
"Nuria",
""
],
[
"Daniels",
"Robert",
""
],
[
"Bhat",
"Chandra R.",
""
],
[
"Heath",
"Robert W.",
"Jr"
]
] |
new_dataset
| 0.999742 |
1605.05841
|
Muhammad Haris Mughees Mr.
|
Muhammad Haris Mughees, Zhiyun Qian, Zubair Shafiq, Karishma Dash, Pan
Hui
|
A First Look at Ad-block Detection: A New Arms Race on the Web
|
12 pages, 12 figures
| null |
10.1145/1235
| null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of ad-blockers is viewed as an economic threat by online publishers,
especially those who primarily rely on advertising to support their services.
To address this threat, publishers have started retaliating by employing
ad-block detectors, which scout for ad-blocker users and react to them by
restricting their content access and pushing them to whitelist the website or
disabling ad-blockers altogether. The clash between ad-blockers and ad-block
detectors has resulted in a new arms race on the web. In this paper, we present
the first systematic measurement and analysis of ad-block detection on the web.
We have designed and implemented a machine learning based technique to
automatically detect ad-block detection, and use it to study the deployment of
ad-block detectors on the Alexa top-100K websites. The approach is promising, with a
precision of 94.8% and recall of 93.1%. We characterize the spectrum of
different strategies used by websites for ad-block detection. We find that most
publishers use fairly simple passive approaches for ad-block detection.
However, we also note that a few websites use third-party services, e.g.
PageFair, for ad-block detection and response. The third-party services use
active deception and other sophisticated tactics to detect ad-blockers. We
also find that the third-party services can successfully circumvent ad-blockers
and display ads on publisher websites.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 08:07:22 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"Mughees",
"Muhammad Haris",
""
],
[
"Qian",
"Zhiyun",
""
],
[
"Shafiq",
"Zubair",
""
],
[
"Dash",
"Karishma",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.99852 |
1605.05863
|
Ran Tao
|
Ran Tao, Efstratios Gavves, Arnold W.M. Smeulders
|
Siamese Instance Search for Tracking
|
This paper is accepted to the IEEE Conference on Computer Vision and
Pattern Recognition, 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a tracker, which is radically different from
state-of-the-art trackers: we apply no model updating, no occlusion detection,
no combination of trackers, no geometric matching, and still deliver
state-of-the-art tracking performance, as demonstrated on the popular online
tracking benchmark (OTB) and six very challenging YouTube videos. The presented
tracker simply matches the initial patch of the target in the first frame with
candidates in a new frame and returns the most similar patch by a learned
matching function. The strength of the matching function comes from being
extensively trained generically, i.e., without any data of the target, using a
Siamese deep neural network, which we design for tracking. Once learned, the
matching function is used as is, without any adapting, to track previously
unseen targets. It turns out that the learned matching function is so powerful
that a simple tracker built upon it, coined Siamese INstance search Tracker,
SINT, which only uses the original observation of the target from the first
frame, suffices to reach state-of-the-art performance. Further, we show the
proposed tracker even allows for target re-identification after the target was
absent for a complete video shot.
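The matching step — comparing the initial target patch with candidate patches in a learned embedding space and returning the most similar one — can be sketched as follows; the `embed` function is a placeholder for the trained Siamese branch, which is not reproduced here.

```python
import numpy as np

def match_candidates(embed, target_patch, candidate_patches):
    """Return the index of the candidate patch most similar to the initial
    target patch under cosine similarity in the embedding space; `embed`
    stands in for one branch of the learned Siamese network."""
    t = embed(target_patch)
    t = t / (np.linalg.norm(t) + 1e-12)
    scores = []
    for patch in candidate_patches:
        c = embed(patch)
        scores.append(float(t @ (c / (np.linalg.norm(c) + 1e-12))))
    return int(np.argmax(scores)), scores

# Toy stand-in embedding: flatten the patch (a real tracker uses a deep CNN).
embed = lambda p: np.asarray(p, dtype=float).ravel()
target = np.ones((8, 8))
candidates = [np.zeros((8, 8)), np.ones((8, 8)) * 0.9, np.eye(8)]
print(match_candidates(embed, target, candidates)[0])  # -> 1
```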
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 09:24:40 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"Tao",
"Ran",
""
],
[
"Gavves",
"Efstratios",
""
],
[
"Smeulders",
"Arnold W. M.",
""
]
] |
new_dataset
| 0.990662 |
1605.05912
|
Kele Xu
|
Aurore Jaumard-Hakoun, Kele Xu, Pierre Roussel-Ragot, G\'erard
Dreyfus, Bruce Denby
|
Tongue contour extraction from ultrasound images based on deep neural
network
|
5 pages, 3 figures, published in The International Congress of
Phonetic Sciences, 2015
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studying tongue motion during speech using ultrasound is a standard
procedure, but automatic ultrasound image labelling remains a challenge, as
standard tongue shape extraction methods typically require human intervention.
This article presents a method based on deep neural networks to automatically
extract tongue contour from ultrasound images on a speech dataset. We use a
deep autoencoder trained to learn the relationship between an image and its
related contour, so that the model is able to automatically reconstruct
contours from the ultrasound image alone. In this paper, we use an automatic
labelling algorithm instead of time-consuming hand-labelling during the
training process, and estimate the performances of both automatic labelling and
contour extraction as compared to hand-labelling. Observed results show quality
scores comparable to the state of the art.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 12:20:40 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"Jaumard-Hakoun",
"Aurore",
""
],
[
"Xu",
"Kele",
""
],
[
"Roussel-Ragot",
"Pierre",
""
],
[
"Dreyfus",
"Gérard",
""
],
[
"Denby",
"Bruce",
""
]
] |
new_dataset
| 0.968715 |
1605.06083
|
Emiel van Miltenburg
|
Emiel van Miltenburg
|
Stereotyping and Bias in the Flickr30K Dataset
|
In: Proceedings of the Workshop on Multimodal Corpora (MMC-2016),
pages 1-4. Editors: Jens Edlund, Dirk Heylen and Patrizia Paggio
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An untested assumption behind the crowdsourced descriptions of the images in
the Flickr30K dataset (Young et al., 2014) is that they "focus only on the
information that can be obtained from the image alone" (Hodosh et al., 2013, p.
859). This paper presents some evidence against this assumption, and provides a
list of biases and unwarranted inferences that can be found in the Flickr30K
dataset. Finally, it considers methods to find examples of these, and discusses
how we should deal with stereotype-driven descriptions in future applications.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 19:17:23 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"van Miltenburg",
"Emiel",
""
]
] |
new_dataset
| 0.976994 |
1510.04165
|
Xueliang Li
|
Xueliang Li and John P. Gallagher
|
Fine-Grained Energy Modeling for the Source Code of a Mobile Application
|
10 pages
| null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Energy efficiency has a significant influence on user experience of
battery-driven devices such as smartphones and tablets. The goal of an energy
model for source code is to lay a foundation for the application of
energy-saving techniques during software development. The challenge is to
relate hardware energy consumption to high-level application code, considering
the complex run-time context and software stack. Traditional techniques build
the energy model by mapping a hardware energy model onto software constructs;
this approach faces obstacles when the software stack consists of a number of
abstract layers. Another approach that has been followed is to utilize hardware
or operating system features to estimate software energy information at a
coarse level of granularity such as blocks, methods or even applications. In
this paper, we explain how to construct a fine-grained energy model for the
source code, which is based on "energy operations" identified directly from the
source code and able to provide more valuable information for code
optimization. We apply the approach to a class of applications based on a
game-engine, and explain the wider applicability of the method.
|
[
{
"version": "v1",
"created": "Wed, 14 Oct 2015 15:49:20 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2016 12:29:40 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Li",
"Xueliang",
""
],
[
"Gallagher",
"John P.",
""
]
] |
new_dataset
| 0.998026 |
1605.00287
|
Xiang Xiang
|
Minh Dao, Xiang Xiang, Bulent Ayhan, Chiman Kwan, Trac D. Tran
|
Detecting Burnscar from Hyperspectral Imagery via Sparse Representation
with Low-Rank Interference
|
It is not a publishable version at this point as there is no IP
coverage at the moment
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a burnscar detection model for hyperspectral
imaging (HSI) data. The proposed model contains two processing steps: the first
step separates and then suppresses the cloud information present in the data set
using an RPCA algorithm, and the second step detects the burnscar area in the
low-rank component output of the first step. Experiments are
conducted on the public MODIS dataset available on the official NASA website.
|
[
{
"version": "v1",
"created": "Sun, 1 May 2016 18:18:45 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2016 23:25:22 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Dao",
"Minh",
""
],
[
"Xiang",
"Xiang",
""
],
[
"Ayhan",
"Bulent",
""
],
[
"Kwan",
"Chiman",
""
],
[
"Tran",
"Trac D.",
""
]
] |
new_dataset
| 0.998336 |