id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1608.04180
|
Harini Dananjani Kolamunna Ms
|
Harini Kolamunna, Jagmohan Chauhan, Yining Hu, Kanchana Thilakarathna,
Diego Perino, Dwight Makaroff and Aruna Seneviratne
|
Are wearable devices ready for HTTPS? Measuring the cost of secure
communication protocols on wearable devices
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The majority of available wearable devices require communication with
Internet servers for data analysis and storage, and rely on a paired smartphone
to enable secure communication. However, wearable devices are mostly equipped
with WiFi network interfaces, enabling direct communication with the Internet.
Secure communication protocols should then run on the wearables themselves, yet
it is not clear if they can be efficiently supported. In this paper, we show
that wearable devices are ready for direct and secure Internet communication by
means of experiments with both controlled and Internet servers. We observe that
the overall energy consumption and communication delay can be reduced with
direct Internet connection via WiFi from wearables compared to using
smartphones as relays via Bluetooth. We also show that the additional HTTPS
cost caused by the TLS handshake and encryption is closely related to the number of
parallel connections, and has the same relative impact on wearables and
smartphones.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2016 05:13:28 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2016 01:18:41 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"Kolamunna",
"Harini",
""
],
[
"Chauhan",
"Jagmohan",
""
],
[
"Hu",
"Yining",
""
],
[
"Thilakarathna",
"Kanchana",
""
],
[
"Perino",
"Diego",
""
],
[
"Makaroff",
"Dwight",
""
],
[
"Seneviratne",
"Aruna",
""
]
] |
new_dataset
| 0.980455 |
1612.03937
|
Andrea Margheri
|
Francesco Paolo Schiavo, Vladimiro Sassone, Luca Nicoletti, Andrea
Margheri
|
FaaS: Federation-as-a-Service
|
Technical Report Edited by Francesco Paolo Schiavo, Vladimiro
Sassone, Luca Nicoletti and Andrea Margheri
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
This document is the main high-level architecture specification of the
SUNFISH cloud federation solution. Its main objective is to introduce the
concept of Federation-as-a-Service (FaaS) and the SUNFISH platform. FaaS is the
new and innovative cloud federation service proposed by the SUNFISH project.
The document defines the functionalities of FaaS, its governance and precise
objectives. With respect to these objectives, the document proposes the
high-level architecture of the SUNFISH platform: the software architecture that
permits realising a FaaS federation. More specifically, the document describes
all the components forming the platform, the offered functionalities and their
high-level interactions underlying the main FaaS functionalities. The document
concludes by outlining the main implementation strategies towards the actual
implementation of the proposed cloud federation solution.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 21:31:55 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"Schiavo",
"Francesco Paolo",
""
],
[
"Sassone",
"Vladimiro",
""
],
[
"Nicoletti",
"Luca",
""
],
[
"Margheri",
"Andrea",
""
]
] |
new_dataset
| 0.998981 |
1612.03997
|
Sergey Denisov
|
M. Krivonosov, S. Denisov, and V. Zaburdaev
|
L\'{e}vy robotics
|
arXiv admin note: text overlap with arXiv:1410.5100
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two of the most common tasks for autonomous mobile robots are to explore the
environment and to locate a target. In the latter case, the objective is either to
find a target in the shortest time possible or, alternatively, to find as many
targets as possible in a given amount of time. Targets could range from
sources of chemical contamination to people needing assistance in a disaster
area. From the very beginning, the quest for the most efficient search algorithms
was strongly influenced by behavioral science and ecology, where researchers
try to figure out the strategies used by living beings, from bacteria to
mammals. Since then, bio-inspired random search algorithms have remained one of the most
important directions in autonomous robotics. Recently, a new wave arrived,
bringing a specific type of random walk as a universal search strategy
exploited by immune cells, insects, mussels, albatrosses, sharks, deer, and a
dozen other animals including humans. These \textit{L\'{e}vy} walks combine
two key features: walkers spread anomalously fast, yet move with a finite
velocity. The latter is especially valuable in the context
of robotics because it respects the reality autonomous robots live in. There is
already an impressive body of publications on L\'{e}vy robotics; yet research
in this field is unfolding further, constantly bringing new results, ideas,
hypotheses, and speculations. In this mini-review we survey the current state of
the field, list the latest advances, discuss the prevailing trends, and outline
further perspectives.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 02:09:16 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"Krivonosov",
"M.",
""
],
[
"Denisov",
"S.",
""
],
[
"Zaburdaev",
"V.",
""
]
] |
new_dataset
| 0.99748 |
1612.04209
|
Daniil Galaktionov
|
Nieves R. Brisaboa, Antonio Fari\~na, Daniil Galaktionov, M. Andrea
Rodr\'iguez
|
Compact Trip Representation over Networks
|
This research has received funding from the European Union's Horizon
2020 research and innovation programme under the Marie Sk{\l}odowska-Curie
Actions H2020-MSCA-RISE-2015 BIRDS GA No. 690941
|
23rd International Symposium, SPIRE 2016, Beppu, Japan, October
18-20, 2016, Proceedings pp 240-253
|
10.1007/978-3-319-46049-9_23
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new Compact Trip Representation (CTR) that allows us to manage
users' trips (moving objects) over networks. These could be public
transportation networks (buses, subway, trains, and so on) where nodes are
stations or stops, or road networks where nodes are intersections. CTR
represents the sequences of nodes and time instants in users' trips. The
spatial component is handled with a data structure based on the well-known
Compressed Suffix Array (CSA), which provides both a compact representation and
interesting indexing capabilities. We also represent the temporal component of
the trips, that is, the time instants when users visit nodes in their trips. We
create a sequence with these time instants, which are then self-indexed with a
balanced Wavelet Matrix (WM). This gives us the ability to solve range-interval
queries efficiently. We show how CTR can solve relevant spatial and
spatio-temporal queries over large sets of trajectories. Finally, we also
provide experimental results to show the space requirements and query
efficiency of CTR.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 14:45:29 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"Brisaboa",
"Nieves R.",
""
],
[
"Fariña",
"Antonio",
""
],
[
"Galaktionov",
"Daniil",
""
],
[
"Rodríguez",
"M. Andrea",
""
]
] |
new_dataset
| 0.999743 |
1612.04268
|
Javier De la Cruz
|
Javier de la Cruz
|
On dually almost MRD codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we define and study a family of codes which come close to being
MRD codes, so we call them AMRD codes (almost MRD). An AMRD code is a code with
rank defect equal to 1. AMRD codes whose duals are AMRD are called dually AMRD.
Dually AMRD codes are the closest to the MRD codes given that both they and
their dual codes are almost optimal. Necessary and sufficient conditions for
the codes to be dually AMRD are given. Furthermore, we show that dually AMRD
codes and codes of rank defect one and maximum 2-generalized weight coincide
when the size of the matrix divides the dimension.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 16:24:40 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"de la Cruz",
"Javier",
""
]
] |
new_dataset
| 0.999542 |
1612.04324
|
Xu Zhou
|
Xu Zhou, Xiaoli Zhang, Jiucai Zhang, Rui Liu
|
Stabilization and Trajectory Control of a Quadrotor with Uncertain
Suspended Load
|
56 pages, 12 figures, article submitted to ASME Journal of Dynamic
Systems Measurement and Control, 2016 April
| null | null | null |
cs.RO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stabilization and trajectory control of a quadrotor carrying a suspended load
with a fixed known mass has been extensively studied in recent years. However,
the load mass is not always known beforehand and may vary during practical
transportation. This mass uncertainty introduces uncertain disturbances into the
quadrotor system, degrading the stability and trajectory tracking performance of
existing controllers. To improve the quadrotor's stability and
trajectory tracking capability in this situation, we fully investigate the
impacts of an uncertain load mass on the quadrotor. By comparing the
performances of three different controllers -- the proportional-derivative (PD)
controller, the sliding mode controller (SMC), and the model predictive
controller (MPC) -- we find that stabilization, rather than trajectory tracking
error, is the aspect most affected by load mass uncertainty. A critical motion
mass exists for the quadrotor to maintain a desired transportation performance.
Moreover, simulation results verify that a controller with strong robustness
against disturbances is a good choice for practical applications.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 19:21:12 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"Zhou",
"Xu",
""
],
[
"Zhang",
"Xiaoli",
""
],
[
"Zhang",
"Jiucai",
""
],
[
"Liu",
"Rui",
""
]
] |
new_dataset
| 0.992368 |
1612.04363
|
Anastasios Giovanidis
|
Anastasios Giovanidis, Apostolos Avranas
|
Spatial multi-LRU: Distributed Caching for Wireless Networks with
Coverage Overlaps
|
14 pages, double column, 5 figures, 15 sub-figures in total. arXiv
admin note: substantial text overlap with arXiv:1602.07623
| null | null | null |
cs.NI cs.IT cs.MM cs.PF math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article introduces a novel family of decentralised caching policies,
applicable to wireless networks with finite storage at the edge-nodes
(stations). These policies, which are based on the Least-Recently-Used
replacement principle, are here referred to as spatial multi-LRU. They update
cache inventories in a way that provides content diversity to users who are
covered by, and thus have access to, more than one station. Two variations are
proposed, the multi-LRU-One and -All, which differ in the number of replicas
inserted in the involved caches. We analyse their performance under two types
of traffic demand, the Independent Reference Model (IRM) and a model that
exhibits temporal locality. For IRM, we propose a Che-like approximation to
predict the hit probability, which gives very accurate results. Numerical
evaluations show that the performance of multi-LRU improves as the
multi-coverage areas grow, and it approaches the performance of
centralised policies when multi-coverage is sufficient. For IRM traffic,
multi-LRU-One is preferable to multi-LRU-All, whereas when the traffic exhibits
temporal locality the -All variation can perform better. Both variations
outperform the simple LRU. When popularity knowledge is not accurate, the new
policies can perform better than centralised ones.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 20:54:49 GMT"
}
] | 2016-12-14T00:00:00 |
[
[
"Giovanidis",
"Anastasios",
""
],
[
"Avranas",
"Apostolos",
""
]
] |
new_dataset
| 0.952464 |
1606.07085
|
Dylan Hutchison
|
Dylan Hutchison, Jeremy Kepner, Vijay Gadepally, Bill Howe
|
From NoSQL Accumulo to NewSQL Graphulo: Design and Utility of Graph
Algorithms inside a BigTable Database
|
9 pages, to appear in 2016 IEEE High Performance Extreme Computing
Conference (HPEC)
| null |
10.1109/HPEC.2016.7761577
| null |
cs.DB cs.DC cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Google BigTable's scale-out design for distributed key-value storage inspired
a generation of NoSQL databases. Recently the NewSQL paradigm emerged in
response to analytic workloads that demand distributed computation local to
data storage. Many such analytics take the form of graph algorithms, a trend
that motivated the GraphBLAS initiative to standardize a set of matrix math
kernels for building graph algorithms. In this article we show how it is
possible to implement the GraphBLAS kernels in a BigTable database by
presenting the design of Graphulo, a library for executing graph algorithms
inside the Apache Accumulo database. We detail the Graphulo implementation of
two graph algorithms and conduct experiments comparing their performance to two
main-memory matrix math systems. Our results shed light on the conditions
that determine when executing a graph algorithm is faster inside a database
versus an external system---in short, that memory requirements and relative I/O
are critical factors.
|
[
{
"version": "v1",
"created": "Wed, 22 Jun 2016 20:08:47 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Aug 2016 04:09:48 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Hutchison",
"Dylan",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Howe",
"Bill",
""
]
] |
new_dataset
| 0.978781 |
1607.06541
|
Jeremy Kepner
|
William S. Song, Vitaliy Gleyzer, Alexei Lomakin, Jeremy Kepner
|
Novel Graph Processor Architecture, Prototype System, and Results
|
7 pages, 8 figures, IEEE HPEC 2016
| null |
10.1109/HPEC.2016.7761635
| null |
cs.DC cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph algorithms are increasingly used in applications that exploit large
databases. However, conventional processor architectures are inadequate for
handling the throughput and memory requirements of graph computation. Lincoln
Laboratory's graph-processor architecture represents a rethinking of parallel
architectures for graph problems. Our processor utilizes innovations that
include a sparse matrix-based graph instruction set, a cacheless memory system,
accelerator-based architecture, a systolic sorter, high-bandwidth
multi-dimensional toroidal communication network, and randomized
communications. A field-programmable gate array (FPGA) prototype of the new
graph processor has been developed with significant performance enhancement
over conventional processors in graph computational throughput.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2016 02:22:44 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Song",
"William S.",
""
],
[
"Gleyzer",
"Vitaliy",
""
],
[
"Lomakin",
"Alexei",
""
],
[
"Kepner",
"Jeremy",
""
]
] |
new_dataset
| 0.955415 |
1607.06543
|
Jeremy Kepner
|
Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, Bill
Bergeron, Vijay Gadepally, Matthew Hubbell, Peter Michaleas, Julie Mullen,
Andrew Prout, Antonio Rosa, Charles Yee, Albert Reuther
|
LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis
|
8 pages; 19 figures; IEEE HPEC 2016
| null |
10.1109/HPEC.2016.7761618
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The map-reduce parallel programming model has become extremely popular in the
big data community. Many big data workloads can benefit from the enhanced
performance offered by supercomputers. LLMapReduce provides the familiar
map-reduce parallel programming model to big data users running on a
supercomputer. LLMapReduce dramatically simplifies map-reduce programming by
providing simple parallel programming capability in one line of code.
LLMapReduce supports all programming languages and many schedulers. LLMapReduce
can work with any application without the need to modify the application.
Furthermore, LLMapReduce can overcome scaling limits in the map-reduce parallel
programming model via options that allow the user to switch to the more
efficient single-program-multiple-data (SPMD) parallel programming model. These
features allow users to reduce the computational overhead by more than 10x
compared to standard map-reduce for certain applications. LLMapReduce is widely
used by hundreds of users at MIT. Currently LLMapReduce works with several
schedulers such as SLURM, Grid Engine and LSF.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2016 02:45:53 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Byun",
"Chansup",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Arcand",
"William",
""
],
[
"Bestor",
"David",
""
],
[
"Bergeron",
"Bill",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Mullen",
"Julie",
""
],
[
"Prout",
"Andrew",
""
],
[
"Rosa",
"Antonio",
""
],
[
"Yee",
"Charles",
""
],
[
"Reuther",
"Albert",
""
]
] |
new_dataset
| 0.965649 |
1609.07545
|
Jeremy Kepner
|
Siddharth Samsi, Laura Brattain, William Arcand, David Bestor, Bill
Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell,
Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen,
Andrew Prout, Antonio Rosa, Charles Yee, Jeremy Kepner, Albert Reuther
|
Benchmarking SciDB Data Import on HPC Systems
|
5 pages, 4 figures, IEEE High Performance Extreme Computing (HPEC)
2016, best paper finalist
| null |
10.1109/HPEC.2016.7761617
| null |
cs.DB cs.DC cs.PF q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SciDB is a scalable, computational database management system that uses an
array model for data storage. The array data model of SciDB makes it ideally
suited for storing and managing large amounts of imaging data. SciDB is
designed to support advanced in-database analytics, thus reducing the need for
extracting data for analysis. It is designed to be massively parallel and can
run on commodity hardware in a high performance computing (HPC) environment. In
this paper, we present the performance of SciDB using simulated image data. The
Dynamic Distributed Dimensional Data Model (D4M) software is used to implement
the benchmark on a cluster running the MIT SuperCloud software stack. A peak
performance of 2.2M database inserts per second was achieved on a single node
of this system. We also show that SciDB and the D4M toolbox provide more
efficient ways to access random sub-volumes of massive datasets compared to the
traditional approaches of reading volumetric data from individual files. This
work describes the D4M and SciDB tools we developed and presents the initial
performance results. This performance was achieved by using parallel inserts,
in-database merging of arrays, and supercomputing techniques such as
distributed arrays and single-program-multiple-data programming.
|
[
{
"version": "v1",
"created": "Sat, 24 Sep 2016 01:01:30 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Samsi",
"Siddharth",
""
],
[
"Brattain",
"Laura",
""
],
[
"Arcand",
"William",
""
],
[
"Bestor",
"David",
""
],
[
"Bergeron",
"Bill",
""
],
[
"Byun",
"Chansup",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Houle",
"Michael",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Jones",
"Michael",
""
],
[
"Klein",
"Anna",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Milechin",
"Lauren",
""
],
[
"Mullen",
"Julie",
""
],
[
"Prout",
"Andrew",
""
],
[
"Rosa",
"Antonio",
""
],
[
"Yee",
"Charles",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Reuther",
"Albert",
""
]
] |
new_dataset
| 0.994893 |
1609.07548
|
Jeremy Kepner
|
Vijay Gadepally, Peinan Chen, Jennie Duggan, Aaron Elmore, Brandon
Haynes, Jeremy Kepner, Samuel Madden, Tim Mattson, Michael Stonebraker
|
The BigDAWG Polystore System and Architecture
|
6 pages, 5 figures, IEEE High Performance Extreme Computing (HPEC)
conference 2016
| null |
10.1109/HPEC.2016.7761636
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Organizations are often faced with the challenge of providing data management
solutions for large, heterogeneous datasets that may have different underlying
data and programming models. For example, a medical dataset may have
unstructured text, relational data, time series waveforms and imagery. Trying
to fit such datasets in a single data management system can have adverse
performance and efficiency effects. As a part of the Intel Science and
Technology Center on Big Data, we are developing a polystore system designed
for such problems. BigDAWG (short for the Big Data Analytics Working Group) is
a polystore system designed to work on complex problems that naturally span
across different processing or storage engines. BigDAWG provides an
architecture that supports diverse database systems working with different data
models, support for the competing notions of location transparency and semantic
completeness via islands, and a middleware that provides a uniform multi-island
interface. Initial results from a prototype of the BigDAWG system applied to a
medical dataset validate polystore concepts. In this article, we will describe
polystore databases, the current BigDAWG architecture and its application on
the MIMIC II medical dataset, initial performance results and our future
development plans.
|
[
{
"version": "v1",
"created": "Sat, 24 Sep 2016 01:14:06 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Gadepally",
"Vijay",
""
],
[
"Chen",
"Peinan",
""
],
[
"Duggan",
"Jennie",
""
],
[
"Elmore",
"Aaron",
""
],
[
"Haynes",
"Brandon",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Madden",
"Samuel",
""
],
[
"Mattson",
"Tim",
""
],
[
"Stonebraker",
"Michael",
""
]
] |
new_dataset
| 0.99872 |
1609.08642
|
Jeremy Kepner
|
Timothy Weale, Vijay Gadepally, Dylan Hutchison, Jeremy Kepner
|
Benchmarking the Graphulo Processing Framework
|
5 pages, 4 figures, IEEE High Performance Extreme Computing (HPEC)
conference 2016
| null |
10.1109/HPEC.2016.7761640
| null |
cs.DB cs.MS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph algorithms have wide applicability to a variety of domains and are often
used on massive datasets. Recent standardization efforts such as the GraphBLAS
specify a set of key computational kernels that hardware and software
developers can adhere to. Graphulo is a processing framework that enables
GraphBLAS kernels in the Apache Accumulo database. In our previous work, we
have demonstrated a core Graphulo operation called \textit{TableMult} that
performs large-scale multiplication operations of database tables. In this
article, we present the results of scaling the Graphulo engine to larger
problems and its scalability as a greater number of resources is used.
Specifically, we present two experiments demonstrating that Graphulo's scaling
performance is linear in the number of available resources. The first
experiment demonstrates cluster processing rates through Graphulo's TableMult
operator on two large graphs, scaled between $2^{17}$ and $2^{19}$ vertices.
The second experiment uses TableMult to extract a random set of rows from a
large graph ($2^{19}$ nodes) to simulate a cued graph analytic. These
benchmarking results are of relevance to Graphulo users who wish to apply
Graphulo to their graph problems.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 20:09:03 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Weale",
"Timothy",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Hutchison",
"Dylan",
""
],
[
"Kepner",
"Jeremy",
""
]
] |
new_dataset
| 0.994996 |
1611.08588
|
Sanghoon Hong
|
Sanghoon Hong, Byungseok Roh, Kye-Hyeon Kim, Yeongjae Cheon, Minje
Park
|
PVANet: Lightweight Deep Neural Networks for Real-time Object Detection
|
Presented at NIPS 2016 Workshop on Efficient Methods for Deep Neural
Networks (EMDNN). Continuation of arXiv:1608.08021. The affiliation has been
corrected
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In object detection, reducing computational cost is as important as improving
accuracy for most practical uses. This paper proposes a novel network
structure, which is an order of magnitude lighter than other state-of-the-art
networks while maintaining accuracy. Based on the basic principle of more
layers with fewer channels, this new deep neural network minimizes its
redundancy by adopting recent innovations including C.ReLU and Inception
structure. We also show that this network can be trained efficiently to achieve
solid results on well-known object detection benchmarks: 84.9% and 84.2% mAP on
VOC2007 and VOC2012 while the required compute is less than 10% of the recent
ResNet-101.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2016 17:43:28 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2016 22:30:17 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Hong",
"Sanghoon",
""
],
[
"Roh",
"Byungseok",
""
],
[
"Kim",
"Kye-Hyeon",
""
],
[
"Cheon",
"Yeongjae",
""
],
[
"Park",
"Minje",
""
]
] |
new_dataset
| 0.997233 |
1612.03182
|
Mark D. Hill
|
Luis Ceze, Mark D. Hill, and Thomas F. Wenisch
|
Arch2030: A Vision of Computer Architecture Research over the Next 15
Years
|
A Computing Community Consortium (CCC) white paper, 7 pages
| null | null | null |
cs.AR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Application trends, device technologies and the architecture of systems drive
progress in information technologies. However, the former engines of such
progress - Moore's Law and Dennard Scaling - are rapidly reaching the point of
diminishing returns. The time has come for the computing community to boldly
confront a new challenge: how to secure a foundational future for information
technology's continued progress. The computer architecture community has engaged in
several visioning exercises over the years. Five years ago, we released a white
paper, 21st Century Computer Architecture, which influenced funding programs in
both academia and industry. More recently, the IEEE Rebooting Computing
Initiative explored the future of computing systems in the architecture,
device, and circuit domains. This report stems from an effort to continue this
dialogue, reach out to the applications and devices/circuits communities, and
understand their trends and vision. We aim to identify opportunities where
architecture research can bridge the gap between the application and device
domains.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 21:02:13 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Ceze",
"Luis",
""
],
[
"Hill",
"Mark D.",
""
],
[
"Wenisch",
"Thomas F.",
""
]
] |
new_dataset
| 0.969654 |
1612.03312
|
Mingshen Sun
|
Mingshen Sun, Xiaolei Li, John C.S. Lui, Richard T.B. Ma, Zhenkai
Liang
|
Monet: A User-oriented Behavior-based Malware Variants Detection System
for Android
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Android, the most popular mobile OS, has around 78% of the mobile market
share. Due to its popularity, it attracts many malware attacks. In fact, people
have discovered around one million new malware samples per quarter, and it was
reported that over 98% of these new malware samples are in fact "derivatives"
(or variants) from existing malware families. In this paper, we first show that
runtime behaviors of malware's core functionalities are in fact similar within
a malware family. Hence, we propose a framework to combine "runtime behavior"
with "static structures" to detect malware variants. We present the design and
implementation of MONET, which has a client and a backend server module. The
client module is a lightweight, in-device app for behavior monitoring and
signature generation, and we realize this using two novel interception
techniques. The backend server is responsible for large scale malware
detection. We collect 3723 malware samples and top 500 benign apps to carry out
extensive experiments of detecting malware variants and defending against
malware transformation. Our experiments show that MONET can achieve around 99%
accuracy in detecting malware variants. Furthermore, it can defend against 10
different obfuscation and transformation techniques, while incurring only around
7% performance overhead and about 3% battery overhead. More importantly, MONET
automatically alerts users with intrusion details so as to prevent further
malicious behaviors.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2016 16:20:21 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Sun",
"Mingshen",
""
],
[
"Li",
"Xiaolei",
""
],
[
"Lui",
"John C. S.",
""
],
[
"Ma",
"Richard T. B.",
""
],
[
"Liang",
"Zhenkai",
""
]
] |
new_dataset
| 0.998403 |
1612.03371
|
Barath Raghavan
|
Adam Lerner, Giulia Fanti, Yahel Ben-David, Jesus Garcia, Paul
Schmitt, Barath Raghavan
|
Rangzen: Anonymously Getting the Word Out in a Blackout
| null | null | null | null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years governments have shown themselves willing to impose blackouts
to shut off key communication infrastructure during times of civil strife, and
to surveil citizen communications whenever possible. However, it is exactly
during such strife that citizens need reliable and anonymous communications the
most. In this paper, we present Rangzen, a system for anonymous broadcast
messaging during network blackouts. Rangzen is distinctive in both aim and
design. Our aim is to provide an anonymous, one-to-many messaging layer that
requires only users' smartphones and can withstand network-level attacks. Our
design is a delay-tolerant mesh network which deprioritizes adversarial
messages by means of a social graph while preserving user anonymity. We built a
complete implementation that runs on Android smartphones, present benchmarks of
its performance and battery usage, and show simulation results suggesting
Rangzen's efficacy at scale.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2016 04:49:38 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Lerner",
"Adam",
""
],
[
"Fanti",
"Giulia",
""
],
[
"Ben-David",
"Yahel",
""
],
[
"Garcia",
"Jesus",
""
],
[
"Schmitt",
"Paul",
""
],
[
"Raghavan",
"Barath",
""
]
] |
new_dataset
| 0.999695 |
1612.03457
|
Robert Escriva
|
Robert Escriva, Robbert van Renesse
|
Consus: Taming the Paxi
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consus is a strictly serializable geo-replicated transactional key-value
store. The key contribution of Consus is a new commit protocol that reduces the
cost of executing a transaction to three wide area message delays in the common
case. Augmenting the commit protocol are multiple Paxos implementations
optimized for different purposes. Together the different implementations and
optimizations comprise a cohesive system that provides low latency, high
availability, and strong guarantees. This paper describes the techniques
implemented in the open source release of Consus, and lays the groundwork for
evaluating Consus once the system implementation is sufficiently robust for a
thorough evaluation.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2016 19:17:26 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Escriva",
"Robert",
""
],
[
"van Renesse",
"Robbert",
""
]
] |
new_dataset
| 0.984643 |
1612.03488
|
Piotr Danilewski
|
Piotr Danilewski (1 and 2 and 3), Philipp Slusallek (1 and 2 and 4)
((1) Saarland University, Germany, (2) Intel Visual Computing Institute,
Germany, (3) Theoretical Computer Science, Jagiellonian University, Poland,
(4) Deutsches Forschungszentrum f\"ur K\"unstliche Intelligenz, Germany)
|
ManyDSL: A Host for Many Languages
|
14 pages, 11 code listings, 3 figures
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain-specific languages are becoming increasingly important. Almost every
application touches multiple domains. But how to define, use, and combine
multiple DSLs within the same application?
The most common approach is to split the project along the domain boundaries
into multiple pieces and files. Each file is then compiled separately.
Alternatively, multiple languages can be embedded in a flexible host language:
within the same syntax, new domain semantics are provided. In this paper we
follow a less explored route of metamorphic languages. These languages are able
to modify their own syntax and semantics on the fly, thus becoming a more
flexible host for DSLs.
Our language allows for dynamic creation of grammars and switching languages
where needed. We achieve this through a novel concept of Syntax-Directed
Execution. A language grammar includes semantic actions that are pieces of
functional code executed immediately during parsing. By avoiding additional
intermediate representation, connecting actions from different languages and
domains is greatly simplified. Still, actions can generate highly specialized
code through lambda encapsulation and Dynamic Staging.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2016 21:58:17 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Danilewski",
"Piotr",
"",
"1 and 2 and 3"
],
[
"Slusallek",
"Philipp",
"",
"1 and 2 and 4"
]
] |
new_dataset
| 0.999501 |
1612.03618
|
Abbas Heydarnoori
|
Sahar Badihi, Abbas Heydarnoori
|
Generating Code Summaries Using the Power of the Crowd
|
28 pages, 11 figures, 9 tables
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the first steps to perform most of the software maintenance
activities, such as updating features or fixing bugs, is to have a relatively
good understanding of the program's source code which is often written by other
developers. A code summary is a description about a program's entities (e.g.,
its methods) which helps developers have a better comprehension of the code in
a shorter period of time. However, generating code summaries can be a
challenging task. To mitigate this problem, in this article, we introduce
CrowdSummarizer, a code summarization platform that benefits from the concepts
of crowdsourcing, gamification, and natural language processing to
automatically generate a high level summary for the methods of a Java program.
We have implemented CrowdSummarizer as an Eclipse plugin together with a
web-based code summarization game that can be played by the crowd. The results
of two empirical studies that evaluate the applicability of the approach and
the quality of generated summaries indicate that CrowdSummarizer is effective
in generating quality results.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 11:21:18 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Badihi",
"Sahar",
""
],
[
"Heydarnoori",
"Abbas",
""
]
] |
new_dataset
| 0.964956 |
1612.03628
|
Marc Bola\~nos
|
Marc Bola\~nos, \'Alvaro Peris, Francisco Casacuberta, Petia Radeva
|
VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question
Answering
|
Submitted to IbPRIA'17, 8 pages, 3 figures, 1 table
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the problem of visual question answering by
proposing a novel model, called VIBIKNet. Our model is based on integrating
Kernelized Convolutional Neural Networks and Long-Short Term Memory units to
generate an answer given a question about an image. We prove that VIBIKNet is
an optimal trade-off between accuracy and computational load, in terms of
memory and time consumption. We validate our method on the VQA challenge
dataset and compare it to the top performing methods in order to illustrate its
performance and speed.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 11:41:46 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Bolaños",
"Marc",
""
],
[
"Peris",
"Álvaro",
""
],
[
"Casacuberta",
"Francisco",
""
],
[
"Radeva",
"Petia",
""
]
] |
new_dataset
| 0.995322 |
1612.03638
|
Tillmann Miltzow
|
Jean Cardinal and Stefan Felsner and Tillmann Miltzow and Casey
Tompkins and Birgit Vogtenhuber
|
Intersection Graphs of Rays and Grounded Segments
|
16 pages 12 Figures
| null | null | null |
cs.DM cs.CC cs.CG math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider several classes of intersection graphs of line segments in the
plane and prove new equality and separation results between those classes. In
particular, we show that: (1) intersection graphs of grounded segments and
intersection graphs of downward rays form the same graph class, (2) not every
intersection graph of rays is an intersection graph of downward rays, and (3)
not every intersection graph of rays is an outer segment graph. The first
result answers an open problem posed by Cabello and Jej\v{c}i\v{c}. The third
result confirms a conjecture by Cabello. We thereby completely elucidate the
remaining open questions on the containment relations between these classes of
segment graphs. We further characterize the complexity of the recognition
problems for the classes of outer segment, grounded segment, and ray
intersection graphs. We prove that these recognition problems are complete for
the existential theory of the reals. This holds even if a 1-string realization
is given as additional input.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 12:10:02 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Cardinal",
"Jean",
""
],
[
"Felsner",
"Stefan",
""
],
[
"Miltzow",
"Tillmann",
""
],
[
"Tompkins",
"Casey",
""
],
[
"Vogtenhuber",
"Birgit",
""
]
] |
new_dataset
| 0.987775 |
1612.03731
|
Hongwei Liu
|
Hongwei Liu, Maouche Youcef
|
A Note on Hamming distance of constacyclic codes of length $p^s$ over
$\mathbb F_{p^m} + u\mathbb F_{p^m}$
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For any prime $p$, $\lambda$-constacyclic codes of length $p^s$ over ${\cal
R}=\mathbb{F}_{p^m} + u\mathbb{F}_{p^m}$ are precisely the ideals of the local
ring ${\cal R}_{\lambda}=\frac{{\cal R}[x]}{\left\langle x^{p^s}-\lambda
\right\rangle}$, where $u^2=0$. In this paper, we first investigate the Hamming
distances of cyclic codes of length $p^s$ over ${\cal R}$. The minimum Hamming
distances of all cyclic codes of length $p^s$ over ${\cal R}$ are determined.
Moreover, an isometry between cyclic and $\alpha$-constacyclic codes of length
$p^s$ over ${\cal R}$ is established, where $\alpha$ is a nonzero element of
$\mathbb{F}_{p^m}$, which carries the results on cyclic codes over to the
corresponding $\alpha$-constacyclic codes of length $p^s$ over ${\cal R}$.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 15:08:17 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Liu",
"Hongwei",
""
],
[
"Youcef",
"Maouche",
""
]
] |
new_dataset
| 0.961779 |
1612.03757
|
Zhiyong Chen
|
Zhanzhan Zhang, Zhiyong Chen, Hao Feng, Bin Xia, Weiliang Xie, and
Yong Zhao
|
Cache-enabled Uplink Transmission in Wireless Small Cell Networks
|
submitted to IEEE Trans. Veh. Tech., Sep. 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is becoming a major trend in the era of social networking that people
produce and upload user-generated content to the Internet via wireless
networks, placing a significant burden on wireless uplink networks. In this
paper, we contribute to designing and theoretical understanding of wireless
cache-enabled upload transmission in a delay-tolerant small cell network to
relieve the burden, and then propose the corresponding scheduling policies for
the small base station (SBS) under the infinite and finite cache sizes.
Specifically, the cache ability introduced by SBS enables SBS to eliminate the
redundancy among the upload contents from users. This strategy not only
alleviates the wireless backhaul traffic congestion from SBS to a macro base
station (MBS), but also improves the transmission efficiency of SBS. We then
investigate the scheduling schemes of SBS to offload more data traffic under
caching size constraint. Moreover, two operational regions for the wireless
cache-enabled upload network, namely, the delay-limited region and the
cache-limited region, are established to reveal the fundamental tradeoff
between the delay tolerance and the cache ability. Finally, numerical results
are provided to demonstrate the significant performance gains of the proposed
wireless cache-enabled upload network.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 15:54:51 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Zhang",
"Zhanzhan",
""
],
[
"Chen",
"Zhiyong",
""
],
[
"Feng",
"Hao",
""
],
[
"Xia",
"Bin",
""
],
[
"Xie",
"Weiliang",
""
],
[
"Zhao",
"Yong",
""
]
] |
new_dataset
| 0.998872 |
1612.03762
|
Margherita Zorzi
|
Carlo Combi, Margherita Zorzi, Gabriele Pozzani, Ugo Moretti
|
From narrative descriptions to MedDRA: automagically encoding adverse
drug reactions
|
arXiv admin note: substantial text overlap with arXiv:1506.08052
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The collection of narrative spontaneous reports is an irreplaceable source
for the prompt detection of suspected adverse drug reactions (ADRs): qualified
domain experts manually revise a huge amount of narrative descriptions and then
encode texts according to MedDRA standard terminology. The manual annotation of
narrative documents with medical terminology is a subtle and expensive task,
since the number of reports is growing day by day. MagiCoder, a Natural
Language Processing algorithm, is proposed for the automatic encoding of
free-text descriptions into MedDRA terms. The MagiCoder procedure is efficient in
terms of computational complexity (in particular, it is linear in the size of
the narrative input and the terminology). We tested it on a large dataset of
about 4500 manually revised reports, by performing an automated comparison
between human and MagiCoder revisions. For the current base version of
MagiCoder, we measured: on short descriptions, an average recall of $86\%$ and
an average precision of $88\%$; on medium-long descriptions (up to 255
characters), an average recall of $64\%$ and an average precision of $63\%$.
From a practical point of view, MagiCoder reduces the time required for
encoding ADR reports. Pharmacologists simply have to review and validate the
MagiCoder terms proposed by the application, instead of choosing the right
terms among the 70K low-level terms of MedDRA. Such an improvement in the
efficiency of pharmacologists' work also has a relevant impact on the quality
of the subsequent data analysis. We developed MagiCoder for the Italian
pharmacovigilance language. However, our proposal is based on a general
approach, not depending on the considered language nor the term dictionary.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2016 16:14:02 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Combi",
"Carlo",
""
],
[
"Zorzi",
"Margherita",
""
],
[
"Pozzani",
"Gabriele",
""
],
[
"Moretti",
"Ugo",
""
]
] |
new_dataset
| 0.996982 |
1612.03772
|
Hadi Fanaee-T
|
Hadi Fanaee-T and Joao Gama
|
SimTensor: A synthetic tensor data generator
| null | null | null | null |
cs.MS math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SimTensor is a multi-platform, open-source software for generating artificial
tensor data (either with CP/PARAFAC or Tucker structure) for reproducible
research on tensor factorization algorithms. SimTensor is a stand-alone
application based on MATLAB. It provides a wide range of facilities for
generating tensor data with various configurations. It comes with a
user-friendly graphical user interface, which enables the user to generate
tensors with complicated settings in an easy way. It also has this facility to
export generated data to universal formats such as CSV and HDF5, which can be
imported via a wide range of programming languages (C, C++, Java, R, Fortran,
MATLAB, Perl, Python, and many more). The most innovative part of SimTensor is
that it can generate temporal tensors with periodic waves, seasonal effects,
and streaming structure. It can apply constraints such as non-negativity and
different kinds of sparsity to the data. SimTensor also provides the facility
to simulate different kinds of change-points and inject various types of
anomalies. The source code and binary versions of SimTensor are available for
download at http://www.simtensor.org.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 19:13:03 GMT"
}
] | 2016-12-13T00:00:00 |
[
[
"Fanaee-T",
"Hadi",
""
],
[
"Gama",
"Joao",
""
]
] |
new_dataset
| 0.999078 |
1612.02880
|
Jingang Zhong
|
Zibang Zhang, Xueying Wang, Jingang Zhong
|
Fast Fourier single-pixel imaging using binary illumination
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fourier single-pixel imaging (FSI) has proven capable of reconstructing
high-quality two-dimensional and three-dimensional images. The utilization of
the sparsity of natural images in Fourier domain allows high-resolution images
to be reconstructed from far fewer measurements than effective image pixels.
However, applying the original FSI in a digital micro-mirror device (DMD)
based high-speed imaging system turns out to be challenging, because the
original FSI uses grayscale Fourier basis patterns for illumination while DMDs
generate grayscale patterns at a relatively low rate. A DMD is a binary device
which can only generate a black-and-white pattern at each instant. In this paper, we
adopt binary Fourier patterns for illumination to achieve DMD-based high-speed
single-pixel imaging. Binary Fourier patterns are generated by upsampling and
then applying error diffusion based dithering to the grayscale patterns.
Experiments demonstrate that the proposed technique is able to achieve static imaging
with high quality and dynamic imaging in real time. The proposed technique
potentially allows high-quality and high-speed imaging over broad wavebands.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 01:02:37 GMT"
}
] | 2016-12-12T00:00:00 |
[
[
"Zhang",
"Zibang",
""
],
[
"Wang",
"Xueying",
""
],
[
"Zhong",
"Jingang",
""
]
] |
new_dataset
| 0.97569 |
1612.02967
|
Aleksejs Fomins
|
Aleksejs Fomins and Benedikt Oswald
|
Dune-CurvilinearGrid: Parallel Dune Grid Manager for Unstructured
Tetrahedral Curvilinear Meshes
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the dune-curvilineargrid module. The module provides a
self-contained, parallel grid manager, as well as the underlying elementary
curvilinear geometry module dune-curvilineargeometry. This work is motivated by
the need for reliable and scalable electromagnetic design of nanooptical
devices. Curvilinear geometries improve both the accuracy of modeling smooth
material boundaries, and the h/p-convergence rate of PDE solutions, reducing
the necessary computational effort. dune-curvilineargrid provides a large
spectrum of features for scalable parallel implementations of Finite Element
and Boundary Integral methods over curvilinear tetrahedral geometries,
including symbolic polynomial mappings and operations, recursive integration,
sparse and dense grid communication, parallel timing and memory footprint
diagnostics utilities. It is written in templated C++ using MPI for
parallelization and ParMETIS for grid partitioning, and is provided as a module
for the DUNE interface. The dune-curvilineargrid grid manager is continuously
developed and improved, and so is this documentation. For the most recent
version of the documentation, as well as the source code, please refer to the
provided repositories and our website.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 10:31:04 GMT"
}
] | 2016-12-12T00:00:00 |
[
[
"Fomins",
"Aleksejs",
""
],
[
"Oswald",
"Benedikt",
""
]
] |
new_dataset
| 0.993186 |
1612.02975
|
Dharmendra Kumar
|
Dharmendra Kumar, Debasis Mitra, Bhargab B. Bhattacharya
|
On Fault-Tolerant Design of Exclusive-OR Gates in QCA
|
9 pages, 26 figures, Microprocessors and Microsystems Journal
(communicated)
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Design paradigms of logic circuits with Quantum-dot Cellular Automata (QCA)
have been extensively studied in the recent past. Unfortunately, due to the
lack of mature fabrication support, QCA-based circuits often suffer from
various types of manufacturing defects and variations, and therefore, are
unreliable and error-prone. QCA-based Exclusive-OR (XOR) gates are frequently
used in the construction of several computing subsystems such as adders, linear
feedback shift registers, parity generators and checkers. However, none of the
existing designs for QCA XOR gates have considered the issue of ensuring
fault-tolerance. Simulation results also show that these designs can hardly
tolerate any fault. We investigate the applicability of various existing
fault-tolerant schemes such as triple modular redundancy (TMR), NAND
multiplexing, and majority multiplexing in the context of practical realization
of QCA XOR gate. Our investigations reveal that these techniques incur
prohibitively large area and delay and hence, they are unsuitable for practical
scenarios. We propose here realistic designs of QCA XOR gates (in terms of area
and delay) with significantly high fault-tolerance against all types of cell
misplacement defects such as cell omission, cell displacement, cell
misalignment and extra/additional cell deposition. Furthermore, the absence of
any crossing in the proposed designs facilitates low-cost fabrication of such
systems.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 11:08:14 GMT"
}
] | 2016-12-12T00:00:00 |
[
[
"Kumar",
"Dharmendra",
""
],
[
"Mitra",
"Debasis",
""
],
[
"Bhattacharya",
"Bhargab B.",
""
]
] |
new_dataset
| 0.995017 |
1612.03032
|
Frantisek Farka
|
Ekaterina Komendantskaya, Franti\v{s}ek Farka
|
CoALP-Ty'16
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This volume constitutes the pre-proceedings of the Workshop on Coalgebra,
Horn Clause Logic Programming and Types (CoALP-Ty'16), held on 28--29 November
2016 in Edinburgh as a mark of the end of the EPSRC Grant Coalgebraic Logic
Programming for Type Inference, by E. Komendantskaya and J. Power. This volume
consists of extended abstracts describing current research in the following
areas:
Semantics: Lawvere theories and Coalgebra in Logic and Functional Programming
Programming languages: Horn Clause Logic for Type Inference in Functional
Languages and Beyond
After discussion at the workshop, authors of the extended abstracts will be
invited to submit a full paper to go through a second round of refereeing and
selection for the formal proceedings.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 14:18:04 GMT"
}
] | 2016-12-12T00:00:00 |
[
[
"Komendantskaya",
"Ekaterina",
""
],
[
"Farka",
"František",
""
]
] |
new_dataset
| 0.99414 |
1612.03094
|
Adri\`a Recasens
|
Adri\`a Recasens, Carl Vondrick, Aditya Khosla, Antonio Torralba
|
Following Gaze Across Views
|
9 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following the gaze of people inside videos is an important signal for
understanding people and their actions. In this paper, we present an approach
for following gaze across views by predicting where a particular person is
looking throughout a scene. We collect VideoGaze, a new dataset which we use as
a benchmark to both train and evaluate models. Given one view with a person in
it and a second view of the scene, our model estimates a density for gaze
location in the second view. A key aspect of our approach is an end-to-end
model that solves the following sub-problems: saliency, gaze pose, and
geometric relationships between views. Although our model is supervised only
with gaze, we show that the model learns to solve these subproblems
automatically without supervision. Experiments suggest that our approach
follows gaze better than standard baselines and produces plausible results for
everyday situations.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 17:20:17 GMT"
}
] | 2016-12-12T00:00:00 |
[
[
"Recasens",
"Adrià",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Torralba",
"Antonio",
""
]
] |
new_dataset
| 0.994593 |
1612.03153
|
Hanbyul Joo
|
Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean
Banerjee, Timothy Godisart, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei
Nobuhara, Yaser Sheikh
|
Panoptic Studio: A Massively Multiview System for Social Interaction
Capture
|
Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach to capture the 3D motion of a group of people engaged
in a social interaction. The core challenges in capturing social interactions
are: (1) occlusion is functional and frequent; (2) subtle motion needs to be
measured over a space large enough to host a social group; (3) human appearance
and configuration variation is immense; and (4) attaching markers to the body
may prime the nature of interactions. The Panoptic Studio is a system organized
around the thesis that social interactions should be measured through the
integration of perceptual analyses over a large variety of view points. We
present a modularized system designed around this principle, consisting of
integrated structural, hardware, and software innovations. The system takes, as
input, 480 synchronized video streams of multiple people engaged in social
activities, and produces, as output, the labeled time-varying 3D structure of
anatomical landmarks on individuals in the space. Our algorithm is designed to
fuse the "weak" perceptual processes in the large number of views by
progressively generating skeletal proposals from low-level appearance cues, and
a framework for temporal refinement is also presented by associating body parts
with a reconstructed dense 3D trajectory stream. Our system and method are the
first to reconstruct the full-body motion of more than five people engaged in
social interactions without using markers. We also empirically demonstrate the
impact of the number of views in achieving this goal.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2016 20:25:04 GMT"
}
] | 2016-12-12T00:00:00 |
[
[
"Joo",
"Hanbyul",
""
],
[
"Simon",
"Tomas",
""
],
[
"Li",
"Xulong",
""
],
[
"Liu",
"Hao",
""
],
[
"Tan",
"Lei",
""
],
[
"Gui",
"Lin",
""
],
[
"Banerjee",
"Sean",
""
],
[
"Godisart",
"Timothy",
""
],
[
"Nabbe",
"Bart",
""
],
[
"Matthews",
"Iain",
""
],
[
"Kanade",
"Takeo",
""
],
[
"Nobuhara",
"Shohei",
""
],
[
"Sheikh",
"Yaser",
""
]
] |
new_dataset
| 0.95182 |
1109.1027
|
Ronald Caplan
|
R.M. Caplan and R. Carretero
|
A Two-Step High-Order Compact Scheme for the Laplacian Operator and its
Implementation in an Explicit Method for Integrating the Nonlinear
Schr\"odinger Equation
|
18 pages, 3 figures
| null | null | null |
cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe and test an easy-to-implement two-step high-order compact (2SHOC)
scheme for the Laplacian operator and its implementation into an explicit
finite-difference scheme for simulating the nonlinear Schr\"odinger equation
(NLSE). Our method relies on a compact `double-differencing' which is shown to
be computationally equivalent to standard fourth-order non-compact schemes.
Through numerical simulations of the NLSE using fourth-order Runge-Kutta, we
confirm that our scheme shows the desired fourth-order accuracy. A computation
and storage requirement comparison is made between the 2SHOC scheme and the
non-compact equivalent scheme for both the Laplacian operator alone, as well as
when implemented in the NLSE simulations. Stability bounds are also shown in
order to get maximum efficiency out of the method. We conclude that the modest
increase in storage and computation of the 2SHOC schemes is well worth the
advantages of having the schemes compact, and their ease of implementation
makes them very useful for practical applications.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2011 22:38:05 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Mar 2013 21:51:04 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Caplan",
"R. M.",
""
],
[
"Carretero",
"R.",
""
]
] |
new_dataset
| 0.951491 |
1510.07026
|
Heejin Ahn
|
Heejin Ahn and Domitilla Del Vecchio
|
Semi-autonomous Intersection Collision Avoidance through Job-shop
Scheduling
|
Submitted to Hybrid Systems: Computation and Control (HSCC) 2016
| null |
10.1145/2883817.2883830
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we design a supervisor to prevent vehicle collisions at
intersections. An intersection is modeled as an area containing multiple
conflict points where vehicle paths cross in the future. At every time step,
the supervisor determines whether there will be more than one vehicle in the
vicinity of a conflict point at the same time. If there is, then an impending
collision is detected, and the supervisor overrides the drivers to avoid
collision. A major challenge in the design of a supervisor as opposed to an
autonomous vehicle controller is to verify whether future collisions will occur
based on the current drivers' choices. This verification problem is particularly
hard due to the large number of vehicles often involved in intersection
collisions, the multitude of conflict points, and the vehicle dynamics.
In order to solve the verification problem, we translate the problem to a
job-shop scheduling problem that yields equivalent answers. The job-shop
scheduling problem can, in turn, be transformed into a mixed-integer linear
program when the vehicle dynamics are first-order dynamics, and can thus be
solved by using a commercial solver.
|
[
{
"version": "v1",
"created": "Fri, 23 Oct 2015 19:41:50 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Ahn",
"Heejin",
""
],
[
"Del Vecchio",
"Domitilla",
""
]
] |
new_dataset
| 0.994479 |
1608.05233
|
Frantisek Farka
|
Franti\v{s}ek Farka, Ekaterina Komendantskaya, and Kevin Hammond
|
Coinductive Soundness of Corecursive Type Class Resolution
|
Pre-proceedings paper presented at the 26th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2016), Edinburgh,
Scotland UK, 6-8 September 2016 (arXiv:1608.02534)
| null | null |
LOPSTR/2016/2
|
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Horn clauses and first-order resolution are commonly used to implement type
classes in Haskell. Several corecursive extensions to type class resolution
have recently been proposed, with the goal of allowing (co)recursive dictionary
construction where resolution does not terminate. This paper shows, for the
first time, that corecursive type class resolution and its extensions are
coinductively sound with respect to the greatest Herbrand models of logic
programs and that they are induc- tively unsound with respect to the least
Herbrand models. We establish incompleteness results for various fragments of
the proof system.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2016 10:37:22 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2016 13:37:20 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Farka",
"František",
""
],
[
"Komendantskaya",
"Ekaterina",
""
],
[
"Hammond",
"Kevin",
""
]
] |
new_dataset
| 0.982559 |
1610.04080
|
Philippe Wenger
|
Philippe Wenger (IRCCyN)
|
Cuspidal Robots
|
arXiv admin note: text overlap with arXiv:1002.1773
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This chapter is dedicated to the so-called cuspidal robots, i.e. those robots
that can move from one inverse geometric solution to another without meeting a
singular configuration. This feature was discovered quite recently and has
since fascinated many researchers. After a brief history of cuspidal
robots, the chapter provides the main features of cuspidal robots: explanation
of the non-singular change of posture, uniqueness domains, regions of feasible
paths, identification and classification of cuspidal robots. The chapter
focuses on 3-R orthogonal serial robots. The case of 6-dof robots and parallel
robots is discussed at the end of this chapter.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2016 13:58:59 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2016 13:26:59 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Wenger",
"Philippe",
"",
"IRCCyN"
]
] |
new_dataset
| 0.997694 |
1612.02468
|
Frederic Le Mouel
|
Roya Golchay (CITI), Fr\'ed\'eric Le Mou\"el (CITI), Julien Ponge
(CITI), Nicolas Stouls (CITI)
|
Spontaneous Proximity Clouds: Making Mobile Devices to Collaborate for
Resource and Data Sharing
|
in Proceedings of the 12th EAI International Conference on
Collaborative Computing: Networking, Applications and Worksharing
(CollaborateCom'2016), Nov 2016, Beijing, China
| null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The base motivation of Mobile Cloud Computing was empowering mobile devices
by application offloading onto powerful cloud resources. However, this goal
cannot be entirely reached because of the high offloading cost imposed by the
long physical distance between the mobile device and the cloud. To address this
issue, we propose offloading applications onto a nearby mobile cloud composed
of the mobile devices in the vicinity: a Spontaneous Proximity Cloud. We
introduce our proposed dynamic, ant-inspired, bi-objective offloading
middleware, ACOMMA, and explain its extension to perform close mobile
application offloading. With the learning-based offloading decision-making
process of ACOMMA, combined with collaborative resource sharing, the mobile
devices can cooperate for decision cache sharing. We evaluate the performance
of ACOMMA in collaborative mode with real benchmarks, the Face Recognition and
Monte-Carlo algorithms, and achieve a 50% execution time gain.
|
[
{
"version": "v1",
"created": "Thu, 3 Nov 2016 10:29:30 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Golchay",
"Roya",
"",
"CITI"
],
[
"Mouël",
"Frédéric Le",
"",
"CITI"
],
[
"Ponge",
"Julien",
"",
"CITI"
],
[
"Stouls",
"Nicolas",
"",
"CITI"
]
] |
new_dataset
| 0.999304 |
1612.02498
|
Odemir Bruno PhD
|
Jo\~ao B. Florindo, Odemir M. Bruno
|
Discrete Schroedinger Transform For Texture Recognition
|
15 pages, 7 figures
| null | null | null |
cs.CV physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a new procedure to extract features of grey-level texture
images based on the discrete Schroedinger transform. This is a non-linear
transform where the image is mapped as the initial probability distribution of
a wave function and such distribution evolves in time following the
Schroedinger equation from Quantum Mechanics. The features are provided by
statistical moments of the distribution measured at different times. The
proposed method is applied to the classification of three databases of textures
used for benchmark and compared to other well-known texture descriptors in the
literature, such as textons, local binary patterns, multifractals, among
others. All of them are outperformed by the proposed method in terms of
percentage of images correctly classified. The proposal is also applied to the
identification of plant species using scanned images of leaves and again it
outperforms other texture methods. A test with images affected by Gaussian and
"salt \& pepper" noise is also carried out, also with the best performance
achieved by the Schroedinger descriptors.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2016 00:49:18 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Florindo",
"João B.",
""
],
[
"Bruno",
"Odemir M.",
""
]
] |
new_dataset
| 0.999047 |
1612.02509
|
Ayushi Sinha
|
Ayushi Sinha, Michael Kazhdan
|
Geodesics using Waves: Computing Distances using Wave Propagation
|
10 pages, 14 figures
| null | null | null |
cs.CG cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a new method for computing approximate geodesic
distances. We introduce the wave method for approximating geodesic distances
from a point on a manifold mesh. Our method involves the solution of two linear
systems of equations. One system of equations is solved repeatedly to propagate
the wave on the entire mesh, and one system is solved once after wave
propagation is complete in order to compute the approximate geodesic distances
up to an additive constant. However, these systems need to be pre-factored only
once, and can be solved efficiently at each iteration. All of our tests
required approximately between 300 and 400 iterations, which were completed in
a few seconds. Therefore, this method can approximate geodesic distances
quickly, and the approximation is highly accurate.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2016 01:57:47 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Sinha",
"Ayushi",
""
],
[
"Kazhdan",
"Michael",
""
]
] |
new_dataset
| 0.995248 |
1612.02603
|
Atsushi Ooka
|
Atsushi Ooka and Suyong Eum and Shingo Ata and Masayuki Murata
|
Compact CAR: Low-Overhead Cache Replacement Policy for an ICN Router
|
15 pages, 29 figures, submitted to Computer Communications
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information-centric networking (ICN) has gained attention from network
research communities due to its capability of efficient content dissemination.
In-network caching function in ICN plays an important role to achieve the
design motivation. However, many researchers on in-network caching have focused
on where to cache rather than how to cache: the former is known as contents
deployment in the network and the latter is known as cache replacement in an
ICN element. Although cache replacement has previously been intensively researched
in the context of web caching and content delivery networks, the
conventional approaches cannot be directly applied to ICN due to the fine
granularity of cacheable items in ICN, which eventually changes the access
patterns.
In this paper, we argue that ICN requires a novel cache replacement algorithm
to fulfill the requirements in the design of high performance ICN element.
Then, we propose a novel cache replacement algorithm to satisfy the
requirements named Compact CLOCK with Adaptive Replacement (Compact CAR), which
can reduce the consumption of cache memory to one-tenth compared to
conventional approaches.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2016 11:38:26 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Ooka",
"Atsushi",
""
],
[
"Eum",
"Suyong",
""
],
[
"Ata",
"Shingo",
""
],
[
"Murata",
"Masayuki",
""
]
] |
new_dataset
| 0.999736 |
1612.02649
|
Judy Hoffman
|
Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell
|
FCNs in the Wild: Pixel-level Adversarial and Constraint-based
Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fully convolutional models for dense prediction have proven successful for a
wide range of visual tasks. Such models perform well in a supervised setting,
but performance can be surprisingly poor under domain shifts that appear mild
to a human observer. For example, training on one city and testing on another
in a different geographic region and/or weather condition may result in
significantly degraded performance due to pixel-level distribution shift. In
this paper, we introduce the first domain adaptive semantic segmentation
method, proposing an unsupervised adversarial approach to pixel prediction
problems. Our method consists of both global and category specific adaptation
techniques. Global domain alignment is performed using a novel semantic
segmentation network with fully convolutional domain adversarial learning. This
initially adapted space then enables category specific adaptation through a
generalization of constrained weak learning, with explicit transfer of the
spatial layout from the source to the target domains. Our approach outperforms
baselines across different settings on multiple large-scale datasets, including
adapting across various real city environments, different synthetic
sub-domains, from simulated to real environments, and on a novel large-scale
dash-cam dataset.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2016 14:11:10 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Hoffman",
"Judy",
""
],
[
"Wang",
"Dequan",
""
],
[
"Yu",
"Fisher",
""
],
[
"Darrell",
"Trevor",
""
]
] |
new_dataset
| 0.957494 |
1612.02675
|
Karthik Gopinath
|
Karthik Gopinath and Jayanthi Sivaswamy
|
Domain knowledge assisted cyst segmentation in OCT retinal images
|
The paper was accepted as an oral presentation in MICCAI-2015 OPTIMA
Cyst Segmentation Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D imaging modalities are becoming increasingly popular and relevant in
retinal imaging owing to their effectiveness in highlighting structures in
sub-retinal layers. OCT is one such modality which has great importance in the
context of analysis of cystoid structures in subretinal layers. Signal to noise
ratio (SNR) of the images obtained from OCT is low, and hence automated and
accurate determination of cystoid structures from OCT is a challenging task. We
propose an automated method for detecting/segmenting cysts in 3D OCT volumes.
The proposed method is biologically inspired and fast aided by the domain
knowledge about the cystoid structures. An ensemble learning method, Random
Forests, is learnt for classification of detected regions into cyst regions. The
method achieves detection and segmentation in a unified setting. We believe the
proposed approach, with further improvements, can be a promising starting point
for a more robust approach. This method, validated against the training set,
achieves a mean Dice coefficient of 0.3893 with a standard deviation of 0.2987.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2016 14:59:07 GMT"
}
] | 2016-12-09T00:00:00 |
[
[
"Gopinath",
"Karthik",
""
],
[
"Sivaswamy",
"Jayanthi",
""
]
] |
new_dataset
| 0.997862 |
1306.1295
|
Yi Wang
|
Yi Wang
|
MathGR: a tensor and GR computation package to keep it simple
|
12 pages, 2 figures; v2: version to match updated software; v3: Ibp
part updated to match behavior of code
| null | null | null |
cs.MS astro-ph.CO gr-qc hep-th physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the MathGR package, written in Mathematica. The package can
manipulate tensor and GR calculations with either abstract or explicit indices,
simplify tensors with permutational symmetries, decompose tensors from abstract
indices to partially or completely explicit indices and convert partial
derivatives into total derivatives. Frequently used GR tensors and a model of
FRW universe with ADM type perturbations are predefined. The package is built
around the philosophy to "keep it simple", and makes use of the latest tensor
technologies of Mathematica.
|
[
{
"version": "v1",
"created": "Thu, 6 Jun 2013 05:08:44 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Aug 2014 17:42:34 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Dec 2016 00:50:35 GMT"
}
] | 2016-12-08T00:00:00 |
[
[
"Wang",
"Yi",
""
]
] |
new_dataset
| 0.974712 |
1611.07383
|
Hao Zhuang
|
Hao Zhuang and Florian Pydde
|
A Non-Intrusive and Context-Based Vulnerability Scoring Framework for
Cloud Services
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the severity of vulnerabilities within cloud services is
particularly important for today's service administrators. Although many systems,
e.g., CVSS, have been built to evaluate and score the severity of
vulnerabilities for administrators, the scoring schemes employed by these
systems fail to take into account the contextual information of specific
services having these vulnerabilities, such as what roles they play in a
particular service. Such a deficiency makes resulting scores unhelpful. This
paper presents a practical framework, NCVS, that offers automatic and
contextual scoring mechanism to evaluate the severity of vulnerabilities for a
particular service. Specifically, for a given service S, NCVS first
automatically collects S's contextual information, including topology,
configurations, vulnerabilities and their dependencies. Then, NCVS uses the
collected information to build a contextual dependency graph, named CDG, to
model S's context. Finally, NCVS scores and ranks all the vulnerabilities in S by
analyzing S's context, such as what roles the vulnerabilities play in S, and how
critical they affect the functionality of S. NCVS is novel and useful, because
1) context-based vulnerability scoring results are highly relevant and
meaningful for administrators to understand each vulnerability's importance
specific to the target service; and 2) the workflow of NCVS does not need
instrumentation or modifications to any source code. Our experimental results
demonstrate that NCVS can obtain more relevant vulnerability scoring results
than comparable systems, such as CVSS.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2016 16:09:12 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 09:00:11 GMT"
}
] | 2016-12-08T00:00:00 |
[
[
"Zhuang",
"Hao",
""
],
[
"Pydde",
"Florian",
""
]
] |
new_dataset
| 0.974976 |
1609.09444
|
Arnab Ghosh
|
Arnab Ghosh and Viveka Kulharia and Amitabha Mukerjee and Vinay
Namboodiri and Mohit Bansal
|
Contextual RNN-GANs for Abstract Reasoning Diagram Generation
|
To Appear in AAAI-17 and NIPS Workshop on Adversarial Training
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding, predicting, and generating object motions and transformations
is a core problem in artificial intelligence. Modeling sequences of evolving
images may provide better representations and models of motion and may
ultimately be used for forecasting, simulation, or video generation.
Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in
complex patterns and one needs to infer the underlying pattern sequence and
generate the next image in the sequence. For this, we develop a novel
Contextual Generative Adversarial Network based on Recurrent Neural Networks
(Context-RNN-GANs), where both the generator and the discriminator modules are
based on contextual history (modeled as RNNs) and the adversarial discriminator
guides the generator to produce realistic images for the particular time step
in the image sequence. We evaluate the Context-RNN-GAN model (and its variants)
on a novel dataset of Diagrammatic Abstract Reasoning, where it performs
competitively with 10th-grade human performance but there is still scope for
interesting improvements as compared to college-grade human performance. We
also evaluate our model on a standard video next-frame prediction task,
achieving improved performance over comparable state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2016 17:56:32 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2016 13:14:09 GMT"
}
] | 2016-12-07T00:00:00 |
[
[
"Ghosh",
"Arnab",
""
],
[
"Kulharia",
"Viveka",
""
],
[
"Mukerjee",
"Amitabha",
""
],
[
"Namboodiri",
"Vinay",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.987093 |
1612.01593
|
Francesco De Pellegrini Dr.
|
Francesco De Pellegrini, Antonio Massaro, Leonardo Goratti and Rachid
El-Azouzi
|
Competitive Caching of Contents in 5G Edge Cloud Networks
|
12 pages
| null | null | null |
cs.GT cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The surge of mobile data traffic forces network operators to cope with
capacity shortage. The deployment of small cells in 5G networks is meant to
reduce latency, backhaul traffic and increase radio access capacity. In this
context, mobile edge computing technology will be used to manage dedicated
cache space in the radio access network. Thus, mobile network operators will be
able to provision OTT content providers with new caching services to enhance
the quality of experience of their customers on the move. In turn, the cache
memory in the mobile edge network will become a shared resource. Hence, we
study a competitive caching scheme where contents are stored at a given price set
by the mobile network operator. We first formulate a resource allocation
problem for a tagged content provider seeking to minimize the expected missed
cache rate. The optimal caching policy is derived accounting for popularity and
availability of contents, the spatial distribution of small cells, and the
caching strategies of competing content providers. It is shown to induce a
specific order on contents to be cached based on their popularity and
availability. Next, we study a game among content providers in the form of a
generalized Kelly mechanism with bounded strategy sets and heterogeneous
players. Existence and uniqueness of the Nash equilibrium are proved. Finally,
extensive numerical results validate and characterize the performance of the
model.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2016 23:46:19 GMT"
}
] | 2016-12-07T00:00:00 |
[
[
"De Pellegrini",
"Francesco",
""
],
[
"Massaro",
"Antonio",
""
],
[
"Goratti",
"Leonardo",
""
],
[
"El-Azouzi",
"Rachid",
""
]
] |
new_dataset
| 0.993969 |
1612.01638
|
EPTCS
|
Timo Kehrer (Politecnico di Milano), Christos Tsigkanos (Politecnico
di Milano), Carlo Ghezzi (Politecnico di Milano)
|
An EMOF-Compliant Abstract Syntax for Bigraphs
|
In Proceedings GaM 2016, arXiv:1612.01053
|
EPTCS 231, 2016, pp. 16-30
|
10.4204/EPTCS.231.2
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bigraphs are an emerging modeling formalism for structures in ubiquitous
computing. Besides an algebraic notation, which can be adopted to provide an
algebraic syntax for bigraphs, the bigraphical theory introduces a visual
concrete syntax which is intuitive and unambiguous at the same time; the
standard visual notation can be customized and thus tailored to domain-specific
requirements. However, in contrast to modeling standards based on the
Meta-Object Facility (MOF) and domain-specific languages typically used in
model-driven engineering (MDE), the bigraphical theory lacks a precise
definition of an abstract syntax for bigraphical modeling languages. As a
consequence, available modeling and analysis tools use proprietary formats for
representing bigraphs internally and persistently, which hampers the exchange
of models across tool boundaries. Moreover, tools can be hardly integrated with
standard MDE technologies in order to build sophisticated tool chains and
modeling environments, as required for systematic engineering of large systems
or fostering experimental work to evaluate the bigraphical theory in real-world
applications. To overcome this situation, we propose an abstract syntax for
bigraphs which is compliant to the Essential MOF (EMOF) standard defined by the
Object Management Group (OMG). We use typed graphs as a formal underpinning of
EMOF-based models and present a canonical mapping which maps bigraphs to typed
graphs in a natural way. We also discuss application-specific variation points
in the graph-based representation of bigraphs. Following standard techniques
from software product line engineering, we present a framework to customize the
graph-based representation to support a variety of application scenarios.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2016 02:36:13 GMT"
}
] | 2016-12-07T00:00:00 |
[
[
"Kehrer",
"Timo",
"",
"Politecnico di Milano"
],
[
"Tsigkanos",
"Christos",
"",
"Politecnico\n di Milano"
],
[
"Ghezzi",
"Carlo",
"",
"Politecnico di Milano"
]
] |
new_dataset
| 0.977034 |
1612.01655
|
Xin Yang
|
Xin Yang, Lequan Yu, Lingyun Wu, Yi Wang, Dong Ni, Jing Qin, Pheng-Ann
Heng
|
Fine-grained Recurrent Neural Networks for Automatic Prostate
Segmentation in Ultrasound Images
|
To appear in AAAI Conference 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Boundary incompleteness raises great challenges to automatic prostate
segmentation in ultrasound images. Shape prior can provide strong guidance in
estimating the missing boundary, but traditional shape models often suffer from
hand-crafted descriptors and local information loss in the fitting procedure.
In this paper, we attempt to address those issues with a novel framework. The
proposed framework can seamlessly integrate feature extraction and shape prior
exploring, and estimate the complete boundary with a sequential manner. Our
framework is composed of three key modules. Firstly, we serialize the static 2D
prostate ultrasound images into dynamic sequences and then predict prostate
shapes by sequentially exploring shape priors. Intuitively, we propose to learn
the shape prior with the biologically plausible Recurrent Neural Networks
(RNNs). This module is corroborated to be effective in dealing with the
boundary incompleteness. Secondly, to alleviate the bias caused by different
serialization manners, we propose a multi-view fusion strategy to merge shape
predictions obtained from different perspectives. Thirdly, we further implant
the RNN core into a multiscale Auto-Context scheme to successively refine the
details of the shape prediction map. With extensive validation on challenging
prostate ultrasound images, our framework bridges severe boundary
incompleteness and achieves the best performance in prostate boundary
delineation when compared with several advanced methods. Additionally, our
approach is general and can be extended to other medical image segmentation
tasks, where boundary incompleteness is one of the main challenges.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2016 03:56:07 GMT"
}
] | 2016-12-07T00:00:00 |
[
[
"Yang",
"Xin",
""
],
[
"Yu",
"Lequan",
""
],
[
"Wu",
"Lingyun",
""
],
[
"Wang",
"Yi",
""
],
[
"Ni",
"Dong",
""
],
[
"Qin",
"Jing",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.99531 |
1612.01744
|
Laurent Besacier
|
Alexandre Berard and Olivier Pietquin and Christophe Servan and
Laurent Besacier
|
Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text
Translation
|
accepted to NIPS workshop on End-to-end Learning for Speech and Audio
Processing
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a first attempt to build an end-to-end speech-to-text
translation system, which does not use source language transcription during
learning or decoding. We propose a model for direct speech-to-text translation,
which gives promising results on a small French-English synthetic corpus.
Relaxing the need for source language transcription would drastically change
the data collection methodology in speech translation, especially in
under-resourced scenarios. For instance, in the former project DARPA TRANSTAC
(speech translation from spoken Arabic dialects), a large effort was devoted to
the collection of speech transcripts (and a prerequisite to obtain transcripts
was often a detailed transcription guide for languages with little standardized
spelling). Now, if end-to-end approaches for speech-to-text translation are
successful, one might consider collecting data by asking bilingual speakers to
directly utter speech in the source language from target language text
utterances. Such an approach has the advantage to be applicable to any
unwritten (source) language.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2016 10:48:56 GMT"
}
] | 2016-12-07T00:00:00 |
[
[
"Berard",
"Alexandre",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Servan",
"Christophe",
""
],
[
"Besacier",
"Laurent",
""
]
] |
new_dataset
| 0.95082 |
1612.01749
|
Almog Lahav
|
Almog Lahav, Tanya Chernyakova (Student Member, IEEE), Yonina C. Eldar
(Fellow, IEEE)
|
FoCUS: Fourier-based Coded Ultrasound
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern imaging systems typically use single-carrier short pulses for
transducer excitation. Coded signals together with pulse compression are
successfully used in radar and communication to increase the amount of
transmitted energy. Previous research verified significant improvement in SNR
and imaging depth for ultrasound imaging with coded signals. Since pulse
compression needs to be applied at each transducer element, the implementation
of coded excitation (CE) in array imaging is computationally complex. Applying
pulse compression on the beamformer output reduces the computational load but
also degrades both the axial and lateral point spread function (PSF)
compromising image quality. In this work we present an approach for efficient
implementation of pulse compression by integrating it into frequency domain
beamforming. This method leads to significant reduction in the amount of
computations without affecting axial resolution. The lateral resolution is
dictated by the factor of savings in computational load. We verify the
performance of our method on a Verasonics imaging system and compare the
resulting images to time-domain processing. We show that up to 77 fold
reduction in computational complexity can be achieved in typical imaging
setups. The efficient implementation makes CE a feasible approach in array
imaging paving the way to enhanced SNR as well as improved imaging depth and
frame-rate.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2016 10:55:49 GMT"
}
] | 2016-12-07T00:00:00 |
[
[
"Lahav",
"Almog",
"",
"Student Member, IEEE"
],
[
"Chernyakova",
"Tanya",
"",
"Student Member, IEEE"
],
[
"Eldar",
"Yonina C.",
"",
"Fellow, IEEE"
]
] |
new_dataset
| 0.997721 |
1512.07815
|
Pankaj Pansari
|
Pankaj Pansari, M. Pawan Kumar
|
Truncated Max-of-Convex Models
|
Under review at CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Truncated convex models (TCM) are a special case of pairwise random fields
that have been widely used in computer vision. However, by restricting the
order of the potentials to be at most two, they fail to capture useful image
statistics. We propose a natural generalization of TCM to high-order random
fields, which we call truncated max-of-convex models (TMCM). The energy
function of TMCM consists of two types of potentials: (i) unary potential, which
has no restriction on its form; and (ii) clique potential, which is the sum of
the m largest truncated convex distances over all label pairs in a clique. The
use of a convex distance function encourages smoothness, while truncation
allows for discontinuities in the labeling. By using m > 1, TMCM provides
robustness towards errors in the definition of the cliques. In order to
minimize the energy function of a TMCM over all possible labelings, we design
an efficient st-MINCUT based range expansion algorithm. We prove the accuracy
of our algorithm by establishing strong multiplicative bounds for several
special cases of interest. Using synthetic and standard real data sets, we
demonstrate the benefit of our high-order TMCM over pairwise TCM, as well as
the benefit of our range expansion algorithm over other st-MINCUT based
approaches.
|
[
{
"version": "v1",
"created": "Thu, 24 Dec 2015 13:52:44 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Dec 2016 15:56:12 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Pansari",
"Pankaj",
""
],
[
"Kumar",
"M. Pawan",
""
]
] |
new_dataset
| 0.992231 |
1602.01537
|
Vachik Dave
|
Vachik S. Dave and Mohammad Al Hasan
|
TopCom: Index for Shortest Distance Query in Directed Graph
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding shortest distance between two vertices in a graph is an important
problem due to its numerous applications in diverse domains, including
geo-spatial databases, social network analysis, and information retrieval.
Classical algorithms (such as, Dijkstra) solve this problem in polynomial time,
but these algorithms cannot provide real-time response for a large number of
bursty queries on a large graph. So, indexing based solutions that pre-process
the graph for efficiently answering (exactly or approximately) a large number
of distance queries in real-time is becoming increasingly popular. Existing
solutions have varying performance in terms of index size, index building time,
query time, and accuracy. In this work, we propose TopCom, a novel
indexing-based solution for exactly answering distance queries. Our experiments
with two of the existing state-of-the-art methods (IS-Label and TreeMap) show
the superiority of TopCom over these two methods considering scalability and
query time. Besides, the indexing of TopCom exploits the DAG (directed acyclic
graph) structure in the graph, which makes it significantly faster than the
existing methods if the SCCs (strongly connected component) of the input graph
are relatively small.
|
[
{
"version": "v1",
"created": "Thu, 4 Feb 2016 02:02:05 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 02:56:53 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Dave",
"Vachik S.",
""
],
[
"Hasan",
"Mohammad Al",
""
]
] |
new_dataset
| 0.965438 |
1603.02636
|
Lucas Beyer
|
Lucas Beyer and Alexander Hermans and Bastian Leibe
|
DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range
Data
|
Lucas Beyer and Alexander Hermans contributed equally
| null | null | null |
cs.RO cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the DROW detector, a deep learning based detector for 2D range
data. Laser scanners are lighting invariant, provide accurate range data, and
typically cover a large field of view, making them interesting sensors for
robotics applications. So far, research on detection in laser range data has
been dominated by hand-crafted features and boosted classifiers, potentially
losing performance due to suboptimal design choices. We propose a Convolutional
Neural Network (CNN) based detector for this task. We show how to effectively
apply CNNs for detection in 2D range data, and propose a depth preprocessing
step and voting scheme that significantly improve CNN performance. We
demonstrate our approach on wheelchairs and walkers, obtaining state of the art
detection results. Apart from the training data, none of our design choices
limits the detector to these two classes, though. We provide a ROS node for our
detector and release our dataset containing 464k laser scans, out of which 24k
were annotated.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2016 19:39:19 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 18:06:28 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Beyer",
"Lucas",
""
],
[
"Hermans",
"Alexander",
""
],
[
"Leibe",
"Bastian",
""
]
] |
new_dataset
| 0.956514 |
1604.06182
|
Haroon Idrees
|
Haroon Idrees, Amir R. Zamir, Yu-Gang Jiang, Alex Gorban, Ivan Laptev,
Rahul Sukthankar, Mubarak Shah
|
The THUMOS Challenge on Action Recognition for Videos "in the Wild"
|
Preprint submitted to Computer Vision and Image Understanding
| null |
10.1016/j.cviu.2016.10.018
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically recognizing and localizing wide ranges of human actions has
crucial importance for video understanding. Towards this goal, the THUMOS
challenge was introduced in 2013 to serve as a benchmark for action
recognition. Until then, video action recognition, including THUMOS challenge,
had focused primarily on the classification of pre-segmented (i.e., trimmed)
videos, which is an artificial task. In THUMOS 2014, we elevated action
recognition to a more practical level by introducing temporally untrimmed
videos. These also include `background videos' which share similar scenes and
backgrounds as action videos, but are devoid of the specific actions. The three
editions of the challenge organized in 2013--2015 have made THUMOS a common
benchmark for action classification and detection and the annual challenge is
widely attended by teams from around the world.
In this paper we describe the THUMOS benchmark in detail and give an overview
of data collection and annotation procedures. We present the evaluation
protocols used to quantify results in the two THUMOS tasks of action
classification and temporal detection. We also present results of submissions
to the THUMOS 2015 challenge and review the participating approaches.
Additionally, we include a comprehensive empirical study evaluating the
differences in action recognition between trimmed and untrimmed videos, and how
well methods trained on trimmed videos generalize to untrimmed videos. We
conclude by proposing several directions and improvements for future THUMOS
challenges.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 05:08:59 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Idrees",
"Haroon",
""
],
[
"Zamir",
"Amir R.",
""
],
[
"Jiang",
"Yu-Gang",
""
],
[
"Gorban",
"Alex",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Sukthankar",
"Rahul",
""
],
[
"Shah",
"Mubarak",
""
]
] |
new_dataset
| 0.999326 |
1604.08336
|
Somaiyeh Mahmoud Zadeh
|
Somaiyeh Mahmoud.Zadeh, David M.W Powers, Karl Sammut
|
An Autonomous Reactive Architecture for Efficient AUV Mission Time
Management in Realistic Severe Ocean Environment
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today AUVs operation still remains restricted to very particular tasks with
low real autonomy due to battery restrictions. Efficient motion planning and
mission scheduling are principal requirements toward advanced autonomy and
enable the vehicle to handle long-range operations. A single vehicle cannot
carry out all tasks in a large scale terrain; hence, it needs a certain degree
of autonomy in performing robust decision making and awareness of the
mission/environment to trade-off between tasks to be completed, managing the
available time, and ensuring safe deployment at all stages of the mission. In
this respect, this research introduces a modular control architecture including
higher/lower level planners, in which the higher level module is responsible
for increasing mission productivity by assigning prioritized tasks while
guiding the vehicle toward its final destination in a terrain covered by
several waypoints; and the lower level is responsible for vehicle's safe
deployment in a smaller scale encountering time-varying ocean current and
different uncertain static/moving obstacles similar to actual ocean
environment. Synchronization between higher and lower level modules is
efficiently configured to manage the mission time and to guarantee on-time
termination of the mission. The performance and accuracy of two higher and
lower level modules are tested and validated using ant colony and firefly
optimization algorithm, respectively. After all, the overall performance of the
architecture is investigated in 10 different mission scenarios. The analysis of
the captured results from different simulated missions confirms the efficiency
and inherent robustness of the introduced architecture in efficient time
management, safe deployment, and providing beneficial operation by properly
prioritizing the tasks in accordance with the mission time.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2016 07:48:49 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2016 23:24:06 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Zadeh",
"Somaiyeh Mahmoud.",
""
],
[
"Powers",
"David M. W",
""
],
[
"Sammut",
"Karl",
""
]
] |
new_dataset
| 0.952969 |
1607.02192
|
Ankur Taly
|
Andres Erbsen, Asim Shankar, Ankur Taly
|
Distributed Authorization in Vanadium
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this tutorial, we present an authorization model for distributed systems
that operate with limited internet connectivity. Reliable internet access
remains a luxury for a majority of the world's population. Even for those who
can afford it, a dependence on internet connectivity may lead to sub-optimal
user experiences. With a focus on decentralized deployment, we present an
authorization model that is suitable for scenarios where devices right next to
each other (such as a sensor or a friend's phone) should be able to communicate
securely in a peer-to-peer manner. The model has been deployed as part of an
open-source distributed application framework called Vanadium. As part of this
tutorial, we survey some of the key ideas and techniques used in distributed
authorization, and explain how they are combined in the design of our model.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2016 23:00:25 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 19:47:36 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Erbsen",
"Andres",
""
],
[
"Shankar",
"Asim",
""
],
[
"Taly",
"Ankur",
""
]
] |
new_dataset
| 0.991205 |
1611.08844
|
Benedetta Franceschiello Dr.
|
B. Franceschiello, A. Sarti, G. Citti
|
A neuro-mathematical model for geometrical optical illusions
|
13 pages, 38 figures divided in 15 groups
| null | null | null |
cs.CV q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geometrical optical illusions have been object of many studies due to the
possibility they offer to understand the behaviour of low-level visual
processing. They consist of situations in which the perceived geometrical
properties of an object differ from those of the object in the visual stimulus.
Starting from the geometrical model introduced by Citti and Sarti in [3], we
provide a mathematical model and a computational algorithm that allow us to
interpret these phenomena and to qualitatively reproduce the perceived
misperception.
|
[
{
"version": "v1",
"created": "Sun, 27 Nov 2016 13:52:24 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Franceschiello",
"B.",
""
],
[
"Sarti",
"A.",
""
],
[
"Citti",
"G.",
""
]
] |
new_dataset
| 0.993867 |
1612.00835
|
Patsorn Sangkloy
|
Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
|
Scribbler: Controlling Deep Image Synthesis with Sketch and Color
|
13 pages, 14 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, there have been several promising methods to generate realistic
imagery from deep convolutional networks. These methods sidestep the
traditional computer graphics rendering pipeline and instead generate imagery
at the pixel level by learning from large collections of photos (e.g. faces or
bedrooms). However, these methods are of limited utility because it is
difficult for a user to control what the network produces. In this paper, we
propose a deep adversarial image synthesis architecture that is conditioned on
sketched boundaries and sparse color strokes to generate realistic cars,
bedrooms, or faces. We demonstrate a sketch based image synthesis system which
allows users to 'scribble' over the sketch to indicate preferred color for
objects. Our network can then generate convincing images that satisfy both the
color and the sketch constraints of the user. The network is feed-forward,
which allows users to see the effect of their edits in real time. We compare to
recent work on sketch to image synthesis and show that our approach can
generate more realistic, more diverse, and more controllable outputs. The
architecture is also effective at user-guided colorization of grayscale images.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 20:53:01 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 20:06:57 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Sangkloy",
"Patsorn",
""
],
[
"Lu",
"Jingwan",
""
],
[
"Fang",
"Chen",
""
],
[
"Yu",
"Fisher",
""
],
[
"Hays",
"James",
""
]
] |
new_dataset
| 0.999081 |
1612.00866
|
John Beieler
|
John Beieler
|
Creating a Real-Time, Reproducible Event Dataset
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generation of political event data has remained much the same since the
mid-1990s, both in terms of data acquisition and the process of coding text
into data. Since the 1990s, however, there have been significant improvements
in open-source natural language processing software and in the availability of
digitized news content. This paper presents a new, next-generation event
dataset, named Phoenix, that builds from these and other advances. This dataset
includes improvements in the underlying news collection process and event
coding software, along with the creation of a general processing pipeline
necessary to produce daily-updated data. This paper provides a face validity
check by briefly examining the data for the conflict in Syria, and a
comparison between Phoenix and the Integrated Crisis Early Warning System data.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 21:28:00 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Beieler",
"John",
""
]
] |
new_dataset
| 0.999489 |
1612.00914
|
Minjia Shi
|
Minjia Shi, Daitao Huang, Patrick Sole
|
Some ternary cubic two-weight codes
|
11 pages, submitted on 2 December. arXiv admin note: text overlap
with arXiv:1612.00118
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study trace codes with defining set $L,$ a subgroup of the multiplicative
group of an extension of degree $m$ of the alphabet ring
$\mathbb{F}_3+u\mathbb{F}_3+u^{2}\mathbb{F}_{3},$ with $u^{3}=1.$ These codes
are abelian, and their ternary images are quasi-cyclic of co-index three
(a.k.a. cubic codes). Their Lee weight distributions are computed by using
Gauss sums. These codes have three nonzero weights when $m$ is singly-even and
$|L|=\frac{3^{3m}-3^{2m}}{2}.$ When $m$ is odd, and
$|L|=\frac{3^{3m}-3^{2m}}{2}$, or $|L|={3^{3m}-3^{2m}}$ and $m$ is a positive
integer, we obtain two new infinite families of two-weight codes which are
optimal. Applications of the image codes to secret sharing schemes are also
given.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2016 02:13:51 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Huang",
"Daitao",
""
],
[
"Sole",
"Patrick",
""
]
] |
new_dataset
| 0.997332 |
1612.00958
|
Anna Lubiw
|
Anna Lubiw and Vinayak Pathak
|
Reconfiguring Ordered Bases of a Matroid
| null | null | null | null |
cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a matroid with an ordered (or "labelled") basis, a basis exchange step
removes one element with label $l$ and replaces it by a new element that
results in a new basis, with the new element assigned label $l$. We prove
that one labelled basis can be reconfigured to another if and only if for every
label, the initial and final elements with that label lie in the same connected
component of the matroid. Furthermore, we prove that when the reconfiguration
is possible, the number of basis exchange steps required is $O(r^{1.5})$ for a
rank $r$ matroid. For a graphic matroid we improve the bound to $O(r \log r)$.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2016 11:34:53 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Lubiw",
"Anna",
""
],
[
"Pathak",
"Vinayak",
""
]
] |
new_dataset
| 0.999626 |
1612.00962
|
Leen De Baets
|
Leen De Baets, Joeri Ruyssinck, Thomas Peiffer, Johan Decruyenaere,
Filip De Turck, Femke Ongenae, Tom Dhaene
|
Positive blood culture detection in time series data using a BiLSTM
network
| null | null | null | null |
cs.LG cs.NE q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The presence of bacteria or fungi in the bloodstream of patients is abnormal
and can lead to life-threatening conditions. A computational model, based on a
bidirectional long short-term memory artificial neural network, is explored to
assist doctors in the intensive care unit in predicting whether examination of
blood cultures of patients will return positive. As input it uses nine
monitored clinical parameters, presented as time series data, collected from
2177 ICU admissions at the Ghent University Hospital. Our main goal is to
determine if general machine learning methods and, more specifically, temporal
models can be used to create an early detection system. This preliminary
research obtains an area under the precision-recall curve of 71.95%, proving
the potential of temporal neural networks in this context.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2016 12:16:21 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"De Baets",
"Leen",
""
],
[
"Ruyssinck",
"Joeri",
""
],
[
"Peiffer",
"Thomas",
""
],
[
"Decruyenaere",
"Johan",
""
],
[
"De Turck",
"Filip",
""
],
[
"Ongenae",
"Femke",
""
],
[
"Dhaene",
"Tom",
""
]
] |
new_dataset
| 0.994373 |
1612.00969
|
Subhro Roy
|
Subhro Roy and Dan Roth
|
Unit Dependency Graph and its Application to Arithmetic Word Problem
Solving
|
AAAI 2017
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Math word problems provide a natural abstraction to a range of natural
language understanding problems that involve reasoning about quantities, such
as interpreting election results, news about casualties, and the financial
section of a newspaper. Units associated with the quantities often provide
information that is essential to support this reasoning. This paper proposes a
principled way to capture and reason about units and shows how it can benefit
an arithmetic word problem solver. This paper presents the concept of Unit
Dependency Graphs (UDGs), which provide a compact representation of the
dependencies between the units of numbers mentioned in a given problem.
Inducing the UDG alleviates the brittleness of the unit extraction system and
allows for a natural way to leverage domain knowledge about unit compatibility
for word problem solving. We introduce a decomposed model for inducing UDGs with minimal
additional annotations, and use it to augment the expressions used in the
arithmetic word problem solver of (Roy and Roth 2015) via a constrained
inference framework. We show that the introduction of UDGs reduces the error of
the solver by over 10%, surpassing all existing systems for solving arithmetic
word problems. In addition, it also makes the system more robust to adaptation
to new vocabulary and equation forms.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2016 14:14:11 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Roy",
"Subhro",
""
],
[
"Roth",
"Dan",
""
]
] |
new_dataset
| 0.997978 |
1612.00993
|
Tobias Glocker
|
Tobias Glocker, Timo Mantere and Mohammed Elmusrati
|
A Protocol for a Secure Remote Keyless Entry System Applicable in
Vehicles using Symmetric-Key Cryptography
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In our modern society, comfort has become a standard. This comfort, especially
in cars, can only be achieved by equipping the car with more electronic
devices. Some of these electronic devices must cooperate with each other and
thus require a communication channel, which can be wired or wireless. These
days, it would be hard to sell a new car operating with traditional keys.
Almost all modern cars can be locked or unlocked with a Remote Keyless System,
which consists of a key fob that communicates wirelessly with the car
transceiver responsible for locking and unlocking the car. However, there are
several threats to wireless communication channels.
This paper describes the possible attacks against a Remote Keyless System and
introduces a secure protocol as well as a lightweight Symmetric Encryption
Algorithm for a Remote Keyless Entry System applicable in vehicles.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2016 18:01:43 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Glocker",
"Tobias",
""
],
[
"Mantere",
"Timo",
""
],
[
"Elmusrati",
"Mohammed",
""
]
] |
new_dataset
| 0.992591 |
1612.01044
|
Yuanxin Wu
|
Yuanxin Wu, Danping Zou, Peilin Liu and Wenxian Yu
|
Dynamic Magnetometer Calibration and Alignment to Inertial Sensors by
Kalman Filtering
|
IEEE Trans. on Control System Technology, 2016
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Magnetometer and inertial sensors are widely used for orientation estimation.
Magnetometer usage is often troublesome, as it is prone to interference from
onboard or ambient magnetic disturbances. The onboard soft-iron material
distorts not only the magnetic field, but also the magnetometer sensor frame
coordinates and the cross-sensor misalignment relative to the inertial sensors.
It is desirable to conveniently put magnetic and inertial sensor information in
a common frame. Existing methods either split the problem into successive
intrinsic and cross-sensor calibrations, or rely on stationary accelerometer
measurements, which is infeasible in dynamic conditions. This paper formulates
the magnetometer calibration and alignment to inertial sensors as a state
estimation problem, and collectively solves the magnetometer intrinsic and
cross-sensor calibrations, as well as the gyroscope bias estimation. Sufficient
conditions are derived for the problem to be globally observable, even when no
accelerometer information is used at all. An extended Kalman filter is designed
to implement the state estimation and comprehensive test data results show the
superior performance of the proposed approach. It is immune to acceleration
disturbance and applicable potentially in any dynamic conditions.
|
[
{
"version": "v1",
"created": "Sun, 4 Dec 2016 01:24:46 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Wu",
"Yuanxin",
""
],
[
"Zou",
"Danping",
""
],
[
"Liu",
"Peilin",
""
],
[
"Yu",
"Wenxian",
""
]
] |
new_dataset
| 0.995761 |
1612.01096
|
Jin Li
|
Jin Li, Aixian Zhang, Keqin Feng
|
Linear Codes over $\mathbb{F}_{q}[x]/(x^2)$ and $GR(p^2,m)$ Reaching the
Griesmer Bound
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct two series of linear codes over the finite ring
$\mathbb{F}_{q}[x]/(x^2)$ and the Galois ring $GR(p^2,m)$, respectively, that
reach the Griesmer bound. Via the Gray map, they yield two series of codes over
the finite field $\mathbb{F}_{q}$. The first series of codes over
$\mathbb{F}_{q}$, derived from $\mathbb{F}_{q}[x]/(x^2)$, are linear and also
reach the Griesmer bound in some cases. Many of the linear codes over the
finite field that we constructed have two (non-zero) Hamming weights.
|
[
{
"version": "v1",
"created": "Sun, 4 Dec 2016 10:41:34 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Li",
"Jin",
""
],
[
"Zhang",
"Aixian",
""
],
[
"Feng",
"Keqin",
""
]
] |
new_dataset
| 0.964997 |
1612.01189
|
Lin Xiang
|
Lin Xiang, Derrick Wing Kwan Ng, Robert Schober, and Vincent W.S. Wong
|
Cache-Enabled Physical-Layer Security for Video Streaming in Wireless
Networks with Limited Backhaul
|
Accepted for presentation at IEEE Globecom 2016, Washington, DC, Dec.
2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate for the first time the benefits of wireless
caching for the physical layer security (PLS) of wireless networks. In
particular, a caching scheme enabling power-efficient PLS is proposed for
cellular video streaming with constrained backhaul capacity. By sharing video
data across a subset of base stations (BSs) through both caching and backhaul
loading, secure cooperative transmission of several BSs is dynamically enabled
in accordance with the cache status, the channel conditions, and the backhaul
capacity. Thereby, caching reduces the data sharing overhead over the
capacity-constrained backhaul links. More importantly, caching introduces
additional secure degrees of freedom and enables a power-efficient design. We
investigate the optimal caching and transmission policies for minimizing the
total transmit power while providing quality of service (QoS) and guaranteeing
secrecy during video delivery. A two-stage non-convex mixed-integer
optimization problem is formulated, which optimizes the caching policy in an
offline video caching stage and the cooperative transmission policy in an
online video delivery stage. As the problem is NP-hard, suboptimal
polynomial-time algorithms are proposed for low-complexity cache training and
delivery control, respectively. Sufficient optimality conditions, under which
the proposed schemes attain global optimal solutions, are also provided.
Simulation results show that the proposed schemes achieve low secrecy outage
probability and high power efficiency simultaneously.
|
[
{
"version": "v1",
"created": "Sun, 4 Dec 2016 21:44:10 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Xiang",
"Lin",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Schober",
"Robert",
""
],
[
"Wong",
"Vincent W. S.",
""
]
] |
new_dataset
| 0.997365 |
1612.01243
|
Orkun Karabasoglu
|
Zhiqian Qiao and Orkun Karabasoglu
|
Vehicle Powertrain Connected Route Optimization for Conventional, Hybrid
and Plug-in Electric Vehicles
|
Submitted to Transportation Research Part D: Transport and
Environment
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most navigation systems use data from satellites to provide drivers with the
shortest-distance, shortest-time or highway-preferred paths. However, when the
routing decisions are made for advanced vehicles, there are other factors
affecting cost, such as vehicle powertrain type, battery state of charge (SOC)
and the change of component efficiencies under traffic conditions, which are
not considered by traditional routing systems. The impact of the trade-off
between distance and traffic on the cost of the trip might change with the type
of vehicle technology and component dynamics. As a result, the least-cost paths
might be different from the shortest-distance or shortest-time paths. In this
work, a novel routing strategy has been proposed where the decision-making
process benefits from the aforementioned information to result in a least-cost
path for drivers. We integrate vehicle powertrain dynamics into route
optimization and call this strategy as Vehicle Powertrain Connected Route
Optimization (VPCRO). We find that the optimal paths might change significantly
for all types of vehicle powertrains when VPCRO is used instead of
shortest-distance strategy. About 81% and 58% of trips were replaced by
different optimal paths with VPCRO when the vehicle type was Conventional
Vehicle (CV) and Electrified Vehicle (EV), respectively. Changed routes had
reduced travel costs on an average of 15% up to a maximum of 60% for CVs and on
an average of 6% up to a maximum of 30% for EVs. Moreover, it was observed that
3% and 10% of trips had different optimal paths for a plug-in hybrid electric
vehicle, when initial battery SOC changed from 90% to 60% and 40%,
respectively. Paper shows that using sensory information from vehicle
powertrain for route optimization plays an important role to minimize travel
costs.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2016 04:16:30 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Qiao",
"Zhiqian",
""
],
[
"Karabasoglu",
"Orkun",
""
]
] |
new_dataset
| 0.993137 |
1612.01445
|
Suleiman Yerima
|
BooJoong Kang, Suleiman Y. Yerima, Sakir Sezer and Kieran McLaughlin
|
N-gram Opcode Analysis for Android Malware Detection
| null |
International Journal on Cyber Situational Awareness, Vol. 1, No.
1, pp231-255 (2016)
| null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Android malware has been on the rise in recent years due to the increasing
popularity of Android and the proliferation of third party application markets.
Emerging Android malware families are increasingly adopting sophisticated
detection avoidance techniques and this calls for more effective approaches for
Android malware detection. Hence, in this paper we present and evaluate an
n-gram opcode feature-based approach that utilizes machine learning to
identify and categorize Android malware. This approach enables automated
feature discovery without relying on prior expert or domain knowledge for
pre-determined features. Furthermore, by using a data segmentation technique
for feature selection, our analysis is able to scale up to 10-gram opcodes. Our
experiments on a dataset of 2520 samples showed an f-measure of 98% using the
n-gram opcode based approach. We also provide empirical findings that
illustrate factors that have probable impact on the overall n-gram opcodes
performance trends.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2016 17:33:23 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Kang",
"BooJoong",
""
],
[
"Yerima",
"Suleiman Y.",
""
],
[
"Sezer",
"Sakir",
""
],
[
"McLaughlin",
"Kieran",
""
]
] |
new_dataset
| 0.988142 |
1612.01476
|
Ayush Pandey
|
Ayush Pandey, Siddharth Jha, Debashish Chakravarty
|
Modeling and Control of an Autonomous Three Wheeled Mobile Robot with
Front Steer
|
IEEE International Conference on Robotic Computing 2017. (under
review)
| null | null | null |
cs.SY cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling and control strategies for the design of an autonomous three-wheeled
mobile robot with front-wheel steer are presented. Although the three-wheel
vehicle design with front-wheel steer is common in automotive vehicles, often
used in public transport, its advantages for the navigation and localization of
autonomous vehicles are seldom utilized. We present the system model for such a
robotic vehicle. A PID controller for speed control is designed for the model
obtained and has been implemented in a digital control framework. The
trajectory control framework, which is a challenging task for such a
three-wheeled robot, has also been presented in the paper. The derived system
model has been verified using experimental results obtained for the robot
vehicle design. Controller performance and robustness issues have also been
discussed briefly.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2016 18:55:45 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Pandey",
"Ayush",
""
],
[
"Jha",
"Siddharth",
""
],
[
"Chakravarty",
"Debashish",
""
]
] |
new_dataset
| 0.992474 |
1612.01495
|
Ondrej Miksik
|
Ondrej Miksik, Juan-Manuel P\'erez-R\'ua, Philip H. S. Torr, Patrick
P\'erez
|
ROAM: a Rich Object Appearance Model with Application to Rotoscoping
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rotoscoping, the detailed delineation of scene elements through a video shot,
is a painstaking task of tremendous importance in professional post-production
pipelines. While pixel-wise segmentation techniques can help for this task,
professional rotoscoping tools rely on parametric curves that offer the artists
a much better interactive control on the definition, editing and manipulation
of the segments of interest. Sticking to this prevalent rotoscoping paradigm,
we propose a novel framework to capture and track the visual aspect of an
arbitrary object in a scene, given a first closed outline of this object. This
model combines a collection of local foreground/background appearance models
spread along the outline, a global appearance model of the enclosed object and
a set of distinctive foreground landmarks. The structure of this rich
appearance model allows simple initialization, efficient iterative optimization
with exact minimization at each step, and on-line adaptation in videos. We
demonstrate qualitatively and quantitatively the merit of this framework
through comparisons with tools based on either dynamic segmentation with a
closed curve or pixel-wise binary labelling.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2016 20:03:18 GMT"
}
] | 2016-12-06T00:00:00 |
[
[
"Miksik",
"Ondrej",
""
],
[
"Pérez-Rúa",
"Juan-Manuel",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Pérez",
"Patrick",
""
]
] |
new_dataset
| 0.987763 |
1605.09779
|
Daniel Roche
|
Adam J. Aviv, Seung Geol Choi, Travis Mayberry, Daniel S. Roche
|
ObliviSync: Practical Oblivious File Backup and Synchronization
|
15 pages. Accepted to NDSS 2017
| null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Oblivious RAM (ORAM) protocols are powerful techniques that hide a client's
data as well as access patterns from untrusted service providers. We present an
oblivious cloud storage system, ObliviSync, that specifically targets one of
the most widely-used personal cloud storage paradigms: synchronization and
backup services, popular examples of which are Dropbox, iCloud Drive, and
Google Drive. This setting provides a unique opportunity because the above
privacy properties can be achieved with a simpler form of ORAM called
write-only ORAM, which allows for dramatically increased efficiency compared to
related work. Our solution is asymptotically optimal and practically efficient,
with a small constant overhead of approximately 4x compared with non-private
file storage, depending only on the total data size and parameters chosen
according to the usage rate, and not on the number or size of individual files.
Our construction also offers protection against timing-channel attacks, which
has not been previously considered in ORAM protocols. We built and evaluated a
full implementation of ObliviSync that supports multiple simultaneous read-only
clients and a single concurrent read/write client whose edits automatically and
seamlessly propagate to the readers. We show that our system functions under
high workloads, with realistic file size distributions, and with small
additional latency (as compared to a baseline encrypted file system) when
paired with Dropbox as the synchronization service.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2016 19:28:58 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2016 20:28:40 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Aviv",
"Adam J.",
""
],
[
"Choi",
"Seung Geol",
""
],
[
"Mayberry",
"Travis",
""
],
[
"Roche",
"Daniel S.",
""
]
] |
new_dataset
| 0.999885 |
1606.07972
|
Yimin Pang
|
Yimin Pang, Alireza Babaei, Jennifer Andreoli-Fang, Belal Hamzeh
|
Wi-Fi Coexistence with Duty Cycled LTE-U
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coexistence of Wi-Fi and LTE-Unlicensed (LTE-U) technologies has drawn
significant concern in industry. In this paper, we investigate the Wi-Fi
performance in the presence of duty cycle based LTE-U transmission on the same
channel. More specifically, one LTE-U cell and one Wi-Fi basic service set
(BSS) coexist by allowing LTE-U devices to transmit their signals only in
predetermined duty cycles. Wi-Fi stations, on the other hand, simply contend
for the shared channel using the distributed coordination function (DCF) protocol
without cooperation with the LTE-U system or prior knowledge about the duty
cycle period or duty cycle of LTE-U transmission. We define the fairness of the
above scheme as the difference between Wi-Fi performance loss ratio
(considering a defined reference performance) and the LTE-U duty cycle (or
function of LTE-U duty cycle). Depending on the interference to noise ratio
(INR) being above or below -62dbm, we classify the LTE-U interference as strong
or weak and establish mathematical models accordingly. The average throughput
and average service time of Wi-Fi are both formulated as functions of Wi-Fi and
LTE-U system parameters using probability theory. Lastly, we use the Monte
Carlo analysis to demonstrate the fairness of Wi-Fi and LTE-U air time sharing.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2016 21:57:46 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2016 17:27:15 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Dec 2016 20:32:02 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Pang",
"Yimin",
""
],
[
"Babaei",
"Alireza",
""
],
[
"Andreoli-Fang",
"Jennifer",
""
],
[
"Hamzeh",
"Belal",
""
]
] |
new_dataset
| 0.981135 |
1612.00323
|
Jacopo Staiano
|
Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouz\'e,
Nuria Oliver
|
The Tyranny of Data? The Bright and Dark Sides of Data-Driven
Decision-Making for Social Good
|
preprint version; book chapter to appear in "Transparent Data Mining
for Big and Small Data", Studies in Big Data Series, Springer
| null | null | null |
cs.CY physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The unprecedented availability of large-scale human behavioral data is
profoundly changing the world we live in. Researchers, companies, governments,
financial institutions, non-governmental organizations and also citizen groups
are actively experimenting, innovating and adapting algorithmic decision-making
tools to understand global patterns of human behavior and provide decision
support to tackle problems of societal importance. In this chapter, we focus
our attention on social good decision-making algorithms, that is algorithms
strongly influencing decision-making and resource optimization of public goods,
such as public health, safety, access to finance and fair employment. Through
an analysis of specific use cases and approaches, we highlight both the
positive opportunities that are created through data-driven algorithmic
decision-making, and the potential negative consequences that practitioners
should be aware of and address in order to truly realize the potential of this
emergent field. We elaborate on the need for these algorithms to provide
transparency and accountability, preserve privacy and be tested and evaluated
in context, by means of living lab approaches involving citizens. Finally, we
turn to the requirements which would make it possible to leverage the
predictive power of data-driven human behavior analysis while ensuring
transparency, accountability, and civic participation.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2016 15:53:15 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2016 13:23:56 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Lepri",
"Bruno",
""
],
[
"Staiano",
"Jacopo",
""
],
[
"Sangokoya",
"David",
""
],
[
"Letouzé",
"Emmanuel",
""
],
[
"Oliver",
"Nuria",
""
]
] |
new_dataset
| 0.991652 |
1612.00565
|
Justin Huang
|
Justin Huang and Maya Cakmak
|
Programming by Demonstration with User-Specified Perceptual Landmarks
|
Under review at the International Conference on Robotics and
Automation (ICRA) 2017
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programming by demonstration (PbD) is an effective technique for developing
complex robot manipulation tasks, such as opening bottles or using human tools.
In order for such tasks to generalize to new scenes, the robot needs to be able
to perceive objects, object parts, or other task-relevant parts of the scene.
Previous work has relied on rigid, task-specific perception systems for this
purpose. This paper presents a flexible and open-ended perception system that
lets users specify perceptual "landmarks" during the demonstration, by
capturing parts of the point cloud from the demonstration scene. We present a
method for localizing landmarks in new scenes and experimentally evaluate this
method in a variety of settings. Then, we provide examples where user-specified
landmarks are used together with PbD on a PR2 robot to perform several complex
manipulation tasks. Finally, we present findings from a user evaluation of our
landmark specification interface demonstrating its feasibility as an end-user
tool.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 04:44:10 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Huang",
"Justin",
""
],
[
"Cakmak",
"Maya",
""
]
] |
new_dataset
| 0.954677 |
1612.00606
|
Li Yi
|
Li Yi, Hao Su, Xingwen Guo, Leonidas Guibas
|
SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of semantic annotation on 3D models that
are represented as shape graphs. A functional view is taken to represent
localized information on graphs, so that annotations such as part segment or
keypoint are nothing but 0-1 indicator vertex functions. Compared with images
that are 2D grids, shape graphs are irregular and non-isomorphic data
structures. To enable the prediction of vertex functions on them by
convolutional neural networks, we resort to spectral CNN method that enables
weight sharing by parameterizing kernels in the spectral domain spanned by
graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN,
strives to overcome two key challenges: how to share coefficients and conduct
multi-scale analysis in different parts of the graph for a single shape, and
how to share information across related but different shapes that may be
represented by very different graphs. Towards these goals, we introduce a
spectral parameterization of dilated convolutional kernels and a spectral
transformer network. Experimentally we tested our SyncSpecCNN on various tasks,
including 3D shape part segmentation and 3D keypoint prediction.
State-of-the-art performance has been achieved on all benchmark datasets.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 09:27:34 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Yi",
"Li",
""
],
[
"Su",
"Hao",
""
],
[
"Guo",
"Xingwen",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.995525 |
1612.00625
|
Vijendra Singh
|
Singh Vijendra, Nisha Vasudeva and Hem Jyotsana Parashar
|
Recognition of Text Image Using Multilayer Perceptron
|
2011 IEEE 3rd International Conference on Machine Learning and
 Computing (ICMLC 2011), Singapore, pp. 547-550
| null | null |
978-1-4244-9253-4/11/IEEE
|
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The biggest challenge in the field of image processing is to recognize
documents in both printed and handwritten format. Optical Character Recognition
(OCR) is a type of document image analysis in which a scanned digital image
containing either machine-printed or handwritten script is input into an OCR
software engine and translated into an editable, machine-readable digital text
format. A neural network is designed to model the way in which the brain
performs a particular task or function of interest; the neural network is
simulated in software on a digital computer. Character recognition refers to
the process of converting printed text documents into translated Unicode text.
The printed documents available in the form of books, papers, magazines, etc.
are scanned using standard scanners, which produce an image of the scanned
document. Lines are identified by an algorithm that locates the top and bottom
of each line. Within each line, character boundaries are then calculated by an
algorithm; using these calculations, characters are isolated from the image and
each character is classified by basic back propagation. Each character image
comprises 30x20 pixels. We have used a back-propagation neural network for
efficient recognition, where errors were corrected through back propagation and
rectified neuron values were transmitted by the feed-forward method in the
multilayer neural network.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 10:43:04 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Vijendra",
"Singh",
""
],
[
"Vasudeva",
"Nisha",
""
],
[
"Parashar",
"Hem Jyotsana",
""
]
] |
new_dataset
| 0.997683 |
1612.00675
|
Johannes Schmidt
|
Johannes Schmidt
|
The Weight in Enumeration
|
12 main pages + 5 appendix pages
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In our setting, enumeration amounts to generating all solutions of a problem
instance without duplicates. We address the problem of enumerating the models
of B-formulae. A B-formula is a propositional formula whose connectives are
taken from a fixed set B of Boolean connectives. When no specific order is
imposed on the output of solutions, this task is solved. We completely classify
the complexity of this enumeration task for all possible sets of connectives B
under the orders of (1) non-decreasing weight and (2) non-increasing weight,
the weight of a model being the number of variables assigned to 1. We also
consider the weighted variants, where a non-negative integer weight is assigned
to each variable, and show that this add-on leads to more sophisticated
enumeration algorithms and even renders previously tractable cases intractable,
contrary to the constraint setting. As a by-product, we obtain complete
complexity classifications for the optimization problems known as Min-Ones and
Max-Ones, which in the B-formula setting are two different tasks.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 13:32:41 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Schmidt",
"Johannes",
""
]
] |
new_dataset
| 0.990887 |
1612.00800
|
Shubhi Asthana
|
Shubhi Asthana, Ray Strong, and Aly Megahed
|
HealthAdvisor: Recommendation System for Wearable Technologies enabling
Proactive Health Monitoring
|
NIPS Workshop on Machine Learning for Health 2016, Barcelona, Spain
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proactive monitoring of one's health could help avoid serious diseases as well
as better maintain the individual's well-being. In today's IoT world, there are
numerous wearable technological devices to monitor/measure different health
attributes. However, with this increasing number of attributes and wearables,
it becomes unclear to the individual which ones they should be
using. The aim of this paper is to provide a recommendation engine for
personalized recommended wearables for any given individual. The way the engine
works is through first identifying the diseases that this person is at risk of,
given his/her attributes and medical history. We built a machine learning
classification model for this task. Second, these diseases are mapped to the
attributes that need to be measured in order to monitor such diseases. Third,
we map these measurements to the appropriate wearable technologies. This is
done via a textual analytics model we developed, which uses the available
information on different wearables to map the aforementioned measurements to
these wearables. The output can be used to recommend the wearables to
individuals, as well as provide feedback to wearable developers on common
measurements that do not have corresponding wearables today.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2016 19:28:58 GMT"
}
] | 2016-12-05T00:00:00 |
[
[
"Asthana",
"Shubhi",
""
],
[
"Strong",
"Ray",
""
],
[
"Megahed",
"Aly",
""
]
] |
new_dataset
| 0.983339 |
1609.01409
|
M.M.A. Hashem
|
Md. Siddiqur Rahman Tanveer, M.M.A. Hashem, Md. Kowsar Hossain
|
Android Assistant EyeMate for Blind and Blind Tracker
|
arXiv admin note: text overlap with arXiv:1611.09480 by other author
|
2015 18th International Conference on Computer and Information
Technology (ICCIT)
|
10.1109/ICCITechn.2015.7488080
| null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At present many blind-assistive systems have been implemented, but there is no
good system that both navigates a blind person and tracks his/her movement so
that he/she can be rescued if lost. In this paper, we present a blind-assistive
and tracking embedded system. In this system, the blind person is navigated
through a spectacle interfaced with an Android application. The blind person is
guided through Bengali/English voice commands generated by the application
according to the obstacle position. Using a voice command, a blind person can
establish a voice call to a predefined number without touching the phone, just
by pressing the headset button. The blind-assistive application obtains the
latitude and longitude using GPS and then sends them to a server. The movement
of the blind person is tracked through another Android application that points
out the current position on Google Maps. In our experiments, we measured
distances from several surfaces, such as concrete and tiled floors, where the
error rate is 5%.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2016 06:29:32 GMT"
}
] | 2016-12-04T00:00:00 |
[
[
"Tanveer",
"Md. Siddiqur Rahman",
""
],
[
"Hashem",
"M. M. A.",
""
],
[
"Hossain",
"Md. Kowsar",
""
]
] |
new_dataset
| 0.989838 |
1611.08647
|
Mithileysh Sathiyanarayanan Mr
|
Mithileysh Sathiyanarayanan and Babangida Abubhakar
|
Dual MCDRR Scheduler for Hybrid TDM/WDM Optical Networks
|
5 pages, 6 figures, Networks & Soft Computing (ICNSC), 2014 First
International Conference on. arXiv admin note: text overlap with
arXiv:1308.5092
| null |
10.1109/CNSC.2014.6906708
| null |
cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this paper we propose and investigate the performance of a dual
multi-channel deficit round-robin (D-MCDRR) scheduler based on the existing
single MCDRR scheduler. The existing scheduler is used for multiple channels
with tunable transmitters and fixed receivers in hybrid time division
multiplexing (TDM)/wavelength division multiplexing (WDM) optical networks. The
proposed dual scheduler will also be used in the same optical networks. We
extend the existing MCDRR scheduling algorithm for n channels to the case of
considering two schedulers for the same n channels. Simulation results show
that the proposed dual MCDRR (D-MCDRR) scheduler can provide better throughput
when compared to the existing single MCDRR scheduler.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2016 01:39:08 GMT"
}
] | 2016-12-04T00:00:00 |
[
[
"Sathiyanarayanan",
"Mithileysh",
""
],
[
"Abubhakar",
"Babangida",
""
]
] |
new_dataset
| 0.99906 |
1607.03949
|
Chris Sweeney
|
Chris Sweeney, Victor Fragoso, Tobias Hollerer and Matthew Turk
|
Large Scale SfM with the Distributed Camera Model
|
Published at 2016 3DV Conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the distributed camera model, a novel model for
Structure-from-Motion (SfM). This model describes image observations in terms
of light rays with ray origins and directions rather than pixels. As such, the
proposed model is capable of describing a single camera or multiple cameras
simultaneously as the collection of all light rays observed. We show how the
distributed camera model is a generalization of the standard camera model and
describe a general formulation and solution to the absolute camera pose problem
that works for standard or distributed cameras. The proposed method computes a
solution that is up to 8 times more efficient and robust to rotation
singularities in comparison with gDLS. Finally, this method is used in a novel
large-scale incremental SfM pipeline where distributed cameras are accurately
and robustly merged together. This pipeline is a direct generalization of
traditional incremental SfM; however, instead of incrementally adding one
camera at a time to grow the reconstruction, the reconstruction is grown by
adding a distributed camera. Our pipeline produces highly accurate
reconstructions efficiently by avoiding the need for many bundle adjustment
iterations and is capable of computing a 3D model of Rome from over 15,000
images in just 22 minutes.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2016 22:39:11 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 02:09:31 GMT"
}
] | 2016-12-02T00:00:00 |
[
[
"Sweeney",
"Chris",
""
],
[
"Fragoso",
"Victor",
""
],
[
"Hollerer",
"Tobias",
""
],
[
"Turk",
"Matthew",
""
]
] |
new_dataset
| 0.986163 |
1609.02946
|
Amir Ghiasi
|
Omar Hussain, Amir Ghiasi, Xiaopeng Li
|
Freeway Lane Management Approach in Mixed Traffic Environment with
Connected Autonomous Vehicles
| null | null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Connected autonomous vehicle (CAV) technologies are expected to reach the
market in the near future. This requires transportation facilities to be ready
to operate in a mixed traffic environment where a portion of vehicles are CAVs
and the remainder are manual vehicles. Since CAVs are able to run with less spacing
and headway compared with manual vehicles or mixed traffic, allocating a number
of freeway lanes exclusive to CAVs may improve the overall performance of
freeways. In this paper, we propose an analytical managed lane model to
evaluate the freeway flow in mixed traffic and to determine the optimal number
of lanes to be allocated to CAVs. The proposed model is investigated in two
different operation environments: single-lane and managed lane environments. We
further define three different CAV technology scenarios: neutral, conservative,
and aggressive. In the single-lane problem, the influence of CAV penetration
rates on mixed traffic capacity is examined in each scenario. In the managed
lanes problem, we propose a method to determine the optimal number of dedicated
lanes for CAVs under different settings. A number of numerical examples with
different geometries and demand levels are investigated for all three
scenarios. A sensitivity analysis on the penetration rates is conducted. The
results show that more aggressive CAV technologies need less specific allocated
lanes because they can follow the vehicles with less time and space headways.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2016 21:15:09 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2016 18:48:39 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Nov 2016 21:13:47 GMT"
}
] | 2016-12-02T00:00:00 |
[
[
"Hussain",
"Omar",
""
],
[
"Ghiasi",
"Amir",
""
],
[
"Li",
"Xiaopeng",
""
]
] |
new_dataset
| 0.957516 |
1612.00118
|
Minjia Shi
|
Yan Liu, Minjia Shi, Patrick Sol\'e
|
Two-weight and three-weight codes from trace codes over
$\mathbb{F}_p+u\mathbb{F}_p+v\mathbb{F}_p+uv\mathbb{F}_p$
|
11 pages, submitted on 29 November, 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct an infinite family of two-Lee-weight and three-Lee-weight codes
over the non-chain ring
$\mathbb{F}_p+u\mathbb{F}_p+v\mathbb{F}_p+uv\mathbb{F}_p,$ where
$u^2=0,v^2=0,uv=vu.$ These codes are defined as trace codes. They have the
algebraic structure of abelian codes. Their Lee weight distribution is computed
by using Gauss sums. With a linear Gray map, we obtain a class of abelian
three-weight codes and two-weight codes over $\mathbb{F}_p$. In particular, the
two-weight codes we describe are shown to be optimal by application of the
Griesmer bound. We also discuss their dual Lee distance. Finally, an
application to secret sharing schemes is given.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2016 02:47:07 GMT"
}
] | 2016-12-02T00:00:00 |
[
[
"Liu",
"Yan",
""
],
[
"Shi",
"Minjia",
""
],
[
"Solé",
"Patrick",
""
]
] |
new_dataset
| 0.999874 |
1612.00155
|
Pedro Tabacof
|
Pedro Tabacof, Julia Tavares, Eduardo Valle
|
Adversarial Images for Variational Autoencoders
|
Workshop on Adversarial Training, NIPS 2016, Barcelona, Spain
| null | null | null |
cs.NE cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate adversarial attacks for autoencoders. We propose a procedure
that distorts the input image to mislead the autoencoder in reconstructing a
completely different target image. We attack the internal latent
representations, attempting to make the adversarial input produce an internal
representation as similar as possible to the target's. We find that
autoencoders are much more robust to the attack than classifiers: while some
examples have tolerably small input distortion, and reasonable similarity to
the target image, there is a quasi-linear trade-off between those aims. We
report results on MNIST and SVHN datasets, and also test regular deterministic
autoencoders, reaching similar conclusions in all cases. Finally, we show that
the usual adversarial attack for classifiers, while being much easier, also
presents a direct proportion between distortion on the input, and misdirection
on the output. That proportionality however is hidden by the normalization of
the output, which maps a linear layer into non-linear probabilities.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2016 05:59:57 GMT"
}
] | 2016-12-02T00:00:00 |
[
[
"Tabacof",
"Pedro",
""
],
[
"Tavares",
"Julia",
""
],
[
"Valle",
"Eduardo",
""
]
] |
new_dataset
| 0.995808 |
1612.00423
|
Shenlong Wang
|
Shenlong Wang, Min Bai, Gellert Mattyus, Hang Chu, Wenjie Luo, Bin
Yang, Justin Liang, Joel Cheverie, Sanja Fidler, Raquel Urtasun
|
TorontoCity: Seeing the World with a Million Eyes
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we introduce the TorontoCity benchmark, which covers the full
greater Toronto area (GTA) with 712.5 $km^2$ of land, 8439 $km$ of road and
around 400,000 buildings. Our benchmark provides different perspectives of the
world captured from airplanes, drones and cars driving around the city.
Manually labeling such a large scale dataset is infeasible. Instead, we propose
to utilize different sources of high-precision maps to create our ground truth.
Towards this goal, we develop algorithms that allow us to align all data
sources with the maps while requiring minimal human supervision. We have
designed a wide variety of tasks including building height estimation
(reconstruction), road centerline and curb extraction, building instance
segmentation, building contour extraction (reorganization), semantic labeling
and scene type classification (recognition). Our pilot study shows that most of
these tasks are still difficult for modern convolutional neural networks.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2016 20:39:49 GMT"
}
] | 2016-12-02T00:00:00 |
[
[
"Wang",
"Shenlong",
""
],
[
"Bai",
"Min",
""
],
[
"Mattyus",
"Gellert",
""
],
[
"Chu",
"Hang",
""
],
[
"Luo",
"Wenjie",
""
],
[
"Yang",
"Bin",
""
],
[
"Liang",
"Justin",
""
],
[
"Cheverie",
"Joel",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Urtasun",
"Raquel",
""
]
] |
new_dataset
| 0.999819 |
1608.08483
|
Stefan Schmid
|
Kim G. Larsen, Stefan Schmid, Bingtian Xue
|
WNetKAT: A Weighted SDN Programming and Verification Language
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programmability and verifiability lie at the heart of the software-defined
networking paradigm. While OpenFlow and its match-action concept provide
primitive operations to manipulate hardware configurations, over the last
years, several more expressive network programming languages have been
developed. This paper presents WNetKAT, the first network programming language
accounting for the fact that networks are inherently weighted, and
communications subject to capacity constraints (e.g., in terms of bandwidth)
and costs (e.g., latency or monetary costs). WNetKAT is based on a syntactic
and semantic extension of the NetKAT algebra. We demonstrate several relevant
applications for WNetKAT, including cost- and capacity-aware reachability, as
well as quality-of-service and fairness aspects. These applications do not only
apply to classic, splittable and unsplittable (s; t)-flows, but also generalize
to more complex network functions and service chains. For example, WNetKAT
allows modeling flows which need to traverse certain waypoint functions, which
may change the traffic rate. This paper also shows the relation between the
equivalence problem of WNetKAT and the equivalence problem of weighted finite
automata, which implies undecidability of the former. However, this paper also
succeeds in proving the decidability of another useful problem, which is
sufficient in many practical scenarios: whether an expression equals 0.
Moreover, we initiate the discussion of decidable subsets of the whole
language.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2016 14:56:53 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2016 17:51:41 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Sep 2016 14:24:06 GMT"
},
{
"version": "v4",
"created": "Tue, 29 Nov 2016 21:34:35 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Larsen",
"Kim G.",
""
],
[
"Schmid",
"Stefan",
""
],
[
"Xue",
"Bingtian",
""
]
] |
new_dataset
| 0.999346 |
1609.01797
|
Christoph Studer
|
Oscar Casta\~neda, Tom Goldstein, Christoph Studer
|
Data Detection in Large Multi-Antenna Wireless Systems via Approximate
Semidefinite Relaxation
| null |
IEEE Transactions on Circuits and Systems I: Regular Papers (TCAS
I), Vol. 63, No. 12, Dec. 2016
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Practical data detectors for future wireless systems with hundreds of
antennas at the base station must achieve high throughput and low error rate at
low complexity. Since the complexity of maximum-likelihood (ML) data detection
is prohibitive for such large wireless systems, approximate methods are
necessary. In this paper, we propose a novel data detection algorithm referred
to as Triangular Approximate SEmidefinite Relaxation (TASER), which is suitable
for two application scenarios: (i) coherent data detection in large multi-user
multiple-input multiple-output (MU-MIMO) wireless systems and (ii) joint
channel estimation and data detection in large single-input multiple-output
(SIMO) wireless systems. For both scenarios, we show that TASER achieves
near-ML error-rate performance at low complexity by relaxing the associated
ML-detection problems into a semidefinite program, which we solve approximately
using a preconditioned forward-backward splitting procedure. Since the
resulting problem is non-convex, we provide convergence guarantees for our
algorithm. To demonstrate the efficacy of TASER in practice, we design a
systolic architecture that enables our algorithm to achieve high throughput at
low hardware complexity, and we develop reference field-programmable gate array
(FPGA) and application-specific integrated circuit (ASIC) designs for various
antenna configurations.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2016 01:31:22 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 16:56:54 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Castañeda",
"Oscar",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Studer",
"Christoph",
""
]
] |
new_dataset
| 0.974923 |
1611.07832
|
Marcus Hardt
|
A. Biancini, L. Florio, M. Haase, M. Hardt, M. Jankowski, J. Jensen,
C. Kanellopoulos, N. Liampotis, S. Licehammer, S. Memon, N. van Dijk, S.
Paetow, M. Prochazka, M. Sall\'e, P. Solagna, U. Stevanovic, D. Vaghetti
|
AARC: First draft of the Blueprint Architecture for Authentication and
Authorisation Infrastructures
|
This text was part of a (public) EU deliverable document. It has a
 main part and a long appendix with more details about the example
infrastructures that were taken into account
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
AARC (Authentication and Authorisation for Research Communities) is a
two-year EC-funded project to develop and pilot an integrated cross-discipline
authentication and authorisation framework, building on existing authentication
and authorisation infrastructures (AAIs) and production federated
infrastructure. AARC also champions federated access and offers tailored
training to complement the actions needed to test AARC results and to promote
AARC outcomes. This article describes a high-level blueprint architecture for
interoperable AAIs.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2016 15:13:49 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 08:40:56 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Biancini",
"A.",
""
],
[
"Florio",
"L.",
""
],
[
"Haase",
"M.",
""
],
[
"Hardt",
"M.",
""
],
[
"Jankowski",
"M.",
""
],
[
"Jensen",
"J.",
""
],
[
"Kanellopoulos",
"C.",
""
],
[
"Liampotis",
"N.",
""
],
[
"Licehammer",
"S.",
""
],
[
"Memon",
"S.",
""
],
[
"van Dijk",
"N.",
""
],
[
"Paetow",
"S.",
""
],
[
"Prochazka",
"M.",
""
],
[
"Sallé",
"M.",
""
],
[
"Solagna",
"P.",
""
],
[
"Stevanovic",
"U.",
""
],
[
"Vaghetti",
"D.",
""
]
] |
new_dataset
| 0.980151 |
1611.09809
|
Saptarshi Das
|
Indranil Pan and Saptarshi Das
|
Fractional Order Fuzzy Control of Hybrid Power System with Renewable
Generation Using Chaotic PSO
|
21 pages, 12 figures, 4 tables
|
ISA Transactions, Volume 62, May 2016, Pages 19-29
|
10.1016/j.isatra.2015.03.003
| null |
cs.SY cs.AI math.OC nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the operation of a hybrid power system through a
novel fuzzy control scheme. The hybrid power system employs various autonomous
generation systems like wind turbine, solar photovoltaic, diesel engine,
fuel-cell, aqua electrolyzer etc. Other energy storage devices like the
battery, flywheel and ultra-capacitor are also present in the network. A novel
fractional order (FO) fuzzy control scheme is employed and its parameters are
tuned with a particle swarm optimization (PSO) algorithm augmented with two
chaotic maps for achieving an improved performance. This FO fuzzy controller
shows better performance than the classical PID and the integer-order fuzzy
PID controller in both linear and nonlinear operating regimes. The FO fuzzy
controller also shows stronger robustness against system parameter variation
and rate-constraint nonlinearity than the other controller structures.
Robustness is a highly desirable property in such a scenario
since many components of the hybrid power system may be switched on/off or may
run at lower/higher power output, at different time instants.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 19:54:44 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Pan",
"Indranil",
""
],
[
"Das",
"Saptarshi",
""
]
] |
new_dataset
| 0.969258 |
1611.09915
|
Rui Campos Dr.
|
Pedro J\'ulio, Filipe Ribeiro, Jaime Dias, Jorge Mamede, Rui Campos
|
Stub Wireless Multi-hop Networks using Self-configurable Wi-Fi Basic
Service Set Cascading
|
Submitted to IEEE/IFIP Wireless Days 2017, 6 pages, 7 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing trend in wireless Internet access has been boosted by IEEE
802.11. However, the application scenarios are still limited by its short radio
range. Stub Wireless Multi-hop Networks (WMNs) are a robust, flexible, and
cost-effective solution to the problem. Yet, typically they are formed by
single radio mesh nodes and suffer from hidden node, unfairness, and
scalability problems.
We propose a simple multi-radio, multi-channel WMN solution, named Wi-Fi
network Infrastructure eXtension - Dual-Radio (WiFIX-DR), to overcome these
problems. WiFIX-DR reuses IEEE 802.11 built-in mechanisms and beacons to form a
Stub WMN as a set of self-configurable interconnected Basic Service Sets
(BSSs). Experimental results show the improved scalability enabled by the
proposed solution when compared to single-radio WMNs.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 22:17:46 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Júlio",
"Pedro",
""
],
[
"Ribeiro",
"Filipe",
""
],
[
"Dias",
"Jaime",
""
],
[
"Mamede",
"Jorge",
""
],
[
"Campos",
"Rui",
""
]
] |
new_dataset
| 0.998704 |
1611.09968
|
Hanxu Hou
|
Hanxu Hou and Yunghsiang S. Han
|
Cauchy MDS Array Codes With Efficient Decoding Method
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Array codes have been widely used in communication and storage systems. To
reduce computational complexity, one important property of the array codes is
that only XOR operation is used in the encoding and decoding process. In this
work, we present a novel family of maximal-distance separable (MDS) array codes
based on a Cauchy matrix, which can correct any given number of failures. We also
propose an efficient decoding method for the new codes to recover the failures.
We show that the encoding/decoding complexities of the proposed approach are
lower than those of existing Cauchy MDS array codes, such as Rabin-Like codes
and CRS codes. Thus, the proposed MDS array codes are attractive for
distributed storage systems.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2016 01:55:36 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Hou",
"Hanxu",
""
],
[
"Han",
"Yunghsiang S.",
""
]
] |
new_dataset
| 0.998352 |
1611.10010
|
Debidatta Dwibedi
|
Debidatta Dwibedi, Tomasz Malisiewicz, Vijay Badrinarayanan, Andrew
Rabinovich
|
Deep Cuboid Detection: Beyond 2D Bounding Boxes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a Deep Cuboid Detector which takes a consumer-quality RGB image of
a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to
classical approaches which fit a 3D model from low-level cues like corners,
edges, and vanishing points, we propose an end-to-end deep learning system to
detect cuboids across many semantic categories (e.g., ovens, shipping boxes,
and furniture). We localize cuboids with a 2D bounding box, and simultaneously
localize the cuboid's corners, effectively producing a 3D interpretation of
box-like objects. We refine keypoints by pooling convolutional features
iteratively, improving the baseline method significantly. Our deep learning
cuboid detector is trained in an end-to-end fashion and is suitable for
real-time applications in augmented reality (AR) and robotics.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2016 06:00:47 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Dwibedi",
"Debidatta",
""
],
[
"Malisiewicz",
"Tomasz",
""
],
[
"Badrinarayanan",
"Vijay",
""
],
[
"Rabinovich",
"Andrew",
""
]
] |
new_dataset
| 0.999164 |
1611.10210
|
Ruby Annette
|
Annette J Ruby, Banu W Aisha and Chandran P Subash
|
RenderSelect: a Cloud Broker Framework for Cloud Renderfarm Services
|
13 pages, 10 figures
|
International Journal of Applied Engineering Research ,Vol.10,
No.20 ,2015
| null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 3D studios, animation scene files undergo a process called rendering,
where 3D wireframe models are converted into 3D photorealistic images. As the
rendering process is both a computationally intensive and a time-consuming
task, cloud-services-based rendering in cloud render farms is gaining
popularity among animators. Though cloud render farms offer many benefits,
animators hesitate to move from their traditional offline rendering to cloud
render farm services, as they lack the knowledge, expertise and time to compare
render farm service providers based on the Quality of Service (QoS) offered by
them, negotiate the QoS, and monitor whether the agreed-upon QoS is actually
delivered by the render farm service providers. In this paper we propose a
Cloud Service Broker (CSB) framework called RenderSelect that helps in the
dynamic ranking, selection, negotiation and monitoring of cloud-based render
farm services. The cloud render farm services are ranked and selected based on
multi-criteria QoS requirements. The Analytic Hierarchy Process (AHP), a
popular Multi-Criteria Decision Making (MCDM) method, is used for ranking and
selecting the cloud render farm services. The AHP method of ranking is
illustrated in detail with an example. It could be verified that the AHP method
ranks the cloud services effectively with little time and complexity.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 04:34:10 GMT"
}
] | 2016-12-01T00:00:00 |
[
[
"Ruby",
"Annette J",
""
],
[
"Aisha",
"Banu W",
""
],
[
"Subash",
"Chandran P",
""
]
] |
new_dataset
| 0.998801 |
1610.09534
|
Lan Xu
|
Lan Xu, Lu Fang, Wei Cheng, Kaiwen Guo, Guyue Zhou, Qionghai Dai, and
Yebin Liu
|
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying
Cameras
|
This paper has been withdrawn by the author due to a crucial sign
error
| null | null | null |
cs.CV cs.GR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aiming at automatic, convenient and non-intrusive motion capture, this paper
presents a new-generation markerless motion capture technique, the FlyCap
system, to capture surface motions of moving characters using multiple
autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each
integrated with an RGBD video camera). During data capture, three cooperative
flying cameras automatically track and follow the moving target, who performs
large-scale motions in a wide space. We propose a novel non-rigid surface
registration method to track and fuse the depth from the three flying cameras
for surface motion tracking of the moving target, and simultaneously calculate
the pose of each flying camera. We leverage the visual-odometry information
provided by the UAV platform, and formulate the surface tracking problem as a
non-linear objective function that can be linearized and effectively minimized
through a Gauss-Newton method. Quantitative and qualitative experimental
results demonstrate competent and plausible surface and motion reconstruction
results.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2016 15:44:07 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2016 05:33:30 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Nov 2016 08:30:19 GMT"
}
] | 2016-11-30T00:00:00 |
[
[
"Xu",
"Lan",
""
],
[
"Fang",
"Lu",
""
],
[
"Cheng",
"Wei",
""
],
[
"Guo",
"Kaiwen",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Dai",
"Qionghai",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.999533 |
1611.09433
|
Phung Manh Duong
|
P. M. Duong, T. T. Hoang, N. T. T. Van, D. A. Viet, T. Q. Vinh
|
A novel platform for internet-based mobile robot systems
|
In 2012 7th IEEE Conference on Industrial Electronics and
Applications (ICIEA)
| null |
10.1109/ICIEA.2012.6361052
| null |
cs.RO cs.HC cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a software and hardware structure for on-line
mobile robotic systems. The hardware mainly consists of a Multi-Sensor Smart
Robot connected to the Internet through 3G mobile network. The system employs a
client-server software architecture in which the exchanged data between the
client and the server is transmitted through different transport protocols.
Autonomous mechanisms such as obstacle avoidance and safe-point achievement are
implemented to ensure the robot's safety. This architecture is put into
operation on the real Internet and the preliminary results are promising. By
adopting this structure, it will be very easy to construct an experimental
platform for research on diverse tele-operation topics such as remote control
algorithms, interface designs, network protocols, applications, etc.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2016 23:47:43 GMT"
}
] | 2016-11-30T00:00:00 |
[
[
"Duong",
"P. M.",
""
],
[
"Hoang",
"T. T.",
""
],
[
"Van",
"N. T. T.",
""
],
[
"Viet",
"D. A.",
""
],
[
"Vinh",
"T. Q.",
""
]
] |
new_dataset
| 0.982213 |
1611.09472
|
EPTCS
|
Victor Winter (University of Nebraska-Omaha), Betty Love (University
of Nebraska-Omaha), Cindy Corritore (Creighton University)
|
The Bricklayer Ecosystem - Art, Math, and Code
|
In Proceedings TFPIE 2015/6, arXiv:1611.08651
|
EPTCS 230, 2016, pp. 47-61
|
10.4204/EPTCS.230.4
| null |
cs.PL cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the Bricklayer Ecosystem - a freely-available online
educational ecosystem created for people of all ages and coding backgrounds.
Bricklayer is designed in accordance with a "low-threshold infinite ceiling"
philosophy and has been successfully used to teach coding to primary school
students, middle school students, university freshmen, and in-service secondary
math teachers. Bricklayer programs are written in the functional programming
language SML and, when executed, create 2D and 3D artifacts. These artifacts
can be viewed using a variety of third-party tools such as LEGO Digital
Designer (LDD), LDraw, Minecraft clients, Brickr, as well as STereoLithography
viewers.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 03:39:56 GMT"
}
] | 2016-11-30T00:00:00 |
[
[
"Winter",
"Victor",
"",
"University of Nebraska-Omaha"
],
[
"Love",
"Betty",
"",
"University of Nebraska-Omaha"
],
[
"Corritore",
"Cindy",
"",
"Creighton University"
]
] |
new_dataset
| 0.999706 |
1611.09473
|
EPTCS
|
Prabhakar Ragde (University of Waterloo)
|
Proust: A Nano Proof Assistant
|
In Proceedings TFPIE 2015/6, arXiv:1611.08651
|
EPTCS 230, 2016, pp. 63-75
|
10.4204/EPTCS.230.5
| null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proust is a small Racket program offering rudimentary interactive assistance
in the development of verified proofs for propositional and predicate logic. It
is constructed in stages, some of which are done by students before using it to
complete proof exercises, and in parallel with the study of its theoretical
underpinnings, including elements of Martin-Löf type theory. The goal is
twofold: to demystify some of the machinery behind full-featured proof
assistants such as Coq and Agda, and to better integrate the study of formal
logic with other core elements of an undergraduate computer science curriculum.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 03:40:04 GMT"
}
] | 2016-11-30T00:00:00 |
[
[
"Ragde",
"Prabhakar",
"",
"University of Waterloo"
]
] |
new_dataset
| 0.998563 |
1611.09475
|
EPTCS
|
Cezar Ionescu (Chalmers University of Technology), Patrik Jansson
(Chalmers University of Technology)
|
Domain-Specific Languages of Mathematics: Presenting Mathematical
Analysis Using Functional Programming
|
In Proceedings TFPIE 2015/6, arXiv:1611.08651
|
EPTCS 230, 2016, pp. 1-15
|
10.4204/EPTCS.230.1
| null |
cs.CY cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the approach underlying a course on "Domain-Specific Languages of
Mathematics", currently being developed at Chalmers in response to difficulties
faced by third-year students in learning and applying classical mathematics
(mainly real and complex analysis). The main idea is to encourage the students
to approach mathematical domains from a functional programming perspective: to
identify the main functions and types involved and, when necessary, to
introduce new abstractions; to give calculational proofs; to pay attention to
the syntax of the mathematical expressions; and, finally, to organise the
resulting functions and types in domain-specific languages.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 03:42:04 GMT"
}
] | 2016-11-30T00:00:00 |
[
[
"Ionescu",
"Cezar",
"",
"Chalmers University of Technology"
],
[
"Jansson",
"Patrik",
"",
"Chalmers University of Technology"
]
] |
new_dataset
| 0.993049 |
1611.09480
|
Ramiro Velazquez
|
Ramiro Velazquez
|
Wearable Assistive Devices for the Blind
|
Book Chapter
|
LNEE 75, Springer, pp 331-349, 2010
|
10.1007/978-3-642-15687-8_17
| null |
cs.RO cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assistive devices are a key aspect in wearable systems for biomedical
applications, as they represent potential aids for people with physical and
sensory disabilities that might lead to improvements in the quality of life.
This chapter focuses on wearable assistive devices for the blind. It intends to
review the most significant work done in this area, to present the latest
approaches for assisting this population and to understand universal design
concepts for the development of wearable assistive devices and systems for the
blind.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2016 04:10:44 GMT"
}
] | 2016-11-30T00:00:00 |
[
[
"Velazquez",
"Ramiro",
""
]
] |
new_dataset
| 0.999603 |