id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1705.05727
|
Cecilia Murrugarra
|
Cecilia Murrugarra, Osberth De Castro, Juan Carlos Grieco and Gerardo
Fernandez
|
A General Scheme Implicit Force Control for a Flexible-Link Manipulator
|
16 pages, 14 figures
| null | null | null |
cs.RO cs.SY nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose an implicit force control scheme for a one-link
flexible manipulator that interacts with a compliant environment. The
controller is based on the mathematical model of the manipulator, taking into
account the dynamics of the flexible beam and the gravitational force. With
this method, the controller parameters are obtained from the structural
parameters of the beam (link) of the manipulator. The controller guarantees
stability in the sense of Lyapunov theory. The proposed controller has two
closed loops: the inner loop is a tracking controller with compensation of the
gravitational force and vibration frequencies, and the outer loop is an
implicit force controller. To evaluate the performance of the controller, we
considered three different manipulators (with modified length and diameter)
and three environments of varying compliance. Simulation results verify
asymptotic tracking and regulation in position and force, respectively, as
well as suppression of the beam vibrations in finite time.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2017 18:28:04 GMT"
}
] | 2017-05-17T00:00:00 |
[
[
"Murrugarra",
"Cecilia",
""
],
[
"De Castro",
"Osberth",
""
],
[
"Grieco",
"Juan Carlos",
""
],
[
"Fernandez",
"Gerardo",
""
]
] |
new_dataset
| 0.984605 |
1705.05745
|
Hassan Foroosh
|
Vildan Atalay Aydin and Hassan Foroosh
|
Volumetric Super-Resolution of Multispectral Data
|
arXiv admin note: text overlap with arXiv:1705.01258
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most multispectral remote sensors (e.g. QuickBird, IKONOS, and Landsat 7
ETM+) provide low-spatial high-spectral resolution multispectral (MS) or
high-spatial low-spectral resolution panchromatic (PAN) images, separately. In
order to reconstruct a high-spatial/high-spectral resolution multispectral
image volume, either the information in MS and PAN images is fused (i.e.
pansharpening) or super-resolution reconstruction (SRR) is used with only MS
images captured on different dates. Existing methods do not utilize temporal
information of MS and high spatial resolution of PAN images together to improve
the resolution. In this paper, we propose a multiframe SRR algorithm using
pansharpened MS images, taking advantage of both temporal and spatial
information available in multispectral imagery, in order to exceed the
spatial resolution of the given PAN images. We first apply pansharpening to a
set of
multispectral images and their corresponding PAN images captured on different
dates. Then, we use the pansharpened multispectral images as input to the
proposed wavelet-based multiframe SRR method to yield full volumetric SRR. The
proposed SRR method is obtained by deriving the subband relations between
multitemporal MS volumes. We demonstrate the results on Landsat 7 ETM+ images
comparing our method to conventional techniques.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2017 03:53:16 GMT"
}
] | 2017-05-17T00:00:00 |
[
[
"Aydin",
"Vildan Atalay",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.972188 |
1705.05786
|
Gwena\"el Richomme
|
Gwena\"el Richomme
|
A Characterization of Infinite LSP Words
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
G. Fici proved that a finite word has a minimal suffix automaton if and only
if all its left special factors occur as prefixes. He called LSP all finite and
infinite words having this latter property. We characterize here infinite LSP
words in terms of $S$-adicity. More precisely we provide a finite set of
morphisms $S$ and an automaton ${\cal A}$ such that an infinite word is LSP if
and only if it is $S$-adic and all its directive words are recognizable by
${\cal A}$.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2017 16:08:06 GMT"
}
] | 2017-05-17T00:00:00 |
[
[
"Richomme",
"Gwenaël",
""
]
] |
new_dataset
| 0.997072 |
1609.01257
|
Nuno Fachada
|
Nuno Fachada, Vitor V. Lopes, Rui C. Martins, Agostinho C. Rosa
|
cf4ocl: a C framework for OpenCL
|
The peer-reviewed version of this paper is published in Science of
Computer Programming at
http://www.sciencedirect.com/science/article/pii/S0167642317300540 . This
version is typeset by the authors and differs only in pagination and
typographical detail
|
Science of Computer Programming, 143, pp. 9-19, 2017
|
10.1016/j.scico.2017.03.005
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OpenCL is an open standard for parallel programming of heterogeneous compute
devices, such as GPUs, CPUs, DSPs or FPGAs. However, the verbosity of its C
host API can hinder application development. In this paper we present cf4ocl, a
software library for rapid development of OpenCL programs in pure C. It aims to
reduce the verbosity of the OpenCL API, offering straightforward memory
management, integrated profiling of events (e.g., kernel execution and data
transfers), a simple but extensible device selection mechanism, and
user-friendly error management. We compare two versions of a conceptual
application example,
one based on cf4ocl, the other developed directly with the OpenCL host API.
Results show that the former is simpler to implement and offers more features,
at the cost of an effectively negligible computational overhead. Additionally,
the tools provided with cf4ocl allowed for a quick analysis on how to optimize
the application.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2016 19:07:48 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2017 22:40:33 GMT"
},
{
"version": "v3",
"created": "Fri, 12 May 2017 22:26:23 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Fachada",
"Nuno",
""
],
[
"Lopes",
"Vitor V.",
""
],
[
"Martins",
"Rui C.",
""
],
[
"Rosa",
"Agostinho C.",
""
]
] |
new_dataset
| 0.999502 |
1609.01826
|
Youlong Cao
|
Youlong Cao, Meixia Tao, Fan Xu and Kangqi Liu
|
Fundamental Storage-Latency Tradeoff in Cache-Aided MIMO Interference
Networks
|
to appear in IEEE Trans. on Wireless Communications. Part of this
work was presented at IEEE GLOBECOM 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Caching is an effective technique to improve user perceived experience for
content delivery in wireless networks. Wireless caching differs from
traditional web caching in that it can exploit the broadcast nature of wireless
medium and hence opportunistically change the network topologies. This paper
studies a cache-aided MIMO interference network with 3 transmitters each
equipped with M antennas and 3 receivers each with N antennas. With caching at
both the transmitter and receiver sides, the network is changed to hybrid forms
of MIMO broadcast channel, MIMO X channel, and MIMO multicast channels. We
analyze the degrees of freedom (DoF) of these new channel models using
practical interference management schemes. Based on the collective use of these
DoF results, we then obtain an achievable normalized delivery time (NDT) of the
network, an information-theoretic metric that evaluates the worst-case delivery
time at given cache sizes. The obtained NDT is for arbitrary M, N and any
feasible cache sizes. It is shown to be optimal in certain cases and within a
multiplicative gap of 3 from the optimum in other cases. The extension to the
network with an arbitrary number of transmitters and receivers is also
discussed.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2016 04:02:57 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Feb 2017 05:38:36 GMT"
},
{
"version": "v3",
"created": "Sun, 14 May 2017 14:05:57 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Cao",
"Youlong",
""
],
[
"Tao",
"Meixia",
""
],
[
"Xu",
"Fan",
""
],
[
"Liu",
"Kangqi",
""
]
] |
new_dataset
| 0.998901 |
1701.07118
|
Hoang Dau
|
Hoang Dau and Iwan Duursma and Han Mao Kiah and Olgica Milenkovic
|
Repairing Reed-Solomon Codes With Two Erasures
|
ISIT'17 (accepted), 5 pages. arXiv admin note: substantial text
overlap with arXiv:1612.01361
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite their exceptional error-correcting properties, Reed-Solomon (RS)
codes have been overlooked in distributed storage applications due to the
common belief that they have poor repair bandwidth: A naive repair approach
would require the whole file to be reconstructed in order to recover a single
erased codeword symbol. In a recent work, Guruswami and Wootters (STOC'16)
proposed a single-erasure repair method for RS codes that achieves the optimal
repair bandwidth amongst all linear encoding schemes. We extend their trace
collection technique to cope with two erasures.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2017 00:42:22 GMT"
},
{
"version": "v2",
"created": "Sun, 14 May 2017 16:01:33 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Dau",
"Hoang",
""
],
[
"Duursma",
"Iwan",
""
],
[
"Kiah",
"Han Mao",
""
],
[
"Milenkovic",
"Olgica",
""
]
] |
new_dataset
| 0.983325 |
1703.07342
|
Dylan Hutchison
|
Dylan Hutchison, Bill Howe, Dan Suciu
|
LaraDB: A Minimalist Kernel for Linear and Relational Algebra
Computation
|
10 pages, to appear in the BeyondMR workshop at the 2017 ACM SIGMOD
conference
| null |
10.1145/3070607.3070608
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analytics tasks manipulate structured data with variants of relational
algebra (RA) and quantitative data with variants of linear algebra (LA). The
two computational models have overlapping expressiveness, motivating a common
programming model that affords unified reasoning and algorithm design. At the
logical level we propose Lara, a lean algebra of three operators that
expresses RA and LA as well as relevant optimization rules. We show a series of
proofs that position Lara at just the right level of
expressiveness for a middleware algebra: more explicit than MapReduce but more
general than RA or LA. At the physical level we find that the Lara operators
afford efficient implementations using a single primitive that is available in
a variety of backend engines: range scans over partitioned sorted maps.
To evaluate these ideas, we implemented the Lara operators as range iterators
in Apache Accumulo, a popular implementation of Google's BigTable. First we
show how Lara expresses a sensor quality control task, and we measure the
performance impact of optimizations Lara admits on this task. Second we show
that the LaraDB implementation outperforms Accumulo's native MapReduce
integration on a core task involving join and aggregation in the form of matrix
multiply, especially at smaller scales that are typically a poor fit for
scale-out approaches. We find that LaraDB offers a conceptually lean framework
for optimizing mixed-abstraction analytics tasks, without giving up fast
record-level updates and scans.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2017 17:56:47 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2017 19:29:22 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2017 22:29:28 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Hutchison",
"Dylan",
""
],
[
"Howe",
"Bill",
""
],
[
"Suciu",
"Dan",
""
]
] |
new_dataset
| 0.999534 |
1705.02090
|
Kai Xu
|
Jun Li, Kai Xu, Siddhartha Chaudhuri, Ersin Yumer, Hao Zhang, Leonidas
Guibas
|
GRASS: Generative Recursive Autoencoders for Shape Structures
|
Corresponding author: Kai Xu ([email protected])
|
ACM Transactions on Graphics (SIGGRAPH 2017) 36, 4, Article 52
|
10.1145/3072959.3073613
| null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel neural network architecture for encoding and synthesis
of 3D shapes, particularly their structures. Our key insight is that 3D shapes
are effectively characterized by their hierarchical organization of parts,
which reflects fundamental intra-shape relationships such as adjacency and
symmetry. We develop a recursive neural net (RvNN) based autoencoder to map a
flat, unlabeled, arbitrary part layout to a compact code. The code effectively
captures hierarchical structures of man-made 3D objects of varying structural
complexities despite being fixed-dimensional: an associated decoder maps a code
back to a full hierarchy. The learned bidirectional mapping is further tuned
using an adversarial setup to yield a generative model of plausible structures,
from which novel structures can be sampled. Finally, our structure synthesis
framework is augmented by a second trained module that produces fine-grained
part geometry, conditioned on global and local structural context, leading to a
full generative pipeline for 3D shapes. We demonstrate that without
supervision, our network learns meaningful structural hierarchies adhering to
perceptual grouping principles, produces compact codes which enable
applications such as shape classification and partial matching, and supports
shape synthesis and interpolation with significant variations in topology and
geometry.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2017 05:45:10 GMT"
},
{
"version": "v2",
"created": "Sat, 13 May 2017 04:49:23 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Li",
"Jun",
""
],
[
"Xu",
"Kai",
""
],
[
"Chaudhuri",
"Siddhartha",
""
],
[
"Yumer",
"Ersin",
""
],
[
"Zhang",
"Hao",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.998326 |
1705.02667
|
Subhabrata Mukherjee
|
Subhabrata Mukherjee, Gerhard Weikum
|
People on Media: Jointly Identifying Credible News and Trustworthy
Citizen Journalists in Online Communities
| null | null |
10.1145/2806416.2806537
| null |
cs.AI cs.CL cs.IR cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Media seems to have become more partisan, often providing a biased coverage
of news catering to the interest of specific groups. It is therefore essential
to identify credible information content that provides an objective narrative
of an event. News communities such as digg, reddit, or newstrust offer
recommendations, reviews, quality ratings, and further insights on journalistic
works. However, there is a complex interaction between different factors in
such online communities: fairness and style of reporting, language clarity and
objectivity, topical perspectives (like political viewpoint), expertise and
bias of community members, and more. This paper presents a model to
systematically analyze the different interactions in a news community between
users, news, and sources. We develop a probabilistic graphical model that
leverages this joint interaction to identify 1) highly credible news articles,
2) trustworthy news sources, and 3) expert users who perform the role of
"citizen journalists" in the community. Our method extends CRF models to
incorporate real-valued ratings, as some communities have very fine-grained
scales that cannot be easily discretized without losing information. To the
best of our knowledge, this paper is the first full-fledged analysis of
credibility, trust, and expertise in news communities.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2017 17:41:31 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2017 16:40:16 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Mukherjee",
"Subhabrata",
""
],
[
"Weikum",
"Gerhard",
""
]
] |
new_dataset
| 0.992636 |
1705.03551
|
Mandar Joshi
|
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
|
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for
Reading Comprehension
|
Added references, fixed typos, minor baseline update
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present TriviaQA, a challenging reading comprehension dataset containing
over 650K question-answer-evidence triples. TriviaQA includes 95K
question-answer pairs authored by trivia enthusiasts and independently gathered
evidence documents, six per question on average, that provide high quality
distant supervision for answering the questions. We show that, in comparison to
other recently introduced large-scale datasets, TriviaQA (1) has relatively
complex, compositional questions, (2) has considerable syntactic and lexical
variability between questions and corresponding answer-evidence sentences, and
(3) requires more cross sentence reasoning to find answers. We also present two
baseline algorithms: a feature-based classifier and a state-of-the-art neural
network that performs well on SQuAD reading comprehension. Neither approach
comes close to human performance (23% and 40% vs. 80%), suggesting that
TriviaQA is a challenging testbed that is worth significant future study. Data
and code available at -- http://nlp.cs.washington.edu/triviaqa/
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 21:35:07 GMT"
},
{
"version": "v2",
"created": "Sat, 13 May 2017 21:12:37 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Joshi",
"Mandar",
""
],
[
"Choi",
"Eunsol",
""
],
[
"Weld",
"Daniel S.",
""
],
[
"Zettlemoyer",
"Luke",
""
]
] |
new_dataset
| 0.999686 |
1705.04437
|
Berk Gulmezoglu
|
Berk Gulmezoglu, Andreas Zankl, Thomas Eisenbarth and Berk Sunar
|
PerfWeb: How to Violate Web Privacy with Hardware Performance Events
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-th Nearest Neighbors, Decision Trees, Support Vector Machines,
and in contrast to previous literature also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks and still allow legitimate use of HPEs.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 03:58:50 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Gulmezoglu",
"Berk",
""
],
[
"Zankl",
"Andreas",
""
],
[
"Eisenbarth",
"Thomas",
""
],
[
"Sunar",
"Berk",
""
]
] |
new_dataset
| 0.971141 |
1705.04888
|
Hung La
|
Hung M. La
|
Automated Robotic Monitoring and Inspection of Steel Structures and
Bridges
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents visual and 3D structural inspection of steel structures
and bridges using a climbing robot developed for this purpose. The robot can
move freely on a steel surface, carry sensors, collect data, and send it to
the ground station in real time for monitoring as well as further processing.
Steel surface image stitching and 3D map building are conducted to provide the
current condition of the structure. Also, a computer vision-based method is
implemented to detect surface defects on the stitched images. The
effectiveness of the climbing robot's inspection is tested in multiple
circumstances to ensure strong steel adhesion and successful data collection.
The detection method was also successfully evaluated on various test images,
where steel cracks could be automatically identified without requiring
heuristic reasoning.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2017 22:12:39 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"La",
"Hung M.",
""
]
] |
new_dataset
| 0.994481 |
1705.04909
|
Chuili Kong
|
Chuili Kong, Caijun Zhong, Shi Jin, Sheng Yang, Hai Lin, and Zhaoyang
Zhang
|
Full-Duplex Massive MIMO Relaying Systems with Low-Resolution ADCs
|
14 pages, 10 figures, Accepted to appear in IEEE Transactions on
Wireless Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers a multipair amplify-and-forward massive MIMO relaying
system with low-resolution ADCs at both the relay and destinations. The channel
state information (CSI) at the relay is obtained via pilot training, which is
then utilized to perform simple maximum-ratio combining/maximum-ratio
transmission processing by the relay. Also, it is assumed that the destinations
use statistical CSI to decode the transmitted signals. Exact and approximated
closed-form expressions for the achievable sum rate are presented, which enable
the efficient evaluation of the impact of key system parameters on the system
performance. In addition, an optimal relay power allocation scheme is studied,
and the power scaling law is characterized. It is found that, with only
low-resolution
ADCs at the relay, increasing the number of relay antennas is an effective
method to compensate for the rate loss caused by coarse quantization. However,
it becomes ineffective to handle the detrimental effect of low-resolution ADCs
at the destination. Moreover, it is shown that deploying massive relay antenna
arrays can still bring significant power savings, i.e., the transmit power of
each source can be cut down proportional to $1/M$ to maintain a constant rate,
where $M$ is the number of relay antennas.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2017 02:25:17 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Kong",
"Chuili",
""
],
[
"Zhong",
"Caijun",
""
],
[
"Jin",
"Shi",
""
],
[
"Yang",
"Sheng",
""
],
[
"Lin",
"Hai",
""
],
[
"Zhang",
"Zhaoyang",
""
]
] |
new_dataset
| 0.995628 |
1705.04988
|
Bing Li
|
Tsun-Ming Tseng, Bing Li, Tsung-Yi Ho, Ulf Schlichtmann
|
Storage and Caching: Synthesis of Flow-based Microfluidic Biochips
|
IEEE Design and Test, December 2015
| null |
10.1109/MDAT.2015.2492473
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flow-based microfluidic biochips are widely used in lab-on-a-chip
experiments. In these chips, devices such as mixers and detectors connected by
micro-channels execute specific operations. Intermediate fluid samples are
saved in storage temporarily until target devices become available. However,
if the storage unit does not have enough capacity, fluid samples must wait in
devices, reducing their efficiency and thus increasing the overall execution
time. Consequently, storage and caching of fluid samples in such microfluidic
chips must be considered during synthesis to balance execution efficiency and
chip area.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2017 16:06:35 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Tseng",
"Tsun-Ming",
""
],
[
"Li",
"Bing",
""
],
[
"Ho",
"Tsung-Yi",
""
],
[
"Schlichtmann",
"Ulf",
""
]
] |
new_dataset
| 0.996604 |
1705.04991
|
Bing Li
|
Tsun-Ming Tseng, Bing Li, Ching-Feng Yeh, Hsiang-Chieh Jhan, Zuo-Ming
Tsai, Mark Po-Hung Lin, and Ulf Schlichtmann
|
Novel CMOS RFIC Layout Generation with Concurrent Device Placement and
Fixed-Length Microstrip Routing
|
ACM/IEEE Design Automation Conference (DAC), 2016
| null |
10.1145/2897937.2898052
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With advancing process technologies and booming IoT markets, millimeter-wave
CMOS RFICs have been widely developed in recent years. Since the performance
of CMOS RFICs is very sensitive to the precision of the layout, precise
placement of devices and precisely matched microstrip lengths to given values
have been a labor-intensive and time-consuming task, and thus become a major
bottleneck for time to market. This paper introduces a progressive
integer-linear-programming-based method to generate high-quality RFIC layouts
satisfying very stringent routing requirements of microstrip lines, including
spacing/non-crossing rules, precise length, and bend number minimization,
within a given layout area. The resulting RFIC layouts excel in both
performance and area with much fewer bends compared with the simulation-tuning
based manual layout, while the layout generation time is significantly
reduced from weeks to half an hour.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2017 16:15:25 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Tseng",
"Tsun-Ming",
""
],
[
"Li",
"Bing",
""
],
[
"Yeh",
"Ching-Feng",
""
],
[
"Jhan",
"Hsiang-Chieh",
""
],
[
"Tsai",
"Zuo-Ming",
""
],
[
"Lin",
"Mark Po-Hung",
""
],
[
"Schlichtmann",
"Ulf",
""
]
] |
new_dataset
| 0.992765 |
1705.04996
|
Bing Li
|
Chunfeng Liu, Bing Li, Bhargab B. Bhattacharya, Krishnendu
Chakrabarty, Tsung-Yi Ho, Ulf Schlichtmann
|
Testing Microfluidic Fully Programmable Valve Arrays (FPVAs)
|
Design, Automation and Test in Europe (DATE), March 2017
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fully Programmable Valve Array (FPVA) has emerged as a new architecture for
the next-generation flow-based microfluidic biochips. This 2D-array consists of
regularly-arranged valves, which can be dynamically configured by users to
realize microfluidic devices of different shapes and sizes as well as
interconnections. Additionally, the regularity of the underlying structure
renders FPVAs easier to integrate on a tiny chip. However, these arrays may
suffer from various manufacturing defects such as blockage and leakage in
control and flow channels. Unfortunately, no efficient method is yet known for
testing such a general-purpose architecture. In this paper, we present a novel
formulation using the concept of flow paths and cut-sets, and describe an
ILP-based hierarchical strategy for generating compact test sets that can
detect multiple faults in FPVAs. Simulation results demonstrate the efficacy of
the proposed method in detecting manufacturing faults with only a small number
of test vectors.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2017 16:31:36 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Liu",
"Chunfeng",
""
],
[
"Li",
"Bing",
""
],
[
"Bhattacharya",
"Bhargab B.",
""
],
[
"Chakrabarty",
"Krishnendu",
""
],
[
"Ho",
"Tsung-Yi",
""
],
[
"Schlichtmann",
"Ulf",
""
]
] |
new_dataset
| 0.9918 |
1705.05005
|
Andrew Thangaraj
|
Sourbh Bhadane and Andrew Thangaraj
|
Irregular Recovery and Unequal Locality for Locally Recoverable Codes
with Availability
|
expanded version of a paper that appeared in the National Conference
on Communications 2017, IIT Madras, Chennai, India
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A code is said to be a Locally Recoverable Code (LRC) with availability if
every coordinate can be recovered from multiple disjoint sets of other
coordinates called recovering sets. The vector of sizes of recovering sets of a
coordinate is called its recovery profile. In this work, we consider LRCs with
availability under two different settings: (1) irregular recovery: non-constant
recovery profile that remains fixed for all coordinates, (2) unequal locality:
regular recovery profile that can vary with coordinates. For each setting, we
derive bounds for the minimum distance that generalize previously known bounds
to the cases of irregular or varying recovery profiles. For the case of regular
and fixed recovery profile, we show that a specific Tamo-Barg
polynomial-evaluation construction is optimal for all-symbol locality, and we
provide parity-check matrix constructions for information locality with
availability.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2017 17:18:17 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Bhadane",
"Sourbh",
""
],
[
"Thangaraj",
"Andrew",
""
]
] |
new_dataset
| 0.994749 |
1705.05137
|
Jun Inoue
|
Jun Inoue and Yoriyuki Yamagata
|
Operational Semantics of Process Monitors
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CSPe is a specification language for runtime monitors that can directly
express concurrency in a bottom-up manner that composes the system from
simpler, interacting components. It includes constructs to explicitly flag
failures to the monitor, which, unlike deadlocks and livelocks in conventional
process algebras, propagate globally and abort the whole system's execution.
Although CSPe has a trace semantics along with an implementation demonstrating
acceptable performance, it lacks an operational semantics. An operational
semantics is not only more accessible than trace semantics but also
indispensable for ensuring the correctness of the implementation. Furthermore,
a process algebra like CSPe admits multiple denotational semantics appropriate
for different purposes, and an operational semantics is the basis for
justifying such semantics' integrity and relevance. In this paper, we develop
an SOS-style operational semantics for CSPe, which properly accounts for
explicit failures and will serve as a basis for further study of its
properties, its optimization, and its use in runtime verification.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2017 09:44:50 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Inoue",
"Jun",
""
],
[
"Yamagata",
"Yoriyuki",
""
]
] |
new_dataset
| 0.995018 |
1705.05247
|
Brayden Hollis
|
Brayden Hollis, Stacy Patterson, Jeff Trinkle
|
Compressed Sensing for Scalable Robotic Tactile Skins
|
arXiv admin note: text overlap with arXiv:1609.07542
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The potential of large tactile arrays to improve robot perception for safe
operation in human-dominated environments and of high-resolution tactile arrays
to enable human-level dexterous manipulation is well accepted. However, the
increase in the number of tactile sensing elements introduces challenges
including wiring complexity, data acquisition, and data processing. To help
address these challenges, we develop a tactile sensing technique based on
compressed sensing. Compressed sensing simultaneously performs data sampling
and compression with recovery guarantees and has been successfully applied in
computer vision. We use compressed sensing techniques for tactile data
acquisition to reduce hardware complexity and data transmission, while allowing
fast, accurate reconstruction of the full-resolution signal. For our simulated
test array of 4096 taxels, we achieve reconstruction quality equivalent to
measuring all taxel signals independently (the full signal) from just 1024
measurements (the compressed signal) at a rate over 100Hz. We then apply
tactile compressed sensing to the problem of object classification.
Specifically, we perform object classification on the compressed tactile data
based on a method called compressed learning. We obtain up to 98%
classification accuracy, even with a compression ratio of 64:1.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 16:40:16 GMT"
}
] | 2017-05-16T00:00:00 |
[
[
"Hollis",
"Brayden",
""
],
[
"Patterson",
"Stacy",
""
],
[
"Trinkle",
"Jeff",
""
]
] |
new_dataset
| 0.996105 |
1110.4978
|
W{\l}odzimierz Drabent
|
W{\l}odzimierz Drabent
|
Logic + control: On program construction and verification
|
29 pages. Version 3 substantially reworked, in particular all
informal reasoning replaced by proofs, part of the content moved to 1412.8739
and 1411.3015. Versions 4, 5 and this one -- various modifications and
extensions. Under consideration in Theory and Practice of Logic Programming
(TPLP)
| null | null | null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an example of formal reasoning about the semantics of a
Prolog program of practical importance (the SAT solver of Howe and King). The
program is treated as a definite clause logic program with added control. The
logic program is constructed by means of stepwise refinement, hand in hand with
its correctness and completeness proofs. The proofs are declarative - they do
not refer to any operational semantics. Each step of the logic program
construction follows a systematic approach to constructing programs which are
provably correct and complete. We also prove that correctness and completeness
of the logic program is preserved in the final Prolog program. Additionally, we
prove termination, occur-check freedom and non-floundering.
Our example shows how dealing with "logic" and with "control" can be
separated. Most of the proofs can be done at the "logic" level, abstracting
from any operational semantics.
The example employs approximate specifications; they are crucial in
simplifying reasoning about logic programs. It also shows that the paradigm of
semantics-preserving program transformations may be not sufficient. We suggest
considering transformations which preserve correctness and completeness with
respect to an approximate specification.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2011 15:29:28 GMT"
},
{
"version": "v2",
"created": "Sat, 26 May 2012 15:19:39 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Dec 2015 23:16:26 GMT"
},
{
"version": "v4",
"created": "Wed, 28 Dec 2016 21:36:40 GMT"
},
{
"version": "v5",
"created": "Fri, 13 Jan 2017 14:04:45 GMT"
},
{
"version": "v6",
"created": "Fri, 12 May 2017 16:54:29 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Drabent",
"Włodzimierz",
""
]
] |
new_dataset
| 0.984142 |
1703.06503
|
Cedric Nugteren
|
Cedric Nugteren and Valeriu Codreanu
|
CLTune: A Generic Auto-Tuner for OpenCL Kernels
|
8 pages, published in MCSoC '15, IEEE 9th International Symposium on
Embedded Multicore/Many-core Systems-on-Chip (MCSoC), 2015
| null |
10.1109/MCSoC.2015.10
| null |
cs.PF cs.AI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents CLTune, an auto-tuner for OpenCL kernels. It evaluates and
tunes kernel performance of a generic, user-defined search space of possible
parameter-value combinations. Example parameters include the OpenCL workgroup
size, vector data-types, tile sizes, and loop unrolling factors. CLTune can be
used in the following scenarios: 1) when there are too many tunable parameters
to explore manually, 2) when performance portability across OpenCL devices is
desired, or 3) when the optimal parameters change based on input argument
values (e.g. matrix dimensions). The auto-tuner is generic, easy to use,
open-source, and supports multiple search strategies including simulated
annealing and particle swarm optimisation. CLTune is evaluated on two GPU
case-studies inspired by the recent successes in deep learning: 2D convolution
and matrix-multiplication (GEMM). For 2D convolution, we demonstrate the need
for auto-tuning by optimizing for different filter sizes, achieving performance
on-par or better than the state-of-the-art. For matrix-multiplication, we use
CLTune to explore a parameter space of more than two-hundred thousand
configurations, we show the need for device-specific tuning, and outperform the
clBLAS library on NVIDIA, AMD and Intel GPUs.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2017 20:10:00 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Nugteren",
"Cedric",
""
],
[
"Codreanu",
"Valeriu",
""
]
] |
new_dataset
| 0.999796 |
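One of the search strategies the abstract names, simulated annealing, can be sketched over a CLTune-style discrete parameter space. This is a hedged illustration, not CLTune's actual C++ implementation: the cost function below is synthetic and stands in for compiling and timing a real OpenCL kernel.

```python
import math
import random

# Minimal simulated-annealing auto-tuner over a discrete parameter space
# (workgroup size, vector width, unroll factor), in the spirit of CLTune's
# search strategies; parameter names and the cost model are invented.
random.seed(1)
space = {"wg_size": [32, 64, 128, 256], "vec": [1, 2, 4], "unroll": [1, 2, 4, 8]}

def measure(cfg):  # stand-in for timing a compiled kernel; optimum is (128, 4, 4)
    return (abs(cfg["wg_size"] - 128) / 128
            + abs(cfg["vec"] - 4) / 4
            + abs(cfg["unroll"] - 4) / 8)

def neighbor(cfg):  # perturb one randomly chosen parameter
    key = random.choice(list(space))
    new = dict(cfg)
    new[key] = random.choice(space[key])
    return new

cfg = {k: random.choice(v) for k, v in space.items()}
cost = measure(cfg)
best_cfg, best_cost = cfg, cost
temp = 1.0
for _ in range(2000):
    cand = neighbor(cfg)
    c = measure(cand)
    # always accept improvements; accept worse configs with decaying probability
    if c < cost or random.random() < math.exp((cost - c) / temp):
        cfg, cost = cand, c
    if cost < best_cost:
        best_cfg, best_cost = cfg, cost
    temp *= 0.99
print(best_cfg)
```

With a real tuner the `measure` call dominates runtime, which is exactly why guided strategies such as annealing beat exhaustive search on spaces of hundreds of thousands of configurations.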
1705.04149
|
Jiajun Jiang
|
Jiajun Jiang and Yingfei Xiong
|
Can defects be fixed with weak test suites? An analysis of 50 defects
from Defects4J
|
Submitted to Empirical Software Engineering
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated program repair techniques, which aim to automatically generate
correct patches for real-world defects, have gained a lot of attention in
the last decade. Many different techniques and tools have been proposed and
developed. However, even the most sophisticated program repair techniques can
only repair a small portion of defects while producing a lot of incorrect
patches. A possible reason for this low performance is that the test suites of
real world programs are usually too weak to guarantee the behavior of the
program. To understand to what extent defects can be fixed with weak test
suites, we analyzed 50 real world defects from Defects4J, in which we found
that up to 84% of them could be correctly fixed. This result suggests that
there is plenty of space for current automated program repair techniques to
improve. Furthermore, we summarized seven fault localization strategies and
seven patch generation strategies that were useful in localizing and fixing
these defects, and compared those strategies with current repair techniques.
The results indicate potential directions for improving automated program
repair in future research.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2017 13:08:04 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2017 02:13:26 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Jiang",
"Jiajun",
""
],
[
"Xiong",
"Yingfei",
""
]
] |
new_dataset
| 0.950713 |
1705.04434
|
Peng Qi
|
Peng Qi and Christopher D. Manning
|
Arc-swift: A Novel Transition System for Dependency Parsing
|
Accepted at ACL 2017
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transition-based dependency parsers often need sequences of local shift and
reduce operations to produce certain attachments. Correct individual decisions
hence require global information about the sentence context and mistakes cause
error propagation. This paper proposes a novel transition system, arc-swift,
that enables direct attachments between tokens farther apart with a single
transition. This allows the parser to leverage lexical information more
directly in transition decisions. Hence, arc-swift can achieve significantly
better performance with a very small beam size. Our parsers reduce error by
3.7--7.6% relative to those using existing transition systems on the Penn
Treebank dependency parsing task and English Universal Dependencies.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 03:44:34 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Qi",
"Peng",
""
],
[
"Manning",
"Christopher D.",
""
]
] |
new_dataset
| 0.996507 |
1705.04441
|
Mingming Cai
|
Mingming Cai, J. Nicholas Laneman and Bertrand Hochwald
|
Beamforming Codebook Compensation for Beam Squint with Channel Capacity
Constraint
|
5 pages, to be published in Proc. IEEE ISIT 2017, Aachen, Germany
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analog beamforming with phased arrays is a promising technique for 5G
wireless communication in millimeter wave bands. A beam focuses on a small
range of angles of arrival or departure and corresponds to a set of fixed phase
shifts across frequency due to practical hardware constraints. In switched
beamforming, a discrete codebook consisting of multiple beams is used to cover
a larger angle range. However, for sufficiently large bandwidth, the gain
provided by the phased array is frequency dependent even if the radiation
pattern of the antenna elements is frequency independent, an effect called beam
squint. This paper shows that the beam squint reduces channel capacity of a
uniform linear array (ULA). The beamforming codebook is designed to compensate
for the beam squint by imposing a channel capacity constraint. For example, our
codebook design algorithm can improve the channel capacity by 17.8% for a ULA
with 64 antennas operating at bandwidth of 2.5 GHz and carrier frequency of 73
GHz. Analysis and numerical examples suggest that a denser codebook is required
to compensate for the beam squint compared to the case without beam squint.
Furthermore, the effect of beam squint is shown to increase as bandwidth
increases, and the beam squint limits the bandwidth given the number of
antennas in the array.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 05:01:31 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Cai",
"Mingming",
""
],
[
"Laneman",
"J. Nicholas",
""
],
[
"Hochwald",
"Bertrand",
""
]
] |
new_dataset
| 0.999567 |
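The beam-squint effect described in the abstract can be reproduced with a few lines: for fixed (frequency-flat) phase shifts chosen at the carrier, the array gain at other frequencies in the band drops. This is an illustrative sketch, not the paper's model; the 64-antenna / 73 GHz / 2.5 GHz-bandwidth numbers echo the abstract, everything else is assumed.

```python
import cmath
import math

def array_gain(n, d, fc, f, theta0, theta):
    """Normalized ULA gain with phase shifts fixed at carrier fc."""
    c = 3e8                                       # speed of light, m/s
    g = 0j
    for k in range(n):
        # fixed phase shift chosen at the carrier for steering angle theta0
        shift = -2 * math.pi * fc * k * d * math.sin(theta0) / c
        # actual propagation phase at operating frequency f, arrival angle theta
        prop = 2 * math.pi * f * k * d * math.sin(theta) / c
        g += cmath.exp(1j * (prop + shift))
    return abs(g) / n

n, fc = 64, 73e9
d = 3e8 / fc / 2                                  # half-wavelength spacing at fc
theta0 = math.radians(30)
gain_center = array_gain(n, d, fc, fc, theta0, theta0)          # full gain
gain_edge = array_gain(n, d, fc, fc + 1.25e9, theta0, theta0)   # squinted
print(round(gain_center, 3), round(gain_edge, 3))
```

At the carrier the phase terms cancel exactly and the gain is 1.0; at the band edge (carrier + half the 2.5 GHz bandwidth) the same phase shifts mis-steer the beam and the gain is visibly lower, which is the loss the paper's codebook design compensates for.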
1705.04469
|
Luka \v{C}ehovin Zajc
|
Luka \v{C}ehovin
|
TraX: The visual Tracking eXchange Protocol and Library
| null | null |
10.1016/j.neucom.2017.02.036
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we address the problem of developing on-line visual tracking
algorithms. We present a specialized communication protocol that serves as a
bridge between a tracker implementation and the application utilizing it. It decouples
development of algorithms and application, encouraging re-usability. The
primary use case is algorithm evaluation, where the protocol facilitates the
more complex evaluation scenarios that are used nowadays, thus pushing forward the
field of visual tracking. We present a reference implementation of the protocol
that makes it easy to use in several popular programming languages and discuss
where the protocol is already used and some usage scenarios that we envision
for the future.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 08:33:20 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Čehovin",
"Luka",
""
]
] |
new_dataset
| 0.964767 |
1705.04497
|
Wiktor Daszczuk
|
Wiktor B. Daszczuk, Jerzy Mie\'scicki
|
Distributed management of Personal Rapid Transit (PRT) vehicles under
unusual transport conditions
|
6 pages, 1 figure, 3 tables
|
Logistyka Vol. 4/2015, pp. 2896-2901
| null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper presents the flexibility of vehicle management in a Personal Rapid
Transit (PRT) network. The algorithm used for delivering empty vehicles to
waiting passengers is based on multiparameter analysis. Due to its distributed
construction, the algorithm has a horizon parameter, which specifies the
maximum distance between stations over which communication is performed. Every
decision is based on information about the current situation (the number of
vehicles standing at a station, the number of vehicles travelling to a
station, and the number of passengers waiting) exchanged between stations,
without any central database of traffic conditions. The simulation of traffic
in a random (typical) case and in the unusual case of delivering people to a
social event at a single place is presented. It is shown that simple
manipulation of the horizon parameter allows the network to adapt to
extremely uneven demand and destination choice.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 10:15:12 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Daszczuk",
"Wiktor B.",
""
],
[
"Mieścicki",
"Jerzy",
""
]
] |
new_dataset
| 0.952598 |
1705.04669
|
Ifeanyi Ubah
|
Ifeanyi W. Ubah, Lars Kroll, Alexandru A. Ormenisan, Seif Haridi
|
KompicsTesting - Unit Testing Event Streams
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present KompicsTesting, a framework for unit testing
components in the Kompics component model. Components in Kompics are
event-driven entities which communicate asynchronously solely by message
passing. Similar to actors in the actor model, they do not share their internal
state in message-passing, making them less prone to errors, compared to other
models of concurrency using shared state. However, they are neither immune to
simpler logical and specification errors nor errors such as dataraces that stem
from nondeterminism. As a result, there exists a need for tools that enable
rapid and iterative development and testing of message passing components in
general, in a manner similar to the xUnit frameworks for functions and modular
segments of code. These frameworks work in an imperative manner, ill-suited to
testing message-passing components, given that the behavior of such components
is encoded in the streams of messages that they send and receive. In this
work, we present a theoretical framework for describing and verifying the
behavior of message-passing components, independent of the model and framework
implementation, in a manner similar to describing a stream of characters using
regular expressions. We show how this approach can be used to perform both
black box and white box testing of components and illustrate its feasibility
through the design and implementation of a prototype based on this approach,
KompicsTesting.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 17:35:13 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Ubah",
"Ifeanyi W.",
""
],
[
"Kroll",
"Lars",
""
],
[
"Ormenisan",
"Alexandru A.",
""
],
[
"Haridi",
"Seif",
""
]
] |
new_dataset
| 0.981999 |
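The abstract's analogy, describing a stream of messages the way regular expressions describe a stream of characters, can be made concrete. This sketch uses Python's standard `re` module and invented event names; it is not KompicsTesting's actual (Java) API.

```python
import re

# Encode each observed event as a single symbol, then check the whole
# stream against a regular expression describing the expected behavior.
def encode(events):
    symbols = {"request": "R", "ack": "A", "timeout": "T"}
    return "".join(symbols[e] for e in events)

# expected behavior: one or more request/ack pairs, optionally ending in a timeout
spec = re.compile(r"(RA)+T?$")

good = ["request", "ack", "request", "ack"]
bad = ["request", "request", "ack"]
print(bool(spec.match(encode(good))), bool(spec.match(encode(bad))))  # True False
```

The same pattern idea extends naturally to white-box testing: the "alphabet" is the set of message types a component can send or receive, and a test passes when the observed stream is in the language of the specification.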
1705.04670
|
Sandip Roy Mr.
|
Tanaya Roy, Sandip Roy
|
Atypical Stable Multipath Routing Strategy in MANET
|
4 pages, 4 figures
| null | null | null |
cs.NI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MANET is a collection of mobile nodes powered by batteries with a limited
energy reservoir. The dynamic topology and absence of pre-existing
infrastructure in MANETs make routing more challenging. The arbitrary
movement of nodes may lead to more packet drops, routing overhead, and
end-to-end delay. Moreover, power deficiency in nodes affects their
packet-forwarding ability and thus reduces network lifetime, so a power-aware
stable routing strategy is in demand in MANETs. In this manuscript we propose
a novel multipath routing strategy that selects multiple stable routes
between source and destination during data transmission depending on two
factors: the residual energy and the link expiration time (LET) of nodes. Our
proposed energy-aware stable multipath routing strategy attains reliability,
load balancing, and bandwidth aggregation in order to increase the network
lifetime.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2017 17:38:03 GMT"
}
] | 2017-05-15T00:00:00 |
[
[
"Roy",
"Tanaya",
""
],
[
"Roy",
"Sandip",
""
]
] |
new_dataset
| 0.99178 |
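The LET factor the abstract relies on is usually computed with the classic link-expiration-time formula from the MANET literature (Su and Gerla). A hedged sketch with invented example numbers; the paper may use a variant:

```python
import math

def link_expiration_time(p1, v1, p2, v2, tx_range):
    """Time until two nodes with positions p (m) and velocities v (m/s),
    moving in straight lines, drift beyond tx_range (m) of each other."""
    a = v1[0] - v2[0]
    b = p1[0] - p2[0]
    c = v1[1] - v2[1]
    d = p1[1] - p2[1]
    if a == 0 and c == 0:          # no relative motion: link never expires
        return math.inf
    disc = (a * a + c * c) * tx_range ** 2 - (a * d - b * c) ** 2
    if disc < 0:                   # relative path never enters the range
        return 0.0
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

# node 2 drives straight away from node 1 at 5 m/s, starting 50 m apart,
# with a 250 m transmission range: the link should last (250 - 50) / 5 = 40 s
t = link_expiration_time((0, 0), (0, 0), (50, 0), (5, 0), 250)
print(t)  # 40.0
```

A stable-route selector would then prefer paths whose minimum per-hop LET (combined with residual energy) is largest.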
1612.07289
|
Italo Atzeni Dr.
|
Italo Atzeni and Marios Kountouris
|
Full-Duplex MIMO Small-Cell Networks with Interference Cancellation
|
Submitted to the IEEE for possible publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Full-duplex (FD) technology is envisaged as a key component for future mobile
broadband networks due to its ability to boost the spectral efficiency. FD
systems can transmit and receive simultaneously on the same frequency at the
expense of residual self-interference (SI) and additional interference to the
network compared with half-duplex (HD) transmission. This paper analyzes the
performance of wireless networks with FD multi-antenna base stations (BSs) and
HD user equipments (UEs) using stochastic geometry. Our analytical results
quantify the success probability and the achievable spectral efficiency and
indicate the amount of SI cancellation needed for beneficial FD operation. The
advantages of multi-antenna BSs/UEs are shown and the performance gains
achieved by balancing desired signal power increase and interference
cancellation are derived. The proposed framework aims at shedding light on the
system-level gains of FD mode with respect to HD mode in terms of network
throughput, and provides design guidelines for the practical implementation of
FD technology in large small-cell networks.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2016 19:41:09 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2017 10:39:11 GMT"
}
] | 2017-05-12T00:00:00 |
[
[
"Atzeni",
"Italo",
""
],
[
"Kountouris",
"Marios",
""
]
] |
new_dataset
| 0.998656 |
1704.04374
|
Paul Springer
|
Paul Springer, Tong Su, Paolo Bientinesi
|
HPTT: A High-Performance Tensor Transposition C++ Library
| null | null | null | null |
cs.MS cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently we presented TTC, a domain-specific compiler for tensor
transpositions. Despite the fact that the performance of the generated code is
nearly optimal, due to its offline nature, TTC cannot be utilized in all the
application codes in which the tensor sizes and the necessary tensor
permutations are determined at runtime. To overcome this limitation, we
introduce the open-source C++ library High-Performance Tensor Transposition
(HPTT). Similar to TTC, HPTT incorporates optimizations such as blocking,
multi-threading, and explicit vectorization; furthermore it decomposes any
transposition into multiple loops around a so-called micro-kernel. This modular
design---inspired by BLIS---makes HPTT easy to port to different architectures,
by only replacing the hand-vectorized micro-kernel (e.g., a 4x4 transpose).
HPTT also offers an optional autotuning framework---guided by a performance
model---that explores a vast search space of implementations at runtime
(similar to FFTW). Across a wide range of different tensor transpositions and
architectures (e.g., Intel Ivy Bridge, Intel Knights Landing, ARMv7, IBM
Power7), HPTT attains a bandwidth comparable to that of SAXPY, and yields
remarkable speedups over Eigen's tensor transposition implementation. Most
importantly, the integration of HPTT into the Cyclops Tensor Framework (CTF)
improves the overall performance of tensor contractions by up to 3.1x.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2017 09:45:06 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2017 21:34:51 GMT"
}
] | 2017-05-12T00:00:00 |
[
[
"Springer",
"Paul",
""
],
[
"Su",
"Tong",
""
],
[
"Bientinesi",
"Paolo",
""
]
] |
new_dataset
| 0.984754 |
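The loop-blocking idea HPTT applies to general tensor transpositions is easiest to see in the 2D matrix case. This is a Python illustration only; HPTT itself is a C++ library whose inner tile loop is a hand-vectorized micro-kernel rather than interpreted code.

```python
# Cache-blocked transpose of a row-major rows x cols matrix stored flat:
# the outer loops walk over bs x bs tiles (improving locality), the inner
# loops play the role of HPTT's micro-kernel.
def transpose_blocked(a, rows, cols, bs=4):
    out = [0] * (rows * cols)
    for ib in range(0, rows, bs):                 # tile loops
        for jb in range(0, cols, bs):
            for i in range(ib, min(ib + bs, rows)):   # "micro-kernel" body
                for j in range(jb, min(jb + bs, cols)):
                    out[j * rows + i] = a[i * cols + j]
    return out

a = list(range(6))                                # 2x3 matrix [[0,1,2],[3,4,5]]
print(transpose_blocked(a, 2, 3))                 # [0, 3, 1, 4, 2, 5]
```

For higher-order tensors the same structure holds, with the tile loops generated over an arbitrary permutation of the tensor's modes.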
1705.03955
|
Yunfei Hou
|
Yunfei Hou, Abhishek Gupta, Tong Guan, Shaohan Hu, Lu Su, and Chunming
Qiao
|
VehSense: Slippery Road Detection Using Smartphones
|
2017 IEEE 85th Vehicular Technology Conference (VTC2017-Spring)
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates a new application of vehicular sensing: detecting and
reporting the slippery road conditions. We describe a system and associated
algorithm to monitor vehicle skidding events using smartphones and OBD-II (On
board Diagnostics) adaptors. This system, which we call the VehSense, gathers
data from smartphone inertial sensors and vehicle wheel speed sensors, and
processes the data to monitor slippery road conditions in real-time.
Specifically, two speed readings are collected: 1) ground speed, which is
estimated by vehicle acceleration and rotation, and 2) wheel speed, which is
retrieved from the OBD-II interface. The mismatch between these two speeds is
used to infer a skidding event. Without tapping into vehicle manufacturers'
proprietary data (e.g., antilock braking system), VehSense is compatible with
most of the passenger vehicles, and thus can be easily deployed. We evaluate
our system on snow-covered roads in Buffalo, and show that it can detect
vehicle skidding effectively.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2017 21:16:12 GMT"
}
] | 2017-05-12T00:00:00 |
[
[
"Hou",
"Yunfei",
""
],
[
"Gupta",
"Abhishek",
""
],
[
"Guan",
"Tong",
""
],
[
"Hu",
"Shaohan",
""
],
[
"Su",
"Lu",
""
],
[
"Qiao",
"Chunming",
""
]
] |
new_dataset
| 0.99817 |
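The detection principle the abstract describes, flagging a mismatch between IMU-estimated ground speed and OBD-II wheel speed, reduces to a threshold test. The threshold and data below are invented for illustration; VehSense's actual algorithm is more involved.

```python
# Flag a skid whenever wheel speed and ground speed diverge beyond a
# threshold (m/s); both inputs are per-sample speed estimates.
def detect_skid(ground_speed, wheel_speed, threshold=2.0):
    return [abs(g - w) > threshold for g, w in zip(ground_speed, wheel_speed)]

ground = [10.0, 10.2, 10.1, 9.8, 9.5]   # m/s, from accelerometer/gyro fusion
wheel = [10.1, 10.3, 13.0, 13.2, 9.6]   # m/s, from the OBD-II wheel sensor
print(detect_skid(ground, wheel))        # [False, False, True, True, False]
```

During wheel spin the wheel speed exceeds the ground speed (as in samples 3 and 4 above); during lock-up braking the opposite mismatch appears, and the same test catches both.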
1705.03972
|
Johnnatan Messias
|
Julio C. S. Reis and Haewoon Kwak and Jisun An and Johnnatan Messias
and Fabricio Benevenuto
|
Demographics of News Sharing in the U.S. Twittersphere
| null | null |
10.1145/3078714.3078734
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widespread adoption and dissemination of online news through social media
systems have been revolutionizing many segments of our society and ultimately
our daily lives. In these systems, users can play a central role as they share
content to their friends. Despite that, little is known about news spreaders in
social media. In this paper, we provide the first of its kind in-depth
characterization of news spreaders in social media. In particular, we
investigate their demographics, what kind of content they share, and the
audience they reach. Among our main findings, we show that males and white
users tend to be more active in terms of sharing news, biasing the news
audience to the interests of these demographic groups. Our results also
quantify differences in interests of news sharing across demographics, which
has implications for personalized news digests.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2017 23:19:16 GMT"
}
] | 2017-05-12T00:00:00 |
[
[
"Reis",
"Julio C. S.",
""
],
[
"Kwak",
"Haewoon",
""
],
[
"An",
"Jisun",
""
],
[
"Messias",
"Johnnatan",
""
],
[
"Benevenuto",
"Fabricio",
""
]
] |
new_dataset
| 0.998879 |
1705.04045
|
Matheus Araujo
|
Matheus Araujo, Yelena Mejova, Ingmar Weber, Fabricio Benevenuto
|
Using Facebook Ads Audiences for Global Lifestyle Disease Surveillance:
Promises and Limitations
|
Please cite the article published at WebSci'17 instead of this arxiv
version
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Every day, millions of users reveal their interests on Facebook, which are
then monetized via targeted advertisement marketing campaigns. In this paper,
we explore the use of demographically rich Facebook Ads audience estimates for
tracking non-communicable diseases around the world. Across 47 countries, we
compute the audiences of marker interests, and evaluate their potential in
tracking health conditions associated with tobacco use, obesity, and diabetes,
compared to the performance of placebo interests. Despite its huge potential,
we find that, for modeling prevalence of health conditions across countries,
differences in these interest audiences are only weakly indicative of the
corresponding prevalence rates. Within the countries, however, our approach
provides interesting insights on trends of health awareness across demographic
groups. Finally, we provide a temporal error analysis to expose the potential
pitfalls of using Facebook's Marketing API as a black box.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2017 07:34:24 GMT"
}
] | 2017-05-12T00:00:00 |
[
[
"Araujo",
"Matheus",
""
],
[
"Mejova",
"Yelena",
""
],
[
"Weber",
"Ingmar",
""
],
[
"Benevenuto",
"Fabricio",
""
]
] |
new_dataset
| 0.992135 |
1606.04236
|
Sabrina M\"uller
|
Sabrina M\"uller, Onur Atan, Mihaela van der Schaar, Anja Klein
|
Context-Aware Proactive Content Caching with Service Differentiation in
Wireless Networks
|
32 pages, 9 figures, to appear in IEEE Transactions on Wireless
Communications, see http://doi.org/10.1109/TWC.2016.2636139
|
IEEE Transactions on Wireless Communications, vol. 16, no. 2, pp.
1024-1036, Feb. 2017
|
10.1109/TWC.2016.2636139
| null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content caching in small base stations or wireless infostations is considered
to be a suitable approach to improve the efficiency in wireless content
delivery. Placing the optimal content into local caches is crucial due to
storage limitations, but it requires knowledge about the content popularity
distribution, which is often not available in advance. Moreover, local content
popularity is subject to fluctuations since mobile users with different
interests connect to the caching entity over time. Which content a user prefers
may depend on the user's context. In this paper, we propose a novel algorithm
for context-aware proactive caching. The algorithm learns context-specific
content popularity online by regularly observing context information of
connected users, updating the cache content and observing cache hits
subsequently. We derive a sublinear regret bound, which characterizes the
learning speed and proves that our algorithm converges to the optimal cache
content placement strategy in terms of maximizing the number of cache hits.
Furthermore, our algorithm supports service differentiation by allowing
operators of caching entities to prioritize customer groups. Our numerical
results confirm that our algorithm outperforms state-of-the-art algorithms on a
real-world data set, with an increase in the number of cache hits of at least
14%.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2016 07:53:47 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2016 15:03:53 GMT"
}
] | 2017-05-11T00:00:00 |
[
[
"Müller",
"Sabrina",
""
],
[
"Atan",
"Onur",
""
],
[
"van der Schaar",
"Mihaela",
""
],
[
"Klein",
"Anja",
""
]
] |
new_dataset
| 0.993351 |
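The online learn-by-observing-cache-hits loop the abstract describes can be sketched with a plain epsilon-greedy learner. This is a hedged stand-in, not the authors' contextual algorithm with its regret guarantees; file popularities and all parameters below are invented.

```python
import random

# Repeatedly cache the files with the highest estimated popularity, observe
# cache hits, and refine the estimates; occasionally explore at random.
random.seed(0)
n_files, cache_size, rounds = 10, 3, 2000
true_pop = [0.05] * 7 + [0.6, 0.7, 0.8]       # unknown to the learner

hits, tries = [0] * n_files, [0] * n_files
for _ in range(rounds):
    est = [hits[i] / tries[i] if tries[i] else 1.0 for i in range(n_files)]
    if random.random() < 0.1:                 # explore a random placement
        cached = random.sample(range(n_files), cache_size)
    else:                                     # exploit current estimates
        cached = sorted(range(n_files), key=lambda i: -est[i])[:cache_size]
    for i in cached:                          # hits observed only for cached files
        tries[i] += 1
        hits[i] += random.random() < true_pop[i]

best = sorted(range(n_files), key=lambda i: -(hits[i] / max(tries[i], 1)))[:3]
print(sorted(best))                           # converges to the popular files
```

The paper's algorithm additionally conditions the estimates on user context (so popularity is learned per context) and weights customer groups for service differentiation.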
1704.06648
|
Tim Quatmann
|
Tim Quatmann, Sebastian Junges, Joost-Pieter Katoen
|
Markov Automata with Multiple Objectives
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Markov automata combine non-determinism, probabilistic branching, and
exponentially distributed delays. This compositional variant of continuous-time
Markov decision processes is used in reliability engineering, performance
evaluation and stochastic scheduling. Their verification so far focused on
single objectives such as (timed) reachability, and expected costs. In
practice, often the objectives are mutually dependent and the aim is to reveal
trade-offs. We present algorithms to analyze several objectives simultaneously
and approximate Pareto curves. This includes, e.g., several (timed)
reachability objectives, or various expected cost objectives. We also consider
combinations thereof, such as on-time-within-budget objectives - which policies
guarantee reaching a goal state within a deadline with at least probability $p$
while keeping the allowed average costs below a threshold? We adopt existing
approaches for classical Markov decision processes. The main challenge is to
treat policies exploiting state residence times, even for untimed objectives.
Experimental results show the feasibility and scalability of our approach.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2017 17:43:03 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2017 14:14:49 GMT"
}
] | 2017-05-11T00:00:00 |
[
[
"Quatmann",
"Tim",
""
],
[
"Junges",
"Sebastian",
""
],
[
"Katoen",
"Joost-Pieter",
""
]
] |
new_dataset
| 0.996018 |
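The trade-off analysis the abstract targets ends in a Pareto set over objective vectors. A minimal sketch of that final filtering step, with invented (reachability probability, expected cost) points standing in for analyzed policies; the paper's contribution is computing these points for Markov automata, which this toy does not attempt.

```python
# Keep the points not dominated by any other point, where "better" means
# higher reachability probability and lower expected cost.
def pareto(points):
    def dominates(p, q):
        return p[0] >= q[0] and p[1] <= q[1] and p != q
    return [p for p in points if not any(dominates(q, p) for q in points)]

policies = [(0.90, 5.0), (0.95, 8.0), (0.80, 4.0), (0.95, 9.0), (0.99, 12.0)]
print(sorted(pareto(policies)))
# [(0.8, 4.0), (0.9, 5.0), (0.95, 8.0), (0.99, 12.0)]
```

An on-time-within-budget query then reduces to checking whether any Pareto point has probability at least p and cost at most the threshold.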
1705.03517
|
Roberto Bagnara
|
Roberto Bagnara
|
MISRA C, for Security's Sake!
|
4 pages, 2 tables, presented at the "14th Workshop on Automotive
Software & Systems", Milan, November 10, 2016
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A third of new cellular subscriptions in the United States in Q1 2016 were for cars.
There are now more than 112 million vehicles connected around the world. The
percentage of new cars shipped with Internet connectivity is expected to rise
from 13% in 2015 to 75% in 2020, and 98% of all vehicles will likely be
connected by 2025. Moreover, the news continuously report about "white hat"
hackers intruding on car software. For these reasons, security concerns in
automotive and other industries have skyrocketed. MISRA C, which is widely
respected as a safety-related coding standard, is equally applicable as a
security-related coding standard. In this presentation, we will show that
security-critical and safety-critical software have the same requirements. We
will then introduce the new documents MISRA C:2012 Amendment 1 (Additional
security guidelines for MISRA C:2012) and MISRA C:2012 Addendum 2 (Coverage of
MISRA C:2012 against ISO/IEC TS 17961:2013 "C Secure Coding Rules"). We will
illustrate the relationship between MISRA C, CERT C and ISO/IEC TS 17961, with
a particular focus on the objective of preventing security vulnerabilities (and
of course safety hazards) as opposed to trying to eradicate them once they have
been inserted in the code.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 20:00:48 GMT"
}
] | 2017-05-11T00:00:00 |
[
[
"Bagnara",
"Roberto",
""
]
] |
new_dataset
| 0.979459 |
1705.03550
|
Vincenzo Lomonaco
|
Vincenzo Lomonaco and Davide Maltoni
|
CORe50: a New Dataset and Benchmark for Continuous Object Recognition
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continuous/Lifelong learning of high-dimensional data streams is a
challenging research problem. In fact, fully retraining models each time new
data become available is infeasible, due to computational and storage issues,
while na\"ive incremental strategies have been shown to suffer from
catastrophic forgetting. In the context of real-world object recognition
applications (e.g., robotic vision), where continuous learning is crucial, very
few datasets and benchmarks are available to evaluate and compare emerging
techniques. In this work we propose a new dataset and benchmark CORe50,
specifically designed for continuous object recognition, and introduce baseline
approaches for different continuous learning scenarios.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 21:32:19 GMT"
}
] | 2017-05-11T00:00:00 |
[
[
"Lomonaco",
"Vincenzo",
""
],
[
"Maltoni",
"Davide",
""
]
] |
new_dataset
| 0.999674 |
1705.03591
|
Tao Lu
|
Tao Lu, Ping Huang, Xubin He, Matthew Welch, Steven Gonzales, Ming
Zhang
|
IOTune: A G-states Driver for Elastic Performance of Block Storage
|
15 pages, 10 figures
| null | null | null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Imagine a disk that provides baseline performance at a relatively low price
during low-load periods but whose performance is automatically promoted in
situ and in real time when workloads demand more resources. With hardware
alone, this is hardly achievable. However, this imagined disk is becoming
reality due to technical advances in software-defined storage,
which enable volume performance to be adjusted on the fly. We propose IOTune, a
resource management middleware which employs software-defined storage
primitives to implement G-states of virtual block devices. G-states enable
virtual block devices to serve at multiple performance gears, getting rid of
conflicts between immutable resource reservation and dynamic resource demands,
and always achieving resource right-provisioning for workloads. Accompanying
G-states, we also propose a new block storage pricing policy for cloud
providers. Our case study for applying G-states to cloud block storage verifies
the effectiveness of the IOTune framework. Trace-replay based evaluations
demonstrate that storage volumes with G-states adapt to workload fluctuations.
For tenants, G-states enable volumes to provide much better QoS with a same
cost of ownership, compared with static IOPS provisioning and the I/O credit
mechanism. G-states also reduce I/O tail latencies by one to two orders of
magnitude. From the standpoint of cloud providers, G-states promote storage
utilization, creating values and benefiting competitiveness. G-states supported
by IOTune provide a new paradigm for storage resource management and pricing in
multi-tenant clouds.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2017 02:30:06 GMT"
}
] | 2017-05-11T00:00:00 |
[
[
"Lu",
"Tao",
""
],
[
"Huang",
"Ping",
""
],
[
"He",
"Xubin",
""
],
[
"Welch",
"Matthew",
""
],
[
"Gonzales",
"Steven",
""
],
[
"Zhang",
"Ming",
""
]
] |
new_dataset
| 0.999481 |
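The G-states idea, multiple performance gears per volume with in-situ promotion and demotion, can be sketched as a small controller. Gear values and thresholds below are invented, not IOTune's actual policy.

```python
# Promote a volume's IOPS gear when demand presses against the current
# gear's capacity; demote when demand would comfortably fit a lower gear.
GEARS = [100, 500, 2000, 8000]            # provisioned IOPS per gear

def next_gear(gear, demand_iops):
    if demand_iops > 0.8 * GEARS[gear] and gear < len(GEARS) - 1:
        return gear + 1                   # promote under pressure
    if gear > 0 and demand_iops < 0.5 * GEARS[gear - 1]:
        return gear - 1                   # demote when load stays low
    return gear

gear = 0
for demand in [90, 450, 1900, 300, 40]:   # observed demand per epoch
    gear = next_gear(gear, demand)
print(gear)                               # settled back to gear 1
```

A matching pricing policy would then bill per epoch at the gear actually held, which is what lets tenants pay baseline prices in quiet periods while still absorbing bursts.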
1705.03686
|
Daniel Neuen
|
Daniel Neuen and Pascal Schweitzer
|
Benchmark Graphs for Practical Graph Isomorphism
|
32 pages
| null | null | null |
cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The state-of-the-art solvers for the graph isomorphism problem can readily
solve generic instances with tens of thousands of vertices. Indeed, experiments
show that on inputs without particular combinatorial structure the algorithms
scale almost linearly. In fact, it is non-trivial to create challenging
instances for such solvers and the number of difficult benchmark graphs
available is quite limited. We describe a construction to efficiently generate
small instances for the graph isomorphism problem that are difficult or even
infeasible for said solvers. Up to this point the only other available
instances posing challenges for isomorphism solvers were certain incidence
structures of combinatorial objects (such as projective planes, Hadamard
matrices, Latin squares, etc.). Experiments show that starting from 1500
vertices our new instances are several orders of magnitude more difficult on
comparable input sizes. More importantly, our method is generic and efficient
in the sense that one can quickly create many isomorphism instances on a
desired number of vertices. In contrast to this, said combinatorial objects are
rare and difficult to generate and with the new construction it is possible to
generate an abundance of instances of arbitrary size. Our construction hinges
on the multipedes of Gurevich and Shelah and the Cai-F\"{u}rer-Immerman gadgets
that realize a certain abelian automorphism group and have repeatedly played a
role in the context of graph isomorphism. Exploring limits of such
constructions, we also explain that there are group theoretic obstructions to
generalizing the construction with non-abelian gadgets.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2017 10:28:17 GMT"
}
] | 2017-05-11T00:00:00 |
[
[
"Neuen",
"Daniel",
""
],
[
"Schweitzer",
"Pascal",
""
]
] |
new_dataset
| 0.983376 |
1604.03505
|
Prithvijit Chattopadhyay
|
Prithvijit Chattopadhyay, Ramakrishna Vedantam, Ramprasaath R.
Selvaraju, Dhruv Batra, and Devi Parikh
|
Counting Everyday Objects in Everyday Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We are interested in counting the number of instances of object classes in
natural, everyday images. Previous counting approaches tackle the problem in
restricted domains such as counting pedestrians in surveillance videos. Counts
can also be estimated from outputs of other vision tasks like object detection.
In this work, we build dedicated models for counting designed to tackle the
large variance in counts, appearances, and scales of objects found in natural
scenes. Our approach is inspired by the phenomenon of subitizing - the ability
of humans to make quick assessments of counts given a perceptual signal, for
small count values. Given a natural scene, we employ a divide and conquer
strategy while incorporating context across the scene to adapt the subitizing
idea to counting. Our approach offers consistent improvements over numerous
baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets.
Subsequently, we study how counting can be used to improve object detection. We
then show a proof of concept application of our counting methods to the task of
Visual Question Answering, by studying the `how many?' questions in the VQA and
COCO-QA datasets.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2016 18:31:43 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 17:34:20 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2017 03:24:40 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Chattopadhyay",
"Prithvijit",
""
],
[
"Vedantam",
"Ramakrishna",
""
],
[
"Selvaraju",
"Ramprasaath R.",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
]
] |
new_dataset
| 0.992274 |
1611.00135
|
Jia Li
|
Jia Li, Changqun Xia and Xiaowu Chen
|
A Benchmark Dataset and Saliency-guided Stacked Autoencoders for
Video-based Salient Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-based salient object detection (SOD) has been extensively studied in
the past decades. However, video-based SOD is much less explored owing to the
lack of large-scale video datasets in which salient objects are unambiguously
defined and annotated. Toward this end, this paper proposes a video-based SOD
dataset that consists of 200 videos (64 minutes). In constructing the dataset,
we manually annotate all objects and regions over 7,650 uniformly sampled
keyframes and collect the eye-tracking data of 23 subjects that free-view all
videos. From the user data, we find salient objects in video can be defined as
objects that consistently pop out throughout the video, and objects with such
attributes can be unambiguously annotated by combining manually annotated
object/region masks with eye-tracking data of multiple subjects. To the best of
our knowledge, it is currently the largest dataset for video-based salient
object detection.
Based on this dataset, this paper proposes an unsupervised baseline approach
for video-based SOD by using saliency-guided stacked autoencoders. In the
proposed approach, multiple spatiotemporal saliency cues are first extracted at
pixel, superpixel and object levels. With these saliency cues, stacked
autoencoders are unsupervisedly constructed which automatically infer a
saliency score for each pixel by progressively encoding the high-dimensional
saliency cues gathered from the pixel and its spatiotemporal neighbors.
Experimental results show that the proposed unsupervised approach outperforms
30 state-of-the-art models on the proposed dataset, including 19 image-based &
classic (unsupervised or non-deep learning), 6 image-based & deep learning, and
5 video-based & unsupervised. Moreover, benchmarking results show that the
proposed dataset is very challenging and has the potential to boost the
development of video-based SOD.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2016 05:48:05 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2017 07:38:17 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Li",
"Jia",
""
],
[
"Xia",
"Changqun",
""
],
[
"Chen",
"Xiaowu",
""
]
] |
new_dataset
| 0.9998 |
1702.00938
|
Pascal Giard
|
Pascal Giard, Alexios Balatsoukas-Stimming, Thomas Christoph M\"uller,
Andreas Burg, Claude Thibeault, and Warren J. Gross
|
A Multi-Gbps Unrolled Hardware List Decoder for a Systematic Polar Code
|
5 pages, 3 figures, appeared at the Asilomar Conference on Signals,
Systems, and Computers 2016
| null |
10.1109/ACSSC.2016.7869561
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes are a new class of block codes with an explicit construction that
provably achieve the capacity of various communications channels, even with the
low-complexity successive-cancellation (SC) decoding algorithm. Yet, the more
complex successive-cancellation list (SCL) decoding algorithm is gathering more
attention lately as it significantly improves the error-correction performance
of short- to moderate-length polar codes, especially when they are concatenated
with a cyclic redundancy check code. However, as SCL decoding explores several
decoding paths, existing hardware implementations tend to be significantly
slower than SC-based decoders. In this paper, we show how the unrolling
technique, which has already been used in the context of SC decoding, can be
adapted to SCL decoding yielding a multi-Gbps SCL-based polar decoder with an
error-correction performance that is competitive when compared to an LDPC code
of similar length and rate. Post-place-and-route ASIC results for 28 nm CMOS
are provided showing that this decoder can sustain a throughput greater than 10
Gbps at 468 MHz with an energy efficiency of 7.25 pJ/bit.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2017 08:48:52 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Giard",
"Pascal",
""
],
[
"Balatsoukas-Stimming",
"Alexios",
""
],
[
"Müller",
"Thomas Christoph",
""
],
[
"Burg",
"Andreas",
""
],
[
"Thibeault",
"Claude",
""
],
[
"Gross",
"Warren J.",
""
]
] |
new_dataset
| 0.998526 |
1705.03008
|
David Alejandro Trejo Pizzo
|
David Alejandro Trejo Pizzo
|
Resistive communications based on neuristors
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memristors are passive elements that allow us to store information using a
single element per bit. However, this is not the only utility of the memristor.
Considering the physico-chemical structure of the element used, the memristor
can function at the same time as memory and as a communication unit. This paper
presents a new approach to the use of the memristor and develops the concept of
resistive communication.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2017 23:02:07 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Pizzo",
"David Alejandro Trejo",
""
]
] |
new_dataset
| 0.984484 |
1705.03042
|
Mohsen Moradi
|
Mohsen Moradi
|
Polar codes for secret sharing
|
5 pages, 4 tables
| null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A secret can be an encrypted message or a private key used to decrypt a
ciphertext. One of the main issues in cryptography is keeping this secret safe.
Entrusting the secret to one person or saving it on a single computer risks the
betrayal of that person or the destruction of that device. To solve this issue,
the secret can be shared among several individuals such that only a coalition
of a specific number of them can access it. In practice, some members have more
authority, and a coalition of fewer of them should suffice to recover the
secret. In a bank, for example, the president and a deputy can form a valid
coalition of just two members. In this paper, secret sharing based on Polar
codes is studied and a secret sharing scheme based on Polar codes is
introduced. The information needed by each member is sent over the channel for
which the Polar code is constructed.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2017 18:35:20 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Moradi",
"Mohsen",
""
]
] |
new_dataset
| 0.999422 |
1705.03060
|
Shivram Tabibu
|
Shivram Tabibu
|
Communications for Wearable Devices
|
11 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wearable devices are transforming computing and human-computer interaction,
and they are a primary means of motion recognition for reflexive
systems. We review basic wearable deployments and their open wireless
communications. An algorithm that uses accelerometer data to provide a control
and communication signal is described. Challenges in the further deployment of
wearable devices in the fields of body area networks and biometric verification
are discussed.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2017 19:44:06 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Tabibu",
"Shivram",
""
]
] |
new_dataset
| 0.999465 |
1705.03326
|
Ori Shental
|
Ori Shental, Benjamin M. Zaidel and Shlomo Shamai
|
Low-Density Code-Domain NOMA: Better Be Regular
|
Accepted for publication in the IEEE International Symposium on
Information Theory (ISIT), June 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A closed-form analytical expression is derived for the limiting empirical
squared singular value density of a spreading (signature) matrix corresponding
to sparse low-density code-domain (LDCD) non-orthogonal multiple-access (NOMA)
with regular random user-resource allocation. The derivation relies on
associating the spreading matrix with the adjacency matrix of a large
semiregular bipartite graph. For a simple repetition-based sparse spreading
scheme, the result directly follows from a rigorous analysis of spectral
measures of infinite graphs. Turning to random (sparse) binary spreading, we
harness the cavity method from statistical physics, and show that the limiting
spectral density coincides in both cases. Next, we use this density to compute
the normalized input-output mutual information of the underlying vector channel
in the large-system limit. The latter may be interpreted as the achievable
total throughput per dimension with optimum processing in a corresponding
multiple-access channel setting or, alternatively, in a fully-symmetric
broadcast channel setting with full decoding capabilities at each receiver.
Surprisingly, the total throughput of regular LDCD-NOMA is found to be not only
superior to that achieved with irregular user-resource allocation, but also to
the total throughput of dense randomly-spread NOMA, for which optimum
processing is computationally intractable. In contrast, the superior
performance of regular LDCD-NOMA can be potentially achieved with a feasible
message-passing algorithm. This observation may advocate employing regular,
rather than irregular, LDCD-NOMA in 5G cellular physical layer design.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 13:36:27 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Shental",
"Ori",
""
],
[
"Zaidel",
"Benjamin M.",
""
],
[
"Shamai",
"Shlomo",
""
]
] |
new_dataset
| 0.954494 |
1705.03345
|
Emiliano De Cristofaro
|
Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Emiliano De
Cristofaro, Gianluca Stringhini, Athena Vakali
|
Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter
|
In 28th ACM Conference on Hypertext and Social Media (ACM HyperText
2017)
| null | null | null |
cs.SI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past few years, online bullying and aggression have become
increasingly prominent, and manifested in many different forms on social media.
However, there is little work analyzing the characteristics of abusive users
and what distinguishes them from typical social media users. In this paper, we
start addressing this gap by analyzing tweets containing a large amount
of abusiveness. We focus on a Twitter dataset revolving around the Gamergate
controversy, which led to many incidents of cyberbullying and cyberaggression
on various gaming and social media platforms. We study the properties of the
users tweeting about Gamergate, the content they post, and the differences in
their behavior compared to typical Twitter users.
We find that while their tweets are often seemingly about aggressive and
hateful subjects, "Gamergaters" do not exhibit common expressions of online
anger, and in fact primarily differ from typical users in that their tweets are
less joyful. They are also more engaged than typical Twitter users, which is an
indication as to how and why this controversy is still ongoing. Surprisingly,
we find that Gamergaters are less likely to be suspended by Twitter, thus we
analyze their properties to identify differences from typical users and what
may have led to their suspension. We perform an unsupervised machine learning
analysis to detect clusters of users who, though currently active, could be
considered for suspension since they exhibit behaviors similar to suspended
users. Finally, we confirm the usefulness of our analyzed features by emulating
the Twitter suspension mechanism with a supervised learning method, achieving
very good precision and recall.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 14:25:01 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Chatzakou",
"Despoina",
""
],
[
"Kourtellis",
"Nicolas",
""
],
[
"Blackburn",
"Jeremy",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Stringhini",
"Gianluca",
""
],
[
"Vakali",
"Athena",
""
]
] |
new_dataset
| 0.998624 |
1705.03352
|
Vaclav Kratochvil
|
Ji\v{r}ina Vejnarov\'a, V\'aclav Kratochv\'il
|
Composition of Credal Sets via Polyhedral Geometry
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently introduced composition operator for credal sets is an analogue of
such operators in probability, possibility, evidence and valuation-based
systems theories. It was designed to construct multidimensional models (in the
framework of credal sets) from a system of low-dimensional credal sets. In
this paper we study its potential from the computational point of view,
utilizing methods of polyhedral geometry.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2017 14:46:44 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Vejnarová",
"Jiřina",
""
],
[
"Kratochvíl",
"Václav",
""
]
] |
new_dataset
| 0.995965 |
1705.03415
|
Mohamed Alzenad
|
Mohamed Alzenad, Amr El-Keyi, Faraj Lagum, and Halim Yanikomeroglu
|
3D Placement of an Unmanned Aerial Vehicle Base Station (UAV-BS) for
Energy-Efficient Maximal Coverage
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicle mounted base stations (UAV-BSs) can provide wireless
services in a variety of scenarios. In this letter, we propose an optimal
placement algorithm for UAV-BSs that maximizes the number of covered users
using the minimum transmit power. We decouple the UAV-BS deployment problem in
the vertical and horizontal dimensions without any loss of optimality.
Furthermore, we model the UAV-BS deployment in the horizontal dimension as a
circle placement problem and a smallest enclosing circle problem. Simulations
are conducted to evaluate the performance of the proposed method for different
spatial distributions of the users.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2017 16:34:39 GMT"
}
] | 2017-05-10T00:00:00 |
[
[
"Alzenad",
"Mohamed",
""
],
[
"El-Keyi",
"Amr",
""
],
[
"Lagum",
"Faraj",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] |
new_dataset
| 0.989864 |
1608.07878
|
Luca de Alfaro
|
Luca de Alfaro and Marco Faella
|
TrueReview: A Platform for Post-Publication Peer Review
| null | null | null |
Technical Report UCSC-SOE-16-13
|
cs.DL cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
In post-publication peer review, scientific contributions are first published
in open-access forums, such as arXiv or other digital libraries, and are
subsequently reviewed and possibly ranked and/or evaluated. Compared to the
classical process of scientific publishing, in which review precedes
publication, post-publication peer review leads to faster dissemination of
ideas, and publicly-available reviews. The chief concern in post-publication
reviewing consists in eliciting high-quality, insightful reviews from
participants.
We describe the mathematical foundations and structure of TrueReview, an
open-source tool we propose to build in support of post-publication review. In
TrueReview, the motivation to review is provided via an incentive system that
promotes reviews and evaluations that are both truthful (they turn out to be
correct in the long run) and informative (they provide significant new
information). TrueReview organizes papers in venues, allowing different
scientific communities to set their own submission and review policies. These
venues can be manually set up, or they can correspond to categories in
well-known repositories such as arXiv. The review incentives can be used to
form a reviewer ranking that can be prominently displayed alongside papers in
the various disciplines, thus offering a concrete benefit to reviewers. The
paper evaluations, in turn, reward the authors of the most significant papers,
both via an explicit paper ranking, and via increased visibility in search.
|
[
{
"version": "v1",
"created": "Mon, 29 Aug 2016 01:28:27 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2017 18:59:56 GMT"
}
] | 2017-05-09T00:00:00 |
[
[
"de Alfaro",
"Luca",
""
],
[
"Faella",
"Marco",
""
]
] |
new_dataset
| 0.996954 |
1701.06338
|
Vahid Jamali
|
Vahid Jamali, Arman Ahmadzadeh, Nariman Farsad, and Robert Schober
|
SCW Codes for Optimal CSI-Free Detection in Diffusive Molecular
Communications
|
This is an extended version of a paper submitted to IEEE
International Symposium on Information Theory (ISIT) 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instantaneous or statistical channel state information (CSI) is needed for
most detection schemes developed in the molecular communication (MC)
literature. Since the MC channel changes, e.g., due to variations in the
velocity of flow, the temperature, or the distance between transmitter and
receiver, CSI acquisition has to be conducted repeatedly to keep track of CSI
variations. Frequent CSI acquisition may entail a large overhead whereas
infrequent CSI acquisition may result in a low CSI estimation quality. To cope
with these issues, we design codes which facilitate maximum likelihood sequence
detection at the receiver without instantaneous or statistical CSI. In
particular, assuming concentration shift keying modulation, we show that a
class of codes, referred to as strongly constant-weight (SCW) codes, enables
optimal CSI-free sequence detection at the cost of decreasing the data rate.
For the proposed SCW codes, we analyze the code rate and the error rate.
Simulation results verify our analytical derivations and reveal that the
proposed CSI-free detector for SCW codes outperforms the baseline coherent and
non-coherent detectors for uncoded transmission.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2017 11:37:28 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2017 07:03:13 GMT"
}
] | 2017-05-09T00:00:00 |
[
[
"Jamali",
"Vahid",
""
],
[
"Ahmadzadeh",
"Arman",
""
],
[
"Farsad",
"Nariman",
""
],
[
"Schober",
"Robert",
""
]
] |
new_dataset
| 0.998291 |
1705.02412
|
Fardin Abdi Taghi Abad
|
Fardin Abdi, Renato Mancuso, Rohan Tabish, Marco Caccamo
|
Restart-Based Fault-Tolerance: System Design and Schedulability Analysis
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embedded systems in safety-critical environments are continuously required to
deliver more performance and functionality, while expected to provide verified
safety guarantees. Nonetheless, platform-wide software verification (required
for safety) is often expensive. Therefore, design methods that enable
utilization of components such as real-time operating systems (RTOS), without
requiring their correctness to guarantee safety, are necessary.
In this paper, we propose a design approach to deploy safe-by-design embedded
systems. To attain this goal, we rely on a small core of verified software to
handle faults in applications and RTOS and recover from them while ensuring
that timing constraints of safety-critical tasks are always satisfied. Faults
are detected by monitoring the application timing and fault-recovery is
achieved via full platform restart and software reload, enabled by the short
restart time of embedded systems. Schedulability analysis is used to ensure
that the timing constraints of critical plant control tasks are always
satisfied in spite of faults and consequent restarts. We derive schedulability
results for four restart-tolerant task models. We use a simulator to evaluate
and compare the performance of the considered scheduling models.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2017 22:38:40 GMT"
}
] | 2017-05-09T00:00:00 |
[
[
"Abdi",
"Fardin",
""
],
[
"Mancuso",
"Renato",
""
],
[
"Tabish",
"Rohan",
""
],
[
"Caccamo",
"Marco",
""
]
] |
new_dataset
| 0.988321 |
1705.02596
|
Yawen Huang
|
Yawen Huang, Ling Shao, Alejandro F. Frangi
|
Simultaneous Super-Resolution and Cross-Modality Synthesis of 3D Medical
Images using Weakly-Supervised Joint Convolutional Sparse Coding
|
10 pages, 6 figures. Accepted by CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Magnetic Resonance Imaging (MRI) offers high-resolution \emph{in vivo}
imaging and rich functional and anatomical multimodality tissue contrast. In
practice, however, there are challenges associated with considerations of
scanning costs, patient comfort, and scanning time that constrain how much data
can be acquired in clinical or research studies. In this paper, we explore the
possibility of generating high-resolution and multimodal images from
low-resolution single-modality imagery. We propose the weakly-supervised joint
convolutional sparse coding to simultaneously solve the problems of
super-resolution (SR) and cross-modality image synthesis. The learning process
requires only a few registered multimodal image pairs as the training set.
Additionally, the quality of the joint dictionary learning can be improved
using a larger set of unpaired images. To combine unpaired data from different
image resolutions/modalities, a hetero-domain image alignment term is proposed.
Local image neighborhoods are naturally preserved by operating on the whole
image domain (as opposed to image patches) and using joint convolutional sparse
coding. The paired images are enhanced in the joint learning process with
unpaired data and an additional maximum mean discrepancy term, which minimizes
the dissimilarity between their feature distributions. Experiments show that
the proposed method outperforms state-of-the-art techniques on both SR
reconstruction and simultaneous SR and cross-modality synthesis.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2017 10:55:33 GMT"
}
] | 2017-05-09T00:00:00 |
[
[
"Huang",
"Yawen",
""
],
[
"Shao",
"Ling",
""
],
[
"Frangi",
"Alejandro F.",
""
]
] |
new_dataset
| 0.989457 |
1705.02700
|
Vincent Fiorentini
|
Vincent Fiorentini, Megan Shao, Julie Medero
|
Generating Memorable Mnemonic Encodings of Numbers
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The major system is a mnemonic system that can be used to memorize sequences
of numbers. In this work, we present a method to automatically generate
sentences that encode a given number. We propose several encoding models and
compare the most promising ones in a password memorability study. The results
of the study show that a model combining part-of-speech sentence templates with
an $n$-gram language model produces the most memorable password
representations.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2017 21:16:35 GMT"
}
] | 2017-05-09T00:00:00 |
[
[
"Fiorentini",
"Vincent",
""
],
[
"Shao",
"Megan",
""
],
[
"Medero",
"Julie",
""
]
] |
new_dataset
| 0.998218 |
1705.02735
|
Amir Zadeh
|
Edmund Tong, Amir Zadeh, Cara Jones, Louis-Philippe Morency
|
Combating Human Trafficking with Deep Multimodal Models
|
ACL 2017 Long Paper
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human trafficking is a global epidemic affecting millions of people across
the planet. Sex trafficking, the dominant form of human trafficking, has seen a
significant rise mostly due to the abundance of escort websites, where human
traffickers can openly advertise among at-will escort advertisements. In this
paper, we take a major step in the automatic detection of advertisements
suspected to pertain to human trafficking. We present a novel dataset called
Trafficking-10k, with more than 10,000 advertisements annotated for this task.
The dataset contains two sources of information per advertisement: text and
images. For the accurate detection of trafficking advertisements, we designed
and trained a deep multimodal model called the Human Trafficking Deep Network
(HTDN).
|
[
{
"version": "v1",
"created": "Mon, 8 May 2017 03:48:01 GMT"
}
] | 2017-05-09T00:00:00 |
[
[
"Tong",
"Edmund",
""
],
[
"Zadeh",
"Amir",
""
],
[
"Jones",
"Cara",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] |
new_dataset
| 0.99947 |
1605.05486
|
Pushpak Jagtap
|
Pushpak Jagtap and Majid Zamani
|
Backstepping Design for Incremental Stability of Stochastic Hamiltonian
Systems with Jumps
|
14 pages, 3 figures
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incremental stability is a property of dynamical systems ensuring the uniform
asymptotic stability of each trajectory rather than a fixed equilibrium point
or trajectory. Here, we introduce a notion of incremental stability for
stochastic control systems and provide its description in terms of the
existence of so-called incremental Lyapunov functions. Moreover, we provide a
backstepping controller design scheme providing controllers along with
corresponding incremental Lyapunov functions rendering a class of stochastic
control systems, namely, stochastic Hamiltonian systems with jumps,
incrementally stable. To illustrate the effectiveness of the proposed approach,
we design a controller making a spring pendulum system in a noisy environment
incrementally stable.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 09:12:29 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2016 10:29:54 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2017 12:47:36 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Jagtap",
"Pushpak",
""
],
[
"Zamani",
"Majid",
""
]
] |
new_dataset
| 0.989936 |
1607.06408
|
Yongkang Wong
|
Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan
Kankanhalli
|
Multi-Camera Action Dataset for Cross-Camera Action Recognition
Benchmarking
| null | null |
10.1109/WACV.2017.28
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action recognition has received increasing attention from the computer vision
and machine learning communities in the last decade. To enable the study of
this problem, there exist a vast number of action datasets, which are recorded
under controlled laboratory settings, real-world surveillance environments, or
crawled from the Internet. Apart from the "in-the-wild" datasets, the training
and test splits of conventional datasets often possess similar environmental
conditions, which leads to close-to-perfect performance on constrained
datasets. In this paper, we introduce a new dataset, namely Multi-Camera Action
Dataset (MCAD), which is designed to evaluate the open view classification
problem under the surveillance environment. In total, MCAD contains 14,298
action samples from 18 action categories, which are performed by 20 subjects
and independently recorded with 5 cameras. Inspired by the well received
evaluation approach on the LFW dataset, we designed a standard evaluation
protocol and benchmarked MCAD under several scenarios. The benchmark shows that
while an average of 85% accuracy is achieved under the closed-view scenario,
the performance suffers from a significant drop under the cross-view scenario.
In the worst case scenario, the performance of 10-fold cross validation drops
from 87.0% to 47.4%.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2016 17:58:19 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2017 10:00:59 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2017 05:21:31 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Li",
"Wenhui",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Liu",
"An-An",
""
],
[
"Li",
"Yang",
""
],
[
"Su",
"Yu-Ting",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] |
new_dataset
| 0.999714 |
1701.05045
|
Mustafa Sarı
|
Mustafa Sari and Emre Kolotoglu
|
New quantum MDS constacyclic codes
|
Some results are not true
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is devoted to the study of the construction of new quantum MDS
codes. Based on constacyclic codes over Fq2, we derive four new families of
quantum MDS codes, one of which is an explicit generalization of the
construction given in Theorem 7 in [22]. We also extend the result of Theorem
3.3 given in [17].
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2017 12:57:08 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2017 09:34:56 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Sari",
"Mustafa",
""
],
[
"Kolotoglu",
"Emre",
""
]
] |
new_dataset
| 0.999153 |
1705.01978
|
Eugene Syriani
|
Brice M. Bigendako and Eugene Syriani
|
Automatically Installing and Deploying Tools for Conducting Systematic
Reviews in ReLiS
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conducting systematic reviews (SR) is a time-consuming endeavor that requires
several iterations to set up right. We present ReLiS, a framework to configure
and deploy projects while conducting a SR. It features a domain-specific
modeling editor tailored for researchers who perform SRs and an architecture
that enables live installation and deployment of multiple concurrently running
projects. See the accompanying video at http://youtu.be/U5zOmk2vWy8
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 19:12:09 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Bigendako",
"Brice M.",
""
],
[
"Syriani",
"Eugene",
""
]
] |
new_dataset
| 0.994904 |
1705.01990
|
Klara Nahrstedt
|
Klara Nahrstedt, Christos G. Cassandras, and Charlie Catlett
|
City-Scale Intelligent Systems and Platforms
|
A Computing Community Consortium (CCC) white paper, 8 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As of 2014, 54% of the earth's population resides in urban areas, and it is
steadily increasing, expecting to reach 66% by 2050. Urban areas range from
small cities with tens of thousands of people to megacities with greater than
10 million people. Roughly 12% of the global population today lives in 28
megacities, and at least 40 are projected by 2030. At these scales, the urban
infrastructure such as roads, buildings, and utility networks will cover areas
as large as New England. This steady urbanization and the resulting expansion
of infrastructure, combined with renewal of aging urban infrastructure,
represent tens of trillions of dollars in new urban infrastructure investment
over the coming decades. These investments must balance factors including
impact on clean air and water, energy and maintenance costs, and the
productivity and health of city dwellers. Moreover, cost-effective management
and sustainability of these growing urban areas will be one of the most
critical challenges to our society, motivating the concept of science- and
data-driven urban design, retrofit, and operation; that is, "Smart Cities".
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 19:50:06 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Nahrstedt",
"Klara",
""
],
[
"Cassandras",
"Christos G.",
""
],
[
"Catlett",
"Charlie",
""
]
] |
new_dataset
| 0.995658 |
1705.02004
|
Ellen Zegura
|
Ellen Zegura, Beki Grinter, Elizabeth Belding, and Klara Nahrstedt
|
A Rural Lens on a Research Agenda for Intelligent Infrastructure
|
A Computing Community Consortium (CCC) white paper, 6 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A National Agenda for Intelligent Infrastructure is not complete without
explicit consideration of the needs of rural communities. While the American
population has urbanized, the United States depends on rural communities for
agriculture, fishing, forestry, manufacturing and mining. Approximately 20% of
the US population lives in rural areas with a skew towards aging adults.
Further, nearly 25% of Veterans live in rural America. And yet, when
intelligent infrastructure is imagined, it is often done so with implicit or
explicit bias towards cities. In this brief we describe the unique
opportunities for rural communities and offer an inclusive vision of
intelligent infrastructure research. In this paper, we argue for a set of
coordinated actions to ensure that rural Americans are not left behind in this
digital revolution. These technological platforms and applications, supported
by appropriate policy, will address key issues in transportation, energy,
agriculture, public safety and health. We believe that rather than being a set
of needs, the rural United States presents a set of exciting possibilities for
novel innovation benefiting not just those living there, but the American
economy more broadly.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 20:24:29 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Zegura",
"Ellen",
""
],
[
"Grinter",
"Beki",
""
],
[
"Belding",
"Elizabeth",
""
],
[
"Nahrstedt",
"Klara",
""
]
] |
new_dataset
| 0.997878 |
1705.02148
|
Noureldien Hussein
|
Noureldien Hussein, Efstratios Gavves and Arnold W.M. Smeulders
|
Unified Embedding and Metric Learning for Zero-Exemplar Event Detection
|
IEEE CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event detection in unconstrained videos is conceived as content-based video
retrieval with two modalities: textual and visual. Given a text describing a
novel event, the goal is to rank related videos accordingly. This task is
zero-exemplar: no video examples are given for the novel event.
Related works train a bank of concept detectors on external data sources.
These detectors predict confidence scores for test videos, which are ranked and
retrieved accordingly. In contrast, we learn a joint space in which the visual
and textual representations are embedded. The space casts a novel event as a
probability of pre-defined events. Also, it learns to measure the distance
between an event and its related videos.
Our model is trained end-to-end on publicly available EventNet. When applied
to TRECVID Multimedia Event Detection dataset, it outperforms the
state-of-the-art by a considerable margin.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2017 09:45:58 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Hussein",
"Noureldien",
""
],
[
"Gavves",
"Efstratios",
""
],
[
"Smeulders",
"Arnold W. M.",
""
]
] |
new_dataset
| 0.987969 |
1705.02210
|
Cheng-Hao Cai
|
Cheng-Hao Cai
|
SLDR-DL: A Framework for SLD-Resolution with Deep Learning
|
12 pages, 5 figures
| null | null | null |
cs.AI cs.LG cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces an SLD-resolution technique based on deep learning.
This technique enables neural networks to learn from old and successful
resolution processes and to use learnt experiences to guide new resolution
processes. An implementation of this technique is named SLDR-DL. It includes a
Prolog library of deep feedforward neural networks and some essential functions
of resolution. In the SLDR-DL framework, users can define logical rules in the
form of definite clauses and teach neural networks to use the rules in
reasoning processes.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2017 13:32:54 GMT"
}
] | 2017-05-08T00:00:00 |
[
[
"Cai",
"Cheng-Hao",
""
]
] |
new_dataset
| 0.968544 |
1604.02182
|
Joseph Robinson
|
Joseph P. Robinson, Ming Shao, Yue Wu, Yun Fu
|
Families in the Wild (FIW): Large-Scale Kinship Image Database and
Benchmarks
| null |
ACM MM (2016) 242-246
|
10.1145/2964284.2967219
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the largest kinship recognition dataset to date, Families in the
Wild (FIW). Motivated by the lack of a single, unified dataset for kinship
recognition, we aim to provide a dataset that captivates the interest of the
research community. With only a small team, we were able to collect, organize,
and label over 10,000 family photos of 1,000 families with our annotation tool
designed to mark complex hierarchical relationships and local label information
in a quick and efficient manner. We include several benchmarks for two
image-based tasks, kinship verification and family recognition. For this, we
incorporate several visual features and metric learning methods as baselines.
Also, we demonstrate that a pre-trained Convolutional Neural Network (CNN) as
an off-the-shelf feature extractor outperforms the other feature types. Then,
results were further boosted by fine-tuning two deep CNNs on FIW data: (1) for
kinship verification, a triplet loss function was learned on top of the network
of pre-trained weights; (2) for family recognition, a family-specific softmax
classifier was added to the network.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2016 21:45:53 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2017 03:15:48 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Robinson",
"Joseph P.",
""
],
[
"Shao",
"Ming",
""
],
[
"Wu",
"Yue",
""
],
[
"Fu",
"Yun",
""
]
] |
new_dataset
| 0.999798 |
1607.01472
|
Muhammad Shakir
|
Mohamed Alzenad, Muhammad Zeeshan Shakir, Halim Yanikomeroglu, and
Mohamed-Slim Alouini
|
FSO-based Vertical Backhaul/Fronthaul Framework for 5G+ Wireless
Networks
|
Under Second Round of Revision in IEEE Communications Magazine,
April, 2017
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The presence of a super-high-rate, but also cost-efficient, easy-to-deploy,
and scalable, backhaul/fronthaul framework is essential in the upcoming
fifth-generation (5G) wireless networks and beyond. Motivated by the mounting
interest in the unmanned flying platforms of various types including unmanned
aerial vehicles (UAVs), drones, balloons, and
high-altitude/medium-altitude/low-altitude platforms (HAPs/MAPs/LAPs), which we
refer to as the networked flying platforms (NFPs), for providing communications
services and the recent advances in free-space optics (FSO), this article
investigates the feasibility of a novel vertical backhaul/fronthaul framework
where the NFPs transport the backhaul/fronthaul traffic between the access and
core networks via point-to-point FSO links. The performance of the proposed
innovative approach is investigated under different weather conditions and a
broad range of system parameters. Simulation results demonstrate that the
FSO-based vertical backhaul/fronthaul framework can offer data rates higher
than the baseline alternatives, and thus can be considered as a promising
solution to the emerging backhaul/fronthaul requirements of the 5G+ wireless
networks, particularly in the presence of ultra-dense heterogeneous small
cells. The paper also presents the challenges that accompany such a novel
framework and provides some key ideas towards overcoming these challenges.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 03:33:37 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2017 12:45:42 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2017 02:10:25 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Alzenad",
"Mohamed",
""
],
[
"Shakir",
"Muhammad Zeeshan",
""
],
[
"Yanikomeroglu",
"Halim",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] |
new_dataset
| 0.991422 |
1705.01598
|
Dmitry Liakh
|
Antti-Pekka Hynninen, Dmitry I. Lyakh
|
cuTT: A High-Performance Tensor Transpose Library for CUDA Compatible
GPUs
| null | null | null | null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the CUDA Tensor Transpose (cuTT) library that implements
high-performance tensor transposes for NVIDIA GPUs with Kepler and above
architectures. cuTT achieves high performance by (a) utilizing two
GPU-optimized transpose algorithms that both use a shared memory buffer in
order to reduce global memory access scatter, and by (b) computing memory
positions of tensor elements using a thread-parallel algorithm. We evaluate the
performance of cuTT on a variety of benchmarks with tensor ranks ranging from 2
to 12 and show that cuTT performance is independent of the tensor rank and that
it performs no worse than an approach based on code generation. We develop a
heuristic scheme for choosing the optimal parameters for tensor transpose
algorithms by implementing an analytical GPU performance model that can be used
at runtime without need for performance measurements or profiling. Finally, by
integrating cuTT into the tensor algebra library TAL-SH, we significantly
reduce the tensor transpose overhead in tensor contractions, achieving as low
as just one percent overhead for arithmetically intensive tensor contractions.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 19:58:00 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Hynninen",
"Antti-Pekka",
""
],
[
"Lyakh",
"Dmitry I.",
""
]
] |
new_dataset
| 0.997855 |
1705.01662
|
Omid Mashayekhi
|
Omid Mashayekhi, Hang Qu, Chinmayee Shah, Philip Levis
|
Execution Templates: Caching Control Plane Decisions for Strong Scaling
of Data Analytics
|
To appear at USENIX ATC 2017
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Control planes of cloud frameworks trade off between scheduling granularity
and performance. Centralized systems schedule at task granularity, but only
schedule a few thousand tasks per second. Distributed systems schedule hundreds
of thousands of tasks per second but changing the schedule is costly.
We present execution templates, a control plane abstraction that can schedule
hundreds of thousands of tasks per second while supporting fine-grained,
per-task scheduling decisions. Execution templates leverage a program's
repetitive control flow to cache blocks of frequently-executed tasks. Executing
a task in a template requires sending a single message. Large-scale scheduling
changes install new templates, while small changes apply edits to existing
templates.
Evaluations of execution templates in Nimbus, a data analytics framework,
find that they provide the fine-grained scheduling flexibility of centralized
control planes while matching the strong scaling of distributed ones. Execution
templates support complex, real-world applications, such as a fluid simulation
with a triply nested loop and data dependent branches.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 00:24:12 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Mashayekhi",
"Omid",
""
],
[
"Qu",
"Hang",
""
],
[
"Shah",
"Chinmayee",
""
],
[
"Levis",
"Philip",
""
]
] |
new_dataset
| 0.995781 |
1705.01773
|
Maksims Dimitrijevs
|
Maksims Dimitrijevs, Abuzer Yakaryılmaz
|
Uncountable realtime probabilistic classes
|
12 pages. Accepted to DCFS2017
| null | null | null |
cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the minimum cases for realtime probabilistic machines that can
define uncountably many languages with bounded error. We show that logarithmic
space is enough for realtime PTMs on unary languages. In the binary case, we
obtain the same result for double logarithmic space, which is tight. When
replacing the worktape with certain limited memories, we obtain uncountability
results on unary languages for two counters.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 10:08:06 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Dimitrijevs",
"Maksims",
""
],
[
"Yakaryılmaz",
"Abuzer",
""
]
] |
new_dataset
| 0.973977 |
1705.01817
|
Christoph Schwering
|
Christoph Schwering
|
A Reasoning System for a First-Order Logic of Limited Belief
|
22 pages, 0 figures, Twenty-sixth International Joint Conference on
Artificial Intelligence (IJCAI-17)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logics of limited belief aim at enabling computationally feasible reasoning
in highly expressive representation languages. These languages are often
dialects of first-order logic with a weaker form of logical entailment that
keeps reasoning decidable or even tractable. While a number of such logics have
been proposed in the past, they tend to remain for theoretical analysis only
and their practical relevance is very limited. In this paper, we aim to go
beyond the theory. Building on earlier work by Liu, Lakemeyer, and Levesque, we
develop a logic of limited belief that is highly expressive while remaining
decidable in the first-order and tractable in the propositional case and
exhibits some characteristics that make it attractive for an implementation. We
introduce a reasoning system that employs this logic as representation language
and present experimental results that showcase the benefit of limited belief.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 12:39:27 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Schwering",
"Christoph",
""
]
] |
new_dataset
| 0.973053 |
1705.01833
|
Somnath Roy
|
Somnath Roy
|
A Finite State and Rule-based Akshara to Prosodeme (A2P) Converter in
Hindi
|
If you need software (A2P Converter), you have to write for the same
at "[email protected]" or "[email protected]"
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article describes a software module called Akshara to Prosodeme (A2P)
converter in Hindi. It converts an input grapheme into a prosodeme (a sequence
of phonemes with the specification of syllable boundaries and prosodic labels).
The software is based on two proposed finite state machines: one for
the syllabification and another for the syllable labeling. In addition to that,
it also uses a set of nonlinear phonological rules proposed for foot formation
in Hindi, which encompass solutions to schwa-deletion in simple, compound,
derived and inflected words. The nonlinear phonological rules are based on
metrical phonology with the provision of recursive foot structure. A software
module is implemented in Python. The testing of the software for
syllabification, syllable labeling, schwa deletion and prosodic labeling yield
an accuracy of more than 99% on a lexicon of size 28664 words.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 13:33:00 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Roy",
"Somnath",
""
]
] |
new_dataset
| 0.998189 |
1705.01862
|
Yehan Ma
|
Yehan Ma, Dolvara Gunatilaka, Bo Li, Humberto Gonzalez, and Chenyang
Lu
|
Holistic Cyber-Physical Management for Dependable Wireless Control
Systems
|
Submitted to TCPS
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless sensor-actuator networks (WSANs) are gaining momentum in industrial
process automation as a communication infrastructure for lowering deployment
and maintenance costs. In traditional wireless control systems, the plant
controller and the network manager operate in isolation, which ignores the
significant influence of network reliability on plant control performance. To
enhance the dependability of industrial wireless control, we propose a holistic
cyber-physical management framework that employs run-time coordination between
the plant control and network management. Our design includes a holistic
controller that generates actuation signals to physical plants and reconfigures
the WSAN to maintain desired control performance while saving wireless
resources. As a concrete example of holistic control, we design a holistic
manager that dynamically reconfigures the number of transmissions in the WSAN
based on online observations of physical and cyber variables. We have
implemented the holistic management framework in the Wireless Cyber-Physical
Simulator (WCPS). A systematic case study has been presented based on two
5-state plants sharing a 16-node WSAN. Simulation results show that the
holistic management design has significantly enhanced the resilience of the
system against both wireless interferences and physical disturbances, while
effectively reducing the number of wireless transmissions.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 14:45:42 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Ma",
"Yehan",
""
],
[
"Gunatilaka",
"Dolvara",
""
],
[
"Li",
"Bo",
""
],
[
"Gonzalez",
"Humberto",
""
],
[
"Lu",
"Chenyang",
""
]
] |
new_dataset
| 0.99633 |
1705.01923
|
Rahul Mangharam
|
Rahul Mangharam, Megan Reyerson, Steve Viscelli, Hamsa Balakrishanan,
Alexandre Bayen, Surabh Amin, Leslie Richards, Leo Bagley, and George Pappas
|
MOBILITY21: Strategic Investments for Transportation Infrastructure &
Technology
|
A Computing Community Consortium (CCC) white paper, 4 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
America's transportation infrastructure is the backbone of our economy. A
strong infrastructure means a strong America - an America that competes
globally, supports local and regional economic development, and creates jobs.
Strategic investments in our transportation infrastructure are vital to our
national security, economic growth, transportation safety and our technology
leadership. This document outlines critical needs for our transportation
infrastructure, identifies new technology drivers and proposes strategic
investments for safe and efficient air, ground, rail and marine mobility of
people and goods.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2017 17:32:27 GMT"
}
] | 2017-05-05T00:00:00 |
[
[
"Mangharam",
"Rahul",
""
],
[
"Reyerson",
"Megan",
""
],
[
"Viscelli",
"Steve",
""
],
[
"Balakrishanan",
"Hamsa",
""
],
[
"Bayen",
"Alexandre",
""
],
[
"Amin",
"Surabh",
""
],
[
"Richards",
"Leslie",
""
],
[
"Bagley",
"Leo",
""
],
[
"Pappas",
"George",
""
]
] |
new_dataset
| 0.999833 |
1605.02435
|
Siamak Solat
|
Siamak Solat (LIP6), Maria Potop-Butucaru (LIP6)
|
ZeroBlock: Timestamp-Free Prevention of Block-Withholding Attack in
Bitcoin
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bitcoin was recently introduced as a peer-to-peer electronic currency in
order to facilitate transactions outside the traditional financial system. The
core of Bitcoin, the Blockchain, is the history of the transactions in the
system maintained by all miners as a distributed shared register. New blocks in
the Blockchain contain the last transactions in the system and are added by
miners after a block mining process that consists in solving a resource
consuming proof-of-work (cryptographic puzzle). The reward is a motivation for
mining process but also could be an incentive for attacks such as selfish
mining. In this paper we propose a solution for one of the major problems in
Bitcoin : selfish mining or block-withholding attack. This attack is conducted
by adversarial or selfish miners in order to either earn undue rewards or waste
the computational power of honest miners. Contrary to recent solutions, our
solution, ZeroBlock, prevents block-withholding using a technique free of
timestamp that can be forged. Moreover, we show that our solution is compliant
with nodes churn.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2016 07:00:38 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2016 09:24:28 GMT"
},
{
"version": "v3",
"created": "Wed, 3 May 2017 09:18:21 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Solat",
"Siamak",
"",
"LIP6"
],
[
"Potop-Butucaru",
"Maria",
"",
"LIP6"
]
] |
new_dataset
| 0.999645 |
1608.03983
|
Ilya Loshchilov
|
Ilya Loshchilov and Frank Hutter
|
SGDR: Stochastic Gradient Descent with Warm Restarts
|
ICLR 2017 conference paper
| null | null | null |
cs.LG cs.NE math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2016 13:46:05 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2016 13:05:07 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Feb 2017 14:33:00 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Mar 2017 13:06:59 GMT"
},
{
"version": "v5",
"created": "Wed, 3 May 2017 16:28:09 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Loshchilov",
"Ilya",
""
],
[
"Hutter",
"Frank",
""
]
] |
new_dataset
| 0.997805 |
1701.07518
|
Amr Abdelaziz
|
Amr Abdelaziz, C. Emre Koksal, Hesham El Gamal, Ashraf D. Elbayoumy
|
On The Compound MIMO Wiretap Channel with Mean Feedback
|
To appear at ISIT 2017 proceedings
| null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compound MIMO wiretap channel with double sided uncertainty is considered
under channel mean information model. In mean information model, channel
variations are centered around its mean value which is fed back to the
transmitter. We show that the worst case main channel is anti-parallel to the
channel mean information resulting in an overall unit rank channel. Further,
the worst eavesdropper channel is shown to be isotropic around its mean
information. Accordingly, we provide the capacity achieving beamforming
direction. We show that the saddle point property holds under mean information
model, and thus, compound secrecy capacity equals to the worst case capacity
over the class of uncertainty. Moreover, capacity achieving beamforming
direction is found to require matrix inversion, thus, we derive the null
steering (NS) beamforming as an alternative suboptimal solution that does not
require matrix inversion. NS beamformer is in the direction orthogonal to the
eavesdropper mean channel that maintains the maximum possible gain in mean main
channel direction. Extensive computer simulation reveals that NS performs very
close to the optimal solution. It also verifies that, NS beamforming
outperforms both maximum ratio transmission (MRT) and zero forcing (ZF)
beamforming approaches over the entire SNR range. Finally, An equivalence
relation with MIMO wiretap channel in Rician fading environment is established.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2017 23:34:39 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2017 02:17:42 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Mar 2017 08:28:39 GMT"
},
{
"version": "v4",
"created": "Wed, 3 May 2017 16:18:36 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Abdelaziz",
"Amr",
""
],
[
"Koksal",
"C. Emre",
""
],
[
"Gamal",
"Hesham El",
""
],
[
"Elbayoumy",
"Ashraf D.",
""
]
] |
new_dataset
| 0.989652 |
1705.00202
|
Peter Trifonov
|
Peter Trifonov, Grigorii Trofimiuk
|
A Randomized Construction of Polar Subcodes
|
Accepted to ISIT 2017. Formatting changes
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A method for construction of polar subcodes is presented, which aims at
minimization of the number of low-weight codewords in the obtained codes, as
well as at improved performance under list or sequential decoding. Simulation
results are provided, which show that the obtained codes outperform LDPC and
turbo codes.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2017 15:02:22 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2017 08:14:32 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Trifonov",
"Peter",
""
],
[
"Trofimiuk",
"Grigorii",
""
]
] |
new_dataset
| 0.983843 |
1705.00673
|
Prasanna Parthasarathi
|
Hoai Phuoc Truong, Prasanna Parthasarathi, Joelle Pineau
|
MACA: A Modular Architecture for Conversational Agents
|
The architecture needs to be tested further. Sorry for the
inconvenience. We should be putting the paper up soon
| null | null | null |
cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a software architecture designed to ease the implementation of
dialogue systems. The Modular Architecture for Conversational Agents (MACA)
uses a plug-n-play style that allows quick prototyping, thereby facilitating
the development of new techniques and the reproduction of previous work. The
architecture separates the domain of the conversation from the agent's dialogue
strategy, and as such can be easily extended to multiple domains. MACA provides
tools to host dialogue agents on Amazon Mechanical Turk (mTurk) for data
collection and allows processing of other sources of training data. The current
version of the framework already incorporates several domains and existing
dialogue strategies from the recent literature.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2017 19:18:04 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2017 01:20:26 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Truong",
"Hoai Phuoc",
""
],
[
"Parthasarathi",
"Prasanna",
""
],
[
"Pineau",
"Joelle",
""
]
] |
new_dataset
| 0.988804 |
1705.01176
|
Eddie Santos
|
Eddie Antonio Santos, Carson McLean, Christopher Solinas, Abram Hindle
|
How does Docker affect energy consumption? Evaluating workloads in and
out of Docker containers
|
12 pages (minus references), 10 figures
| null | null | null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Context: Virtual machines provide isolation of services at the cost of
hypervisors and more resource usage. This spurred the growth of systems like
Docker that enable single hosts to isolate several applications, similar to
VMs, within a low-overhead abstraction called containers.
Motivation: Although containers tout low overhead performance, do they still
have low energy consumption?
Methodology: This work statistically compares ($t$-test, Wilcoxon) the energy
consumption of three application workloads in Docker and on bare-metal Linux.
Results: In all cases, there was a statistically significant ($t$-test and
Wilcoxon $p < 0.05$) increase in energy consumption when running tests in
Docker, mostly due to the performance of I/O system calls.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 21:29:28 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Santos",
"Eddie Antonio",
""
],
[
"McLean",
"Carson",
""
],
[
"Solinas",
"Christopher",
""
],
[
"Hindle",
"Abram",
""
]
] |
new_dataset
| 0.961152 |
1705.01225
|
EPTCS
|
Shilpi Goel (The University of Texas at Austin)
|
The x86isa Books: Features, Usage, and Future Plans
|
In Proceedings ACL2Workshop 2017, arXiv:1705.00766
|
EPTCS 249, 2017, pp. 1-17
|
10.4204/EPTCS.249.1
| null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The x86isa library, incorporated in the ACL2 community books project,
provides a formal model of the x86 instruction-set architecture and supports
reasoning about x86 machine-code programs. However, analyzing x86 programs can
be daunting -- even for those familiar with program verification, in part due
to the complexity of the x86 ISA. Furthermore, the x86isa library is a large
framework, and using and/or contributing to it may not seem straightforward. We
present some typical ways of working with the x86isa library, and describe some
of its salient features that can make the analysis of x86 machine-code programs
less arduous. We also discuss some capabilities that are currently missing from
these books -- we hope that this will encourage the community to get involved
in this project.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 01:48:28 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Goel",
"Shilpi",
"",
"The University of Texas at Austin"
]
] |
new_dataset
| 0.96599 |
1705.01258
|
Hassan Foroosh
|
Vildan Atalay Aydin and Hassan Foroosh
|
Super-Resolution of Wavelet-Encoded Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiview super-resolution image reconstruction (SRIR) is often cast as a
resampling problem by merging non-redundant data from multiple low-resolution
(LR) images on a finer high-resolution (HR) grid, while inverting the effect of
the camera point spread function (PSF). One main problem with multiview methods
is that resampling from nonuniform samples (provided by LR images) and the
inversion of the PSF are highly nonlinear and ill-posed problems. Non-linearity
and ill-posedness are typically overcome by linearization and regularization,
often through an iterative optimization process, which essentially trade off
the very same information (i.e. high frequency) that we want to recover. We
propose a novel point of view for multiview SRIR: Unlike existing multiview
methods that reconstruct the entire spectrum of the HR image from the multiple
given LR images, we derive explicit expressions that show how the
high-frequency spectra of the unknown HR image are related to the spectra of
the LR images. Therefore, by taking any of the LR images as the reference to
represent the low-frequency spectra of the HR image, one can reconstruct the
super-resolution image by focusing only on the reconstruction of the
high-frequency spectra. This is very much like single-image methods, which
extrapolate the spectrum of one image, except that we rely on information
provided by all other views, rather than by prior constraints as in
single-image methods (which may not be an accurate source of information). This
is made possible by deriving and applying explicit closed-form expressions that
define how the local high frequency information that we aim to recover for the
reference high resolution image is related to the local low frequency
information in the sequence of views. Results and comparisons with recently
published state-of-the-art methods show the superiority of the proposed
solution.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 05:42:14 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Aydin",
"Vildan Atalay",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.989131 |
1705.01263
|
Alexander Keller
|
Alexander Keller, Carsten W\"achter, Matthias Raab, Daniel Seibert,
Dietger van Antwerpen, Johann Kornd\"orfer and Lutz Kettner
|
The Iray Light Transport Simulation and Rendering System
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes by the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 06:03:08 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Keller",
"Alexander",
""
],
[
"Wächter",
"Carsten",
""
],
[
"Raab",
"Matthias",
""
],
[
"Seibert",
"Daniel",
""
],
[
"van Antwerpen",
"Dietger",
""
],
[
"Korndörfer",
"Johann",
""
],
[
"Kettner",
"Lutz",
""
]
] |
new_dataset
| 0.995689 |
1705.01332
|
Bruno Guerreiro
|
Bruno J. Guerreiro, Carlos Silvestre, Rita Cunha, David Cabecinhas
|
LiDAR-based Control of Autonomous Rotorcraft for the Inspection of
Pier-like Structures: Proofs
|
[1] B. J. Guerreiro, C. Silvestre, R. Cunha, and D. Cabecinhas,
Lidar-based control of autonomous rotorcraft for the inspection of pier-like
structures, IEEE Transactions on Control Systems Technology, 2017. (to
appear)
| null | null | null |
cs.SY cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This is a complementary document to the paper presented in [1], to provide
more detailed proofs for some results. The main paper addresses the problem of
trajectory tracking control of autonomous rotorcraft in operation scenarios
where only relative position measurements obtained from LiDAR sensors are
possible. The proposed approach defines an alternative kinematic model,
directly based on LiDAR measurements, and uses a trajectory-dependent error
space to express the dynamic model of the vehicle. An LPV representation with
piecewise affine dependence on the parameters is adopted to describe the error
dynamics over a set of predefined operating regions, and a continuous-time
$H_2$ control problem is solved using LMIs and implemented within the scope of
gain-scheduling control theory.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 09:45:53 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Guerreiro",
"Bruno J.",
""
],
[
"Silvestre",
"Carlos",
""
],
[
"Cunha",
"Rita",
""
],
[
"Cabecinhas",
"David",
""
]
] |
new_dataset
| 0.999164 |
1705.01452
|
Zhaopeng Tu
|
Hao Zhou, Zhaopeng Tu, Shujian Huang, Xiaohua Liu, Hang Li, Jiajun
Chen
|
Chunk-Based Bi-Scale Decoder for Neural Machine Translation
|
Accepted as a short paper by ACL 2017
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In typical neural machine translation~(NMT), the decoder generates a sentence
word by word, packing all linguistic granularities in the same time-scale of
RNN. In this paper, we propose a new type of decoder for NMT, which splits the
decode state into two parts and updates them in two different time-scales.
Specifically, we first predict a chunk time-scale state for phrasal modeling,
on top of which multiple word time-scale states are generated. In this way, the
target sentence is translated hierarchically from chunks to words, with
information in different granularities being leveraged. Experiments show that
our proposed model significantly improves the translation performance over the
state-of-the-art NMT model.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2017 14:39:56 GMT"
}
] | 2017-05-04T00:00:00 |
[
[
"Zhou",
"Hao",
""
],
[
"Tu",
"Zhaopeng",
""
],
[
"Huang",
"Shujian",
""
],
[
"Liu",
"Xiaohua",
""
],
[
"Li",
"Hang",
""
],
[
"Chen",
"Jiajun",
""
]
] |
new_dataset
| 0.997116 |
1611.02370
|
Abdelhadi Azzouni
|
Abdelhadi Azzouni, Othmen Braham, Nguyen Thi Mai Trang, Guy Pujolle,
and Raouf Boutaba
|
Fingerprinting OpenFlow controllers: The first step to attack an SDN
control plane
|
Peer-reviewed version can be found here
http://ieeexplore.ieee.org/document/7841843/
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software-Defined Networking (SDN) controllers are considered as Network
Operating Systems (NOSs) and often viewed as a single point of failure.
Detecting which SDN controller is managing a target network is a big step for
an attacker to launch specific/effective attacks against it. In this paper, we
demonstrate the feasibility of fingerprinting SDN controllers. We propose
techniques allowing an attacker placed in the data plane, which is supposed to
be physically separate from the control plane, to detect which controller is
managing the network. To the best of our knowledge, this is the first work on
fingerprinting SDN controllers, whose primary goal is to emphasize the necessity
of strongly securing the controller. We focus on OpenFlow-based SDN networks since
OpenFlow is currently the most deployed SDN technology by hardware and software
vendors.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2016 02:44:27 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2017 20:41:09 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Azzouni",
"Abdelhadi",
""
],
[
"Braham",
"Othmen",
""
],
[
"Trang",
"Nguyen Thi Mai",
""
],
[
"Pujolle",
"Guy",
""
],
[
"Boutaba",
"Raouf",
""
]
] |
new_dataset
| 0.998162 |
1702.03259
|
Vincent Cohen-Addad
|
Vincent Cohen-Addad, S{\o}ren Dahlgaard, and Christian Wulff-Nilsen
|
Fast and Compact Exact Distance Oracle for Planar Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a given graph, a distance oracle is a data structure that answers
distance queries between pairs of vertices. We introduce an $O(n^{5/3})$-space
distance oracle which answers exact distance queries in $O(\log n)$ time for
$n$-vertex planar edge-weighted digraphs. All previous distance oracles for
planar graphs with truly subquadratic space (i.e., space $O(n^{2 - \epsilon})$
for some constant $\epsilon > 0$) either required query time polynomial in $n$
or could only answer approximate distance queries.
Furthermore, we show how to trade-off time and space: for any $S \ge
n^{3/2}$, we show how to obtain an $S$-space distance oracle that answers
queries in time $O((n^{5/2}/ S^{3/2}) \log n)$. This is a polynomial
improvement over the previous planar distance oracles with $o(n^{1/4})$ query
time.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2017 17:27:35 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2017 08:23:30 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2017 08:06:18 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Cohen-Addad",
"Vincent",
""
],
[
"Dahlgaard",
"Søren",
""
],
[
"Wulff-Nilsen",
"Christian",
""
]
] |
new_dataset
| 0.99924 |
1702.05741
|
Jie Hao
|
Bin Chen, Shu-Tao Xia, and Jie Hao
|
Locally Repairable Codes with Multiple $(r_{i}, \delta_{i})$-Localities
|
6 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In distributed storage systems, locally repairable codes (LRCs) are
introduced to realize low disk I/O and repair cost. In order to tolerate
multiple node failures, the LRCs with \emph{$(r, \delta)$-locality} are further
proposed. Since hot data is not uncommon in a distributed storage system, both
Zeh \emph{et al.} and Kadhe \emph{et al.} have recently focused on LRCs with
\emph{multiple localities or unequal localities} (ML-LRCs), in which the
localities of the code symbols may differ. ML-LRCs are attractive and
useful in reducing repair cost for hot data. In this paper, we generalize the
ML-LRCs to the $(r,\delta)$-locality case of multiple node failures, and define
an LRC with multiple $(r_{i}, \delta_{i})_{i\in [s]}$ localities ($s\ge 2$),
where $r_{1}\leq r_{2}\leq\dots\leq r_{s}$ and
$\delta_{1}\geq\delta_{2}\geq\dots\geq\delta_{s}\geq2$. Such codes ensure that
some hot data could be repaired more quickly and have better failure-tolerance
in certain cases because of relatively smaller $r_{i}$ and larger $\delta_{i}$.
Then, we derive a Singleton-like upper bound on the minimum distance for the
proposed LRCs by employing the regenerating-set technique. Finally, we obtain a
class of explicit and structured constructions of optimal ML-LRCs, and further
extend them to the cases of multiple $(r_{i}, \delta)_{i\in [s]}$ localities.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2017 12:02:47 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2017 02:50:06 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Chen",
"Bin",
""
],
[
"Xia",
"Shu-Tao",
""
],
[
"Hao",
"Jie",
""
]
] |
new_dataset
| 0.981534 |
1704.02853
|
Isabelle Augenstein
|
Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman,
Andrew McCallum
|
SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations
from Scientific Publications
| null | null | null | null |
cs.CL cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the SemEval task of extracting keyphrases and relations between
them from scientific documents, which is crucial for understanding which
publications describe which processes, tasks and materials. Although this was a
new task, we had a total of 26 submissions across 3 evaluation scenarios. We
expect the task and the findings reported in this paper to be relevant for
researchers working on understanding scientific content, as well as the broader
knowledge base population and information extraction communities.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2017 13:43:40 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 10:41:31 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2017 15:32:41 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Augenstein",
"Isabelle",
""
],
[
"Das",
"Mrinal",
""
],
[
"Riedel",
"Sebastian",
""
],
[
"Vikraman",
"Lakshmi",
""
],
[
"McCallum",
"Andrew",
""
]
] |
new_dataset
| 0.994227 |
1705.00047
|
Renaud Hartert
|
Renaud Hartert
|
Kiwi - A Minimalist CP Solver
| null | null | null | null |
cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kiwi is a minimalist and extendable Constraint Programming (CP) solver
specifically designed for education. The particularities of Kiwi stand in its
generic trailing state restoration mechanism and its modulable use of
variables. By developing Kiwi, the author does not aim to provide an
alternative to full featured constraint solvers but rather to provide readers
with a basic architecture that will (hopefully) help them to understand the
core mechanisms hidden under the hood of constraint solvers, to develop their
own extended constraint solver, or to test innovative ideas.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2017 19:34:19 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2017 01:18:47 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Hartert",
"Renaud",
""
]
] |
new_dataset
| 0.999547 |
1705.00648
|
William Yang Wang
|
William Yang Wang
|
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News
Detection
|
ACL 2017
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic fake news detection is a challenging problem in deception
detection, and it has tremendous real-world political and social impacts.
However, statistical approaches to combating fake news have been dramatically
limited by the lack of labeled benchmark datasets. In this paper, we present
liar: a new, publicly available dataset for fake news detection. We collected
12.8K manually labeled short statements, spanning a decade, in various contexts from
PolitiFact.com, which provides a detailed analysis report and links to source
documents for each case. This dataset can be used for fact-checking research as
well. Notably, this new dataset is an order of magnitude larger than the previously
largest public fake news datasets of similar type. Empirically, we investigate
automatic fake news detection based on surface-level linguistic patterns. We
have designed a novel, hybrid convolutional neural network to integrate
meta-data with text. We show that this hybrid approach can improve a text-only
deep learning model.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2017 18:20:47 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.999892 |
1705.00680
|
Ahsan Raza
|
Ahsan Raza, Wei Liu and Qing Shen
|
Thinned Coprime Arrays for DOA Estimation
|
This paper has been submitted to European Signal Processing
Conference (EUSIPCO 2017) and is under peer review at present
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sparse arrays can generate a larger aperture than traditional uniform linear
arrays (ULA) and offer enhanced degrees-of-freedom (DOFs) which can be
exploited in both beamforming and direction-of-arrival (DOA) estimation. One
class of sparse arrays is the coprime array, composed of two uniform linear
subarrays which yield an effective difference co-array with a higher number of
DOFs. In this work, we present a new coprime array structure termed thinned
coprime array (TCA), which exploits the redundancy in the structure of the
existing coprime array and achieves the same virtual aperture and DOFs as the
conventional coprime array with a much smaller number of sensors. An analysis of
the DOFs provided by the new structure in comparison with other sparse arrays
is given, and simulation results for DOA estimation using a compressive
sensing based method are presented.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2017 19:38:05 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Raza",
"Ahsan",
""
],
[
"Liu",
"Wei",
""
],
[
"Shen",
"Qing",
""
]
] |
new_dataset
| 0.98719 |
1705.00717
|
Reza Farahbakhsh
|
Reza Farahbakhsh, Angel Cuevas, Antonio M. Ortiz, Xiao Han, Noel
Crespi
|
How far is Facebook from me? Facebook network infrastructure analysis
|
Published in: IEEE Communications Magazine (Volume: 53, Issue: 9,
September 2015)
|
IEEE Communications Magazine 53.9 (2015): 134-142
|
10.1109/MCOM.2015.7263357
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facebook is today the most popular social network with more than one billion
subscribers worldwide. To provide good quality of service (e.g., low access
delay) to its clients, FB relies on Akamai, which provides a worldwide
content distribution network with a large number of edge servers that are much
closer to FB subscribers. In this article we aim to depict a global picture of
the current FB network infrastructure deployment taking into account both
native FB servers and Akamai nodes. Toward this end, we have performed a
measurement-based analysis during a period of two weeks using 463 PlanetLab
nodes distributed across 41 countries. Based on the obtained data we compare
the average access delay that nodes in different countries experience accessing
both native FB servers and Akamai nodes. In addition, we obtain a wide view of
the deployment of Akamai nodes serving FB users worldwide. Finally, we analyze
the geographical coverage of those nodes, and demonstrate that in most of the
cases Akamai nodes located in a particular country serve not only local FB
subscribers, but also FB users located in nearby countries.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2017 21:15:15 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Farahbakhsh",
"Reza",
""
],
[
"Cuevas",
"Angel",
""
],
[
"Ortiz",
"Antonio M.",
""
],
[
"Han",
"Xiao",
""
],
[
"Crespi",
"Noel",
""
]
] |
new_dataset
| 0.960815 |
1705.00766
|
EPTCS
|
Anna Slobodova, Warren Hunt Jr
|
Proceedings 14th International Workshop on the ACL2 Theorem Prover and
its Applications
| null |
EPTCS 249, 2017
|
10.4204/EPTCS.249
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains the proceedings of the Fourteenth International Workshop
on the ACL2 Theorem Prover and Its Applications, ACL2 2017, a two-day workshop
held in Austin, Texas, USA, on May 22-23, 2017. ACL2 workshops occur at
approximately 18-month intervals, and they provide a technical forum for
researchers to present and discuss improvements and extensions to the theorem
prover, comparisons of ACL2 with other systems, and applications of ACL2 in
formal verification.
ACL2 is a state-of-the-art automated reasoning system that has been
successfully applied in academia, government, and industry for specification
and verification of computing systems and in teaching computer science courses.
Boyer, Kaufmann, and Moore were awarded the 2005 ACM Software System Award for
their work on ACL2 and the other theorem provers in the Boyer-Moore
theorem-prover family.
The proceedings of ACL2 2017 include the seven technical papers and two
extended abstracts that were presented at the workshop. Each submission
received two or three reviews. The workshop also included three invited talks:
"Using Mechanized Mathematics in an Organization with a Simulation-Based
Mentality", by Glenn Henry of Centaur Technology, Inc.; "Formal Verification of
Financial Algorithms, Progress and Prospects", by Grant Passmore of Aesthetic
Integration; and "Verifying Oracle's SPARC Processors with ACL2" by Greg
Grohoski of Oracle. The workshop also included several rump sessions discussing
ongoing research and the use of ACL2 within industry.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 02:29:49 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Slobodova",
"Anna",
""
],
[
"Hunt",
"Warren",
"Jr"
]
] |
new_dataset
| 0.998934 |
1705.00770
|
Hualu Liu
|
Xiusheng Liu, Yun Fan and Hualu Liu
|
Galois LCD Codes over Finite Fields
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we study complementary dual codes in a more general setting
(called Galois LCD codes) by a uniform method. A necessary and
sufficient condition for linear codes to be Galois LCD codes is determined, and
constacyclic codes that are Galois LCD codes are characterized. Some illustrative
examples of constacyclic codes that are Galois LCD MDS codes are provided as
well. In particular, we study Hermitian LCD constacyclic codes. Finally, we
present a construction of a class of Hermitian LCD codes which are also MDS
codes.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 02:41:04 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Liu",
"Xiusheng",
""
],
[
"Fan",
"Yun",
""
],
[
"Liu",
"Hualu",
""
]
] |
new_dataset
| 0.999458 |
1705.00811
|
Wes Masri
|
Rawad Abou Assi, Chadi Trad, and Wes Masri
|
ACDC: Altering Control Dependence Chains for Automated Patch Generation
|
11 pages
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Once a failure is observed, the primary concern of the developer is to
identify what caused it in order to repair the code that induced the incorrect
behavior. Until a permanent repair is afforded, code repair patches are
invaluable. The aim of this work is to devise an automated patch generation
technique that proceeds as follows: Step1) It identifies a set of
failure-causing control dependence chains that are minimal in terms of number
and length. Step2) It identifies a set of predicates within the chains along
with associated execution instances, such that negating the predicates at the
given instances would exhibit correct behavior. Step3) For each candidate
predicate, it creates a classifier that dictates when the predicate should be
negated to yield correct program behavior. Step4) Prior to each candidate
predicate, the faulty program is injected with a call to its corresponding
classifier passing it the program state and getting a return value predictively
indicating whether to negate the predicate or not. The role of the classifiers
is to ensure that: 1) the predicates are not negated during passing runs; and
2) the predicates are negated at the appropriate instances within failing runs.
We implemented our patch generation approach for the Java platform and
evaluated our toolset using 148 defects from the IntroClass and Siemens
benchmarks. The toolset identified 56 full patches and another 46 partial
patches, and the classification accuracy averaged 84%.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 06:17:31 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Assi",
"Rawad Abou",
""
],
[
"Trad",
"Chadi",
""
],
[
"Masri",
"Wes",
""
]
] |
new_dataset
| 0.996692 |
1705.00823
|
Yuya Yoshikawa
|
Yuya Yoshikawa, Yutaro Shigeto, Akikazu Takeuchi
|
STAIR Captions: Constructing a Large-Scale Japanese Image Caption
Dataset
|
Accepted as ACL2017 short paper. 5 pages
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, automatic generation of image descriptions (captions), that
is, image captioning, has attracted a great deal of attention. In this paper,
we particularly consider generating Japanese captions for images. Since most
available caption datasets have been constructed for the English language, there
are few datasets for Japanese. To tackle this problem, we construct a
large-scale Japanese image caption dataset based on images from MS-COCO, which
is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions
for 164,062 images. In the experiment, we show that a neural network trained
using STAIR Captions can generate more natural and better Japanese captions,
compared to those generated using English-Japanese machine translation after
generating English captions.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 07:07:55 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Yoshikawa",
"Yuya",
""
],
[
"Shigeto",
"Yutaro",
""
],
[
"Takeuchi",
"Akikazu",
""
]
] |
new_dataset
| 0.999806 |
1705.00848
|
Linh Anh Nguyen D.Sc.
|
Linh Anh Nguyen
|
ExpTime Tableaux with Global Caching for Hybrid PDL
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first direct tableau decision procedure with the ExpTime
complexity for HPDL (Hybrid Propositional Dynamic Logic). It checks whether a
given ABox (a finite set of assertions) in HPDL is satisfiable. Technically, it
combines global caching with checking fulfillment of eventualities and dealing
with nominals. Our procedure contains enough details for direct implementation
and has been implemented for the TGC2 (Tableaux with Global Caching) system. As
HPDL can be used as a description logic for representing and reasoning about
terminological knowledge, our procedure is useful for practical applications.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 08:14:41 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Nguyen",
"Linh Anh",
""
]
] |
new_dataset
| 0.989045 |
1705.01049
|
Arpit Gupta
|
Arpit Gupta, Rob Harrison, Ankita Pawar, R\"udiger Birkner, Marco
Canini, Nick Feamster, Jennifer Rexford, Walter Willinger
|
Sonata: Query-Driven Network Telemetry
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Operating networks depends on collecting and analyzing measurement data.
Current technologies do not make it easy to do so, typically because they
separate data collection (e.g., packet capture or flow monitoring) from
analysis, producing either too much data to answer a general question or too
little data to answer a detailed question. In this paper, we present Sonata, a
network telemetry system that uses a uniform query interface to drive the joint
collection and analysis of network traffic. Sonata takes advantage of two
emerging technologies---streaming analytics platforms and programmable network
devices---to facilitate joint collection and analysis. Sonata allows operators
to more directly express network traffic analysis tasks in terms of a
high-level language. The underlying runtime partitions each query into a
portion that runs on the switch and another that runs on the streaming
analytics platform, iteratively refines the query to efficiently capture only
the traffic that pertains to the operator's query, and exploits sketches to
reduce state in switches in exchange for more approximate results. Through an
evaluation of a prototype implementation, we demonstrate that Sonata can
support a wide range of network telemetry tasks with less state in the network,
and lower data rates to streaming analytics systems, than current approaches
can achieve.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2017 16:17:16 GMT"
}
] | 2017-05-03T00:00:00 |
[
[
"Gupta",
"Arpit",
""
],
[
"Harrison",
"Rob",
""
],
[
"Pawar",
"Ankita",
""
],
[
"Birkner",
"Rüdiger",
""
],
[
"Canini",
"Marco",
""
],
[
"Feamster",
"Nick",
""
],
[
"Rexford",
"Jennifer",
""
],
[
"Willinger",
"Walter",
""
]
] |
new_dataset
| 0.97218 |
0710.3535
|
Andrea Maiorano
|
F. Belletti, M. Cotallo, A. Cruz, L. A. Fern\'andez, A. Gordillo, M.
Guidetti, A. Maiorano, F. Mantovani, E. Marinari, V. Mart\'in-Mayor, A.
Mu\~noz-Sudupe, D. Navarro, G. Parisi, S. P\'erez-Gaviro, M. Rossi, J. J.
Ruiz-Lorenzo, S. F. Schifano, D. Sciretti, A. Taranc\'on, R. Tripiccione, J.
L. Velasco
|
JANUS: an FPGA-based System for High Performance Scientific Computing
|
11 pages, 6 figures. Improved version, largely rewritten, submitted
to Computing in Science & Engineering
|
Computing in Science & Engineering 11 (2009 ) 48-58
|
10.1109/MCSE.2009.11
| null |
cs.AR
| null |
This paper describes JANUS, a modular massively parallel and reconfigurable
FPGA-based computing system. Each JANUS module has a computational core and a
host. The computational core is a 4x4 array of FPGA-based processing elements
with nearest-neighbor data links. Processors are also directly connected to an
I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for,
but not limited to, the requirements of a class of hard scientific applications
characterized by regular code structure, unconventional data manipulation
instructions, and moderate database size. We discuss the architecture of
this configurable machine, and focus on its use for Monte Carlo simulations of
statistical mechanics. On this class of applications JANUS achieves impressive
performance: in some cases one JANUS processing element outperforms high-end
PCs by a factor ~ 1000. We also discuss the role of JANUS on other classes of
scientific applications.
|
[
{
"version": "v1",
"created": "Thu, 18 Oct 2007 15:26:32 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2008 11:10:12 GMT"
}
] | 2017-05-02T00:00:00 |
[
[
"Belletti",
"F.",
""
],
[
"Cotallo",
"M.",
""
],
[
"Cruz",
"A.",
""
],
[
"Fernández",
"L. A.",
""
],
[
"Gordillo",
"A.",
""
],
[
"Guidetti",
"M.",
""
],
[
"Maiorano",
"A.",
""
],
[
"Mantovani",
"F.",
""
],
[
"Marinari",
"E.",
""
],
[
"Martín-Mayor",
"V.",
""
],
[
"Muñoz-Sudupe",
"A.",
""
],
[
"Navarro",
"D.",
""
],
[
"Parisi",
"G.",
""
],
[
"Pérez-Gaviro",
"S.",
""
],
[
"Rossi",
"M.",
""
],
[
"Ruiz-Lorenzo",
"J. J.",
""
],
[
"Schifano",
"S. F.",
""
],
[
"Sciretti",
"D.",
""
],
[
"Tarancón",
"A.",
""
],
[
"Tripiccione",
"R.",
""
],
[
"Velasco",
"J. L.",
""
]
] |
new_dataset
| 0.969467 |
1408.5999
|
Shenghui Su
|
Shenghui Su, Tao Xie, Shuwang Lu
|
A New Non-MDS Hash Function Resisting Birthday Attack and
Meet-in-the-middle Attack
|
18 Pages
|
Theoretical Computer Science, v654, Nov 2016, pp.128-142
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To examine the integrity and authenticity of an IP address efficiently and
economically, this paper proposes a new non-Merkle-Damgard structural (non-MDS)
hash function called JUNA that is based on a multivariate permutation problem
and an anomalous subset product problem, for which no subexponential-time
solutions have been found so far. JUNA includes an initialization algorithm and a
compression algorithm, and converts a short message of n bits which is regarded
as only one block into a digest of m bits, where 80 <= m <= 232 and 80 <= m <=
n <= 4096. The analysis and proof show that the new hash is one-way, weakly
collision-free, and strongly collision-free, and its security against existing
attacks such as the birthday attack and meet-in-the-middle attack is O(2^m).
Moreover, a detailed proof that the new hash function is resistant to the
birthday attack is given. Compared with the Chaum-Heijst-Pfitzmann hash based
on a discrete logarithm problem, the new hash is lightweight, and thus it opens
the door to the convenient use of lightweight digital signing schemes.
|
[
{
"version": "v1",
"created": "Tue, 26 Aug 2014 04:05:57 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Nov 2014 14:31:53 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Apr 2017 03:11:01 GMT"
}
] | 2017-05-02T00:00:00 |
[
[
"Su",
"Shenghui",
""
],
[
"Xie",
"Tao",
""
],
[
"Lu",
"Shuwang",
""
]
] |
new_dataset
| 0.992207 |
1408.6226
|
Shenghui Su
|
Shenghui Su, Shuwang Lu, Maozhi Xu, Tao Xie
|
A Public Key Cryptoscheme Using Bit-pairs and Probabilistic Mazes
|
16 Pages. arXiv admin note: text overlap with arXiv:1408.5999
|
Theoretical Computer Science, v654, Nov 2016, pp.113-127
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper gives the definition and property of a bit-pair shadow, and
devises the three algorithms of a public key cryptoscheme called JUOAN that is
based on a multivariate permutation problem and an anomalous subset product
problem to which no subexponential time solutions are found so far, and regards
a bit-pair as a manipulation unit. The authors demonstrate that the decryption
algorithm is correct, deduce the probability that a plaintext solution is
nonunique is nearly zero, analyze the security of the new cryptoscheme against
extracting a private key from a public key and recovering a plaintext from a
ciphertext on the assumption that an integer factorization problem, a discrete
logarithm problem, and a low-density subset sum problem can be solved
efficiently, and prove that the new cryptoscheme using random padding and
random permutation is semantically secure. The analysis shows that the bit-pair
method increases the density D of a related knapsack to a value greater than 1,
and decreases the modulus length lgM of the new cryptoscheme to 464, 544, or
640.
|
[
{
"version": "v1",
"created": "Tue, 26 Aug 2014 09:34:17 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Nov 2014 14:25:43 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Apr 2017 03:13:54 GMT"
}
] | 2017-05-02T00:00:00 |
[
[
"Su",
"Shenghui",
""
],
[
"Lu",
"Shuwang",
""
],
[
"Xu",
"Maozhi",
""
],
[
"Xie",
"Tao",
""
]
] |
new_dataset
| 0.998465 |