id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1502.00950
|
Helio M. de Oliveira
|
M.M.S. Lira, H.M. de Oliveira, M.A. Carvalho Jr, R.M. Campello de
Souza
|
Compactly Supported Wavelets Derived From Legendre Polynomials:
Spherical Harmonic Wavelets
|
6 pages, 6 figures, 1 table. In: Computational Methods in Circuits and
Systems Applications, WSEAS Press, pp. 211-215, 2003. ISBN: 960-8052-88-2
| null | null | null |
cs.NA math.NA stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new family of wavelets is introduced, which is associated with Legendre
polynomials. These wavelets, termed spherical harmonic or Legendre wavelets,
possess compact support. The method for the wavelet construction is derived
from the association of ordinary second order differential equations with
multiresolution filters. The low-pass filter associated with Legendre
multiresolution analysis is a linear phase finite impulse response filter
(FIR).
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 18:23:32 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Lira",
"M. M. S.",
""
],
[
"de Oliveira",
"H. M.",
""
],
[
"Carvalho",
"M. A.",
"Jr"
],
[
"de Souza",
"R. M. Campello",
""
]
] |
new_dataset
| 0.999066 |
1502.00076
|
Shahriar Shahabuddin
|
Shahriar Shahabuddin, Janne Janhunen, Muhammet Fatih Bayramoglu,
Markku Juntti, Amanullah Ghazi, and Olli Silven
|
Design of a Unified Transport Triggered Processor for LDPC/Turbo Decoder
|
8 pages, 7 figures, conference
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper summarizes the design of a programmable processor with transport
triggered architecture (TTA) for decoding LDPC and turbo codes. The processor
architecture is designed in such a manner that it can be programmed for LDPC or
turbo decoding for the purpose of internetworking and roaming between different
networks. The standard trellis based maximum a posteriori (MAP) algorithm is
used for turbo decoding. Unlike most other implementations, a supercode based
sum-product algorithm is used for the check node message computation for LDPC
decoding. This approach ensures the highest hardware utilization of the
processor architecture for the two different algorithms. To the best of our
knowledge, this is the first attempt to design a TTA processor for an LDPC
decoder. The processor is programmed with a high-level language to meet the
time-to-market requirement. The optimization techniques and the usage of the function units
for both algorithms are explained in detail. The processor achieves 22.64 Mbps
throughput for turbo decoding with a single iteration and 10.12 Mbps throughput
for LDPC decoding with five iterations for a clock frequency of 200 MHz.
|
[
{
"version": "v1",
"created": "Sat, 31 Jan 2015 06:36:34 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Shahabuddin",
"Shahriar",
""
],
[
"Janhunen",
"Janne",
""
],
[
"Bayramoglu",
"Muhammet Fatih",
""
],
[
"Juntti",
"Markku",
""
],
[
"Ghazi",
"Amanullah",
""
],
[
"Silven",
"Olli",
""
]
] |
new_dataset
| 0.998816 |
1502.00195
|
James J.Q. Yu
|
James J.Q. Yu and Victor O.K. Li and Albert Y.S. Lam
|
Sensor Deployment for Air Pollution Monitoring Using Public
Transportation System
| null | null |
10.1109/CEC.2012.6256495
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Air pollution monitoring is a very popular research topic and many monitoring
systems have been developed. In this paper, we formulate the Bus Sensor
Deployment Problem (BSDP) to select the bus routes on which sensors are
deployed, and we use Chemical Reaction Optimization (CRO) to solve BSDP. CRO is
a recently proposed metaheuristic designed to solve a wide range of
optimization problems. Using real-world data, namely Hong Kong Island bus
route data, we perform a series of simulations, and the results show that CRO
is capable of solving this optimization problem efficiently.
|
[
{
"version": "v1",
"created": "Sun, 1 Feb 2015 04:48:18 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Yu",
"James J. Q.",
""
],
[
"Li",
"Victor O. K.",
""
],
[
"Lam",
"Albert Y. S.",
""
]
] |
new_dataset
| 0.998169 |
1502.00367
|
Toshio Suzuki
|
Toshio Suzuki
|
A Solution to Yamakami's Problem on Advised Context-free Languages
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Yamakami [2011, Theoret. Comput. Sci.] studies context-free languages with
advice functions. Here, the length of an advice is assumed to be the same as
that of an input. Let CFL and CFL/n denote the class of all context-free
languages and that with advice functions, respectively. We let CFL(2) denote
the class of intersections of two context-free languages. An interesting
research direction is to ask how complex CFL(2) is relative to CFL.
Yamakami raised the problem of whether there is a CFL-immune set in CFL(2) - CFL/n.
The best result known so far is that LSPACE - CFL/n has a CFL-immune set, where
LSPACE denotes the class of languages recognized in logarithmic space. We present an
affirmative solution to his problem. Two key concepts of our proof are the
nested palindrome and Yamakami's swapping lemma. The swapping lemma is
applicable to the setting where the pumping lemma (Bar-Hillel's lemma) does not
work. Our proof is an example showing how useful the swapping lemma is.
|
[
{
"version": "v1",
"created": "Mon, 2 Feb 2015 05:53:51 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Suzuki",
"Toshio",
""
]
] |
new_dataset
| 0.995355 |
1407.6812
|
Robert Hoehndorf
|
Robert Hoehndorf and Luke Slater and Paul N. Schofield and Georgios V.
Gkoutos
|
Aber-OWL: a framework for ontology-based data access in biology
| null | null |
10.1186/s12859-015-0456-9
| null |
cs.DB cs.IR q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many ontologies have been developed in biology and these ontologies
increasingly contain large volumes of formalized knowledge commonly expressed
in the Web Ontology Language (OWL). Computational access to the knowledge
contained within these ontologies relies on the use of automated reasoning. We
have developed the Aber-OWL infrastructure that provides reasoning services for
bio-ontologies. Aber-OWL consists of an ontology repository, a set of web
services and web interfaces that enable ontology-based semantic access to
biological data and literature. Aber-OWL is freely available at
http://aber-owl.net.
|
[
{
"version": "v1",
"created": "Fri, 25 Jul 2014 08:33:12 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Hoehndorf",
"Robert",
""
],
[
"Slater",
"Luke",
""
],
[
"Schofield",
"Paul N.",
""
],
[
"Gkoutos",
"Georgios V.",
""
]
] |
new_dataset
| 0.993565 |
1409.0575
|
Olga Russakovsky
|
Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and
Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya
Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei
|
ImageNet Large Scale Visual Recognition Challenge
|
43 pages, 16 figures. v3 includes additional comparisons with PASCAL
VOC (per-category comparisons in Table 3, distribution of localization
difficulty in Fig 16), a list of queries used for obtaining object detection
images (Appendix C), and some additional references
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in
object category classification and detection on hundreds of object categories
and millions of images. The challenge has been run annually from 2010 to
present, attracting participation from more than fifty institutions.
This paper describes the creation of this benchmark dataset and the advances
in object recognition that have been possible as a result. We discuss the
challenges of collecting large-scale ground truth annotation, highlight key
breakthroughs in categorical object recognition, provide a detailed analysis of
the current state of the field of large-scale image classification and object
detection, and compare the state-of-the-art computer vision accuracy with human
accuracy. We conclude with lessons learned in the five years of the challenge,
and propose future directions and improvements.
|
[
{
"version": "v1",
"created": "Mon, 1 Sep 2014 22:29:38 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Dec 2014 01:08:31 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jan 2015 01:23:59 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Russakovsky",
"Olga",
""
],
[
"Deng",
"Jia",
""
],
[
"Su",
"Hao",
""
],
[
"Krause",
"Jonathan",
""
],
[
"Satheesh",
"Sanjeev",
""
],
[
"Ma",
"Sean",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Karpathy",
"Andrej",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Bernstein",
"Michael",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Fei-Fei",
"Li",
""
]
] |
new_dataset
| 0.999272 |
1501.05703
|
Ning Zhang
|
Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, Lubomir Bourdev
|
Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the task of recognizing people's identities in photo albums in an
unconstrained setting. To facilitate this, we introduce the new People In Photo
Albums (PIPA) dataset, consisting of over 60000 instances of 2000 individuals
collected from public Flickr photo albums. With only about half of the person
images containing a frontal face, the recognition task is very challenging due
to the large variations in pose, clothing, camera viewpoint, image resolution
and illumination. We propose the Pose Invariant PErson Recognition (PIPER)
method, which accumulates the cues of poselet-level person recognizers trained
by deep convolutional networks to discount for the pose variations, combined
with a face recognizer and a global recognizer. Experiments on three different
settings confirm that in our unconstrained setup PIPER significantly improves
on the performance of DeepFace, which is one of the best face recognizers as
measured on the LFW dataset.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 02:35:01 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jan 2015 18:48:27 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Zhang",
"Ning",
""
],
[
"Paluri",
"Manohar",
""
],
[
"Taigman",
"Yaniv",
""
],
[
"Fergus",
"Rob",
""
],
[
"Bourdev",
"Lubomir",
""
]
] |
new_dataset
| 0.999634 |
1501.07686
|
Ludovic Mignot
|
Younes Guellouma, Ludovic Mignot, Hadda Cherroun and Djelloul Ziadi
|
Construction of rational expression from tree automata using a
generalization of Arden's Lemma
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Arden's Lemma is a classical result in language theory allowing the
computation of a rational expression denoting the language recognized by a
finite string automaton. In this paper, we generalize this important lemma to
rational tree languages. Moreover, we also propose a construction of a
rational tree expression that denotes the tree language accepted by a finite
tree automaton.
|
[
{
"version": "v1",
"created": "Fri, 30 Jan 2015 07:34:10 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Guellouma",
"Younes",
""
],
[
"Mignot",
"Ludovic",
""
],
[
"Cherroun",
"Hadda",
""
],
[
"Ziadi",
"Djelloul",
""
]
] |
new_dataset
| 0.954767 |
1501.07692
|
Matthew Sottile
|
Matthew Sottile
|
Blob indentation identification via curvature measurement
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel method for identifying indentations on the
boundary of a solid 2D shape. It uses the signed curvature at a set of points
along the boundary to identify indentations and provides one parameter for
tuning the selection mechanism for discriminating indentations from other
boundary irregularities. An efficient implementation is described based on the
Fourier transform for calculating curvature from a sequence of points obtained
from the boundary of a binary blob.
|
[
{
"version": "v1",
"created": "Fri, 30 Jan 2015 08:12:48 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Sottile",
"Matthew",
""
]
] |
new_dataset
| 0.9974 |
1407.1109
|
Dusan Jakovetic
|
Dusan Jakovetic, Dragana Bajovic, Dejan Vukobratovic, and Vladimir
Crnojevic
|
Cooperative Slotted Aloha for Multi-Base Station Systems
|
extended version of a paper submitted for journal publication;
revised Nov 6, 2014, and Jan 24, 2015
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a framework to study slotted Aloha with cooperative base
stations. Assuming a geographic-proximity communication model, we propose
several decoding algorithmswith different degrees of base stations' cooperation
(non-cooperative, spatial, temporal, and spatio-temporal). With spatial
cooperation, neighboring base stations inform each other whenever they collect
a user within their coverage overlap; temporal cooperation corresponds to
(temporal) successive interference cancellation done locally at each station.
We analyze the four decoding algorithms and establish several fundamental
results. With all algorithms, the peak throughput (average number of decoded
users per slot, across all base stations) increases linearly with the number of
base stations. Further, temporal and spatio-temporal cooperations exhibit a
threshold behavior with respect to the normalized load (number of users per
station, per slot). There exists a positive load $G^\star$ such that, below
$G^\star$, the decoding probability is asymptotically the maximum possible,
namely the probability that a user is heard by at least one base station; with
non-cooperative decoding and spatial cooperation, we show that $G^\star$ is
zero. Finally, with spatio-temporal cooperation, we optimize the degree
distribution according to which users transmit their packet replicas; the
optimum is in general very different from the corresponding optimal
distribution of the single-base station system.
|
[
{
"version": "v1",
"created": "Fri, 4 Jul 2014 01:39:23 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Nov 2014 14:33:56 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Jan 2015 09:35:13 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Jakovetic",
"Dusan",
""
],
[
"Bajovic",
"Dragana",
""
],
[
"Vukobratovic",
"Dejan",
""
],
[
"Crnojevic",
"Vladimir",
""
]
] |
new_dataset
| 0.994203 |
1408.1987
|
Hong Xing
|
Hong Xing, Liang Liu, Rui Zhang
|
Secrecy Wireless Information and Power Transfer in Fading Wiretap
Channel
|
to appear in IEEE Transactions on Vehicular Technology
| null |
10.1109/TVT.2015.2395725
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous wireless information and power transfer (SWIPT) has recently
drawn significant interest for its dual use of radio signals to provide
wireless data and energy access at the same time. However, a challenging
secrecy communication issue arises as the messages sent to the information
receivers (IRs) may be eavesdropped by the energy receivers (ERs), which are
presumed to harvest energy only from the received signals. To tackle this
problem, we propose in this paper an artificial noise (AN) aided transmission
scheme to facilitate the secrecy information transmission to IRs and yet meet
the energy harvesting requirement for ERs, under the assumption that the AN can
be cancelled at IRs but not at ERs. Specifically, the proposed scheme splits
the transmit power into two parts, to send the confidential message to the IR
and an AN to interfere with the ER, respectively. Under a simplified three-node
wiretap channel setup, the transmit power allocations and power splitting
ratios over fading channels are jointly optimized to minimize the outage
probability for delay-limited secrecy information transmission, or to maximize
the average rate for no-delay-limited secrecy information transmission, subject
to a combination of average and peak power constraints at the transmitter as
well as an average energy harvesting constraint at the ER. Both the secrecy
outage probability minimization and average rate maximization problems are
shown to be non-convex; for each, we propose an optimal solution based
on dual decomposition as well as a suboptimal solution based on
alternating optimization. Furthermore, two benchmark schemes are introduced for
comparison. Finally, the performance of the proposed schemes is evaluated by
simulations in terms of various trade-offs for wireless (secrecy) information
versus energy transmission.
|
[
{
"version": "v1",
"created": "Fri, 8 Aug 2014 21:27:46 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jan 2015 23:40:41 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Xing",
"Hong",
""
],
[
"Liu",
"Liang",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.983029 |
1411.1830
|
Fabrizio Lecci
|
Brittany Terese Fasy, Jisu Kim, Fabrizio Lecci, Cl\'ement Maria
|
Introduction to the R package TDA
| null | null | null | null |
cs.MS cs.CG stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a short tutorial and introduction to using the R package TDA,
which provides some tools for Topological Data Analysis. In particular, it
includes implementations of functions that, given some data, provide
topological information about the underlying space, such as the distance
function, the distance to a measure, the kNN density estimator, the kernel
density estimator, and the kernel distance. The salient topological features of
the sublevel sets (or superlevel sets) of these functions can be quantified
with persistent homology. We provide an R interface for the efficient
algorithms of the C++ libraries GUDHI, Dionysus and PHAT, including a function
for the persistent homology of the Rips filtration, and one for the persistent
homology of sublevel sets (or superlevel sets) of arbitrary functions evaluated
over a grid of points. The significance of the features in the resulting
persistence diagrams can be analyzed with functions that implement recently
developed statistical methods. The R package TDA also includes the
implementation of an algorithm for density clustering, which allows us to
identify the spatial organization of the probability mass associated with a
density function and visualize it by means of a dendrogram, the cluster tree.
|
[
{
"version": "v1",
"created": "Fri, 7 Nov 2014 05:10:34 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jan 2015 17:21:36 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Fasy",
"Brittany Terese",
""
],
[
"Kim",
"Jisu",
""
],
[
"Lecci",
"Fabrizio",
""
],
[
"Maria",
"Clément",
""
]
] |
new_dataset
| 0.975151 |
1501.07250
|
Alejandro Torre\~no
|
Alejandro Torre\~no, Eva Onaindia, \'Oscar Sapena
|
FMAP: Distributed Cooperative Multi-Agent Planning
|
21 pages, 11 figures
|
Applied Intelligence, Volume 41, Issue 2, pp. 606-626, Year 2014
|
10.1007/s10489-014-0540-2
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed
multi-agent planning method that integrates planning and coordination. Although
FMAP is specifically aimed at solving problems that require cooperation among
agents, the flexibility of the domain-independent planning model allows FMAP to
tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore
the plan space by building up refinement plans through a complete and flexible
forward-chaining partial-order planner. The search is guided by $h_{DTG}$, a
novel heuristic function that is based on the concepts of Domain Transition
Graph and frontier state and is optimized to evaluate plans in distributed
environments. Agents in FMAP apply an advanced privacy model that allows them
to adequately keep information private while communicating only the data of the
refinement plans that is relevant to each of the participating agents.
Experimental results show that FMAP is a general-purpose approach that
efficiently solves tightly-coupled domains that have specialized agents and
cooperative goals as well as loosely-coupled problems. Specifically, the
empirical evaluation shows that FMAP outperforms current MAP systems at solving
complex planning tasks that are adapted from the International Planning
Competition benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 19:38:35 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Torreño",
"Alejandro",
""
],
[
"Onaindia",
"Eva",
""
],
[
"Sapena",
"Óscar",
""
]
] |
new_dataset
| 0.996748 |
1501.07431
|
Bappaditya Ghosh
|
Bappaditya Ghosh
|
Negacyclic codes of odd length over the ring $\mathbb{F}_p[u,v]/\langle
u^2,v^2,uv-vu\rangle$
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss the structure of negacyclic codes of odd length over the ring
$\mathbb{F}_p[u, v]/ \langle u^2, v^2, uv-vu \rangle$. We find the unique
generating set, the rank and the minimum distance for these negacyclic codes.
|
[
{
"version": "v1",
"created": "Thu, 29 Jan 2015 12:17:17 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Ghosh",
"Bappaditya",
""
]
] |
new_dataset
| 0.999812 |
0901.4180
|
Bj{\o}rn Kjos-Hanssen
|
Bj{\o}rn Kjos-Hanssen and Alberto J. Evangelista
|
Google distance between words
|
Presented at Frontiers in Undergraduate Research, University of
Connecticut, 2006
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cilibrasi and Vitanyi have demonstrated that it is possible to extract the
meaning of words from the world-wide web. To achieve this, they rely on the
number of webpages that are found through a Google search containing a given
word and they associate the page count to the probability that the word appears
on a webpage. Thus, conditional probabilities allow them to correlate one word
with another word's meaning. Furthermore, they have developed a similarity
distance function that gauges how closely related a pair of words is. We
present a specific counterexample to the triangle inequality for this
similarity distance function.
|
[
{
"version": "v1",
"created": "Tue, 27 Jan 2009 06:29:10 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jan 2015 20:10:34 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Kjos-Hanssen",
"Bjørn",
""
],
[
"Evangelista",
"Alberto J.",
""
]
] |
new_dataset
| 0.971403 |
1501.07114
|
Natalia Silberstein
|
Natalia Silberstein and Alexander Zeh
|
Optimal Binary Locally Repairable Codes via Anticodes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a construction for several families of optimal binary
locally repairable codes (LRCs) with small locality (2 and 3). This
construction is based on various anticodes. It provides binary LRCs which
attain the Cadambe-Mazumdar bound. Moreover, most of these codes are optimal
with respect to the Griesmer bound.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 14:32:55 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Silberstein",
"Natalia",
""
],
[
"Zeh",
"Alexander",
""
]
] |
new_dataset
| 0.982903 |
1501.07130
|
Balaji Sb
|
S. B. Balaji and P. Vijay Kumar
|
On Partial Maximally-Recoverable and Maximally-Recoverable Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An [n, k] linear code C that is subject to locality constraints imposed by a
parity check matrix H0 is said to be a maximally recoverable (MR) code if it
can recover from any erasure pattern that some k-dimensional subcode of the
null space of H0 can recover from. The focus in this paper is on MR codes
constrained to have all-symbol locality r. Given that it is challenging to
construct MR codes having small field size, we present results in two
directions. In the first, we relax the MR constraint and require only that,
apart from being an optimal all-symbol locality code, the
code must yield an MDS code when punctured in a single, specific pattern which
ensures that each local code is punctured in precisely one coordinate and that
no two local codes share the same punctured coordinate. We term these codes as
partially maximally recoverable (PMR) codes. We provide a simple construction
for high-rate PMR codes and then provide a general, promising approach that
needs further investigation. In the second direction, we present three
constructions of MR codes with improved parameters, primarily the size of the
finite field employed in the construction.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 15:06:25 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Balaji",
"S. B.",
""
],
[
"Kumar",
"P. Vijay",
""
]
] |
new_dataset
| 0.996911 |
1501.06683
|
Birenjith Sasidharan
|
Birenjith Sasidharan, Gaurav Kumar Agarwal, and P. Vijay Kumar
|
Codes With Hierarchical Locality
|
12 pages, submitted to ISIT 2015
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the notion of {\em codes with hierarchical locality}
that is identified as another approach to local recovery from multiple
erasures. The well-known class of {\em codes with locality} is said to possess
hierarchical locality with a single level. In a {\em code with two-level
hierarchical locality}, every symbol is protected by an inner-most local code,
and another middle-level code of larger dimension containing the local code. We
first consider codes with two levels of hierarchical locality, derive an upper
bound on the minimum distance, and provide optimal code constructions with low
field size under certain parameter sets. Subsequently, we generalize both the
bound and the constructions to hierarchical locality of arbitrary levels.
|
[
{
"version": "v1",
"created": "Tue, 27 Jan 2015 08:13:05 GMT"
}
] | 2015-01-28T00:00:00 |
[
[
"Sasidharan",
"Birenjith",
""
],
[
"Agarwal",
"Gaurav Kumar",
""
],
[
"Kumar",
"P. Vijay",
""
]
] |
new_dataset
| 0.98212 |
1501.06751
|
Daphna Weinshall
|
Chaim Ginzburg, Amit Raphael and Daphna Weinshall
|
A Cheap System for Vehicle Speed Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The reliable detection of speed of moving vehicles is considered key to
traffic law enforcement in most countries, and is seen by many as an important
tool to reduce the number of traffic accidents and fatalities. Many automatic
systems and different methods are employed in different countries, but as a
rule they tend to be expensive and/or labor intensive, often employing outdated
technology due to the long development time. Here we describe a speed detection
system that relies on simple everyday equipment - a laptop and a consumer web
camera. Our method is based on tracking the license plates of cars, which gives
the relative movement of the cars in the image. This image displacement is
translated to actual motion by using the method of projection to a reference
plane, where the reference plane is the road itself. However, since license
plates do not touch the road, we must compensate for the entailed distortion in
speed measurement. We show how to compute the compensation factor using
knowledge of the standard license plate dimensions. Consequently, our system
computes the true speed of moving vehicles quickly and accurately. We show
promising results on videos obtained in a number of scenes and with different
car models.
|
[
{
"version": "v1",
"created": "Tue, 27 Jan 2015 11:51:58 GMT"
}
] | 2015-01-28T00:00:00 |
[
[
"Ginzburg",
"Chaim",
""
],
[
"Raphael",
"Amit",
""
],
[
"Weinshall",
"Daphna",
""
]
] |
new_dataset
| 0.998902 |
1305.1824
|
Marc Hellmuth
|
Marc Hellmuth, Manuel Noll and Lydia Ostermeier
|
Strong Products of Hypergraphs: Unique Prime Factorization Theorems and
Algorithms
| null | null |
10.1016/j.dam.2014.02.017
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well-known that all finite connected graphs have a unique prime factor
decomposition (PFD) with respect to the strong graph product which can be
computed in polynomial time. Essential for the PFD computation is the
construction of the so-called Cartesian skeleton of the graphs under
investigation.
In this contribution, we show that every connected thin hypergraph H has a
unique prime factorization with respect to the normal and strong (hypergraph)
product. Both products coincide with the usual strong graph product whenever H
is a graph. We introduce the notion of the Cartesian skeleton of hypergraphs as
a natural generalization of the Cartesian skeleton of graphs and prove that it
is uniquely defined for thin hypergraphs. Moreover, we show that the Cartesian
skeleton of hypergraphs can be determined in O(|E|^2) time and that the PFD can
be computed in O(|V|^2|E|) time, for hypergraphs H = (V,E) with bounded degree
and bounded rank.
|
[
{
"version": "v1",
"created": "Wed, 8 May 2013 14:12:38 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Hellmuth",
"Marc",
""
],
[
"Noll",
"Manuel",
""
],
[
"Ostermeier",
"Lydia",
""
]
] |
new_dataset
| 0.97524 |
1407.1239
|
Hong Xu
|
Shuhao Liu, Wei Bai, Hong Xu, Kai Chen, Zhiping Cai
|
RepNet: Cutting Tail Latency in Data Center Networks with Flow
Replication
| null | null | null | null |
cs.NI cs.DC cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data center networks need to provide low latency, especially at the tail, as
demanded by many interactive applications. To improve tail latency, existing
approaches require modifications to switch hardware and/or end-host operating
systems, making them difficult to be deployed. We present the design,
implementation, and evaluation of RepNet, an application layer transport that
can be deployed today. RepNet exploits the fact that only a few paths among
many are congested at any moment in the network, and applies simple flow
replication to mice flows to opportunistically use the less congested path.
RepNet has two designs for flow replication: (1) RepSYN, which only replicates
SYN packets and uses the first connection that finishes TCP handshaking for
data transmission, and (2) RepFlow which replicates the entire mice flow. We
implement RepNet on {\tt node.js}, one of the most commonly used platforms for
networked interactive applications. {\tt node}'s single threaded event-loop and
non-blocking I/O make flow replication highly efficient. Performance evaluation
on a real network testbed and in Mininet reveals that RepNet is able to reduce
the tail latency of mice flows, as well as application completion times, by
more than 50\%.
|
[
{
"version": "v1",
"created": "Fri, 4 Jul 2014 14:17:40 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 06:57:31 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Liu",
"Shuhao",
""
],
[
"Bai",
"Wei",
""
],
[
"Xu",
"Hong",
""
],
[
"Chen",
"Kai",
""
],
[
"Cai",
"Zhiping",
""
]
] |
new_dataset
| 0.997499 |
1408.0500
|
Da Zheng
|
Da Zheng, Disa Mhembere, Randal Burns, Joshua Vogelstein, Carey E.
Priebe, Alexander S. Szalay
|
FlashGraph: Processing Billion-Node Graphs on an Array of Commodity SSDs
|
published in FAST'15
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph analysis performs many random reads and writes; thus, these workloads
are typically performed in memory. Traditionally, analyzing large graphs
requires a cluster of machines so the aggregate memory exceeds the graph size.
We demonstrate that a multicore server can process graphs with billions of
vertices and hundreds of billions of edges, utilizing commodity SSDs with
minimal performance loss. We do so by implementing a graph-processing engine on
top of a user-space SSD file system designed for high IOPS and extreme
parallelism. Our semi-external memory graph engine called FlashGraph stores
vertex state in memory and edge lists on SSDs. It hides latency by overlapping
computation with I/O. To save I/O bandwidth, FlashGraph only accesses edge
lists requested by applications from SSDs; to increase I/O throughput and
reduce CPU overhead for I/O, it conservatively merges I/O requests. These
designs maximize performance for applications with different I/O
characteristics. FlashGraph exposes a general and flexible vertex-centric
programming interface that can express a wide variety of graph algorithms and
their optimizations. We demonstrate that FlashGraph in semi-external memory
performs many algorithms with performance up to 80% of its in-memory
implementation and significantly outperforms PowerGraph, a popular distributed
in-memory graph engine.
|
[
{
"version": "v1",
"created": "Sun, 3 Aug 2014 13:44:09 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jan 2015 06:49:18 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jan 2015 01:41:54 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Zheng",
"Da",
""
],
[
"Mhembere",
"Disa",
""
],
[
"Burns",
"Randal",
""
],
[
"Vogelstein",
"Joshua",
""
],
[
"Priebe",
"Carey E.",
""
],
[
"Szalay",
"Alexander S.",
""
]
] |
new_dataset
| 0.998347 |
1412.4361
|
Yelena Mejova
|
Sofiane Abbar, Yelena Mejova, Ingmar Weber
|
You Tweet What You Eat: Studying Food Consumption Through Twitter
| null | null | null | null |
cs.CY cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Food is an integral part of our lives, cultures, and well-being, and is of
major interest to public health. The collection of daily nutritional data
involves keeping detailed diaries or periodic surveys and is limited in scope
and reach. Alternatively, social media is infamous for allowing its users to
update the world on the minutiae of their daily lives, including their eating
habits. In this work we examine the potential of Twitter to provide insight
into US-wide dietary choices by linking the tweeted dining experiences of 210K
users to their interests, demographics, and social networks. We validate our
approach by relating the caloric values of the foods mentioned in the tweets to
the state-wide obesity rates, achieving a Pearson correlation of 0.77 across
the 50 US states and the District of Columbia. We then build a model to predict
county-wide obesity and diabetes statistics based on a combination of
demographic variables and food names mentioned on Twitter. Our results show
significant improvement over previous CHI research (Culotta'14). We further
link this data to societal and economic factors, such as education and income,
illustrating that, for example, areas with higher education levels tweet about
food that is significantly less caloric. Finally, we address the somewhat
controversial issue of the social nature of obesity (first raised by Christakis
& Fowler in 2007) by inducing two social networks using mentions and reciprocal
following relationships.
|
[
{
"version": "v1",
"created": "Sun, 14 Dec 2014 14:09:33 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jan 2015 09:12:19 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Abbar",
"Sofiane",
""
],
[
"Mejova",
"Yelena",
""
],
[
"Weber",
"Ingmar",
""
]
] |
new_dataset
| 0.999451 |
1501.03389
|
Mikhail Ivanov
|
Mikhail Ivanov, Fredrik Brannstrom, Alexandre Graell i Amat, Petar
Popovski
|
All-to-all Broadcast for Vehicular Networks Based on Coded Slotted ALOHA
|
v2: small typos fixed
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an uncoordinated all-to-all broadcast protocol for periodic
messages in vehicular networks based on coded slotted ALOHA (CSA). Unlike
classical CSA, each user acts as both transmitter and receiver in a half-duplex
mode. As in CSA, each user transmits its packet several times. The half-duplex
mode gives rise to an interesting design trade-off: the more the user repeats
its packet, the higher the probability that this packet is decoded by other
users, but the lower the probability for this user to decode packets from
others. We compare the proposed protocol with carrier sense multiple access
with collision avoidance, currently adopted as a multiple access protocol for
vehicular networks. The results show that the proposed protocol greatly
increases the number of users in the network that reliably communicate with
each other. We also provide analytical tools to predict the performance of the
proposed protocol.
|
[
{
"version": "v1",
"created": "Wed, 14 Jan 2015 16:14:13 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 17:10:50 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Ivanov",
"Mikhail",
""
],
[
"Brannstrom",
"Fredrik",
""
],
[
"Amat",
"Alexandre Graell i",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.987719 |
1501.04553
|
Nidhi Lal
|
Nidhi Lal, Anurag Prakash Singh, Shishupal Kumar, Shikha Mittal,
Meenakshi Singh
|
A Heuristic EDF Uplink Scheduler for Real Time Application in WiMAX
Communication
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
WiMAX, Worldwide Interoperability for Microwave Access, is a developing
wireless communication scheme that can provide broadband access with large-scale
coverage. WiMAX belongs to the IEEE 802.16 family of standards. To satisfy
user demands and support a new set of real time services and applications, a
realistic and dynamic resource allocation algorithm is mandatory. One of the
most efficient algorithms is EDF (earliest deadline first). However, when the
difference between deadlines is large enough, lower-priority queues starve. In
this paper, we therefore present a heuristic earliest deadline first (H-EDF)
approach for the uplink scheduler of the WiMAX real-time system. H-EDF
allocates the uplink bandwidth efficiently, so that bandwidth utilization is
high and appropriate fairness is provided to the system. We use the OPNET
simulator to implement a WiMAX network that uses this H-EDF scheduling
algorithm, and we analyze the performance of the H-EDF algorithm with respect
to throughput as well as delay.
|
[
{
"version": "v1",
"created": "Mon, 19 Jan 2015 16:49:11 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jan 2015 01:30:39 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Lal",
"Nidhi",
""
],
[
"Singh",
"Anurag Prakash",
""
],
[
"Kumar",
"Shishupal",
""
],
[
"Mittal",
"Shikha",
""
],
[
"Singh",
"Meenakshi",
""
]
] |
new_dataset
| 0.994375 |
1501.05927
|
Jun Zhou
|
Z. Wang, A. Chini, M. Kilani, and J. Zhou
|
Multiple-Symbol Interleaved RS Codes and Two-Pass Decoding Algorithm
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For communication systems with heavy burst noise, an optimal Forward Error
Correction (FEC) scheme is expected to have a large burst error correction
capacity while simultaneously owning moderate random error correction
capability. This letter presents a new FEC scheme based on multiple-symbol
interleaved Reed-Solomon codes and an associated two-pass decoding algorithm.
It is shown that the proposed multi-symbol interleaved coding scheme can
achieve nearly twice the burst error correction capability of
conventional symbol-interleaved Reed-Solomon codes with the same code length
and code rate.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 19:49:16 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 07:51:27 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Wang",
"Z.",
""
],
[
"Chini",
"A.",
""
],
[
"Kilani",
"M.",
""
],
[
"Zhou",
"J.",
""
]
] |
new_dataset
| 0.999117 |
1501.06283
|
Alessio Meloni Ph.D.
|
Alessio Meloni and Maurizio Murroni
|
Random access congestion control in DVB-RCS2 interactive satellite
terminals
|
IEEE International Symposium on Broadband Multimedia Systems and
Broadcasting (BMSB), 2013. arXiv admin note: text overlap with
arXiv:1501.05809
| null |
10.1109/BMSB.2013.6621777
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The next generation of interactive satellite terminals is going to play a
crucial role in the future of DVB standards. As a matter of fact in the current
standard, satellite terminals are expected to be interactive, offering not only
logon and control signalling but also data transmission in the return channel
with satisfactory quality. Since the traffic from terminals is bursty by
nature, with long periods of inactivity, the use of a Random Access technique
can be preferable.
In this paper Random Access congestion control in DVB-RCS2 is considered with
particular regard to the recently introduced Contention Resolution Diversity
Slotted Aloha technique, able to boost the performance compared to Slotted
Aloha. The paper analyzes the stability of such a channel with particular
emphasis on the design and on limit control procedures that can be applied in
order to ensure stability of the channel even in presence of possible
instability due to statistical fluctuations.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 08:26:41 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Meloni",
"Alessio",
""
],
[
"Murroni",
"Maurizio",
""
]
] |
new_dataset
| 0.958061 |
1501.06363
|
Rainer Plaga
|
Rainer Plaga and Dominik Merli
|
A new Definition and Classification of Physical Unclonable Functions
|
6 pages, 3 figures; Proceedings "CS2 '15 Proceedings of the Second
Workshop on Cryptography and Security in Computing Systems", Amsterdam, 2015,
ACM Digital Library
| null |
10.1145/2694805.2694807
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new definition of "Physical Unclonable Functions" (PUFs), the first one
that fully captures its intuitive idea among experts, is presented. A PUF is an
information-storage system with a security mechanism that is
1. meant to impede the duplication of a precisely described
storage-functionality in another, separate system and
2. remains effective against an attacker with temporary access to the whole
original system.
A novel classification scheme of the security objectives and mechanisms of
PUFs is proposed and its usefulness to aid future research and security
evaluation is demonstrated. One class of PUF security mechanisms, which
prevents an attacker from applying all addresses at which secrets are stored in
the information-storage system, is shown to be closely analogous to
cryptographic encryption. Its development marks the dawn of a new fundamental primitive of
hardware-security engineering: cryptostorage. These results firmly establish
PUFs as a fundamental concept of hardware security.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 12:34:57 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Plaga",
"Rainer",
""
],
[
"Merli",
"Dominik",
""
]
] |
new_dataset
| 0.954701 |
1501.06398
|
Jens Ma{\ss}berg
|
Jens Ma{\ss}berg
|
Solitaire Chess is NP-complete
|
7 pages
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
"Solitaire Chess" is a logic puzzle published by Thinkfun, that can be seen
as a single person version of traditional chess. Given a chess board with some
chess pieces of the same color placed on it, the task is to capture all pieces
but one using only moves that are allowed in chess. Moreover, in each move one
piece has to be captured. We prove that deciding if a given instance of
Solitaire Chess is solvable is NP-complete.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 14:02:29 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Maßberg",
"Jens",
""
]
] |
new_dataset
| 0.999646 |
1501.05673
|
Limin Jia
|
Limin Jia, Shayak Sen, Deepak Garg, and Anupam Datta
|
System M: A Program Logic for Code Sandboxing and Identification
| null | null | null | null |
cs.CR cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security-sensitive applications that execute untrusted code often check the
code's integrity by comparing its syntax to a known good value or sandbox the
code to contain its effects. System M is a new program logic for reasoning
about such security-sensitive applications. System M extends Hoare Type Theory
(HTT) to trace safety properties and, additionally, contains two new reasoning
principles. First, its type system internalizes logical equality, facilitating
reasoning about applications that check code integrity. Second, a confinement
rule assigns an effect type to a computation based solely on knowledge of the
computation's sandbox. We prove the soundness of System M relative to a
step-indexed trace-based semantic model. We illustrate both new reasoning
principles of System M by verifying the main integrity property of the design
of Memoir, a previously proposed trusted computing system for ensuring state
continuity of isolated security-sensitive applications.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 22:22:44 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Jia",
"Limin",
""
],
[
"Sen",
"Shayak",
""
],
[
"Garg",
"Deepak",
""
],
[
"Datta",
"Anupam",
""
]
] |
new_dataset
| 0.997607 |
1501.05709
|
Jeremy Kepner
|
Jeremy Kepner, Julian Chaidez, Vijay Gadepally, Hayden Jansen
|
Associative Arrays: Unified Mathematics for Spreadsheets, Databases,
Matrices, and Graphs
|
4 pages, 6 figures; New England Database Summit 2015
| null | null | null |
cs.DB cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data processing systems impose multiple views on data as it is processed by
the system. These views include spreadsheets, databases, matrices, and graphs.
The common theme amongst these views is the need to store and operate on data
as whole sets instead of as individual data elements. This work describes a
common mathematical representation of these data sets (associative arrays) that
applies across a wide range of applications and technologies. Associative
arrays unify and simplify these different approaches for representing and
manipulating data into a common two-dimensional view of data. Specifically,
associative arrays (1) reduce the effort required to pass data between steps in
a data processing system, (2) allow steps to be interchanged with full
confidence that the results will be unchanged, and (3) make it possible to
recognize when steps can be simplified or eliminated. Most database systems
naturally support associative arrays via their tabular interfaces. The D4M
implementation of associative arrays uses this feature to provide a common
interface across SQL, NoSQL, and NewSQL databases.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 04:16:04 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Kepner",
"Jeremy",
""
],
[
"Chaidez",
"Julian",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Jansen",
"Hayden",
""
]
] |
new_dataset
| 0.972791 |
1501.05789
|
Minxian Xu
|
Minxian Xu, Wenhong Tian, Xinyang Wang, Qin Xiong
|
FlexCloud: A Flexible and Extendible Simulator for Performance
Evaluation of Virtual Machine Allocation
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud data centers aim to provide reliable, sustainable and scalable services
for all kinds of applications, and resource scheduling is one of the keys to
such cloud services. To model and evaluate different scheduling policies and
algorithms, we propose FlexCloud, a flexible and scalable simulator that
enables users to simulate the process of initializing cloud data centers,
allocating virtual machine requests and evaluating the performance of various
scheduling algorithms. FlexCloud can be run on a single computer with a JVM to
simulate large-scale cloud environments with a focus on infrastructure as a
service; it adopts agile design patterns to ensure flexibility and
extensibility, models virtual machine migrations, which existing tools lack,
and provides user-friendly interfaces for customized configurations and
replaying. Compared to existing simulators, FlexCloud combines support for
public cloud providers with load-balancing and energy-efficient scheduling, and
has advantages in computing time and memory consumption that support
large-scale simulations. The detailed design of FlexCloud is introduced and a
performance evaluation is provided.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 13:05:35 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Xu",
"Minxian",
""
],
[
"Tian",
"Wenhong",
""
],
[
"Wang",
"Xinyang",
""
],
[
"Xiong",
"Qin",
""
]
] |
new_dataset
| 0.999232 |
1501.05800
|
Matthew Johnson
|
Carl Feghali, Matthew Johnson, Dani\"el Paulusma
|
A Reconfigurations Analogue of Brooks' Theorem and its Consequences
|
20 pages
| null | null | null |
cs.CC cs.DM cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $G$ be a simple undirected graph on $n$ vertices with maximum
degree~$\Delta$. Brooks' Theorem states that $G$ has a $\Delta$-colouring
unless~$G$ is a complete graph, or a cycle with an odd number of vertices. To
recolour $G$ is to obtain a new proper colouring by changing the colour of one
vertex. We show an analogue of Brooks' Theorem by proving that from any
$k$-colouring, $k>\Delta$, a $\Delta$-colouring of $G$ can be obtained by a
sequence of $O(n^2)$ recolourings using only the original $k$ colours unless
$G$ is a complete graph or a cycle with an odd number of vertices, or
$k=\Delta+1$, $G$ is $\Delta$-regular and, for each vertex $v$ in $G$, no two
neighbours of $v$ are coloured alike.
We use this result to study the reconfiguration graph $R_k(G)$ of the
$k$-colourings of $G$. The vertex set of $R_k(G)$ is the set of all possible
$k$-colourings of $G$ and two colourings are adjacent if they differ on exactly
one vertex. We prove that for $\Delta\geq 3$, $R_{\Delta+1}(G)$ consists of
isolated vertices and at most one further component which has diameter
$O(n^2)$. This result enables us to complete both a structural classification
and an algorithmic classification for reconfigurations of colourings of graphs
of bounded maximum degree.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 13:50:07 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Feghali",
"Carl",
""
],
[
"Johnson",
"Matthew",
""
],
[
"Paulusma",
"Daniël",
""
]
] |
new_dataset
| 0.999708 |
1501.05802
|
Ashutosh Patri
|
Ashutosh Patri, Devidas S. Nimaje
|
Radio Frequency Propagation Model and Fading of Wireless Signal at 2.4
GHz in Underground Coal Mine
|
21 pages, 6 figures, 4 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deployment of wireless sensor networks and wireless communication systems
has become indispensable for better real-time data acquisition from ground
monitoring devices, gas sensors, and equipment used in underground mines, as
well as for locating miners, since conventional methods such as wireline
communication are rendered ineffective in the event of mine hazards such as
roof falls and fires. Before implementing any wireless system, the variable
path loss indices for different workplaces should be determined; this helps in
better signal reception and sensor-node localisation. It also improves the
method by which a miner carrying a wireless device is tracked.
This paper proposes a novel method for parameter determination of a suitable
radio propagation model with the help of results of a practical experiment
carried out in an underground coal mine of Southern India. The path loss
indices along with other essential parameters for accurate localisation have
been determined using XBee module and ZigBee protocol at 2.4 GHz frequency.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 13:58:02 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Patri",
"Ashutosh",
""
],
[
"Nimaje",
"Devidas S.",
""
]
] |
new_dataset
| 0.981793 |
1501.05809
|
Alessio Meloni
|
Alessio Meloni and Maurizio Murroni
|
CRDSA, CRDSA++ and IRSA: Stability and Performance Evaluation
|
6th Advanced Satellite Multimedia Systems Conference (ASMS) and 12th
Signal Processing for Space Communications Workshop (SPSC), 2012
| null |
10.1109/ASMS-SPSC.2012.6333080
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the recent past, new enhancements based on the well-established Aloha
technique (CRDSA, CRDSA++, IRSA) have demonstrated the capability to reach
higher throughput than traditional SA, in bursty traffic conditions and without
any need of coordination among terminals. In this paper, retransmissions and
related stability for these new techniques are discussed. A model is also
formulated in order to provide a basis for the analysis of the stability and
the performance both for finite and infinite users population. This model can
be used as a framework for the design of such a communication system.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 14:30:02 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Meloni",
"Alessio",
""
],
[
"Murroni",
"Maurizio",
""
]
] |
new_dataset
| 0.999589 |
1411.2874
|
Hsi-Ming Ho
|
Hsi-Ming Ho and Joel Ouaknine
|
The Cyclic-Routing UAV Problem is PSPACE-Complete
|
19 pages. Full version of the FoSSaCS'15 paper with the same title
| null | null | null |
cs.LO cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consider a finite set of targets, with each target assigned a relative
deadline, and each pair of targets assigned a fixed transit flight time. Given
a flock of identical UAVs, can one ensure that every target is repeatedly
visited by some UAV at intervals of duration at most the target's relative
deadline? The Cyclic-Routing UAV Problem (CR-UAV) is the question of whether
this task has a solution.
This problem can straightforwardly be solved in PSPACE by modelling it as a
network of timed automata. The special case of there being a single UAV is
claimed to be NP-complete in the literature. In this paper, we show that the
CR-UAV Problem is in fact PSPACE-complete even in the single-UAV case.
|
[
{
"version": "v1",
"created": "Mon, 10 Nov 2014 01:10:37 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jan 2015 18:49:50 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Ho",
"Hsi-Ming",
""
],
[
"Ouaknine",
"Joel",
""
]
] |
new_dataset
| 0.959445 |
1411.4156
|
Peter Patel-Schneider
|
Peter F. Patel-Schneider
|
Using Description Logics for RDF Constraint Checking and Closed-World
Recognition
|
Extended version of a paper of the same name that will appear in
AAAI-2015
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RDF and Description Logics work in an open-world setting where absence of
information is not information about absence. Nevertheless, Description Logic
axioms can be interpreted in a closed-world setting and in this setting they
can be used for both constraint checking and closed-world recognition against
information sources. When the information sources are expressed in well-behaved
RDF or RDFS (i.e., RDF graphs interpreted in the RDF or RDFS semantics) this
constraint checking and closed-world recognition are simple to describe. Furthermore,
this constraint checking can be implemented as SPARQL querying and thus
effectively performed.
|
[
{
"version": "v1",
"created": "Sat, 15 Nov 2014 15:33:38 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jan 2015 21:09:56 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Patel-Schneider",
"Peter F.",
""
]
] |
new_dataset
| 0.951308 |
1501.05425
|
Dmitriy Traytel
|
Jasmin Christian Blanchette and Andrei Popescu and Dmitriy Traytel
|
Foundational Extensible Corecursion
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a formalized framework for defining corecursive functions
safely in a total setting, based on corecursion up-to and relational
parametricity. The end product is a general corecursor that allows corecursive
(and even recursive) calls under well-behaved operations, including
constructors. Corecursive functions that are well behaved can be registered as
such, thereby increasing the corecursor's expressiveness. The metatheory is
formalized in the Isabelle proof assistant and forms the core of a prototype
tool. The corecursor is derived from first principles, without requiring new
axioms or extensions of the logic.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 08:45:10 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Blanchette",
"Jasmin Christian",
""
],
[
"Popescu",
"Andrei",
""
],
[
"Traytel",
"Dmitriy",
""
]
] |
new_dataset
| 0.996758 |
1501.05472
|
Subhadip Basu
|
Ram Sarkar, Bibhash Sen, Nibaran Das, Subhadip Basu
|
Handwritten Devanagari Script Segmentation: A non-linear Fuzzy Approach
|
In Proceedings of IEEE Conference on AI Tools and Engineering
(ICAITE-08), March 6-8, 2008, Pune
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper concentrates on improvement of segmentation accuracy by addressing
some of the key challenges of handwritten Devanagari word image segmentation
technique. In the present work, we have developed a new feature based approach
for identification of Matra pixels from a word image, design of non-linear
fuzzy membership functions for headline estimation and, finally, design of
non-linear fuzzy functions for identifying segmentation points on the Matra.
The segmentation accuracy achieved by the current technique is 94.8%. This
shows an improvement of performance by 1.8% over the previous technique [1] on
a 300-word dataset, used for the current experiment.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 12:05:25 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Sarkar",
"Ram",
""
],
[
"Sen",
"Bibhash",
""
],
[
"Das",
"Nibaran",
""
],
[
"Basu",
"Subhadip",
""
]
] |
new_dataset
| 0.998008 |
1501.05542
|
Meo Mespotine
|
Meo Mespotine
|
Mespotine-RLE-basic v0.9 - An overhead-reduced and improved
Run-Length-Encoding Method
|
16 pages and algorithm-flowcharts
| null | null | null |
cs.DS cs.IT math.IT
|
http://creativecommons.org/licenses/by/3.0/
|
Run-Length Encoding (RLE) is one of the oldest data-compression algorithms
available, a method used to compress large data into smaller and therefore
more compact data. It compresses by scanning the data for repetitions of the
same character in a row and storing the count (called the run) and the
respective character (called the run_value) as target data. Unfortunately it
compresses well only in strict and special cases. Outside of these cases it
increases the data size, even doubling it in the worst case compared to the
original, unprocessed data. In this paper, we discuss modifications to RLE
with which we store the run only for characters that are actually
compressible, getting rid of a lot of useless data such as the runs of
characters that are incompressible in the first place. This is achieved by
storing the character first and the run second. Additionally, we create a
bit-list of 256 positions (one for every possible ASCII character), in which
we store whether a specific ASCII character is compressible (1) or not (0).
Using this list, we can now tell whether a character is compressible (store
[the character]+[its run]) or not (store [the character] only; the next
character is NOT a run but the following character instead). Using this list,
we can also successfully decode the data (if the character is compressible,
the next character is a run; if not, the next character is a normal
character). With that, we store runs only for characters that are compressible
in the first place. In fact, in the worst-case scenario, the encoded data will
always create just an overhead of the size of the bit-list itself. With an
alphabet of 256 different characters (i.e. ASCII) this would be only a maximum
of 32 bytes, no matter how big the original data was.
[...]
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 15:51:32 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Mespotine",
"Meo",
""
]
] |
new_dataset
| 0.99953 |
1501.05595
|
Georg B\"ocherer
|
Fabian Steiner and Georg B\"ocherer and Gianluigi Liva
|
Protograph-Based LDPC Code Design for Bit-Metric Decoding
|
5 pages, 8 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A protograph-based low-density parity-check (LDPC) code design technique for
bandwidth-efficient coded modulation is presented. The approach jointly
optimizes the LDPC code node degrees and the mapping of the coded bits to the
bit-interleaved coded modulation (BICM) bit-channels. For BICM with uniform
input and for BICM with probabilistic shaping, binary-input symmetric-output
surrogate channels are constructed and used for code design. The constructed
codes perform as good as multi-edge type codes of Zhang and Kschischang (2013).
For 64-ASK with probabilistic shaping, a blocklength 64800 code is constructed
that operates within 0.69 dB of 0.5log(1+SNR) at a spectral efficiency of 4.2
bits/channel use and a frame error rate of 1e-3.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 18:36:00 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Steiner",
"Fabian",
""
],
[
"Böcherer",
"Georg",
""
],
[
"Liva",
"Gianluigi",
""
]
] |
new_dataset
| 0.997 |
1401.4734
|
Natalia Silberstein
|
Natalia Silberstein and Tuvi Etzion
|
Optimal Fractional Repetition Codes based on Graphs and Designs
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fractional repetition (FR) codes are a family of codes for distributed storage
systems that allow for uncoded exact repairs having the minimum repair
bandwidth. However, in contrast to minimum bandwidth regenerating (MBR) codes,
where a random set of a certain size of available nodes is used for a node
repair, the repairs with FR codes are table based. This usually allows storing
more data compared to MBR codes. In this work, we consider bounds on the
fractional repetition capacity, which is the maximum amount of data that can be
stored using an FR code. Optimal FR codes which attain these bounds are
presented. The constructions of these FR codes are based on combinatorial
designs and on families of regular and biregular graphs. These constructions of
FR codes for given parameters raise some interesting questions in graph theory.
These questions and some of their solutions are discussed in this paper. In
addition, based on a connection between FR codes and batch codes, we propose a
new family of codes for DSS, namely fractional repetition batch codes, which
have the properties of batch codes and FR codes simultaneously. These are the
first codes for DSS which allow for uncoded efficient exact repairs and load
balancing which can be performed by several users in parallel. Other concepts
related to FR codes are also discussed.
|
[
{
"version": "v1",
"created": "Sun, 19 Jan 2014 20:26:50 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Sep 2014 10:52:57 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jan 2015 13:57:41 GMT"
}
] | 2015-01-22T00:00:00 |
[
[
"Silberstein",
"Natalia",
""
],
[
"Etzion",
"Tuvi",
""
]
] |
new_dataset
| 0.999311 |
1501.05060
|
Anoop Thomas
|
Anoop Thomas and B. Sundar Rajan
|
Error Correcting Index Codes and Matroids
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The connection between index coding and matroid theory has been well studied
in the recent past. El Rouayheb et al. established a connection between
multilinear representation of matroids and wireless index coding. Muralidharan
and Rajan showed that a vector linear solution to an index coding problem
exists if and only if there exists a representable discrete polymatroid
satisfying certain conditions. Recently index coding with erroneous
transmission was considered by Dau et al. Error correcting index codes in
which all receivers are able to correct a fixed number of errors was studied.
In this paper we consider a more general scenario in which each receiver is
able to correct a desired number of errors, calling such index codes
differential error correcting index codes. A link between differential error
correcting index codes and certain matroids is established. We define matroidal
differential error correcting index codes and we show that a scalar linear
differential error correcting index code exists if and only if it is a matroidal
differential error correcting index code associated with a representable
matroid.
|
[
{
"version": "v1",
"created": "Wed, 21 Jan 2015 05:34:55 GMT"
}
] | 2015-01-22T00:00:00 |
[
[
"Thomas",
"Anoop",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
new_dataset
| 0.963359 |
1501.05136
|
Catarina Moreira
|
Silvana Roque de Oliveira and Catarina Moreira and Jos\'e Borbinha and
Mar\'ia \'Angeles Zuleta Garcia
|
Uma an\'alise bibliom\'etrica do Congresso Nacional de Bibliotec\'arios,
Arquivistas e Documentalistas (1985-2012)
|
in Portuguese
|
Cadernos BAD, N. 1/2 (2012/2013), 2013
| null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article is the first bibliometric analysis of the 708 lectures published
by The Librarians and Archivists National Congress between 1985 and 2012,
for which indicators of production, productivity, institutional
origin and thematic analysis were developed, in a quantitative, relational and diachronic
perspective. Its results show a dynamic congress, essentially national and
professional, with a strong majority of individual authorships, even with the
recent growth of the ratio of collaborations. In its thematic approach,
emphasis is given to public services of information, with the greatest focus
being on libraries, while still giving relevance to reflections on professional
and academic training in the area of Information Sciences, and also following
the most recent technological developments.
|
[
{
"version": "v1",
"created": "Wed, 21 Jan 2015 11:33:42 GMT"
}
] | 2015-01-22T00:00:00 |
[
[
"de Oliveira",
"Silvana Roque",
""
],
[
"Moreira",
"Catarina",
""
],
[
"Borbinha",
"José",
""
],
[
"Garcia",
"María Ángeles Zuleta",
""
]
] |
new_dataset
| 0.999046 |
1501.05177
|
Natalia Silberstein
|
Natalia Silberstein and Tuvi Etzion
|
Optimal Fractional Repetition Codes and Fractional Repetition Batch
Codes
|
arXiv admin note: substantial text overlap with arXiv:1401.4734
| null | null | null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fractional repetition (FR) codes are a family of codes for distributed storage
systems (DSS) that allow uncoded exact repairs with minimum repair bandwidth.
In this work, we consider a bound on the maximum amount of data that can be
stored using an FR code. Optimal FR codes which attain this bound are
presented. The constructions of these FR codes are based on families of regular
graphs, such as Tur\'an graphs and graphs with large girth; and on
combinatorial designs, such as transversal designs and generalized polygons. In
addition, based on a connection between FR codes and batch codes, we propose a
new family of codes for DSS, called fractional repetition batch codes, which
allow uncoded efficient exact repairs and load balancing that can be performed
by several users in parallel.
|
[
{
"version": "v1",
"created": "Wed, 21 Jan 2015 14:26:01 GMT"
}
] | 2015-01-22T00:00:00 |
[
[
"Silberstein",
"Natalia",
""
],
[
"Etzion",
"Tuvi",
""
]
] |
new_dataset
| 0.998504 |
1501.05180
|
Henning Urbat
|
Jiri Adamek, Stefan Milius, Robert Myers and Henning Urbat
|
Varieties of Languages in a Category
| null | null | null | null |
cs.FL cs.LO math.CT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Eilenberg's variety theorem, a centerpiece of algebraic automata theory,
establishes a bijective correspondence between varieties of languages and
pseudovarieties of monoids. In the present paper this result is generalized to
an abstract pair of algebraic categories: we introduce varieties of languages
in a category C, and prove that they correspond to pseudovarieties of monoids
in a closed monoidal category D, provided that C and D are dual on the level of
finite objects. By suitable choices of these categories our result uniformly
covers Eilenberg's theorem and three variants due to Pin, Polak and Reutenauer,
respectively, and yields new Eilenberg-type correspondences.
|
[
{
"version": "v1",
"created": "Wed, 21 Jan 2015 14:31:04 GMT"
}
] | 2015-01-22T00:00:00 |
[
[
"Adamek",
"Jiri",
""
],
[
"Milius",
"Stefan",
""
],
[
"Myers",
"Robert",
""
],
[
"Urbat",
"Henning",
""
]
] |
new_dataset
| 0.99837 |
1209.5325
|
Radu Grigore
|
Radu Grigore, Dino Distefano, Rasmus Lerchedahl Petersen, Nikos
Tzevelekos
|
Runtime Verification Based on Register Automata
|
TACAS 2013 (plus proofs)
| null | null | null |
cs.FL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose TOPL automata as a new method for runtime verification of systems
with unbounded resource generation. Paradigmatic such systems are
object-oriented programs which can dynamically generate an unbounded number of
fresh object identities during their execution. Our formalism is based on
register automata, a particularly successful approach in automata over infinite
alphabets which administers a finite-state machine with boundedly many
input-storing registers. We show that TOPL automata are expressively equivalent to
register automata and yet suitable to express properties of programs. Compared
to other runtime verification methods, our technique can handle a class of
properties beyond the reach of current tools. We show in particular that
properties which require value updates are not expressible with current
techniques yet are naturally captured by TOPL machines. On the practical side,
we present a tool for runtime verification of Java programs via TOPL
properties, where the trade-off between the coverage and the overhead of the
monitoring system is tunable by means of a number of parameters. We validate
our technique by checking properties involving multiple objects and chaining of
values on large open source projects.
|
[
{
"version": "v1",
"created": "Mon, 24 Sep 2012 16:33:13 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Feb 2013 09:08:17 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jan 2015 07:57:18 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Grigore",
"Radu",
""
],
[
"Distefano",
"Dino",
""
],
[
"Petersen",
"Rasmus Lerchedahl",
""
],
[
"Tzevelekos",
"Nikos",
""
]
] |
new_dataset
| 0.999149 |
1412.5034
|
Mikkel Abrahamsen
|
Mikkel Abrahamsen
|
Spiral Toolpaths for High-Speed Machining of 2D Pockets with or without
Islands
|
22 pages, 13 figures
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe new methods for the construction of spiral toolpaths for
high-speed machining. In the simplest case, our method takes a polygon as input
and a number $\delta>0$ and returns a spiral starting at a central point in the
polygon, going around towards the boundary while morphing to the shape of the
polygon. The spiral consists of linear segments and circular arcs, it is $G^1$
continuous, it has no self-intersections, and the distance from each point on
the spiral to each of the neighboring revolutions is at most $\delta$. Our
method has the advantage over previously described methods that it is easily
adjustable to the case where there is an island in the polygon to be avoided by
the spiral. In that case, the spiral starts at the island and morphs the island
to the outer boundary of the polygon. It is shown how to apply that method to
make significantly shorter spirals in polygons with no islands. Finally, we
show how to make a spiral in a polygon with multiple islands by connecting the
islands into one island.
|
[
{
"version": "v1",
"created": "Tue, 16 Dec 2014 15:17:15 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Dec 2014 06:44:42 GMT"
},
{
"version": "v3",
"created": "Sat, 20 Dec 2014 09:10:07 GMT"
},
{
"version": "v4",
"created": "Tue, 20 Jan 2015 12:40:54 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Abrahamsen",
"Mikkel",
""
]
] |
new_dataset
| 0.982334 |
1501.04719
|
Quang-Cuong Pham
|
St\'ephane Caron, Quang-Cuong Pham, Yoshihiko Nakamura
|
Stability of Surface Contacts for Humanoid Robots: Closed-Form Formulae
of the Contact Wrench Cone for Rectangular Support Areas
|
14 pages, 4 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humanoid robots locomote by making and breaking contacts with their
environment. A crucial problem is therefore to find precise criteria for a
given contact to remain stable or to break. For rigid surface contacts, the
most general criterion is the Contact Wrench Condition (CWC). To check whether
a motion satisfies the CWC, existing approaches take into account a large
number of individual contact forces (for instance, one at each vertex of the
support polygon), which is computationally costly and prevents the use of
efficient inverse-dynamics methods. Here we argue that the CWC can be
explicitly computed without reference to individual contact forces, and give
closed-form formulae in the case of rectangular surfaces -- which is of
practical importance. It turns out that these formulae simply and naturally
express three conditions: (i) Coulomb friction on the resultant force, (ii) ZMP
inside the support area, and (iii) bounds on the yaw torque. Conditions (i) and
(ii) are already known, but condition (iii) is, to the best of our knowledge,
novel. It is also of particular interest for biped locomotion, where undesired
foot yaw rotations are a known issue. We also show that our formulae yield
simpler and faster computations than existing approaches for humanoid motions
in single support, and demonstrate their consistency in the OpenHRP simulator.
|
[
{
"version": "v1",
"created": "Tue, 20 Jan 2015 06:28:33 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Caron",
"Stéphane",
""
],
[
"Pham",
"Quang-Cuong",
""
],
[
"Nakamura",
"Yoshihiko",
""
]
] |
new_dataset
| 0.999588 |
1501.04786
|
Arnaud Martin
|
Mouna Chebbah (IRISA), Mouloud Kharoune (IRISA), Arnaud Martin
(IRISA), Boutheina Ben Yaghlane
|
Consid{\'e}rant la d{\'e}pendance dans la th{\'e}orie des fonctions de
croyance
|
in French
|
Revue des Nouvelles Technologies Informatiques (RNTI), 2014,
Fouille de donn{\'e}es complexes, RNTI-E-27, pp.43-64
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose to learn the independence of sources in order to choose
the appropriate type of combination rule when aggregating their beliefs. Some
combination rules are used under the assumption that their sources are independent,
whereas others combine the beliefs of dependent sources. Therefore, the choice of
the combination rule depends on the independence of the sources involved in the
combination. We also propose a measure of independence, and of positive
and negative dependence, to integrate into mass functions before the combination
under the independence assumption.
|
[
{
"version": "v1",
"created": "Tue, 20 Jan 2015 12:48:41 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Chebbah",
"Mouna",
"",
"IRISA"
],
[
"Kharoune",
"Mouloud",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Yaghlane",
"Boutheina Ben",
""
]
] |
new_dataset
| 0.982118 |
1501.04797
|
Sven Puchinger
|
Wenhui Li, Johan S. R. Nielsen, Sven Puchinger, Vladimir Sidorenko
|
Solving Shift Register Problems over Skew Polynomial Rings using Module
Minimisation
|
10 pages, submitted to WCC 2015
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For many algebraic codes the main part of decoding can be reduced to a shift
register synthesis problem. In this paper we present an approach for solving
generalised shift register problems over skew polynomial rings which occur in
error and erasure decoding of $\ell$-Interleaved Gabidulin codes. The algorithm
is based on module minimisation and has time complexity $O(\ell \mu^2)$ where
$\mu$ measures the size of the input problem.
|
[
{
"version": "v1",
"created": "Tue, 20 Jan 2015 13:07:59 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Li",
"Wenhui",
""
],
[
"Nielsen",
"Johan S. R.",
""
],
[
"Puchinger",
"Sven",
""
],
[
"Sidorenko",
"Vladimir",
""
]
] |
new_dataset
| 0.951278 |
1501.04843
|
Jean-Lou De Carufel
|
Aritra Banik, Jean-Lou De Carufel, Anil Maheshwari and Michiel Smid
|
Discrete Voronoi Games and $\epsilon$-Nets, in Two and Three Dimensions
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The one-round discrete Voronoi game, with respect to an $n$-point user set
$U$, consists of two players Player 1 ($\mathcal{P}_1$) and Player 2
($\mathcal{P}_2$). At first, $\mathcal{P}_1$ chooses a set of facilities $F_1$
following which $\mathcal{P}_2$ chooses another set of facilities $F_2$,
disjoint from $F_1$. The payoff of $\mathcal{P}_2$ is defined as the
cardinality of the set of points in $U$ which are closer to a facility in $F_2$
than to every facility in $F_1$, and the payoff of $\mathcal{P}_1$ is the
difference between the number of users in $U$ and the payoff of
$\mathcal{P}_2$. The objective of both the players in the game is to maximize
their respective payoffs. In this paper we study the one-round discrete Voronoi
game where $\mathcal{P}_1$ places $k$ facilities and $\mathcal{P}_2$ places one
facility; we denote this game by $VG(k,1)$. Although the optimal
solution of this game can be found in polynomial time, the polynomial has a
very high degree. In this paper, we focus on achieving approximate solutions to
$VG(k,1)$ with significantly better running times. We provide a constant-factor
approximate solution to the optimal strategy of $\mathcal{P}_1$ in $VG(k,1)$ by
establishing a connection between $VG(k,1)$ and weak $\epsilon$-nets. To the
best of our knowledge, this is the first time that Voronoi games are studied
from the point of view of $\epsilon$-nets.
|
[
{
"version": "v1",
"created": "Tue, 20 Jan 2015 15:16:12 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Banik",
"Aritra",
""
],
[
"De Carufel",
"Jean-Lou",
""
],
[
"Maheshwari",
"Anil",
""
],
[
"Smid",
"Michiel",
""
]
] |
new_dataset
| 0.997972 |
1501.04850
|
Abdulsalam Yassine Dr.
|
Abdulsalam Yassine
|
AAPPeC: Agent-based Architecture for Privacy Payoff in eCommerce
|
Thesis
| null | null | null |
cs.SE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of applications in open distributed environments
such as eCommerce, privacy of information is becoming a critical issue. Today,
many online companies are gathering information and have assembled
sophisticated databases that know a great deal about many people, generally
without the knowledge of those people. Such information changes hands or
ownership as a normal part of eCommerce transactions, or through strategic
decisions that often include the sale of users' information to other firms.
The key commercial value of users' personal information derives from the
ability of firms to identify consumers and charge them personalized prices for
goods and services they have previously used or may wish to use in the future.
A look at present-day practices reveals that consumers' profile data is now
considered as one of the most valuable assets owned by online businesses. In
this thesis, we argue the following: if consumers' private data is such a
valuable asset, should they not be entitled to commercially benefit from their
asset as well? The scope of this thesis is on developing an architecture for
privacy payoff as a means of rewarding consumers for sharing their personal
information with online businesses. The architecture is a multi-agent system in
which several agents employ various requirements for personal information
valuation and interaction capabilities that most users cannot do on their own.
The agents in the system bear the responsibility of working on behalf of
consumers to categorize their personal data objects, report to consumers on
online businesses' trustworthiness and reputation, determine the value of their
compensation using risk-based financial models, and, finally, negotiate for a
payoff value in return for the dissemination of users' information.
|
[
{
"version": "v1",
"created": "Tue, 20 Jan 2015 15:41:54 GMT"
}
] | 2015-01-21T00:00:00 |
[
[
"Yassine",
"Abdulsalam",
""
]
] |
new_dataset
| 0.998042 |
1410.0382
|
Mircea Andrecut Dr
|
M. Andrecut
|
A String-Based Public Key Cryptosystem
|
In this revised version of the paper we show that the eavesdropper's
problem of the proposed cryptosystem has a solution, and we give the details
of the solution
| null | null | null |
cs.CR physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional methods in public key cryptography are based on number theory,
and suffer from problems such as dealing with very large numbers, making key
creation cumbersome. Here, we propose a new public key cryptosystem based on
strings only, which avoids the difficulties of the traditional number theory
approach. The security mechanism for public and secret keys generation is
ensured by a recursive encoding mechanism embedded in a
quasi-commutative-random function, resulted from the composition of a
quasi-commutative function with a pseudo-random function. In this revised
version of the paper we show that the eavesdropper's problem of the proposed
cryptosystem has a solution, and we give the details of the solution.
|
[
{
"version": "v1",
"created": "Fri, 5 Sep 2014 18:44:31 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jan 2015 18:53:35 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Andrecut",
"M.",
""
]
] |
new_dataset
| 0.998691 |
1412.0501
|
Reza Farrahi Moghaddam
|
Reza Farrahi Moghaddam and Mohamed Cheriet
|
SmartPacket: Re-Distributing the Routing Intelligence among Network
Components in SDNs
|
9 pages, 3 figures, 5 tables. To be presented in SDS 2015, 9-13 March
2015, Tempe, AZ, USA
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, a new region-based, multipath-enabled packet routing is
presented and called SmartPacket Routing. The proposed approach provides
several opportunities to re-distribute the smartness and decision making among
various elements of a network including the packets themselves toward providing
a decentralized solution for SDNs. This would bring efficiency and scalability,
and therefore also a lower environmental footprint for ever-growing networks.
In particular, a region-based representation of the network topology is
proposed which is then used to describe the routing actions along the possible
paths for a packet flow. In addition to a region stack that expresses a partial
or full regional path of a packet, QoS requirements of the packet (or its
associated flow) are considered in the packet header in order to enable possible
QoS-aware routing at region level without requiring a centralized controller.
|
[
{
"version": "v1",
"created": "Mon, 1 Dec 2014 15:05:09 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Dec 2014 13:55:12 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Jan 2015 03:40:41 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Moghaddam",
"Reza Farrahi",
""
],
[
"Cheriet",
"Mohamed",
""
]
] |
new_dataset
| 0.999466 |
1501.04100
|
Aws Albarghouthi
|
Aws Albarghouthi, Josh Berdine, Byron Cook, Zachary Kincaid
|
Spatial Interpolants
|
Short version published in ESOP 2015
| null | null | null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Splinter, a new technique for proving properties of
heap-manipulating programs that marries (1) a new separation logic-based
analysis for heap reasoning with (2) an interpolation-based technique for
refining heap-shape invariants with data invariants. Splinter is property
directed, precise, and produces counterexample traces when a property does not
hold. Using the novel notion of spatial interpolants modulo theories, Splinter
can infer complex invariants over general recursive predicates, e.g., of the
form "all elements in a linked list are even" or "a binary tree is sorted".
Furthermore, we treat interpolation as a black box, which gives us the freedom
to encode data manipulation in any suitable theory for a given program (e.g.,
bit vectors, arrays, or linear arithmetic), so that our technique immediately
benefits from any future advances in SMT solving and interpolation.
|
[
{
"version": "v1",
"created": "Fri, 16 Jan 2015 17:10:32 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Albarghouthi",
"Aws",
""
],
[
"Berdine",
"Josh",
""
],
[
"Cook",
"Byron",
""
],
[
"Kincaid",
"Zachary",
""
]
] |
new_dataset
| 0.969847 |
1501.04138
|
Chien-Chun Ni
|
Chien-Chun Ni, Yu-Yao Lin, Jie Gao, Xianfeng David Gu and Emil Saucan
|
Ricci Curvature of the Internet Topology
|
9 pages, 16 figures. To be appear on INFOCOM 2015
| null | null | null |
cs.SI cs.CG cs.NI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analysis of Internet topologies has shown that the Internet topology has
negative curvature, measured by Gromov's "thin triangle condition", which is
tightly related to core congestion and route reliability. In this work we
analyze the discrete Ricci curvature of the Internet, as defined by Ollivier, Lin,
and others. Ricci curvature measures whether local distances diverge or converge. It
is a more local measure, which allows us to understand the distribution of
curvatures in the network. We show, using various Internet data sets, that the
distribution of Ricci curvature is spread out, suggesting that the network topology
is non-homogeneous. We also show that the Ricci curvature has interesting
connections to local measures such as node degree and clustering
coefficient, to global measures such as betweenness centrality and network
connectivity, as well as to auxiliary attributes such as geographical distances.
These observations add to the richness of geometric structures in complex
network theory.
|
[
{
"version": "v1",
"created": "Sat, 17 Jan 2015 00:44:00 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Ni",
"Chien-Chun",
""
],
[
"Lin",
"Yu-Yao",
""
],
[
"Gao",
"Jie",
""
],
[
"Gu",
"Xianfeng David",
""
],
[
"Saucan",
"Emil",
""
]
] |
new_dataset
| 0.992593 |
1501.04167
|
Chengqing Li
|
Xiaowei Li, Chengqing Li, Seok-Tae Kim, In-Kwon Lee
|
An optical image encryption scheme based on depth-conversion integral
imaging and chaotic maps
|
18 pages, 12 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integral imaging-based cryptographic algorithms provide a new way to design
secure and robust image encryption schemes. In this paper, we introduce a
performance-enhanced image encryption scheme based on depth-conversion
integral imaging and chaotic maps, aiming to meet the requirements of secure
image transmission. First, the input image is decomposed into an elemental
image array (EIA) by utilizing a pinhole array. Then, the obtained image is
encrypted by combining the use of cellular automata and chaotic logistic maps.
In the image reconstruction process, the conventional computational integral
imaging reconstruction (CIIR) technique is a pixel-superposition technique; the
resolution of the reconstructed image is dramatically degraded due to the large
magnification in the superposition process as the pickup distance increases.
The smart mapping technique is introduced to overcome this problem of CIIR. A
novel property of the proposed scheme is its depth-conversion ability, which
converts original elemental images recorded at a long distance to ones recorded
near the pinhole array and consequently reduces the magnification factor. The
results of numerical simulations demonstrate the effectiveness and security of
this proposed scheme.
|
[
{
"version": "v1",
"created": "Sat, 17 Jan 2015 06:28:44 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Li",
"Xiaowei",
""
],
[
"Li",
"Chengqing",
""
],
[
"Kim",
"Seok-Tae",
""
],
[
"Lee",
"In-Kwon",
""
]
] |
new_dataset
| 0.979283 |
1501.04264
|
Anyu Wang
|
Anyu Wang and Zhifang Zhang
|
Achieving Arbitrary Locality and Availability in Binary Codes
|
5 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The $i$th coordinate of an $(n,k)$ code is said to have locality $r$ and
availability $t$ if there exist $t$ disjoint groups, each containing at most
$r$ other coordinates that can together recover the value of the $i$th
coordinate. This property is particularly useful for codes for distributed
storage systems because it permits local repair and parallel accesses of hot
data. In this paper, for any positive integers $r$ and $t$, we construct a
binary linear code of length $\binom{r+t}{t}$ which has locality $r$ and
availability $t$ for all coordinates. The information rate of this code attains
$\frac{r}{r+t}$, which is always higher than that of the direct product code,
the only known construction that can achieve arbitrary locality and
availability.
|
[
{
"version": "v1",
"created": "Sun, 18 Jan 2015 04:39:15 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Wang",
"Anyu",
""
],
[
"Zhang",
"Zhifang",
""
]
] |
new_dataset
| 0.998783 |
1501.04388
|
Boris Brimkov
|
Boris Brimkov and Illya V. Hicks
|
Chromatic and flow polynomials of generalized vertex join graphs and
outerplanar graphs
|
14 pages
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A generalized vertex join of a graph is obtained by joining an arbitrary
multiset of its vertices to a new vertex. We present a low-order polynomial
time algorithm for finding the chromatic polynomials of generalized vertex
joins of trees, and by duality we find the flow polynomials of arbitrary
outerplanar graphs. We also present closed formulas for the chromatic and flow
polynomials of vertex joins of cliques and cycles, otherwise known as
"generalized wheel" graphs.
|
[
{
"version": "v1",
"created": "Mon, 19 Jan 2015 05:09:25 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Brimkov",
"Boris",
""
],
[
"Hicks",
"Illya V.",
""
]
] |
new_dataset
| 0.999835 |
1501.04402
|
Tadashi Wadayama
|
Tadashi Wadayama, Taizuke Izumi, Hirotaka Ono
|
Subgraph Domatic Problem and Writing Capacity of Memory Devices with
Restricted State Transitions
|
7 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A code design problem for memory devices with restricted state transitions is
formulated as a combinatorial optimization problem that is called a subgraph
domatic partition (subDP) problem. If any neighbor set of a given state
transition graph contains all the colors, then the coloring is said to be
valid. The goal of a subDP problem is to find a valid coloring with the largest
number of colors for a subgraph of a given directed graph. The number of colors
in an optimal valid coloring gives the writing capacity of a given state
transition graph. The subDP problems are computationally hard; it is proved to
be NP-complete in this paper. One of our main contributions in this paper is to
show the asymptotic behavior of the writing capacity $C(G)$ for sequences of
dense bidirectional graphs, that is given by C(G)=Omega(n/ln n) where n is the
number of nodes. A probabilistic method called Lovasz local lemma (LLL) plays
an essential role to derive the asymptotic expression.
|
[
{
"version": "v1",
"created": "Mon, 19 Jan 2015 06:22:09 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Wadayama",
"Tadashi",
""
],
[
"Izumi",
"Taizuke",
""
],
[
"Ono",
"Hirotaka",
""
]
] |
new_dataset
| 0.995425 |
1501.04552
|
Benson Muite
|
S. Aseeri and O. Batra\v{s}ev and M. Icardi and B. Leu and A. Liu and
N. Li and B.K. Muite and E. M\"uller and B. Palen and M. Quell and H. Servat
and P. Sheth and R. Speck and M. Van Moer and J. Vienne
|
Solving the Klein-Gordon equation using Fourier spectral methods: A
benchmark test for computer performance
|
10 pages
| null | null | null |
cs.PF cs.DC math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cubic Klein-Gordon equation is a simple but non-trivial partial
differential equation whose numerical solution has the main building blocks
required for the solution of many other partial differential equations. In this
study, the library 2DECOMP&FFT is used in a Fourier spectral scheme to solve
the Klein-Gordon equation and strong scaling of the code is examined on
thirteen different machines for a problem size of 512^3. The results are useful
in assessing likely performance of other parallel fast Fourier transform based
programs for solving partial differential equations. The problem is chosen to
be large enough to solve on a workstation, yet also of interest to solve
quickly on a supercomputer, in particular for parametric studies. Unlike other
high performance computing benchmarks, for this problem size, the time to
solution will not be improved by simply building a bigger supercomputer.
|
[
{
"version": "v1",
"created": "Mon, 19 Jan 2015 16:48:00 GMT"
}
] | 2015-01-20T00:00:00 |
[
[
"Aseeri",
"S.",
""
],
[
"Batrašev",
"O.",
""
],
[
"Icardi",
"M.",
""
],
[
"Leu",
"B.",
""
],
[
"Liu",
"A.",
""
],
[
"Li",
"N.",
""
],
[
"Muite",
"B. K.",
""
],
[
"Müller",
"E.",
""
],
[
"Palen",
"B.",
""
],
[
"Quell",
"M.",
""
],
[
"Servat",
"H.",
""
],
[
"Sheth",
"P.",
""
],
[
"Speck",
"R.",
""
],
[
"Van Moer",
"M.",
""
],
[
"Vienne",
"J.",
""
]
] |
new_dataset
| 0.99732 |
1407.3121
|
Jeroen Keiren
|
Jeroen J.A. Keiren
|
Benchmarks for Parity Games (extended version)
|
The corresponding tool and benchmarks are available from
https://github.com/jkeiren/paritygame-generator. This is an extended version
of the paper that has been accepted for FSEN 2015
| null | null | null |
cs.LO cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a benchmark suite for parity games that includes all benchmarks
that have been used in the literature, and make it available online. We give an
overview of the parity games, including a description of how they have been
generated. We also describe structural properties of parity games, and using
these properties we show that our benchmarks are representative. With this work
we provide a starting point for further experimentation with parity games.
|
[
{
"version": "v1",
"created": "Fri, 11 Jul 2014 11:45:25 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jan 2015 14:05:32 GMT"
}
] | 2015-01-19T00:00:00 |
[
[
"Keiren",
"Jeroen J. A.",
""
]
] |
new_dataset
| 0.998013 |
1501.03996
|
Nicolo' Michelusi
|
Nicolo Michelusi and Urbashi Mitra
|
Capacity of electron-based communication over bacterial cables: the
full-CSI case
|
submitted to IEEE Journal on Selected Areas in Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by recent discoveries of microbial communities that transfer
electrons across centimeter-length scales, this paper studies the information
capacity of bacterial cables via electron transfer, which coexists with
molecular communications, under the assumption of full causal channel state
information (CSI). The bacterial cable is modeled as an electron queue that
transfers electrons from the encoder at the electron donor source, which
controls the desired input electron intensity, to the decoder at the electron
acceptor sink. Clogging due to local ATP saturation along the cable is modeled.
A discrete-time scheme is investigated, enabling the computation of an
achievable rate. The regime of asymptotically small time-slot duration is
analyzed, and the optimality of binary input distributions is proved, i.e., the
encoder transmits at either maximum or minimum intensity, as dictated by the
physical constraints of the cable. A dynamic programming formulation of the
capacity is proposed, and the optimal binary signaling is determined via policy
iteration. It is proved that the optimal signaling has smaller intensity than
that given by the myopic policy, which greedily maximizes the instantaneous
information rate but neglects its effect on the steady-state cable
distribution. In contrast, the optimal scheme balances the tension between
achieving high instantaneous information rate, and inducing a favorable
steady-state distribution, such that those states characterized by high
information rates are visited more frequently, thus revealing the importance of
CSI. This work represents a first contribution towards the design of electron
signaling schemes in complex microbial structures, e.g., bacterial cables and
biofilms, where the tension between maximizing the transfer of information and
guaranteeing the well-being of the overall bacterial community arises.
|
[
{
"version": "v1",
"created": "Fri, 16 Jan 2015 14:48:01 GMT"
}
] | 2015-01-19T00:00:00 |
[
[
"Michelusi",
"Nicolo",
""
],
[
"Mitra",
"Urbashi",
""
]
] |
new_dataset
| 0.989153 |
1501.04006
|
Ha Bui
|
P. Rajeev, Ha H. Bui, N. Sivakugan
|
Seismic Earth Pressure Development in Sheet Pile Retaining Walls: A
Numerical Study
| null | null | null | null |
cs.CE physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The design of retaining walls requires the complete knowledge of the earth
pressure distribution behind the wall. Due to the complex soil-structure
effect, the estimation of earth pressure is not an easy task, even in the
static case. The problem becomes even more complex for the dynamic (i.e.,
seismic) analysis and design of retaining walls. Several earth pressure models
have been developed over the years to integrate the dynamic earth pressure with
the static earth pressure and to improve the design of retaining walls in
seismic regions. Among all the models, the Mononobe-Okabe (M-O) method is commonly
used to estimate the magnitude of seismic earth pressures in retaining walls
and is adopted in design practices around the world (e.g., EuroCode and
Australian Standards). However, the M-O method has several drawbacks and does
not provide a reliable estimate of the earth pressure in many instances. This
study investigates the accuracy of the M-O method to predict the dynamic earth
pressure in sheet pile wall. A 2D plane strain finite element model of the
wall-soil system was developed in DIANA. The backfill soil was modelled with the
Mohr-Coulomb failure criterion, while the wall was assumed to behave elastically.
The numerically predicted dynamic earth pressure was compared with the M-O
model prediction. Further, the point of application of total dynamic force was
determined and compared with the static case. Finally, the applicability of the
M-O method to compute the seismic earth pressure was discussed.
|
[
{
"version": "v1",
"created": "Fri, 16 Jan 2015 15:03:20 GMT"
}
] | 2015-01-19T00:00:00 |
[
[
"Rajeev",
"P.",
""
],
[
"Bui",
"Ha H.",
""
],
[
"Sivakugan",
"N.",
""
]
] |
new_dataset
| 0.952241 |
1501.03719
|
Tal Hassner
|
Gil Levi and Tal Hassner
|
LATCH: Learned Arrangements of Three Patch Codes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel means of describing local image appearances using binary
strings. Binary descriptors have drawn increasing interest in recent years due
to their speed and low memory footprint. A known shortcoming of these
representations is their inferior performance compared to larger,
histogram-based descriptors such as SIFT. Our goal is to close this performance gap
while maintaining the benefits attributed to binary representations. To this
end we propose the Learned Arrangements of Three Patch Codes descriptors, or
LATCH. Our key observation is that existing binary descriptors are at an
increased risk from noise and local appearance variations. This is because they
compare the values of pixel pairs: changes to either of the pixels can easily
lead to changes in descriptor values, hence damaging their performance. In order
to provide more robustness, we instead propose a novel means of comparing pixel
patches. This ostensibly small change requires a substantial redesign of the
descriptors themselves and how they are produced. Our resulting LATCH
representation is rigorously compared to state-of-the-art binary descriptors
and shown to provide far better performance for similar computation and space
requirements.
|
[
{
"version": "v1",
"created": "Thu, 15 Jan 2015 15:38:57 GMT"
}
] | 2015-01-16T00:00:00 |
[
[
"Levi",
"Gil",
""
],
[
"Hassner",
"Tal",
""
]
] |
new_dataset
| 0.968483 |
1501.03196
|
Tuan-Anh Le
|
Tuan-Anh Le and Loc X. Bui
|
Forward Delay-based Packet Scheduling Algorithm for Multipath TCP
|
6 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multipath TCP (MPTCP) is a transport layer protocol that allows network
devices to transfer data over multiple concurrent paths, and hence, utilizes
the network resources more effectively than does the traditional single-path
TCP. However, as a reliable protocol, MPTCP still needs to deliver data packets
(to the upper application) at the receiver in the same order they are
transmitted at the sender. The out-of-order packet problem becomes more severe
for MPTCP due to the heterogeneous nature of delay and bandwidth of each path.
In this paper, we propose the forward-delay-based packet scheduling (FDPS)
algorithm for MPTCP to address that problem. The main idea is that the sender
dispatches packets via concurrent paths according to their estimated forward
delay and throughput differences. Via simulations with various network
conditions, the results show that our algorithm maintains in-order packet
arrival at the receiver significantly better than several previous
algorithms.
|
[
{
"version": "v1",
"created": "Tue, 13 Jan 2015 22:32:31 GMT"
}
] | 2015-01-15T00:00:00 |
[
[
"Le",
"Tuan-Anh",
""
],
[
"Bui",
"Loc X.",
""
]
] |
new_dataset
| 0.986095 |
1501.03235
|
Bo Yuan
|
Bo Yuan, Keshab K. Parhi
|
Successive Cancellation Decoding of Polar Codes using Stochastic
Computing
|
accepted by International Symposium on Circuits and Systems (ISCAS)
2015
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes have emerged as the most favorable channel codes for their unique
capacity-achieving property. To date, numerous works have been reported for
efficient design of polar code decoders. However, these prior efforts focused
on the design of polar decoders via deterministic computation, while the behavior
of stochastic polar decoders, which can have potential advantages such as low
complexity and strong error-resilience, has not been studied in the existing
literature. This paper, for the first time, investigates polar decoding using
stochastic logic. Specifically, the commonly-used successive cancellation (SC)
algorithm is reformulated into the stochastic form. Several methods that can
potentially improve decoding performance are discussed and analyzed. Simulation
results show that a stochastic SC decoder can achieve similar error-correcting
performance as its deterministic counterpart. This work can pave the way for
future hardware design of stochastic polar codes decoders.
|
[
{
"version": "v1",
"created": "Wed, 14 Jan 2015 02:30:49 GMT"
}
] | 2015-01-15T00:00:00 |
[
[
"Yuan",
"Bo",
""
],
[
"Parhi",
"Keshab K.",
""
]
] |
new_dataset
| 0.995609 |
1501.03353
|
Fabian Bendun
|
Michael Backes, Fabian Bendun, Joerg Hoffmann, Ninja Marnau
|
PriCL: Creating a Precedent A Framework for Reasoning about Privacy Case
Law
|
Extended version
| null | null | null |
cs.CR cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PriCL: the first framework for expressing and automatically
reasoning about privacy case law by means of precedent. PriCL is parametric in
an underlying logic for expressing world properties, and provides support for
court decisions, their justification, the circumstances in which the
justification applies as well as court hierarchies. Moreover, the framework
offers a tight connection between privacy case law and the notion of norms that
underlies existing rule-based privacy research. In terms of automation, we
identify the major reasoning tasks for privacy cases such as deducing legal
permissions or extracting norms. For solving these tasks, we provide generic
algorithms that have particularly efficient realizations within an expressive
underlying logic. Finally, we derive a definition of deducibility based on
legal concepts and subsequently propose an equivalent characterization in terms
of logic satisfiability.
|
[
{
"version": "v1",
"created": "Wed, 14 Jan 2015 14:05:18 GMT"
}
] | 2015-01-15T00:00:00 |
[
[
"Backes",
"Michael",
""
],
[
"Bendun",
"Fabian",
""
],
[
"Hoffmann",
"Joerg",
""
],
[
"Marnau",
"Ninja",
""
]
] |
new_dataset
| 0.999556 |
1501.02854
|
Luis Sentis
|
Ye Zhao, Nicholas Paine, Kwan Suk Kim, Luis Sentis
|
Stability and Performance Limits of Latency-Prone Distributed Feedback
Controllers
|
13 pages, 10 figures, 2 tables, 31 reference
| null | null | null |
cs.SY cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic control systems are increasingly relying on distributed feedback
controllers to tackle complex sensing and decision problems such as those found
in highly articulated human-centered robots. These demands come at the cost of
a growing computational burden and, as a result, larger controller latencies.
To maximize robustness to mechanical disturbances by maximizing control
feedback gains, this paper emphasizes the necessity for compromise between
high- and low-level feedback control effort in distributed controllers.
Specifically, the effect of distributed impedance controllers is studied where
damping feedback effort is executed in close proximity to the control plant and
stiffness feedback effort is executed in a latency-prone centralized control
process. A central observation is that the stability of high impedance
distributed controllers is very sensitive to damping feedback delay but much
less to stiffness feedback delay. This study pursues a detailed analysis of
this observation that leads to a physical understanding of the disparity. Then
a practical controller breakdown gain rule is derived to aim at enabling
control designers to consider the benefits of implementing their control
applications in a distributed fashion. These considerations are further
validated through the analysis, simulation and experimental testing on high
performance actuators and on an omnidirectional mobile base.
|
[
{
"version": "v1",
"created": "Tue, 13 Jan 2015 00:15:45 GMT"
}
] | 2015-01-14T00:00:00 |
[
[
"Zhao",
"Ye",
""
],
[
"Paine",
"Nicholas",
""
],
[
"Kim",
"Kwan Suk",
""
],
[
"Sentis",
"Luis",
""
]
] |
new_dataset
| 0.971733 |
1501.02887
|
Sunil Kumar Kopparapu Dr
|
Lajish VL and Sunil Kumar Kopparapu
|
Online Handwritten Devanagari Stroke Recognition Using Extended
Directional Features
|
8th International Conference on Signal Processing and Communication
Systems 15 - 17 December 2014, Gold Coast, Australia
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a new feature set, called the extended directional
features (EDF) for use in the recognition of online handwritten strokes. We use
EDF specifically to recognize strokes that form a basis for producing
Devanagari script, which is the most widely used Indian language script. It
should be noted that stroke recognition in handwritten script is equivalent to
phoneme recognition in speech signals and is generally very poor and of the
order of 20% for singing voice. Experiments are conducted for the automatic
recognition of isolated handwritten strokes. Initially we describe the proposed
feature set, namely EDF and then show how this feature can be effectively
utilized for writer independent script recognition through stroke recognition.
Experimental results show that the extended directional feature set performs
well with about 65+% stroke level recognition accuracy for writer independent
data set.
|
[
{
"version": "v1",
"created": "Sun, 11 Jan 2015 16:53:05 GMT"
}
] | 2015-01-14T00:00:00 |
[
[
"VL",
"Lajish",
""
],
[
"Kopparapu",
"Sunil Kumar",
""
]
] |
new_dataset
| 0.999695 |
1501.02921
|
Wei Xu
|
W. Xu, M. Wu, H. Zhang, X. You, C. Zhao
|
ACO-OFDM-Specified Recoverable Upper Clipping With Efficient Detection
for Optical Wireless Communications
|
appear in IEEE Photonics Journal
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The high peak-to-average-power ratio (PAPR) of orthogonal frequency-division
multiplexing (OFDM) degrades the performance in optical wireless communication
systems. This paper proposes a modified asymmetrically clipped optical OFDM
(ACO-OFDM) with low PAPR by introducing a recoverable upper-clipping (RoC)
procedure. Although some information is clipped by a predetermined peak
threshold, the clipped error information is kept and repositioned in our
proposed scheme, which is named RoC-ACO-OFDM, instead of simply being dropped
in conventional schemes. The proposed method makes full use of the specific
structure of ACO-OFDM signals in the time domain, where half of the positions
are forced to zeros within an OFDM symbol. The zero-valued positions are
utilized to carry the clipped error information. Moreover, we accordingly
present an optimal maximum a posteriori (MAP) detection for the RoC-ACO-OFDM
system. To facilitate the usage of RoC-ACO-OFDM in practical applications, an
efficient detection method is further developed with near-optimal performance.
Simulation results show that the proposed RoC-ACO-OFDM achieves a significant
PAPR reduction, while maintaining a competitive bit-error rate performance
compared with the conventional schemes.
|
[
{
"version": "v1",
"created": "Tue, 13 Jan 2015 09:09:39 GMT"
}
] | 2015-01-14T00:00:00 |
[
[
"Xu",
"W.",
""
],
[
"Wu",
"M.",
""
],
[
"Zhang",
"H.",
""
],
[
"You",
"X.",
""
],
[
"Zhao",
"C.",
""
]
] |
new_dataset
| 0.968199 |
1501.02973
|
Wanlu Sun
|
Wanlu Sun, Erik G. Str\"om, Fredrik Br\"annstr\"om, Yutao Sui, and Kin
Cheong Sou
|
D2D-based V2V Communications with Latency and Reliability Constraints
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct device-to-device (D2D) communication has been proposed as a possible
enabler for vehicle-to-vehicle (V2V) applications, where the incurred
intra-cell interference and the stringent latency and reliability requirements
are challenging issues. In this paper, we investigate the radio resource
management problem for D2D-based V2V communications. Firstly, we analyze and
mathematically model the actual requirements for vehicular communications and
traditional cellular links. Secondly, we propose a problem formulation to
fulfill these requirements, and then a Separate Resource Block allocation and
Power control (SRBP) algorithm to solve this problem. Finally, simulations are
presented to illustrate the improved performance of the proposed SRBP scheme
compared to some other existing methods.
|
[
{
"version": "v1",
"created": "Tue, 13 Jan 2015 12:15:55 GMT"
}
] | 2015-01-14T00:00:00 |
[
[
"Sun",
"Wanlu",
""
],
[
"Ström",
"Erik G.",
""
],
[
"Brännström",
"Fredrik",
""
],
[
"Sui",
"Yutao",
""
],
[
"Sou",
"Kin Cheong",
""
]
] |
new_dataset
| 0.98388 |
1501.03093
|
Vojtech Forejt
|
Tom\'a\v{s} Br\'azdil, Krishnendu Chatterjee, Vojt\v{e}ch Forejt, and
Anton\'in Ku\v{c}era
|
MultiGain: A controller synthesis tool for MDPs with multiple
mean-payoff objectives
|
Extended version for a TACAS 2015 tool demo paper
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present MultiGain, a tool to synthesize strategies for Markov decision
processes (MDPs) with multiple mean-payoff objectives. Our models are described
in PRISM, and our tool uses the existing interface and simulator of PRISM. Our
tool extends PRISM by adding novel algorithms for multiple mean-payoff
objectives, and also provides features such as (i)~generating strategies and
exploring them for simulation, and checking them with respect to other
properties; and (ii)~generating an approximate Pareto curve for two mean-payoff
objectives. In addition, we present a new practical algorithm for the analysis
of MDPs with multiple mean-payoff objectives under memoryless strategies.
|
[
{
"version": "v1",
"created": "Tue, 13 Jan 2015 18:04:46 GMT"
}
] | 2015-01-14T00:00:00 |
[
[
"Brázdil",
"Tomáš",
""
],
[
"Chatterjee",
"Krishnendu",
""
],
[
"Forejt",
"Vojtěch",
""
],
[
"Kučera",
"Antonín",
""
]
] |
new_dataset
| 0.996789 |
1501.03124
|
Amartansh Dubey
|
Amartansh Dubey and K. M. Bhurchandi
|
Robust and Real Time Detection of Curvy Lanes (Curves) with Desired
Slopes for Driving Assistance and Autonomous Vehicles
|
13 pages, 12 figures, published in International Conference on Signal
and Image Processing (AIRCC Publishing Corporation)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the biggest reasons for road accidents is curvy lanes and blind turns.
Even one of the biggest hurdles for new autonomous vehicles is to detect curvy
lanes, multiple lanes and lanes with a lot of discontinuity and noise. This
paper presents a very efficient and advanced algorithm for detecting curves
having desired slopes (especially for detecting curvy lanes in real time) and
for detecting curves (lanes) with a lot of noise, discontinuity and
disturbances. The overall aim is to develop a robust method for this task that is
applicable even in adverse conditions. Even in some of the most famous and useful
libraries, such as OpenCV and Matlab, there is no function available for detecting
curves having desired slopes, shapes, or discontinuities. Only a few predefined
shapes, such as circles and ellipses, can be detected using presently available
functions. The proposed algorithm can not only detect curves with discontinuity,
noise and a desired slope, but can also perform shadow and illumination correction
and detect/differentiate between different curves.
|
[
{
"version": "v1",
"created": "Tue, 13 Jan 2015 19:35:18 GMT"
}
] | 2015-01-14T00:00:00 |
[
[
"Dubey",
"Amartansh",
""
],
[
"Bhurchandi",
"K. M.",
""
]
] |
new_dataset
| 0.992902 |
1401.5197
|
Shenghao Wang
|
Shenghao Wang, Kai Zhang, Zhili Wang, Kun Gao, Zhao Wu, Peiping Zhu
and Ziyu Wu
|
A user-friendly nano-CT image alignment and 3D reconstruction platform
based on LabVIEW
|
9 pages, 5 figures, 1 chart
|
2015 Chinese Physics C, 39 (1): 018001
|
10.1088/1674-1137/39/1/018001
| null |
cs.CE physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
X-ray computed tomography at the nanometer scale (nano-CT) offers a wide
range of applications in scientific and industrial areas. Here we describe a
reliable, user-friendly and fast software package based on LabVIEW that may
allow to perform all procedures after the acquisition of raw projection images
in order to obtain the inner structure of the investigated sample. A suitable
image alignment process to address misalignment problems among image series due
to mechanical manufacturing errors, thermal expansion and other external
factors has been considered together with a novel fast parallel beam 3D
reconstruction procedure, developed ad hoc to perform the tomographic
reconstruction. Remarkably improved reconstruction results obtained at the
Beijing Synchrotron Radiation Facility after the image calibration confirmed
the fundamental role of this image alignment procedure that minimizes unwanted
blurs and additional streaking artifacts always present in reconstructed
slices. Moreover, this nano-CT image alignment and its associated 3D
reconstruction procedure fully based on LabVIEW routines, significantly reduce
the data post-processing cycle, thus making faster and easier the activity of
the users during experimental runs.
|
[
{
"version": "v1",
"created": "Tue, 21 Jan 2014 07:01:37 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2014 06:53:38 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Apr 2014 08:31:25 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Wang",
"Shenghao",
""
],
[
"Zhang",
"Kai",
""
],
[
"Wang",
"Zhili",
""
],
[
"Gao",
"Kun",
""
],
[
"Wu",
"Zhao",
""
],
[
"Zhu",
"Peiping",
""
],
[
"Wu",
"Ziyu",
""
]
] |
new_dataset
| 0.99966 |
1409.5370
|
Emanuel Gluskin
|
Emanuel Gluskin
|
On the physical and circuit-theoretic significance of the Memristor
|
9 pages/ The present version is strongly extended in the sense of the
circuit theory discussion
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is noticed that the inductive and capacitive features of the memristor
reflect (and are a quintessence of) such features of any resistor. The very
presence in the resistive characteristic v = f(i) of the voltage and current
state variables, associated by their electrodynamic sense with electrical and
magnetic fields, forces any resistor to accumulate some magnetic and
electrostatic fields and energies around itself. The present version is
strongly extended in the sense of the circuit theory discussion.
|
[
{
"version": "v1",
"created": "Thu, 18 Sep 2014 16:44:23 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Sep 2014 05:12:48 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jan 2015 15:56:38 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Gluskin",
"Emanuel",
""
]
] |
new_dataset
| 0.995369 |
1501.02378
|
Muhammad Zubair Ahmad
|
Muhammad Zubair Ahmad, Ayyaz Akhtar, Abdul Qadeer Khan, Amir Ali Khan,
Muhammad Murtaza Khan
|
Low Cost Semi-Autonomous Agricultural Robots In Pakistan-Vision Based
Navigation Scalable methodology for wheat harvesting
| null | null | null | null |
cs.RO cs.CV cs.CY
|
http://creativecommons.org/licenses/by/3.0/
|
Robots have revolutionized our way of life in recent years. One of the domains
that has not yet completely benefited from robotic automation is the
agricultural sector. Agricultural Robotics should complement humans in the
arduous tasks during different sub-domains of this sector. Extensive research
in Agricultural Robotics has been carried out in Japan, USA, Australia and
Germany, focusing mainly on heavy agricultural machinery. Pakistan is an
agriculturally rich country, and its economy and food security are closely tied
with agriculture in general and wheat in particular. However, agricultural
research in Pakistan is still carried out using the conventional methodologies.
This paper is an attempt to trigger the research in this modern domain so that
we can benefit from cost effective and resource efficient autonomous
agricultural methodologies. This paper focuses on a scalable low cost
semi-autonomous technique for wheat harvest which primarily focuses on the
farmers with small land holdings. The main focus will be on the vision part of
the navigation system deployed by the proposed robot.
|
[
{
"version": "v1",
"created": "Sat, 10 Jan 2015 18:14:10 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Ahmad",
"Muhammad Zubair",
""
],
[
"Akhtar",
"Ayyaz",
""
],
[
"Khan",
"Abdul Qadeer",
""
],
[
"Khan",
"Amir Ali",
""
],
[
"Khan",
"Muhammad Murtaza",
""
]
] |
new_dataset
| 0.985366 |
1501.02379
|
Muhammad Zubair Ahmad
|
Abdul Qadeer Khan, Ayyaz Akhtar, Muhammad Zubair Ahmad
|
Autonomous Farm Vehicles: Prototype of Power Reaper
| null | null | null | null |
cs.RO cs.CV cs.CY
|
http://creativecommons.org/licenses/by/3.0/
|
Chapter 2 will begin with introduction of Agricultural Robotics. There will
be a literature review of the mechanical structure, vision and control
algorithms. In chapter 3 we will discuss the methodology in detail using block
diagrams and flowcharts. The results of the tested and the proposed algorithms
will also be displayed. In chapter 4 we will discuss the results in detail and
how they are of significance in our work. In chapter 5 we will conclude our
work and discuss some future perspectives. In appendices we will provide some
background information necessary regarding this project.
|
[
{
"version": "v1",
"created": "Sat, 10 Jan 2015 18:21:48 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Khan",
"Abdul Qadeer",
""
],
[
"Akhtar",
"Ayyaz",
""
],
[
"Ahmad",
"Muhammad Zubair",
""
]
] |
new_dataset
| 0.99345 |
1501.02475
|
T\'ulio Casagrande Alberto
|
Eduardo G. Pinheiro and T\'ulio C. Alberto
|
Teleoperando Rob\^os Pioneer Utilizando Android
|
in Portuguese
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an application with ROS, Aria and RosAria to control a
ModelSim simulated Pioneer 3-DX robot. The navigation applies a simple
autonomous algorithm and a teleoperation control using an Android device
sending the gyroscope-generated information.
|
[
{
"version": "v1",
"created": "Sun, 11 Jan 2015 17:52:45 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Pinheiro",
"Eduardo G.",
""
],
[
"Alberto",
"Túlio C.",
""
]
] |
new_dataset
| 0.963556 |
1501.02530
|
Anna Senina
|
Anna Rohrbach, Marcus Rohrbach, Niket Tandon, Bernt Schiele
|
A Dataset for Movie Description
| null | null | null | null |
cs.CV cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Descriptive video service (DVS) provides linguistic descriptions of movies
and allows visually impaired people to follow a movie along with their peers.
Such descriptions are by design mainly visual and thus naturally form an
interesting data source for computer vision and computational linguistics. In
this work we propose a novel dataset which contains transcribed DVS, which is
temporally aligned to full length HD movies. In addition we also collected the
aligned movie scripts which have been used in prior work and compare the two
different sources of descriptions. In total the Movie Description dataset
contains a parallel corpus of over 54,000 sentences and video snippets from 72
HD movies. We characterize the dataset by benchmarking different approaches for
generating video descriptions. Comparing DVS to scripts, we find that DVS is
far more visual and describes precisely what is shown rather than what should
happen according to the scripts created prior to movie production.
|
[
{
"version": "v1",
"created": "Mon, 12 Jan 2015 03:31:33 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Tandon",
"Niket",
""
],
[
"Schiele",
"Bernt",
""
]
] |
new_dataset
| 0.999875 |
1501.02659
|
Damianos Gavalas
|
Thomas Chatzidimitris, Damianos Gavalas, Vlasios Kasapakis
|
PacMap: Transferring PacMan to the Physical Realm
|
6 pages, 3 figures, Proceedings of the International Conference on
Pervasive Games (PERGAMES'2014), Rome, Italy, 27 October 2014
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper discusses the implementation of the pervasive game PacMap.
Openness and portability have been the main design objectives for PacMap. We
elaborate on programming techniques which may be applicable to a broad range of
location-based games that involve the movement of virtual characters over map
interfaces. In particular, we present techniques to execute shortest path
algorithms on spatial environments bypassing the restrictions imposed by
commercial mapping services. Last, we present ways to improve the movement and
enhance the intelligence of virtual characters taking into consideration the
actions and position of players in location-based games.
|
[
{
"version": "v1",
"created": "Mon, 12 Jan 2015 14:27:37 GMT"
}
] | 2015-01-13T00:00:00 |
[
[
"Chatzidimitris",
"Thomas",
""
],
[
"Gavalas",
"Damianos",
""
],
[
"Kasapakis",
"Vlasios",
""
]
] |
new_dataset
| 0.995787 |
1309.7818
|
Guillaume Berhault
|
Guillaume Berhault, Camille Leroux, Christophe Jego, Dominique Dallet
|
Partial Sums Generation Architecture for Successive Cancellation
Decoding of Polar Codes
|
Submitted to IEEE Workshop on Signal Processing Systems (SiPS)(26
April 2012). Accepted (28 June 2013)
| null |
10.1109/SiPS.2013.6674541
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes are a new family of error correction codes for which efficient
hardware architectures have to be defined for the encoder and the decoder.
Polar codes are decoded using the successive cancellation decoding algorithm
that includes partial sums computations. We take advantage of the recursive
structure of polar codes to introduce an efficient partial sums computation
unit that can also implements the encoder. The proposed architecture is
synthesized for several codelengths in 65nm ASIC technology. The area of the
resulting design is reduced up to 26% and the maximum working frequency is
improved by ~25%.
|
[
{
"version": "v1",
"created": "Mon, 30 Sep 2013 12:20:47 GMT"
}
] | 2015-01-12T00:00:00 |
[
[
"Berhault",
"Guillaume",
""
],
[
"Leroux",
"Camille",
""
],
[
"Jego",
"Christophe",
""
],
[
"Dallet",
"Dominique",
""
]
] |
new_dataset
| 0.995476 |
1402.5769
|
Atsushi Miyauchi
|
Tomomi Matsui, Noriyoshi Sukegawa, Atsushi Miyauchi
|
Fractional programming formulation for the vertex coloring problem
|
6 pages, 5 tables
|
Information Processing Letters 114, 706-709 (2014)
|
10.1016/j.ipl.2014.06.010
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We devise a new formulation for the vertex coloring problem. Different from
other formulations, decision variables are associated with the pairs of
vertices. Consequently, colors will be distinguishable. Although the objective
function is fractional, it can be replaced by a piece-wise linear convex
function. Numerical experiments show that our formulation has significantly
good performance for dense graphs.
|
[
{
"version": "v1",
"created": "Mon, 24 Feb 2014 10:01:07 GMT"
}
] | 2015-01-12T00:00:00 |
[
[
"Matsui",
"Tomomi",
""
],
[
"Sukegawa",
"Noriyoshi",
""
],
[
"Miyauchi",
"Atsushi",
""
]
] |
new_dataset
| 0.986307 |
1501.02033
|
EPTCS
|
Jes\'us M. Almendros-Jim\'enez (Universidad de Almer\'ia)
|
XQOWL: An Extension of XQuery for OWL Querying and Reasoning
|
In Proceedings PROLE 2014, arXiv:1501.01693
|
EPTCS 173, 2015, pp. 41-55
|
10.4204/EPTCS.173.4
| null |
cs.PL cs.DB cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the main aims of the so-called Web of Data is to be able to handle
heterogeneous resources where data can be expressed in either XML or RDF. The
design of programming languages able to handle both XML and RDF data is a key
target in this context. In this paper we present a framework called XQOWL that
makes possible to handle XML and RDF/OWL data with XQuery. XQOWL can be
considered as an extension of the XQuery language that connects XQuery with
SPARQL and OWL reasoners. XQOWL embeds SPARQL queries (via Jena SPARQL engine)
in XQuery and enables to make calls to OWL reasoners (HermiT, Pellet and
FaCT++) from XQuery. It permits to combine queries against XML and RDF/OWL
resources as well as to reason with RDF/OWL data. Therefore input data can be
either XML or RDF/OWL and output data can be formatted in XML (also using
RDF/OWL XML serialization).
|
[
{
"version": "v1",
"created": "Fri, 9 Jan 2015 03:59:54 GMT"
}
] | 2015-01-12T00:00:00 |
[
[
"Almendros-Jiménez",
"Jesús M.",
"",
"Universidad de Almería"
]
] |
new_dataset
| 0.997503 |
1409.5911
|
Yessica Saez
|
X. Cao, Y. Saez, G. Pesti, L.B. Kish
|
On KLJN-based secure key distribution in vehicular communication
networks
|
Accepted for publication
|
Fluct. Noise Lett., Vol. 14, No. 1 (2015) 1550008 (11 pages)
|
10.1142/S021947751550008X
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a former paper [Fluct. Noise Lett., 13 (2014) 1450020] we introduced a
vehicular communication system with unconditionally secure key exchange based
on the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution scheme. In this
paper, we address the secure KLJN key donation to vehicles. This KLJN key
donation solution is performed lane-by-lane by using roadside key provider
equipment embedded in the pavement. A method to compute the lifetime of the
KLJN key is also given. This key lifetime depends on the car density and gives
an upper limit of the lifetime of the KLJN key for vehicular communication
networks.
|
[
{
"version": "v1",
"created": "Sat, 20 Sep 2014 18:51:11 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jan 2015 21:12:52 GMT"
}
] | 2015-01-09T00:00:00 |
[
[
"Cao",
"X.",
""
],
[
"Saez",
"Y.",
""
],
[
"Pesti",
"G.",
""
],
[
"Kish",
"L. B.",
""
]
] |
new_dataset
| 0.991638 |
1410.6079
|
Ivan Pustogarov
|
Alex Biryukov and Ivan Pustogarov
|
Bitcoin over Tor isn't a good idea
|
11 pages, 4 figures, 4 tables
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bitcoin is a decentralized P2P digital currency in which coins are generated
by a distributed set of miners and transactions are broadcast via a
peer-to-peer network. While Bitcoin provides some level of anonymity (or rather
pseudonymity) by encouraging the users to have any number of random-looking
Bitcoin addresses, recent research shows that this level of anonymity is rather
low. This encourages users to connect to the Bitcoin network through
anonymizers like Tor and motivates development of default Tor functionality for
popular mobile SPV clients. In this paper we show that combining Tor and
Bitcoin creates an attack vector for the deterministic and stealthy
man-in-the-middle attacks. A low-resource attacker can gain full control of
information flows between all users who chose to use Bitcoin over Tor. In
particular the attacker can link together user's transactions regardless of
pseudonyms used, control which Bitcoin blocks and transactions are relayed to
the user, and can delay or discard the user's transactions and blocks. In
collusion with a powerful miner, double-spending attacks become possible and a
totally virtual Bitcoin reality can be created for such a set of users. Moreover,
we show how an attacker can fingerprint users and then recognize them and learn
their IP address when they decide to connect to the Bitcoin network directly.
|
[
{
"version": "v1",
"created": "Wed, 22 Oct 2014 15:37:07 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jan 2015 00:42:57 GMT"
}
] | 2015-01-09T00:00:00 |
[
[
"Biryukov",
"Alex",
""
],
[
"Pustogarov",
"Ivan",
""
]
] |
new_dataset
| 0.999515 |
1501.01678
|
Changwang Zhang
|
Changwang Zhang, Shi Zhou, and Benjamin M. Chain
|
LeoTask: a fast, flexible and reliable framework for computational
research
| null | null | null | null |
cs.SE cs.DC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LeoTask is a Java library for computation-intensive and time-consuming
research tasks. It automatically executes tasks in parallel on multiple CPU
cores on a computing facility. It uses a configuration file to enable automatic
exploration of parameter space and flexible aggregation of results, and
therefore allows researchers to focus on programming the key logic of a
computing task. It also supports reliable recovery from interruptions, dynamic
and cloneable networks, and integration with the plotting software Gnuplot.
|
[
{
"version": "v1",
"created": "Wed, 7 Jan 2015 22:33:40 GMT"
}
] | 2015-01-09T00:00:00 |
[
[
"Zhang",
"Changwang",
""
],
[
"Zhou",
"Shi",
""
],
[
"Chain",
"Benjamin M.",
""
]
] |
new_dataset
| 0.999776 |
1501.01693
|
EPTCS
|
Santiago Escobar (Universitat Polit\'ecnica de Val\'encia)
|
Proceedings XIV Jornadas sobre Programaci\'on y Lenguajes
| null |
EPTCS 173, 2015
|
10.4204/EPTCS.173
| null |
cs.PL cs.LO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains a selection of the papers presented at the XIV Jornadas
sobre Programaci\'on y Lenguajes (PROLE 2014), held at C\'adiz, Spain, during
September 17th-19th, 2014. Previous editions of the workshop were held in
Madrid (2013), Almer\'ia (2012), A Coru\~na (2011), Val\'encia (2010), San
Sebasti\'an (2009), Gij\'on (2008), Zaragoza (2007), Sitges (2006), Granada
(2005), M\'alaga (2004), Alicante (2003), El Escorial (2002), and Almagro
(2001).
Programming languages provide a conceptual framework which is necessary for
the development, analysis, optimization and understanding of programs and
programming tasks. The aim of the PROLE series of conferences (PROLE stems from
the spanish PROgramaci\'on y LEnguajes) is to serve as a meeting point for
spanish research groups which develop their work in the area of programming and
programming languages. The organization of this series of events aims at
fostering the exchange of ideas, experiences and results among these groups.
Promoting further collaboration is also one of the main goals of PROLE.
|
[
{
"version": "v1",
"created": "Thu, 8 Jan 2015 00:15:30 GMT"
}
] | 2015-01-09T00:00:00 |
[
[
"Escobar",
"Santiago",
"",
"Universitat Politécnica de Valéncia"
]
] |
new_dataset
| 0.988511 |
1501.01725
|
Seung-Eun Hong
|
Seung-Eun Hong and Kyoung-Sub Oh
|
Load-Modulated Single-RF MIMO Transmission for Spatially Multiplexed QAM
Signals
|
5 pages with 2-column format
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, MIMO has become an indispensable scheme for providing significant
spectral efficiency in wireless communication, and for future wireless systems
it is moving toward two extremes: massive MIMO and single-RF MIMO. This paper,
which belongs to the latter category, utilizes load-modulated arrays with only reactance
loads for single-RF transmission of spatially multiplexed QAM signals. To
alleviate the need for iterative processes while considering mutual coupling in
the compact antenna, we present a novel design methodology for the loading
network, which enables the exact computation of the three reactance loads per
antenna element and also the perfect matching to the source with the
opportunity to select appropriate analog tunable loads. We verify the design
methodology by comparing the calculated values for some key parameters with the
values from the circuit simulation. In addition, as an evaluation of the
proposed architecture, we perform the bit error rate (BER) comparison which
shows that our scheme with ideal loading is comparable to the conventional
MIMO.
|
[
{
"version": "v1",
"created": "Thu, 8 Jan 2015 04:16:19 GMT"
}
] | 2015-01-09T00:00:00 |
[
[
"Hong",
"Seung-Eun",
""
],
[
"Oh",
"Kyoung-Sub",
""
]
] |
new_dataset
| 0.99728 |
1501.01138
|
Jiyou Li
|
Jiyou Li, Daqing Wan and Jun Zhang
|
On the minimum distance of elliptic curve codes
|
13 pages
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computing the minimum distance of a linear code is one of the fundamental
problems in algorithmic coding theory. Vardy [14] showed that it is an \np-hard
problem for general linear codes. In practice, one often uses codes with
additional mathematical structure, such as AG codes. For AG codes of genus $0$
(generalized Reed-Solomon codes), the minimum distance has a simple explicit
formula. An interesting result of Cheng [3] says that the minimum distance
problem is already \np-hard (under \rp-reduction) for general elliptic curve
codes (ECAG codes, or AG codes of genus $1$). In this paper, we show that the
minimum distance of ECAG codes also has a simple explicit formula if the
evaluation set is suitably large (at least $2/3$ of the group order). Our
method is purely combinatorial and based on a new sieving technique from the
first two authors [8]. This method also proves a significantly stronger version
of the MDS (maximum distance separable) conjecture for ECAG codes.
|
[
{
"version": "v1",
"created": "Tue, 6 Jan 2015 10:42:31 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jan 2015 13:13:05 GMT"
}
] | 2015-01-08T00:00:00 |
[
[
"Li",
"Jiyou",
""
],
[
"Wan",
"Daqing",
""
],
[
"Zhang",
"Jun",
""
]
] |
new_dataset
| 0.998981 |
1501.01327
|
Rama Krishna Bandi
|
Rama Krishna Bandi and Maheshanand Bhaintwal
|
Cyclic codes over $\mathbb{Z}_4+u\mathbb{Z}_4$
|
arXiv admin note: text overlap with arXiv:1412.3751
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we have studied cyclic codes over the ring
$R=\mathbb{Z}_4+u\mathbb{Z}_4$, $u^2=0$. We have considered cyclic codes of odd
lengths. A sufficient condition for a cyclic code over $R$ to be a
$\mathbb{Z}_4$-free module is presented. We have provided the general form of
the generators of a cyclic code over $R$ and determined a formula for the ranks
of such codes. In this paper we have mainly focused on principally generated
cyclic codes of odd length over $R$. We have determined a necessary condition
and a sufficient condition for cyclic codes of odd lengths over $R$ to be
$R$-free.
|
[
{
"version": "v1",
"created": "Tue, 6 Jan 2015 22:19:02 GMT"
}
] | 2015-01-08T00:00:00 |
[
[
"Bandi",
"Rama Krishna",
""
],
[
"Bhaintwal",
"Maheshanand",
""
]
] |
new_dataset
| 0.993064 |
1501.01360
|
Gao Jian
|
Jian Gao, Minjia Shi, Tingting Wu, Fang-Wei Fu
|
On double cyclic codes over Z_4
|
16
| null | null | null |
cs.IT math.IT math.RA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $R=\mathbb{Z}_4$ be the ring of integers mod $4$. A double cyclic code of
length $(r,s)$ over $R$ is a set that can be partitioned into two parts such
that any cyclic shift of the coordinates of either part leaves the code invariant.
These codes can be viewed as $R[x]$-submodules of $R[x]/(x^r-1)\times
R[x]/(x^s-1)$. In this paper, we determine the generator polynomials of this
family of codes as $R[x]$-submodules of $R[x]/(x^r-1)\times R[x]/(x^s-1)$.
Further, we also give the minimal generating sets of this family of codes as
$R$-submodules of $R[x]/(x^r-1)\times R[x]/(x^s-1)$. Some optimal or suboptimal
nonlinear binary codes are obtained from this family of codes. Finally, we
determine the relationship of generators between the double cyclic code and its
dual.
|
[
{
"version": "v1",
"created": "Wed, 7 Jan 2015 03:33:13 GMT"
}
] | 2015-01-08T00:00:00 |
[
[
"Gao",
"Jian",
""
],
[
"Shi",
"Minjia",
""
],
[
"Wu",
"Tingting",
""
],
[
"Fu",
"Fang-Wei",
""
]
] |
new_dataset
| 0.995597 |
1501.01363
|
Charlie Volkstorf
|
Charles Volkstorf
|
Program Synthesis from Axiomatic Proof of Correctness
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Program Synthesis is the mapping of a specification of what a computer
program is supposed to do, into a computer program that does what the
specification says to do. This is equivalent to constructing any computer
program and a sound proof that it meets the given specification.
We axiomatically prove statements of the form: program PROG meets
specification SPEC. We derive 7 axioms from the definition of the PHP
programming language in which the programs are to be written. For each
primitive function or process described, we write a program that uses only that
feature (function or process), and we have an axiom that this program meets the
specification described. Generic ways to alter or combine programs, that meet
known specifications, into new programs that meet known specifications, are our
7 rules of inference.
To efficiently prove statements that some program meets a given
specification, we work backwards from the specification. We apply the inverses
of the rules to the specifications that we must meet, until we reach axioms
that are combined by these rules to prove that a particular program meets the
given specification. Due to their distinct nature, typically few inverse rules
apply. To avoid complex wff and program manipulation algorithms, we advocate
the use of simple table maintenance and look-up functions to simulate these
complexities as a prototype.
Examples Include:
"$B=FALSE ; for ($a=1;!($j<$a);++$a){ $A=FALSE ; if (($a*$i)==$j) $A=TRUE ;
if ($A) $B=TRUE ; } ; echo $B ;" and "echo ($j % $i) == 0" : Is one number a
factor of another?
"for ($a=1 ; !($i<$a) ;++$a) {if (($i%$a) == 0) echo $a ; }" : List the
factors of I.
"$A=FALSE ; for ($a=1;$a<$i;++$a){ if (1<$a) { if (($i % $a) == 0) $A=TRUE ;
} ; } ; echo (!($A)) && (!($i<2)) ;" : Is I a prime number?
|
[
{
"version": "v1",
"created": "Wed, 7 Jan 2015 03:57:24 GMT"
}
] | 2015-01-08T00:00:00 |
[
[
"Volkstorf",
"Charles",
""
]
] |
new_dataset
| 0.981965 |
1501.01364
|
Bharath Mk
|
S.M. Vaitheeswaran, Bharath M.K., and Gokul M
|
Leader Follower Formation Control of Ground Vehicles Using Camshift
Based Guidance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A formation control scheme for autonomous ground vehicles is designed that
relies on ranging and bearing information received from a forward-looking
camera. A visual guidance control algorithm is designed where real
time image processing is used to provide feedback signals. The vision subsystem
and control subsystem work in parallel to accomplish formation control. A
proportional navigation and line of sight guidance laws are used to estimate
the range and bearing information from the leader vehicle using the vision
subsystem. The algorithms for vision detection and localization used here are
similar to approaches for many computer vision tasks such as face tracking and
detection that are based on color- and texture-based features, and on
non-parametric Continuously Adaptive Mean-shift algorithms to keep track of the
leader. This
is being proposed for the first time in the leader follower framework. The
algorithms are simple but effective for real time and provide an alternate
approach to traditional approaches like the Viola-Jones algorithm.
Further, to stabilize the follower along the leader trajectory, a sliding mode
controller is used to dynamically track the leader. The performance of the
results is demonstrated in simulation and in practical experiments.
|
[
{
"version": "v1",
"created": "Wed, 7 Jan 2015 04:00:26 GMT"
}
] | 2015-01-08T00:00:00 |
[
[
"Vaitheeswaran",
"S. M.",
""
],
[
"K.",
"Bharath M.",
""
],
[
"M",
"Gokul",
""
]
] |
new_dataset
| 0.950452 |
1501.01588
|
Nadeem Akhtar
|
Nadeem Akhtar, Anique Akhtar
|
KitRobot: A multi-platform graphical programming IDE to program
mini-robotic agents
|
9 pages, IISTE - Computer Engineering and Intelligent Systems, ISSN
2222-1719 (Paper) ISSN 2222-2863 (Online) Vol.5, No.3, 2014
| null | null | null |
cs.PL cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis, design and development of a graphical programming IDE for
mini-robotic agents allows novice users to program robotic agents via a
graphical drag-and-drop interface, without knowing the syntax and semantics of
the intermediate programming language. Our work started with the definition of
the syntax and semantics of the intermediate programming language. The major
work is the definition of grammar for this language. The use of a graphical
drag and drop interface for programming mini-robots offers a user-friendly
interface to novice users. The user can program graphically by drag and drop
program parts without having expertise of the intermediate programming
language. The IDE is highly flexible as it uses xml technology to store program
objects (i.e. loops, conditions) and robot objects (i.e. sensors, actuators).
Use of xml technology allows making major changes and updating the interface
without modifying the underlying design and programming.
|
[
{
"version": "v1",
"created": "Wed, 7 Jan 2015 18:54:55 GMT"
}
] | 2015-01-08T00:00:00 |
[
[
"Akhtar",
"Nadeem",
""
],
[
"Akhtar",
"Anique",
""
]
] |
new_dataset
| 0.999749 |
1106.5651
|
Loet Leydesdorff
|
Loet Leydesdorff
|
Hyperincursive Cogitata and Incursive Cogitantes: Scholarly Discourse as
a Strongly Anticipatory System
|
arXiv admin note: substantial text overlap with arXiv:1011.3244
|
International Journal of Computing Anticipatory Systems, 28,
173-186
| null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strongly anticipatory systems-that is, systems which use models of themselves
for their further development-and which additionally may be able to run
hyperincursive routines-that is, develop only with reference to their future
states-cannot exist in res extensa, but can only be envisaged in res cogitans.
One needs incursive routines in cogitantes to instantiate these systems. Unlike
historical systems (with recursion), these hyper-incursive routines generate
redundancies by opening horizons of other possible states. Thus, intentional
systems can enrich our perceptions of the cases that have happened to occur.
The perspective of hindsight codified at the above-individual level enables us
furthermore to intervene technologically. The theory and computation of
anticipatory systems have made these loops between supra-individual
hyper-incursion, individual incursion (in instantiation), and historical
recursion accessible for modeling and empirical investigation.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2011 12:56:19 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jan 2015 06:01:18 GMT"
}
] | 2015-01-07T00:00:00 |
[
[
"Leydesdorff",
"Loet",
""
]
] |
new_dataset
| 0.996915 |
1202.1547
|
Bernhard von Stengel
|
Penelope Hernandez and Bernhard von Stengel
|
Nash Codes for Noisy Channels
|
More general main Theorem 6.5 with better proof. New examples and
introduction
|
Operations Research 62:6, 1221-1235 (2014)
|
10.1287/opre.2014.1311
| null |
cs.GT cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the stability of communication protocols that deal with
transmission errors. We consider a coordination game between an informed sender
and an uninformed decision maker, the receiver, who communicate over a noisy
channel. The sender's strategy, called a code, maps states of nature to
signals. The receiver's best response is to decode the received channel output
as the state with highest expected receiver payoff. Given this decoding, an
equilibrium or "Nash code" results if the sender encodes every state as
prescribed. We show two theorems that give sufficient conditions for Nash
codes. First, a receiver-optimal code defines a Nash code. A second, more
surprising observation holds for communication over a binary channel which is
used independently a number of times, a basic model of information
transmission: Under a minimal "monotonicity" requirement for breaking ties when
decoding, which holds generically, EVERY code is a Nash code.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2012 22:20:04 GMT"
},
{
"version": "v2",
"created": "Sun, 6 May 2012 23:41:49 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Feb 2014 15:30:22 GMT"
}
] | 2015-01-07T00:00:00 |
[
[
"Hernandez",
"Penelope",
""
],
[
"von Stengel",
"Bernhard",
""
]
] |
new_dataset
| 0.9993 |
1203.1528
|
Johnny Karout
|
Johnny Karout (Student Member, IEEE), Gerhard Kramer (Fellow, IEEE),
Frank R. Kschischang (Fellow, IEEE), and Erik Agrell
|
A Two-Dimensional Signal Space for Intensity-Modulated Channels
|
Submitted to IEEE Communications Letters, Feb. 2012
|
IEEE Communications Letters, vol. 16, no. 9, pp. 1361-1364, Sept.
2012
|
10.1109/LCOMM.2012.072012.121057
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A two-dimensional signal space for intensity-modulated channels is
presented. Modulation formats using this signal space are designed to maximize
the minimum distance between signal points while satisfying average and peak
power constraints. The uncoded, high-signal-to-noise ratio, power and spectral
efficiencies are compared to those of the best known formats. The new formats
are simpler than existing subcarrier formats, and are superior if the bandwidth
is measured as 90% in-band power. Existing subcarrier formats are better if the
bandwidth is measured as 99% in-band power.
|
[
{
"version": "v1",
"created": "Wed, 7 Mar 2012 16:35:41 GMT"
}
] | 2015-01-07T00:00:00 |
[
[
"Karout",
"Johnny",
"",
"Student Member, IEEE"
],
[
"Kramer",
"Gerhard",
"",
"Fellow, IEEE"
],
[
"Kschischang",
"Frank R.",
"",
"Fellow, IEEE"
],
[
"Agrell",
"Erik",
""
]
] |
new_dataset
| 0.99903 |
1302.1219
|
Stanislav Poslavsky
|
D.A. Bolotin and S.V. Poslavsky
|
Introduction to Redberry: a computer algebra system designed for tensor
manipulation
|
27 pages, 2 figures
| null | null | null |
cs.SC hep-ph hep-th
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce Redberry --- an open source computer algebra
system with native support of tensorial expressions. It provides basic computer
algebra tools (algebraic manipulations, substitutions, basic simplifications
etc.) which are aware of specific features of indexed expressions: contractions
of indices, permutational symmetries, multiple index types etc. Redberry
supports conventional \LaTeX-style input notation for tensorial expressions.
The high energy physics package includes tools for Feynman diagrams
calculation: Dirac and SU(N) algebra, Levi-Civita simplifications and tools for
one-loop calculations in quantum field theory. In the paper we give detailed
overview of Redberry features: from basic manipulations with tensors to real
Feynman diagrams calculation, accompanied by many examples. Redberry is written
in Java 7 and provides convenient Groovy-based user interface inside the
high-level general purpose programming language environment. Redberry is
available from http://redberry.cc
|
[
{
"version": "v1",
"created": "Tue, 5 Feb 2013 22:15:43 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jan 2015 19:51:01 GMT"
}
] | 2015-01-07T00:00:00 |
[
[
"Bolotin",
"D. A.",
""
],
[
"Poslavsky",
"S. V.",
""
]
] |
new_dataset
| 0.987355 |