id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1506.00425
|
Bo Jiang
|
Bo Jiang and Jiaying Wu and Xiuyu Shi and Ruhuan Huang
|
Hadoop Scheduling Base On Data Locality
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Hadoop, job scheduling is an independent module: users can design their
own job schedulers based on their actual application requirements and thereby
meet specific business needs. Currently, Hadoop provides three schedulers: the
FIFO, capacity, and fair scheduling policies, all of which adopt task
allocation strategies that treat data locality only superficially. They
neither support data locality well nor apply to all cases of job scheduling.
In this paper, we take the concept of resource prefetching into consideration
and propose a job scheduling algorithm based on data locality. By estimating
the remaining time to complete a task and comparing it with the time to
transfer a resource block, we preselect candidate nodes for task allocation.
We then preselect non-local map tasks from the unfinished job queue as
resource-prefetch tasks. Using information about the resource blocks of a
preselected map task, the nearest resource block is selected from a candidate
node and transferred locally over the network, thereby ensuring good data
locality. Finally, we design an experiment showing that the resource-prefetch
method guarantees good job data locality and reduces job completion time to a
certain extent.
|
[
{
"version": "v1",
"created": "Mon, 1 Jun 2015 10:25:09 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Jiang",
"Bo",
""
],
[
"Wu",
"Jiaying",
""
],
[
"Shi",
"Xiuyu",
""
],
[
"Huang",
"Ruhuan",
""
]
] |
new_dataset
| 0.996671 |
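The core decision rule described in the abstract above, prefetching a block to a node only when the node's current task is expected to outlast the transfer, can be sketched in a few lines. The function name and data shapes below are illustrative assumptions, not taken from the paper:

```python
def preselect_candidates(remaining_time, transfer_time):
    """Candidate nodes for resource prefetching: those whose current task's
    estimated remaining time exceeds the time to transfer one resource block,
    so the block arrives before the node becomes idle.

    remaining_time: dict mapping node name -> estimated seconds left on its task.
    transfer_time: estimated seconds to move one resource block over the network.
    """
    return [node for node, t in remaining_time.items() if t > transfer_time]
```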
1506.00462
|
Ulrich Pferschy
|
Andreas Darmann and Ulrich Pferschy and Joachim Schauer
|
On the shortest path game: extended version
|
Extended version contains the full description of the dynamic
programming arrays in Section 4
| null | null | null |
cs.DM cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we address a game theoretic variant of the shortest path
problem, in which two decision makers (players) move together along the edges
of a graph from a given starting vertex to a given destination. The two players
take turns in deciding in each vertex which edge to traverse next. The decider
in each vertex also has to pay the cost of the chosen edge. We want to
determine the path where each player minimizes its cost, taking into account
that the other player also acts in a selfish and rational way. Such a solution
is a subgame perfect equilibrium and can be determined by backward induction in
the game tree of the associated finite game in extensive form.
We show that the decision problem associated with such a path is
PSPACE-complete even for bipartite graphs both for the directed and the
undirected version. The latter result is a surprising deviation from the
complexity status of the closely related game Geography.
On the other hand, we can give polynomial time algorithms for directed
acyclic graphs and for cactus graphs even in the undirected case. The latter is
based on a decomposition of the graph into components and their resolution by a
number of fairly involved dynamic programming arrays. Finally, we give some
arguments about closing the gap of the complexity status for graphs of bounded
treewidth.
|
[
{
"version": "v1",
"created": "Mon, 1 Jun 2015 12:08:12 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Darmann",
"Andreas",
""
],
[
"Pferschy",
"Ulrich",
""
],
[
"Schauer",
"Joachim",
""
]
] |
new_dataset
| 0.978311 |
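The backward-induction computation mentioned in the abstract above can be illustrated for the directed acyclic case. This is a rough sketch under my own assumptions about the graph encoding (dict of vertex to (successor, edge cost) pairs); it is not the paper's algorithm for cactus graphs:

```python
import functools

def spe_solver(graph, target):
    """Subgame perfect equilibrium costs by backward induction on a DAG.

    State (v, turn): player `turn` chooses the next edge at vertex v and pays
    its cost; each player minimizes his own total cost assuming the other
    player plays the same way downstream.
    Returns a function solve(v, turn) -> (cost_player0, cost_player1).
    """
    @functools.lru_cache(maxsize=None)
    def solve(v, turn):
        if v == target:
            return (0, 0)
        best = None
        for w, c in graph[v]:
            c0, c1 = solve(w, 1 - turn)
            outcome = (c0 + c, c1) if turn == 0 else (c0, c1 + c)
            # the mover keeps the outcome that is cheapest for himself
            if best is None or outcome[turn] < best[turn]:
                best = outcome
        return best
    return solve
```

On a toy graph, player 0 takes the branch that is cheap for him even when it is expensive for player 1, which is exactly the selfishness the equilibrium notion captures.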
1506.00590
|
Long Thai MSc
|
Long Thai, Blesson Varghese, Adam Barker
|
Executing Bag of Distributed Tasks on Virtually Unlimited Cloud
Resources
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Bag-of-Distributed-Tasks (BoDT) application is a collection of identical
and independent tasks, each of which requires a piece of input data located
somewhere around the world. Cloud computing therefore offers an effective way
to execute a BoDT application, as the Cloud not only consists of multiple
geographically distributed data centres but also allows a user to pay only for
what she actually uses. In this paper, we study executing BoDT on the Cloud
using virtually unlimited cloud resources. A heuristic algorithm is proposed
to find an execution plan that takes budget constraints into account. Compared
with other approaches, with the same given budget, our algorithm is able to
reduce the overall execution time by up to 50%.
|
[
{
"version": "v1",
"created": "Mon, 1 Jun 2015 17:57:09 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Thai",
"Long",
""
],
[
"Varghese",
"Blesson",
""
],
[
"Barker",
"Adam",
""
]
] |
new_dataset
| 0.999095 |
cs/0205032
|
Neal Young
|
Naveen Garg, Neal E. Young
|
On-Line End-to-End Congestion Control
|
Proceedings IEEE Symp. Foundations of Computer Science, 2002
|
The 43rd Annual IEEE Symposium on Foundations of Computer Science,
303-310 (2002)
|
10.1109/SFCS.2002.1181953
| null |
cs.DS cs.CC cs.NI
| null |
Congestion control in the current Internet is accomplished mainly by TCP/IP.
To understand the macroscopic network behavior that results from TCP/IP and
similar end-to-end protocols, one main analytic technique is to show that
the protocol maximizes some global objective function of the network traffic.
Here we analyze a particular end-to-end, MIMD (multiplicative-increase,
multiplicative-decrease) protocol. We show that if all users of the network use
the protocol, and all connections last for at least logarithmically many
rounds, then the total weighted throughput (value of all packets received) is
near the maximum possible. Our analysis includes round-trip-times, and (in
contrast to most previous analyses) gives explicit convergence rates, allows
connections to start and stop, and allows capacities to change.
|
[
{
"version": "v1",
"created": "Sat, 18 May 2002 01:54:56 GMT"
},
{
"version": "v2",
"created": "Sat, 31 Aug 2002 03:43:22 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Garg",
"Naveen",
""
],
[
"Young",
"Neal E.",
""
]
] |
new_dataset
| 0.980823 |
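The multiplicative-increase, multiplicative-decrease rule analyzed in the abstract above reduces to one scaling step per round. The constants and function names here are illustrative, not taken from the paper:

```python
def mimd_step(rate, congested, up=1.1, down=0.5):
    """One MIMD round: scale the sending rate up multiplicatively when no
    congestion signal was received, and down multiplicatively when one was."""
    return rate * (down if congested else up)

def run(rate, signals, up=1.1, down=0.5):
    """Apply MIMD over a sequence of per-round congestion signals."""
    for s in signals:
        rate = mimd_step(rate, s, up, down)
    return rate
```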
cs/0205033
|
Neal Young
|
Neal E. Young
|
On-Line File Caching
|
ACM-SIAM Symposium on Discrete Algorithms (1998)
|
Algorithmica 33:371-383 (2002)
|
10.1007/s00453-001-0124-5
| null |
cs.DS cs.CC cs.NI
| null |
In the on-line file-caching problem, the input is a sequence of
requests for files, given on-line (one at a time). Each file has a non-negative
size and a non-negative retrieval cost. The problem is to decide which files to
keep in a fixed-size cache so as to minimize the sum of the retrieval costs for
files that are not in the cache when requested. The problem arises in web
caching by browsers and by proxies. This paper describes a natural
generalization of LRU called Landlord and gives an analysis showing that it has
an optimal performance guarantee (among deterministic on-line algorithms).
The paper also gives an analysis of the algorithm in a so-called ``loosely''
competitive model, showing that on a ``typical'' cache size, either the
performance guarantee is O(1) or the total retrieval cost is insignificant.
|
[
{
"version": "v1",
"created": "Sat, 18 May 2002 02:04:59 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Young",
"Neal E.",
""
]
] |
new_dataset
| 0.998954 |
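The Landlord generalization of LRU named in the abstract above is usually described as a credit scheme: each cached file holds credit up to its retrieval cost, and when space is needed all files are "charged rent" in proportion to size until some credit reaches zero. A minimal sketch, with data shapes of my own choosing and the hit-time reset to full cost being one allowed choice within the algorithm's family:

```python
def make_room(cache, credit, size, needed, capacity):
    """Charge rent (delta * size) uniformly, then evict zero-credit files
    until `needed` units of space fit. `cache` is a list of file names."""
    used = sum(size[f] for f in cache)
    while used + needed > capacity and cache:
        delta = min(credit[f] / size[f] for f in cache)
        for f in cache:
            credit[f] -= delta * size[f]
        for f in [f for f in cache if credit[f] <= 1e-12]:
            if used + needed <= capacity:
                break
            cache.remove(f)
            used -= size[f]

def request(f, cache, credit, size, cost, capacity):
    """Serve one request; return the retrieval cost paid (0 on a hit)."""
    if f in cache:
        credit[f] = cost[f]   # refresh credit on a hit (makes Landlord LRU-like)
        return 0
    make_room(cache, credit, size, size[f], capacity)
    cache.append(f)
    credit[f] = cost[f]
    return cost[f]
```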
cs/0205048
|
Neal E. Young
|
Mordecai Golin, Claire Mathieu, Neal E. Young
|
Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme
| null |
SIAM Journal on Computing 41(3):684-713(2012)
|
10.1137/100794092
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a polynomial-time approximation scheme for the generalization of
Huffman Coding in which codeword letters have non-uniform costs (as in Morse
code, where the dash is twice as long as the dot). The algorithm computes a
(1+epsilon)-approximate solution in time O(n + f(epsilon) log^3 n), where n is
the input size.
|
[
{
"version": "v1",
"created": "Sat, 18 May 2002 18:57:04 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Apr 2012 19:50:06 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Golin",
"Mordecai",
""
],
[
"Mathieu",
"Claire",
""
],
[
"Young",
"Neal E.",
""
]
] |
new_dataset
| 0.963401 |
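For concreteness, the objective being approximated in the record above, expected codeword cost under non-uniform letter costs, can be written out directly. Names and the toy Morse-style costs are illustrative:

```python
def expected_cost(code, letter_cost, prob):
    """Expected transmission cost of a code: sum over words of
    P(word) times the total cost of the letters in its codeword.
    With Morse-like costs, a dash counts twice as much as a dot."""
    return sum(p * sum(letter_cost[ch] for ch in code[w])
               for w, p in prob.items())
```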
cs/0205049
|
Neal Young
|
Mordecai Golin, Neal E. Young
|
Prefix Codes: Equiprobable Words, Unequal Letter Costs
|
proceedings version in ICALP (1994)
|
SIAM J. Computing 25(6):1281-1304 (1996)
|
10.1137/S0097539794268388
| null |
cs.DS
| null |
Describes a near-linear-time algorithm for a variant of Huffman coding, in
which the letters may have non-uniform lengths (as in Morse code), but with the
restriction that each word to be encoded has equal probability. [See also
``Huffman Coding with Unequal Letter Costs'' (2002).]
|
[
{
"version": "v1",
"created": "Sat, 18 May 2002 19:05:55 GMT"
}
] | 2015-06-02T00:00:00 |
[
[
"Golin",
"Mordecai",
""
],
[
"Young",
"Neal E.",
""
]
] |
new_dataset
| 0.999549 |
1310.3902
|
Shaoquan Jiang
|
Dajiang Chen, Shaoquan Jiang, Zhiguang Qin
|
Message Authentication Code over a Wiretap Channel
|
Formulation of model is changed
|
ISIT 2015
| null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Message Authentication Code (MAC) is a keyed function $f_K$ such that when
Alice, who shares the secret $K$ with Bob, sends $f_K(M)$ to the latter, Bob
will be assured of the integrity and authenticity of $M$. Traditionally, it is
assumed that the channel is noiseless. However, Maurer showed that in this case
an attacker can succeed with probability $2^{-\frac{H(K)}{\ell+1}}$ after
authenticating $\ell$ messages. In this paper, we consider the setting where
the channel is noisy. Specifically, Alice and Bob are connected by a discrete
memoryless channel (DMC) $W_1$ and a noiseless but insecure channel. In
addition, an attacker Oscar is connected with Alice through DMC $W_2$ and with
Bob through a noiseless channel. In this setting, we study the framework that
sends $M$ over the noiseless channel and the traditional MAC $f_K(M)$ over
channel $(W_1, W_2)$. We regard the noisy channel as an expensive resource and
define the authentication rate $\rho_{auth}$ as the ratio of message length to
the number $n$ of channel $W_1$ uses. The security of this framework depends on
the channel coding scheme for $f_K(M)$. A natural coding scheme is to use the
secrecy capacity achieving code of Csisz\'{a}r and K\"{o}rner. Intuitively,
this is also the optimal strategy. However, we propose a coding scheme that
achieves a higher $\rho_{auth}.$ Our crucial point for this is that in the
secrecy capacity setting, Bob needs to recover $f_K(M)$ while in our coding
scheme this is not necessary. How to detect the attack without recovering
$f_K(M)$ is the main contribution of this work. We achieve this through random
coding techniques.
|
[
{
"version": "v1",
"created": "Tue, 15 Oct 2013 02:46:58 GMT"
},
{
"version": "v2",
"created": "Fri, 29 May 2015 17:46:37 GMT"
}
] | 2015-06-01T00:00:00 |
[
[
"Chen",
"Dajiang",
""
],
[
"Jiang",
"Shaoquan",
""
],
[
"Qin",
"Zhiguang",
""
]
] |
new_dataset
| 0.995906 |
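Maurer's noiseless-channel bound quoted in the abstract above is simply an exponential in the key entropy divided by the number of observed messages plus one; a sketch:

```python
def maurer_bound(key_entropy_bits, ell):
    """Attacker success probability bound 2^(-H(K)/(ell+1)) after
    observing ell authenticated messages over a noiseless channel."""
    return 2.0 ** (-key_entropy_bits / (ell + 1))
```

Each additional observed message shrinks the exponent, i.e. raises the attacker's success probability, which is the motivation for studying the noisy-channel setting.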
1402.3545
|
Eike Hermann M\"uller
|
Eike Hermann M\"uller, Robert Scheichl, Eero Vainikko
|
Petascale elliptic solvers for anisotropic PDEs on GPU clusters
|
20 pages, 6 figures. Additional explanations and clarifications of
the characteristics of the PDE; discussion and estimate of the condition
number. Added section and figure on the robustness of both the single-level
and the multigrid method under variations of the Courant number. Clarified
the terminology in the performance analysis. Added section on preliminary
strong scaling results
| null | null | null |
cs.DC cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory bound applications such as solvers for large sparse systems of
equations remain a challenge for GPUs. Fast solvers should be based on
numerically efficient algorithms and implemented such that global memory access
is minimised. To solve systems with up to one trillion ($10^{12}$) unknowns the
code has to make efficient use of several million individual processor cores on
large GPU clusters. We describe the multi-GPU implementation of two
algorithmically optimal iterative solvers for anisotropic elliptic PDEs which
are encountered in atmospheric modelling. In this application the condition
number is large but independent of the grid resolution and both methods are
asymptotically optimal, albeit with different absolute performance. We
parallelise the solvers and adapt them to the specific features of GPU
architectures, paying particular attention to efficient global memory access.
We achieve a performance of up to 0.78 PFLOPs when solving an equation with
$0.55\cdot 10^{12}$ unknowns on 16384 GPUs; this corresponds to about $3\%$ of
the theoretical peak performance of the machine and we use more than $40\%$ of
the peak memory bandwidth with a Conjugate Gradient (CG) solver. Although the
other solver, a geometric multigrid algorithm, has a slightly worse performance
in terms of FLOPs per second, overall it is faster as it needs fewer iterations
to converge; the multigrid algorithm can solve a linear PDE with half a
trillion unknowns in about one second.
|
[
{
"version": "v1",
"created": "Fri, 14 Feb 2014 18:30:04 GMT"
},
{
"version": "v2",
"created": "Fri, 29 May 2015 10:56:36 GMT"
}
] | 2015-06-01T00:00:00 |
[
[
"Müller",
"Eike Hermann",
""
],
[
"Scheichl",
"Robert",
""
],
[
"Vainikko",
"Eero",
""
]
] |
new_dataset
| 0.992051 |
1502.05110
|
Chuan Qin
|
Mingqiang Li, Chuan Qin, Patrick P. C. Lee
|
CDStore: Toward Reliable, Secure, and Cost-Efficient Cloud Storage via
Convergent Dispersal
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CDStore, which disperses users' backup data across multiple clouds
and provides a unified multi-cloud storage solution with reliability, security,
and cost-efficiency guarantees. CDStore builds on an augmented secret sharing
scheme called convergent dispersal, which supports deduplication by using
deterministic content-derived hashes as inputs to secret sharing. We present
the design of CDStore, and in particular, describe how it combines convergent
dispersal with two-stage deduplication to achieve both bandwidth and storage
savings and be robust against side-channel attacks. We evaluate the performance
of our CDStore prototype using real-world workloads on LAN and commercial cloud
testbeds. Our cost analysis also demonstrates that CDStore achieves a monetary
cost saving of 70% over a baseline cloud storage solution using
state-of-the-art secret sharing.
|
[
{
"version": "v1",
"created": "Wed, 18 Feb 2015 04:03:11 GMT"
},
{
"version": "v2",
"created": "Fri, 29 May 2015 08:13:30 GMT"
}
] | 2015-06-01T00:00:00 |
[
[
"Li",
"Mingqiang",
""
],
[
"Qin",
"Chuan",
""
],
[
"Lee",
"Patrick P. C.",
""
]
] |
new_dataset
| 0.998468 |
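The key idea behind convergent dispersal, deriving the sharing "randomness" deterministically from a content hash so that identical plaintexts produce identical shares and can be deduplicated, can be illustrated with a toy 2-of-2 XOR split. This is my own simplified stand-in, not CDStore's actual secret sharing scheme:

```python
import hashlib

def convergent_split(data: bytes):
    """Toy 2-of-2 XOR sharing with the pad derived from the content hash:
    identical inputs always yield identical shares, so duplicate blocks
    uploaded by different users deduplicate to the same stored shares."""
    seed = hashlib.sha256(data).digest()
    # stretch the 32-byte hash to the length of the data
    pad = (seed * (len(data) // len(seed) + 1))[:len(data)]
    return pad, bytes(a ^ b for a, b in zip(data, pad))

def combine(s1: bytes, s2: bytes) -> bytes:
    """Recover the data by XORing the two shares back together."""
    return bytes(a ^ b for a, b in zip(s1, s2))
```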
1505.07923
|
Anirban Dasgupta
|
Anirban Dasgupta, Aurobinda Routray
|
Fast Computation of PERCLOS and Saccadic Ratio
|
MS Thesis
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This thesis describes the development of fast algorithms for the computation
of PERcentage CLOSure of eyes (PERCLOS) and Saccadic Ratio (SR). PERCLOS and SR
are two ocular parameters reported to be measures of alertness levels in human
beings. PERCLOS is the percentage of time in which at least 80% of the eyelid
remains closed over the pupil. Saccades are fast, simultaneous movements of
both eyes in the same direction. SR is the ratio of peak saccadic velocity
to the saccadic duration. This thesis addresses the issues of image based
estimation of PERCLOS and SR, prevailing in the literature such as illumination
variation, poor illumination conditions, head rotations, etc. In this work,
algorithms for real-time PERCLOS computation have been developed and implemented
on an embedded platform. The platform has been used as a case study for
assessment of loss of attention in automotive drivers. The SR estimation has
been carried out offline as real-time implementation requires high frame rates
of processing which is difficult to achieve due to hardware limitations. The
accuracy in estimation of the loss of attention using PERCLOS and SR has been
validated using brain signals, which are reported to be an authentic cue for
estimating the state of alertness in human beings. The major contributions of
this thesis include database creation, design and implementation of fast
algorithms for estimating PERCLOS and SR on embedded computing platforms.
|
[
{
"version": "v1",
"created": "Fri, 29 May 2015 05:04:30 GMT"
}
] | 2015-06-01T00:00:00 |
[
[
"Dasgupta",
"Anirban",
""
],
[
"Routray",
"Aurobinda",
""
]
] |
new_dataset
| 0.964943 |
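The two ocular parameters defined in the abstract above reduce to short formulas; a sketch with hypothetical input shapes (a per-frame eyelid-closure fraction for PERCLOS):

```python
def perclos(closure, threshold=0.8):
    """PERCLOS: percentage of frames in which eyelid closure over the
    pupil is at least the threshold (80% by definition)."""
    frames = list(closure)
    return 100.0 * sum(c >= threshold for c in frames) / len(frames)

def saccadic_ratio(peak_velocity, duration):
    """SR: peak saccadic velocity divided by saccade duration."""
    return peak_velocity / duration
```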
1505.07930
|
Tam Nguyen
|
Tam V. Nguyen, Jose Sepulveda
|
Salient Object Detection via Augmented Hypotheses
|
IJCAI 2015 paper
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/3.0/
|
In this paper, we propose using \textit{augmented hypotheses} which consider
objectness, foreground and compactness for salient object detection. Our
algorithm consists of four basic steps. First, our method generates the
objectness map via objectness hypotheses. Based on the objectness map, we
estimate the foreground margin and compute the corresponding foreground map
which prefers the foreground objects. From the objectness map and the
foreground map, the compactness map is formed to favor the compact objects. We
then derive a saliency measure that produces a pixel-accurate saliency map
which uniformly covers the objects of interest and consistently separates fore-
and background. We finally evaluate the proposed framework on two challenging
datasets, MSRA-1000 and iCoSeg. Our extensive experimental results show that
our method outperforms state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Fri, 29 May 2015 06:03:57 GMT"
}
] | 2015-06-01T00:00:00 |
[
[
"Nguyen",
"Tam V.",
""
],
[
"Sepulveda",
"Jose",
""
]
] |
new_dataset
| 0.994196 |
1109.0697
|
Han-Xin Yang
|
Han-Xin Yang, Wen-Xu Wang, Zhi-Xi Wu, and Bing-Hong Wang
|
Traffic dynamics in scale-free networks with limited packet-delivering
capacity
| null |
Physica A 387 (2008) 6857-6862
|
10.1016/j.physa.2008.09.016
| null |
cs.NI physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
We propose a limited packet-delivering capacity model for traffic dynamics in
scale-free networks. In this model, the total packet-delivering capacity of the nodes
is fixed, and the allocation of packet-delivering capacity on node $i$ is
proportional to $k_{i}^{\phi}$, where $k_{i}$ is the degree of node $i$ and
$\phi$ is an adjustable parameter. We have applied this model to the shortest
path routing strategy as well as the local routing strategy, and found that
there exists an optimal value of parameter $\phi$ leading to the maximal
network capacity under both routing strategies. We provide some explanations
for the emergence of optimal $\phi$.
|
[
{
"version": "v1",
"created": "Sun, 4 Sep 2011 11:35:03 GMT"
}
] | 2015-05-30T00:00:00 |
[
[
"Yang",
"Han-Xin",
""
],
[
"Wang",
"Wen-Xu",
""
],
[
"Wu",
"Zhi-Xi",
""
],
[
"Wang",
"Bing-Hong",
""
]
] |
new_dataset
| 0.991488 |
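The allocation rule in the abstract above, node capacity proportional to $k_i^{\phi}$ under a fixed total, is one line of arithmetic. The function name is illustrative:

```python
def allocate_capacity(degrees, total, phi):
    """Split a fixed total packet-delivering capacity so that node i
    receives a share proportional to degrees[i] ** phi."""
    weights = [k ** phi for k in degrees]
    norm = sum(weights)
    return [total * w / norm for w in weights]
```

Setting phi = 0 gives a uniform split, while larger phi concentrates capacity on hubs; the paper's point is that some intermediate phi maximizes network capacity.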
1110.5844
|
Anirban Bandyopadhyay
|
Anirban Bandyopadhyay, Ranjit Pati, Satyajit Sahu, Ferdinand Peper,
Daisuke Fujita
|
Massively parallel computing on an organic molecular layer
|
25 pages, 6 figures
|
Nature Physics 6, 369 (2010)
|
10.1038/nphys1636
| null |
cs.ET physics.comp-ph
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Current computers operate at enormous speeds of ~10^13 bits/s, but their
principle of sequential logic operation has remained unchanged since the 1950s.
Though our brain is much slower on a per-neuron base (~10^3 firings/s), it is
capable of remarkable decision-making based on the collective operations of
millions of neurons at a time in ever-evolving neural circuitry. Here we use
molecular switches to build an assembly where each molecule communicates-like
neurons-with many neighbors simultaneously. The assembly's ability to
reconfigure itself spontaneously for a new problem allows us to realize
conventional computing constructs like logic gates and Voronoi decompositions,
as well as to reproduce two natural phenomena: heat diffusion and the mutation
of normal cells to cancer cells. This is a shift from the current static
computing paradigm of serial bit-processing to a regime in which a large number
of bits are processed in parallel in dynamically changing hardware.
|
[
{
"version": "v1",
"created": "Mon, 17 Oct 2011 04:36:32 GMT"
}
] | 2015-05-30T00:00:00 |
[
[
"Bandyopadhyay",
"Anirban",
""
],
[
"Pati",
"Ranjit",
""
],
[
"Sahu",
"Satyajit",
""
],
[
"Peper",
"Ferdinand",
""
],
[
"Fujita",
"Daisuke",
""
]
] |
new_dataset
| 0.985244 |
1407.7146
|
Daniel Zappala
|
Mark O'Neill, Scott Ruoti, Kent Seamons, Daniel Zappala
|
TLS Proxies: Friend or Foe?
| null | null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of TLS proxies to intercept encrypted traffic is controversial since
the same mechanism can be used for both benevolent purposes, such as protecting
against malware, and for malicious purposes, such as identity theft or
warrantless government surveillance. To understand the prevalence and uses of
these proxies, we build a TLS proxy measurement tool and deploy it via Google
AdWords campaigns. We generate 15.2 million certificate tests across two
large-scale measurement studies. We find that 1 in 250 TLS connections are
TLS-proxied. The majority of these proxies appear to be benevolent, however we
identify over 3,600 cases where eight malware products are using this
technology nefariously. We also find numerous instances of negligent,
duplicitous, and suspicious behavior, some of which degrade security for users
without their knowledge. Distinguishing these types of practices is challenging
in practice, indicating a need for transparency and user awareness.
|
[
{
"version": "v1",
"created": "Sat, 26 Jul 2014 18:33:03 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Mar 2015 21:58:47 GMT"
},
{
"version": "v3",
"created": "Thu, 28 May 2015 17:41:07 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"O'Neill",
"Mark",
""
],
[
"Ruoti",
"Scott",
""
],
[
"Seamons",
"Kent",
""
],
[
"Zappala",
"Daniel",
""
]
] |
new_dataset
| 0.956007 |
1410.3560
|
Ryan Rossi
|
Ryan A. Rossi and Nesreen K. Ahmed
|
NetworkRepository: An Interactive Data Repository with Multi-scale
Visual Analytics
|
AAAI 2015 DT
| null | null | null |
cs.DL cs.HC cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network Repository (NR) is the first interactive data repository with a
web-based platform for visual interactive analytics. Unlike other data
repositories (e.g., UCI ML Data Repository, and SNAP), the network data
repository (networkrepository.com) allows users to not only download, but to
interactively analyze and visualize such data using our web-based interactive
graph analytics platform. Users can in real-time analyze, visualize, compare,
and explore data along many different dimensions. The aim of NR is to make it
easy to discover key insights into the data extremely fast with little effort
while also providing a medium for users to share data, visualizations, and
insights. Other key factors that differentiate NR from the current data
repositories are the number of graph datasets, their size, and variety. While
other data repositories are static, they also lack a means for users to
collaboratively discuss a particular dataset, corrections, or challenges with
using the data for certain applications. In contrast, we have incorporated many
social and collaborative aspects into NR in hopes of further facilitating
scientific research (e.g., users can discuss each graph, post observations,
visualizations, etc.).
|
[
{
"version": "v1",
"created": "Tue, 14 Oct 2014 03:35:37 GMT"
},
{
"version": "v2",
"created": "Thu, 28 May 2015 19:58:23 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"Rossi",
"Ryan A.",
""
],
[
"Ahmed",
"Nesreen K.",
""
]
] |
new_dataset
| 0.971648 |
1505.07502
|
Lavanya Subramanian
|
Hiroyuki Usui, Lavanya Subramanian, Kevin Chang, Onur Mutlu
|
SQUASH: Simple QoS-Aware High-Performance Memory Scheduler for
Heterogeneous Systems with Hardware Accelerators
| null | null | null |
SAFARI Technical Report No. 2015-003
|
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern SoCs integrate multiple CPU cores and Hardware Accelerators (HWAs)
that share the same main memory system, causing interference among memory
requests from different agents. The result of this interference, if not
controlled well, is missed deadlines for HWAs and low CPU performance.
State-of-the-art mechanisms designed for CPU-GPU systems strive to meet a
target frame rate for GPUs by prioritizing the GPU close to the time when it
has to complete a frame. We observe two major problems when such an approach is
adapted to a heterogeneous CPU-HWA system. First, HWAs miss deadlines because
they are prioritized only close to their deadlines. Second, such an approach
does not consider the diverse memory access characteristics of different
applications running on CPUs and HWAs, leading to low performance for
latency-sensitive CPU applications and deadline misses for some HWAs, including
GPUs.
In this paper, we propose a Simple Quality of service Aware memory Scheduler
for Heterogeneous systems (SQUASH), which overcomes these problems using three
key ideas, with the goal of meeting deadlines of HWAs while providing high CPU
performance. First, SQUASH prioritizes a HWA when it is not on track to meet
its deadline any time during a deadline period. Second, SQUASH prioritizes HWAs
over memory-intensive CPU applications based on the observation that the
performance of memory-intensive applications is not sensitive to memory
latency. Third, SQUASH treats short-deadline HWAs differently as they are more
likely to miss their deadlines and schedules their requests based on worst-case
memory access time estimates.
Extensive evaluations across a wide variety of different workloads and
systems show that SQUASH achieves significantly better CPU performance than the
best previous scheduler while always meeting the deadlines for all HWAs,
including GPUs, thereby largely improving frame rates.
|
[
{
"version": "v1",
"created": "Wed, 27 May 2015 22:07:28 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"Usui",
"Hiroyuki",
""
],
[
"Subramanian",
"Lavanya",
""
],
[
"Chang",
"Kevin",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.989774 |
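SQUASH's first idea, prioritizing a HWA whenever it is not on track to meet its deadline, can be phrased as a simple progress check. The names below are mine, not the paper's:

```python
def hwa_needs_priority(work_done, work_total, elapsed, period):
    """True when the accelerator's completed fraction of work lags the
    elapsed fraction of its deadline period, i.e. it is not on track."""
    return work_done / work_total < elapsed / period
```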
1505.07548
|
Andrew Smith
|
Jian Lou and Andrew M. Smith and Yevgeniy Vorobeychik
|
Multidefender Security Games
| null | null | null | null |
cs.GT cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stackelberg security game models and associated computational tools have seen
deployment in a number of high-consequence security settings, such as LAX
canine patrols and Federal Air Marshal Service. These models focus on isolated
systems with only one defender, despite being part of a more complex system
with multiple players. Furthermore, many real systems such as transportation
networks and the power grid exhibit interdependencies between targets and,
consequently, between decision makers jointly charged with protecting them. To
understand such multidefender strategic interactions present in security, we
investigate game theoretic models of security games with multiple defenders.
Unlike most prior analysis, we focus on the situations in which each defender
must protect multiple targets, so that even a single defender's best response
decision is, in general, highly non-trivial. We start with an analytical
investigation of multidefender security games with independent targets,
offering an equilibrium and price-of-anarchy analysis of three models with
increasing generality. In all models, we find that defenders have the incentive
to over-protect targets, at times significantly. Additionally, in the simpler
models, we find that the price of anarchy is unbounded, linearly increasing
both in the number of defenders and the number of targets per defender.
Considering interdependencies among targets, we develop a novel mixed-integer
linear programming formulation to compute a defender's best response, and make
use of this formulation in approximating Nash equilibria of the game. We apply
this approach towards computational strategic analysis of several models of
networks representing interdependencies, including real-world power networks.
Our analysis shows how network structure and the probability of failure spread
determine the propensity of defenders to over- or under-invest in security.
|
[
{
"version": "v1",
"created": "Thu, 28 May 2015 04:54:53 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"Lou",
"Jian",
""
],
[
"Smith",
"Andrew M.",
""
],
[
"Vorobeychik",
"Yevgeniy",
""
]
] |
new_dataset
| 0.951157 |
1505.07605
|
Fangjin Guo
|
Hongyu Meng, Fangjin Guo
|
Simple sorting algorithm test based on CUDA
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of computing technology, CUDA has become a very
important tool. In computer programming, sorting algorithm is widely used.
There are many simple sorting algorithms such as enumeration sort, bubble sort
and merge sort. In this paper, we test some simple sorting algorithms based on
CUDA and draw some useful conclusions.
|
[
{
"version": "v1",
"created": "Thu, 28 May 2015 09:08:23 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"Meng",
"Hongyu",
""
],
[
"Guo",
"Fangjin",
""
]
] |
new_dataset
| 0.977962 |
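A common choice of "simple sorting algorithm" for CUDA is odd-even transposition sort, because every comparison within a phase touches a disjoint pair and can map to one GPU thread. The abstract does not name the tested variants, so this sequential Python sketch of the logic is an assumption:

```python
def odd_even_sort(values):
    """Odd-even transposition sort: n phases; phase p compares adjacent
    pairs starting at index p % 2. All swaps within one phase are
    independent, which is what makes the algorithm GPU-friendly."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        for i in range(phase % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```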
1505.07672
|
Debapriya Das
|
Niklas Ludtke, Debapriya Das, Lucas Theis, Matthias Bethge
|
A Generative Model of Natural Texture Surrogates
|
34 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural images can be viewed as patchworks of different textures, where the
local image statistics are roughly stationary within a small neighborhood but
otherwise varies from region to region. In order to model this variability, we
first applied the parametric texture algorithm of Portilla and Simoncelli to
image patches of 64x64 pixels in a large database of natural images such that
each image patch is then described by 655 texture parameters which specify
certain statistics, such as variances and covariances of wavelet coefficients
or coefficient magnitudes within that patch.
To model the statistics of these texture parameters, we then developed
suitable nonlinear transformations of the parameters that allowed us to fit
their joint statistics with a multivariate Gaussian distribution. We find that
the first 200 principal components contain more than 99% of the variance and
are sufficient to generate textures that are perceptually extremely close to
those generated with all 655 components. We demonstrate the usefulness of the
model in several ways: (1) We sample ensembles of texture patches that can be
directly compared to samples of patches from the natural image database and can
to a high degree reproduce their perceptual appearance. (2) We further
developed an image compression algorithm which generates surprisingly accurate
images at bit rates as low as 0.14 bits/pixel. Finally, (3) We demonstrate how
our approach can be used for an efficient and objective evaluation of samples
generated with probabilistic models of natural images.
|
[
{
"version": "v1",
"created": "Thu, 28 May 2015 12:37:15 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"Ludtke",
"Niklas",
""
],
[
"Das",
"Debapriya",
""
],
[
"Theis",
"Lucas",
""
],
[
"Bethge",
"Matthias",
""
]
] |
new_dataset
| 0.979398 |
1505.07702
|
Lalla Mouatadid
|
Lalla Mouatadid, Robert Robere
|
Path Graphs, Clique Trees, and Flowers
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An \emph{asteroidal triple} is a set of three independent vertices in a graph
such that any two vertices in the set are connected by a path which avoids the
neighbourhood of the third.
A classical result by Lekkerkerker and Boland \cite{6} showed that interval
graphs are precisely the chordal graphs that do not have asteroidal triples.
Interval graphs are chordal, as are the \emph{directed path graphs} and the
\emph{path graphs}.
Similar to Lekkerkerker and Boland, Cameron, Ho\'{a}ng, and L\'{e}v\^{e}que
\cite{4} gave a characterization of directed path graphs by a "special type" of
asteroidal triple, and asked whether or not there was such a characterization
for path graphs.
We give strong evidence that asteroidal triples alone are insufficient to
characterize the family of path graphs, and give a new characterization of path
graphs via a forbidden induced subgraph family that we call \emph{sun systems}.
Key to our new characterization is the study of \emph{asteroidal sets} in sun
systems, which are a natural generalization of asteroidal triples.
Our characterization of path graphs by forbidding sun systems also
generalizes a characterization of directed path graphs by forbidding odd suns
that was given by Chaplick et al.~\cite{9}.
|
[
{
"version": "v1",
"created": "Thu, 28 May 2015 14:34:54 GMT"
}
] | 2015-05-29T00:00:00 |
[
[
"Mouatadid",
"Lalla",
""
],
[
"Robere",
"Robert",
""
]
] |
new_dataset
| 0.999793 |
1107.4218
|
Maurizio Serva
|
Maurizio Serva
|
The settlement of Madagascar: what dialects and languages can tell
|
We determine the area and the modalities of the settlement of
Madagascar by Indonesian colonizers around 650 CE. Results are obtained by
comparing 23 Malagasy dialects with the Malay and Maanyan languages
| null |
10.1371/journal.pone.0030666
| null |
cs.CL q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The dialects of Madagascar belong to the Greater Barito East group of the
Austronesian family and it is widely accepted that the Island was colonized by
Indonesian sailors after a maritime trek which probably took place around 650
CE. The language most closely related to the Malagasy dialects is Maanyan, but
Malay is also strongly related, especially with regard to navigation terms.
Since the Maanyan Dayaks live along the Barito river in Kalimantan (Borneo)
and do not possess the skills necessary for long maritime navigation, they
were probably brought as subordinates by Malay sailors.
In a recent paper we compared 23 different Malagasy dialects in order to
determine the time and the landing area of the first colonization. In this
research we use new data and new methods to confirm that the landing took place
on the south-east coast of the Island. Furthermore, we are able to state here
that it is unlikely that there were multiple settlements and, therefore, that
colonization consisted of a single founding event.
To reach our goal we determine the internal kinship relations among all 23
Malagasy dialects, as well as the kinship degrees of the 23 dialects with
respect to Malay and Maanyan. The method used is an automated version of
the lexicostatistic approach. The data concerning Madagascar were collected by
the author at the beginning of 2010 and consist of Swadesh lists of 200 items
for 23 dialects covering all areas of the Island. The lists for Maanyan and
Malay were obtained from published datasets integrated by author's interviews.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2011 10:02:31 GMT"
}
] | 2015-05-28T00:00:00 |
[
[
"Serva",
"Maurizio",
""
]
] |
new_dataset
| 0.999696 |
1505.07161
|
EPTCS
|
Samuel Mimram (LIX, \'Ecole Polytechnique)
|
Presenting Finite Posets
|
In Proceedings TERMGRAPH 2014, arXiv:1505.06818
|
EPTCS 183, 2015, pp. 1-17
|
10.4204/EPTCS.183.1
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a monoidal category whose morphisms are finite partial orders,
with chosen minimal and maximal elements as source and target respectively.
After recalling the notion of presentation of a monoidal category by the means
of generators and relations, we construct a presentation of our category, which
corresponds to a variant of the notion of bialgebra.
|
[
{
"version": "v1",
"created": "Wed, 27 May 2015 00:47:42 GMT"
}
] | 2015-05-28T00:00:00 |
[
[
"Mimram",
"Samuel",
"",
"LIX, École Polytechnique"
]
] |
new_dataset
| 0.999549 |
1505.07293
|
Vijay Badrinarayanan
|
Vijay Badrinarayanan, Ankur Handa, Roberto Cipolla
|
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust
Semantic Pixel-Wise Labelling
|
This version was first submitted to CVPR' 15 on November 14, 2014
with paper Id 1468. A similar architecture was proposed more recently on May
17, 2015, see http://arxiv.org/pdf/1505.04366.pdf
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel deep architecture, SegNet, for semantic pixel wise image
labelling. SegNet has several attractive properties: (i) it only requires
forward evaluation of a fully learnt function to obtain smooth label
predictions, (ii) with increasing depth, a larger context is considered for
pixel labelling which improves accuracy, and (iii) it is easy to visualise the
effect of feature activation(s) in the pixel label space at any depth. SegNet
is composed of a stack of encoders followed by a corresponding decoder stack
which feeds into a soft-max classification layer. The decoders help map low
resolution feature maps at the output of the encoder stack to full input image
size feature maps. This addresses an important drawback of recent deep learning
approaches which have adopted networks designed for object categorization for
pixel wise labelling. These methods lack a mechanism to map deep layer feature
maps to input dimensions. They resort to ad hoc methods to upsample features,
e.g. by replication. This results in noisy predictions and also restricts the
number of pooling layers in order to avoid too much upsampling and thus reduces
spatial context. SegNet overcomes these problems by learning to map encoder
outputs to image pixel labels. We test the performance of SegNet on outdoor RGB
scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results
show that SegNet achieves state-of-the-art performance even without use of
additional cues such as depth, video frames or post-processing with CRF models.
|
[
{
"version": "v1",
"created": "Wed, 27 May 2015 12:54:17 GMT"
}
] | 2015-05-28T00:00:00 |
[
[
"Badrinarayanan",
"Vijay",
""
],
[
"Handa",
"Ankur",
""
],
[
"Cipolla",
"Roberto",
""
]
] |
new_dataset
| 0.98481 |
1505.07375
|
Hong-Yi Dai
|
Hong-Yi Dai
|
The Mysteries of Lisp -- I: The Way to S-expression Lisp
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Despite its old age, Lisp remains mysterious to many of its admirers. These
mysteries on the one hand make the language fascinating, but on the other hand
also obscure it. Following Stoyan, but paying attention to what he has
neglected or omitted, in this first essay of a series intended to unravel
these mysteries we trace the development of Lisp back to its origin, revealing
how the language has evolved into its present-day look and feel. The insights
thus gained will not only enhance existing understanding of the language but
also inspire further improvements.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 04:16:50 GMT"
}
] | 2015-05-28T00:00:00 |
[
[
"Dai",
"Hong-Yi",
""
]
] |
new_dataset
| 0.998841 |
1505.07383
|
Lars Bergstrom
|
Brian Anderson and Lars Bergstrom and David Herman and Josh Matthews
and Keegan McAllister and Manish Goregaokar and Jack Moffitt and Simon Sapin
|
Experience Report: Developing the Servo Web Browser Engine using Rust
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
All modern web browsers - Internet Explorer, Firefox, Chrome, Opera, and
Safari - have a core rendering engine written in C++. This language choice was
made because it affords the systems programmer complete control of the
underlying hardware features and memory in use, and it provides a transparent
compilation model.
Servo is a project started at Mozilla Research to build a new web browser
engine that preserves the capabilities of these other browser engines but also
both takes advantage of the recent trends in parallel hardware and is more
memory-safe. We use a new language, Rust, that provides a level of control
over the underlying system similar to that of C++, but which builds on many
concepts familiar to the functional programming community, forming a novelty:
a useful, safe systems programming language.
In this paper, we show how a language with an affine type system, regions,
and many syntactic features familiar to functional language programmers can be
successfully used to build state-of-the-art systems software. We also outline
several pitfalls encountered along the way and describe some potential areas
for future research.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 18:49:02 GMT"
}
] | 2015-05-28T00:00:00 |
[
[
"Anderson",
"Brian",
""
],
[
"Bergstrom",
"Lars",
""
],
[
"Herman",
"David",
""
],
[
"Matthews",
"Josh",
""
],
[
"McAllister",
"Keegan",
""
],
[
"Goregaokar",
"Manish",
""
],
[
"Moffitt",
"Jack",
""
],
[
"Sapin",
"Simon",
""
]
] |
new_dataset
| 0.979986 |
1102.1480
|
Byung-Hak Kim
|
Byung-Hak Kim, Henry D. Pfister
|
Joint Decoding of LDPC Codes and Finite-State Channels via
Linear-Programming
|
Accepted to IEEE Journal of Selected Topics in Signal Processing
(Special Issue on Soft Detection for Wireless Transmission)
| null |
10.1109/JSTSP.2011.2165525
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the joint-decoding (JD) problem for finite-state
channels (FSCs) and low-density parity-check (LDPC) codes. In the first part,
the linear-programming (LP) decoder for binary linear codes is extended to JD
of binary-input FSCs. In particular, we provide a rigorous definition of LP
joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the
pairwise error probability between codewords and JD-PCWs in AWGN. This leads
naturally to a provable upper bound on decoder failure probability. If the
channel is a finite-state intersymbol interference channel, then the joint LP
decoder also has the maximum-likelihood (ML) certificate property and all
integer-valued solutions are codewords. In this case, the performance loss
relative to ML decoding can be explained completely by fractional-valued
JD-PCWs. After deriving these results, we discovered that some elements were
equivalent to earlier work by Flanagan on LP receivers.
In the second part, we develop an efficient iterative solver for the joint LP
decoder discussed in the first part. In particular, we extend the approach of
iterative approximate LP decoding, proposed by Vontobel and Koetter and
analyzed by Burshtein, to this problem. By taking advantage of the dual-domain
structure of the JD-LP, we obtain a convergent iterative algorithm for joint LP
decoding whose structure is similar to BCJR-based turbo equalization (TE). The
result is a joint iterative decoder whose per-iteration complexity is similar
to that of TE but whose performance is similar to that of joint LP decoding.
The main advantage of this decoder is that it appears to provide the
predictability of joint LP decoding and superior performance with the
computational complexity of TE. One expected application is coding for magnetic
storage where the required block-error rate is extremely low and system
performance is difficult to verify by simulation.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2011 00:21:38 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2011 19:49:58 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Kim",
"Byung-Hak",
""
],
[
"Pfister",
"Henry D.",
""
]
] |
new_dataset
| 0.974066 |
1312.0932
|
Inaki Estella
|
I\~naki Estella Aguerri and Deniz G\"und\"uz
|
Joint Source-Channel Coding with Time-Varying Channel and
Side-Information
|
Submitted to IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transmission of a Gaussian source over a time-varying Gaussian channel is
studied in the presence of time-varying correlated side information at the
receiver. A block fading model is considered for both the channel and the side
information, whose states are assumed to be known only at the receiver. The
optimality of separate source and channel coding in terms of average end-to-end
distortion is shown when the channel is static while the side information state
follows a discrete or a continuous and quasiconcave distribution. When both the
channel and side information states are time-varying, separate source and
channel coding is suboptimal in general. A partially informed encoder lower
bound is studied by providing the channel state information to the encoder.
Several achievable transmission schemes are proposed based on uncoded
transmission, separate source and channel coding, joint decoding as well as
hybrid digital-analog transmission. Uncoded transmission is shown to be optimal
for a class of continuous and quasiconcave side information state
distributions, while the channel gain may have an arbitrary distribution. To
the best of our knowledge, this is the first example in which the uncoded
transmission achieves the optimal performance thanks to the time-varying nature
of the states, while it is suboptimal in the static version of the same
problem. Then, the optimal \emph{distortion exponent}, that quantifies the
exponential decay rate of the expected distortion in the high SNR regime, is
characterized for Nakagami distributed channel and side information states, and
it is shown to be achieved by hybrid digital-analog and joint decoding schemes
in certain cases, illustrating the suboptimality of pure digital or analog
transmission in general.
|
[
{
"version": "v1",
"created": "Tue, 3 Dec 2013 20:53:25 GMT"
},
{
"version": "v2",
"created": "Tue, 26 May 2015 12:22:25 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Aguerri",
"Iñaki Estella",
""
],
[
"Gündüz",
"Deniz",
""
]
] |
new_dataset
| 0.997029 |
1409.7556
|
Basura Fernando
|
Basura Fernando, Tatiana Tommasi, Tinne Tuytelaars
|
Location Recognition Over Large Time Lags
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Would it be possible to automatically associate ancient pictures to modern
ones and create fancy cultural heritage city maps? We introduce here the task
of recognizing the location depicted in an old photo given modern annotated
images collected from the Internet. We present an extensive analysis on
different features, looking for the most discriminative and most robust to the
image variability induced by large time lags. Moreover, we show that the
described task benefits from domain adaptation.
|
[
{
"version": "v1",
"created": "Fri, 26 Sep 2014 12:36:54 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Mar 2015 15:58:17 GMT"
},
{
"version": "v3",
"created": "Tue, 26 May 2015 03:14:19 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Fernando",
"Basura",
""
],
[
"Tommasi",
"Tatiana",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] |
new_dataset
| 0.960826 |
1505.04947
|
Chun-Hung Liu
|
Chun-Hung Liu
|
The Mean SIR of Large-Scale Wireless Networks: Its Closed-Form
Expression and Main Applications
|
4 pages, 4 figures, letter
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a large-scale wireless ad hoc network in which all transmitters form a
homogeneous Poisson point process, the statistics of the
signal-to-interference ratio (SIR) have previously been derived in closed form
only for the case of Rayleigh fading channels. In this letter, the mean SIR is
found in closed form for general random channel (power) gain, transmission
distance and power control models. Based on the derived mean SIR, we first
show that channel gain randomness actually benefits the mean SIR, so that the
upper bound on the mean spectrum efficiency increases. We then show that
stochastic power control and opportunistic scheduling, which capture the
randomness of channel gain and transmission distance, can not only
significantly enhance the mean SIR but also reduce the outage probability. The
mean-SIR-based throughput capacity is
proposed and it can be maximized by a unique optimal intensity of transmitters
if the derived supporting set of the intensity exists.
|
[
{
"version": "v1",
"created": "Tue, 19 May 2015 10:32:00 GMT"
},
{
"version": "v2",
"created": "Tue, 26 May 2015 12:29:24 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Liu",
"Chun-Hung",
""
]
] |
new_dataset
| 0.999007 |
1505.06502
|
Can Xiang
|
Can Xiang and Hao Liu
|
The Complete Weight Enumerator of A Class of Linear Codes
|
This paper has been withdrawn by the authors because some of our results
are similar to those of others
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear codes can be employed to construct authentication codes, an
interesting area of cryptography. The parameters of the authentication codes
depend on the complete weight enumerator of the underlying linear codes. In
order to obtain an authentication code with good parameters, the underlying
linear code must have proper parameters. The first objective of this paper is
to determine the complete weight enumerators of a class of linear codes with
two weights and three weights. The second is to employ these linear codes to
construct authentication codes with new parameters.
|
[
{
"version": "v1",
"created": "Mon, 25 May 2015 00:15:27 GMT"
},
{
"version": "v2",
"created": "Tue, 26 May 2015 08:35:23 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Xiang",
"Can",
""
],
[
"Liu",
"Hao",
""
]
] |
new_dataset
| 0.997671 |
1505.06729
|
Vida Vakilian
|
Vida Vakilian, Hani Mehrpouyan, Yingbo Hua, and Hamid Jafarkhani
|
High Rate/Low Complexity Space-Time Block Codes for 2x2 Reconfigurable
MIMO Systems
|
arXiv admin note: text overlap with arXiv:1505.06466
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a full-rate full-diversity space-time block code
(STBC) for 2x2 reconfigurable multiple-input multiple-output (MIMO) systems
that require a low complexity maximum likelihood (ML) detector. We consider a
transmitter equipped with a linear antenna array where each antenna element can
be independently configured to create a directive radiation pattern toward a
selected direction. This property of transmit antennas allow us to increase the
data rate of the system, while reducing the computational complexity of the
receiver. The proposed STBC achieves a coding rate of two in a 2x2 MIMO system
and can be decoded via an ML detector with a complexity of order M, where M is
the cardinality of the transmitted symbol constellation. Our simulations
demonstrate the efficiency of the proposed code compared to existing STBCs in
the literature.
|
[
{
"version": "v1",
"created": "Sun, 24 May 2015 19:08:56 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Vakilian",
"Vida",
""
],
[
"Mehrpouyan",
"Hani",
""
],
[
"Hua",
"Yingbo",
""
],
[
"Jafarkhani",
"Hamid",
""
]
] |
new_dataset
| 0.984589 |
1505.06769
|
Alexander Gruschina
|
Alexander Gruschina
|
VeinPLUS: A Transillumination and Reflection-based Hand Vein Database
|
Presented at OAGM Workshop, 2015 (arXiv:1505.01065)
| null | null |
OAGM/2015/03
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper gives a short summary of work related to the creation of a
department-hosted hand vein database. After the introductory section, special
properties of the hand vein acquisition are explained, followed by a comparison
table, which shows key differences to existing well-known hand vein databases.
At the end, the ROI extraction process is described and sample images and ROIs
are presented.
|
[
{
"version": "v1",
"created": "Mon, 25 May 2015 22:18:36 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Gruschina",
"Alexander",
""
]
] |
new_dataset
| 0.999797 |
1505.06791
|
Jameson Toole
|
Jameson L. Toole, Yu-Ru Lin, Erich Muehlegger, Daniel Shoag, Marta C.
Gonzalez, David Lazer
|
Tracking Employment Shocks Using Mobile Phone Data
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can data from mobile phones be used to observe economic shocks and their
consequences at multiple scales? Here we present novel methods to detect mass
layoffs, identify individuals affected by them, and predict changes in
aggregate unemployment rates using call detail records (CDRs) from mobile
phones. Using the closure of a large manufacturing plant as a case study, we
first describe a structural break model to correctly detect the date of a mass
layoff and estimate its size. We then use a Bayesian classification model to
identify affected individuals by observing changes in calling behavior
following the plant's closure. For these affected individuals, we observe
significant declines in social behavior and mobility following job loss. Using
the features identified at the micro level, we show that the same changes in
these calling behaviors, aggregated at the regional level, can improve
forecasts of macro unemployment rates. These methods and results highlight the
promise of new data resources to measure microeconomic behavior and improve
estimates of critical economic indicators.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 02:18:07 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Toole",
"Jameson L.",
""
],
[
"Lin",
"Yu-Ru",
""
],
[
"Muehlegger",
"Erich",
""
],
[
"Shoag",
"Daniel",
""
],
[
"Gonzalez",
"Marta C.",
""
],
[
"Lazer",
"David",
""
]
] |
new_dataset
| 0.975284 |
1505.06807
|
Ameet Talwalkar
|
Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram
Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen,
Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, Ameet
Talwalkar
|
MLlib: Machine Learning in Apache Spark
| null | null | null | null |
cs.LG cs.DC cs.MS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Apache Spark is a popular open-source platform for large-scale data
processing that is well-suited for iterative machine learning tasks. In this
paper we present MLlib, Spark's open-source distributed machine learning
library. MLlib provides efficient functionality for a wide range of learning
settings and includes several underlying statistical, optimization, and linear
algebra primitives. Shipped with Spark, MLlib supports several languages and
provides a high-level API that leverages Spark's rich ecosystem to simplify the
development of end-to-end machine learning pipelines. MLlib has experienced
rapid growth due to its vibrant open-source community of over 140 contributors,
and includes extensive documentation to support further growth and to let users
quickly get up to speed.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 05:12:23 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Meng",
"Xiangrui",
""
],
[
"Bradley",
"Joseph",
""
],
[
"Yavuz",
"Burak",
""
],
[
"Sparks",
"Evan",
""
],
[
"Venkataraman",
"Shivaram",
""
],
[
"Liu",
"Davies",
""
],
[
"Freeman",
"Jeremy",
""
],
[
"Tsai",
"DB",
""
],
[
"Amde",
"Manish",
""
],
[
"Owen",
"Sean",
""
],
[
"Xin",
"Doris",
""
],
[
"Xin",
"Reynold",
""
],
[
"Franklin",
"Michael J.",
""
],
[
"Zadeh",
"Reza",
""
],
[
"Zaharia",
"Matei",
""
],
[
"Talwalkar",
"Ameet",
""
]
] |
new_dataset
| 0.998107 |
1505.06815
|
Vamsi Talla
|
Vamsi Talla, Bryce Kellogg, Benjamin Ransford, Saman Naderiparizi,
Shyamnath Gollakota and Joshua R. Smith
|
Powering the Next Billion Devices with Wi-Fi
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first power over Wi-Fi system that delivers power and works
with existing Wi-Fi chipsets. Specifically, we show that a ubiquitous piece of
wireless communication infrastructure, the Wi-Fi router, can provide far field
wireless power without compromising the network's communication performance.
Building on our design we prototype, for the first time, battery-free
temperature and camera sensors that are powered using Wi-Fi chipsets with
ranges of 20 and 17 feet respectively. We also demonstrate the ability to
wirelessly recharge nickel-metal hydride and lithium-ion coin-cell batteries at
distances of up to 28 feet. Finally, we deploy our system in six homes in a
metropolitan area and show that our design can successfully deliver power via
Wi-Fi in real-world network conditions.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 06:06:37 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Talla",
"Vamsi",
""
],
[
"Kellogg",
"Bryce",
""
],
[
"Ransford",
"Benjamin",
""
],
[
"Naderiparizi",
"Saman",
""
],
[
"Gollakota",
"Shyamnath",
""
],
[
"Smith",
"Joshua R.",
""
]
] |
new_dataset
| 0.998085 |
1505.06851
|
Rossano Schifanella
|
Daniele Quercia, Rossano Schifanella, Luca Maria Aiello, Kate McLean
|
Smelly Maps: The Digital Life of Urban Smellscapes
|
11 pages, 7 figures, Proceedings of 9th International AAAI Conference
on Web and Social Media (ICWSM2015)
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smell has a huge influence over how we perceive places. Despite its
importance, smell has been crucially overlooked by urban planners and
scientists alike, not least because it is difficult to record and analyze at
scale. One of the authors of this paper has ventured out in the urban world and
conducted smellwalks in a variety of cities: participants were exposed to a
range of different smellscapes and asked to record their experiences. As a
result, smell-related words have been collected and classified, creating the
first dictionary for urban smell. Here we explore the possibility of using
social media data to reliably map the smells of entire cities. To this end, for
both Barcelona and London, we collect geo-referenced picture tags from Flickr
and Instagram, and geo-referenced tweets from Twitter. We match those tags and
tweets with the words in the smell dictionary. We find that smell-related words
are best classified in ten categories. We also find that specific categories
(e.g., industry, transport, cleaning) correlate with governmental air quality
indicators, adding validity to our study.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 08:39:07 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Quercia",
"Daniele",
""
],
[
"Schifanella",
"Rossano",
""
],
[
"Aiello",
"Luca Maria",
""
],
[
"McLean",
"Kate",
""
]
] |
new_dataset
| 0.99944 |
1505.06865
|
Maria Potop-Butucaru
|
Silvia Bonomi (MIDLAB), Antonella Del Pozzo (MIDLAB, NPA), Maria
Potop-Butucaru (NPA)
|
Tight Mobile Byzantine Tolerant Atomic Storage
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes the first implementation of atomic storage tolerant to
mobile Byzantine agents. Our implementation is designed for the round-based
synchronous model where the set of Byzantine nodes changes from round to round.
In this model we explore the feasibility of a multi-writer multi-reader atomic
register prone to various mobile Byzantine behaviors.
bounds for solving the atomic storage in all the explored models. Our results,
significantly different from the static case, advocate for a deeper study of
the main building blocks of distributed computing while the system is prone to
mobile Byzantine failures.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 09:15:31 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Bonomi",
"Silvia",
"",
"MIDLAB"
],
[
"Del Pozzo",
"Antonella",
"",
"MIDLAB, NPA"
],
[
"Potop-Butucaru",
"Maria",
"",
"NPA"
]
] |
new_dataset
| 0.981969 |
1505.07050
|
Marco Dorigo
|
Nithin Mathews, Anders Lyhne Christensen, Rehan O'Grady, and Marco
Dorigo
|
Virtual Nervous Systems for Self-Assembling Robots - A preliminary
report
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define the nervous system of a robot as the processing unit responsible
for controlling the robot body, together with the links between the processing
unit and the sensorimotor hardware of the robot - i.e., the equivalent of the
central nervous system in biological organisms. We present autonomous robots
that can merge their nervous systems when they physically connect to each
other, creating a "virtual nervous system" (VNS). We show that robots with a
VNS have capabilities beyond those found in any existing robotic system or
biological organism: they can merge into larger bodies with a single brain
(i.e., processing unit), split into separate bodies with independent brains,
and temporarily acquire sensing and actuating capabilities of specialized peer
robots. VNS-based robots can also self-heal by removing or replacing
malfunctioning body parts, including the brain.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2015 17:11:53 GMT"
}
] | 2015-05-27T00:00:00 |
[
[
"Mathews",
"Nithin",
""
],
[
"Christensen",
"Anders Lyhne",
""
],
[
"O'Grady",
"Rehan",
""
],
[
"Dorigo",
"Marco",
""
]
] |
new_dataset
| 0.989227 |
1303.1717
|
Tomoyuki Yamakami
|
Tomoyuki Yamakami
|
Oracle Pushdown Automata, Nondeterministic Reducibilities, and the CFL
Hierarchy over the Family of Context-Free Languages
|
This is a complete version of an extended abstract that appeared
under a slightly different title in the Proceedings of the 40th International
Conference on Current Trends in Theory and Practice of Computer Science
(SOFSEM 2014), High Tatras, Slovakia, January 25-30, 2014, Lecture Notes in
Computer Science, Springer-Verlag, vol.8327, pp.514-525, 2014
| null | null | null |
cs.FL cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To expand a fundamental theory of context-free languages, we equip
nondeterministic one-way pushdown automata with additional oracle mechanisms,
which naturally induce various nondeterministic reducibilities among formal
languages. As a natural restriction of NP-reducibility, we introduce a notion
of many-one CFL reducibility and conduct a ground work to formulate a coherent
framework for further expositions. Two more powerful reducibilities--bounded
truth-table and Turing CFL-reducibilities--are also discussed in comparison.
The Turing CFL-reducibility, in particular, helps us introduce an exquisite
hierarchy, called the CFL hierarchy, built over the family CFL of context-free
languages. For each level of this hierarchy, its basic structural properties
are proven and three alternative characterizations are presented. The second
level is not included in NC(2) unless NP = NC(2). The first and second levels of
the hierarchy are different. The rest of the hierarchy (more strongly, the
Boolean hierarchy built over each level of the hierarchy) is also infinite
unless the polynomial hierarchy over NP collapses. This follows from a
characterization of the Boolean hierarchy over the k-th level of the polynomial
hierarchy in terms of the Boolean hierarchy over the k+1st level of the CFL
hierarchy using log-space many-one reductions. Similarly, the complexity class
Theta(k) is related to the closure of the k-th level of the CFL hierarchy under
log-space truth-table reductions. We also argue that the CFL hierarchy
coincides with a hierarchy over CFL built by application of many-one
CFL-reductions. We show that BPCFL--a bounded-error probabilistic version of
CFL--is not included in CFL even in the presence of advice. Employing a known
circuit lower bound and a switching lemma, we exhibit a relativized world where
BPCFL is not located within the second level of the CFL hierarchy.
|
[
{
"version": "v1",
"created": "Thu, 7 Mar 2013 15:26:50 GMT"
},
{
"version": "v2",
"created": "Sat, 23 May 2015 18:33:10 GMT"
}
] | 2015-05-26T00:00:00 |
[
[
"Yamakami",
"Tomoyuki",
""
]
] |
new_dataset
| 0.997521 |
1407.3859
|
Jeremy Kepner
|
Jeremy Kepner, Christian Anderson, William Arcand, David Bestor, Bill
Bergeron, Chansup Byun, Matthew Hubbell, Peter Michaleas, Julie Mullen, David
O'Gwynn, Andrew Prout, Albert Reuther, Antonio Rosa, Charles Yee (MIT)
|
D4M 2.0 Schema: A General Purpose High Performance Schema for the
Accumulo Database
|
6 pages; IEEE HPEC 2013
| null |
10.1109/HPEC.2013.6670318
| null |
cs.DB astro-ph.IM cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-traditional, relaxed consistency, triple store databases are the backbone
of many web companies (e.g., Google Big Table, Amazon Dynamo, and Facebook
Cassandra). The Apache Accumulo database is a high performance open source
relaxed consistency database that is widely used for government applications.
Obtaining the full benefits of Accumulo requires using novel schemas. The
Dynamic Distributed Dimensional Data Model (D4M)[http://d4m.mit.edu] provides a
uniform mathematical framework based on associative arrays that encompasses
both traditional (i.e., SQL) and non-traditional databases. For non-traditional
databases D4M naturally leads to a general purpose schema that can be used to
fully index and rapidly query every unique string in a dataset. The D4M 2.0
Schema has been applied with little or no customization to cyber,
bioinformatics, scientific citation, free text, and social media data. The D4M
2.0 Schema is simple, requires minimal parsing, and achieves the highest
published Accumulo ingest rates. The benefits of the D4M 2.0 Schema are
independent of the D4M interface. Any interface to Accumulo can achieve these
benefits by using the D4M 2.0 Schema.
|
[
{
"version": "v1",
"created": "Tue, 15 Jul 2014 01:54:45 GMT"
}
] | 2015-05-26T00:00:00 |
[
[
"Kepner",
"Jeremy",
"",
"MIT"
],
[
"Anderson",
"Christian",
"",
"MIT"
],
[
"Arcand",
"William",
"",
"MIT"
],
[
"Bestor",
"David",
"",
"MIT"
],
[
"Bergeron",
"Bill",
"",
"MIT"
],
[
"Byun",
"Chansup",
"",
"MIT"
],
[
"Hubbell",
"Matthew",
"",
"MIT"
],
[
"Michaleas",
"Peter",
"",
"MIT"
],
[
"Mullen",
"Julie",
"",
"MIT"
],
[
"O'Gwynn",
"David",
"",
"MIT"
],
[
"Prout",
"Andrew",
"",
"MIT"
],
[
"Reuther",
"Albert",
"",
"MIT"
],
[
"Rosa",
"Antonio",
"",
"MIT"
],
[
"Yee",
"Charles",
"",
"MIT"
]
] |
new_dataset
| 0.986674 |
1505.06237
|
Gerhard Paar
|
Arnold Bauer, Karlheinz Gutjahr, Gerhard Paar, Heiner Kontrus and
Robert Glatzl
|
Tunnel Surface 3D Reconstruction from Unoriented Image Sequences
|
Presented at OAGM Workshop, 2015 (arXiv:1505.01065)
| null | null |
OAGM/2015/10
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The 3D documentation of the tunnel surface during construction requires fast
and robust measurement systems. In the solution proposed in this paper, during
tunnel advance a single camera is taking pictures of the tunnel surface from
several positions. The recorded images are automatically processed to gain a 3D
tunnel surface model. Image acquisition is realized by the
tunneling/advance/driving personnel close to the tunnel face (= the front end
of the advance). Based on the following fully automatic analysis/evaluation, a
decision on the quality of the outbreak can be made within a few minutes. This
paper describes the image recording system and conditions as well as the
stereo-photogrammetry based workflow for the continuously merged dense 3D
reconstruction of the entire advance region. Geo-reference is realized by means
of signalized targets that are automatically detected in the images. We report
on the results of recent testing under real construction conditions, and
conclude with prospects for further development in terms of on-site
performance.
|
[
{
"version": "v1",
"created": "Fri, 22 May 2015 22:00:55 GMT"
}
] | 2015-05-26T00:00:00 |
[
[
"Bauer",
"Arnold",
""
],
[
"Gutjahr",
"Karlheinz",
""
],
[
"Paar",
"Gerhard",
""
],
[
"Kontrus",
"Heiner",
""
],
[
"Glatzl",
"Robert",
""
]
] |
new_dataset
| 0.997757 |
1505.06263
|
Kenza Guenda
|
Nabil Bennenni, Kenza Guenda and Sihem Mesnager
|
New DNA Cyclic Codes over Rings
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
This paper deals with DNA cyclic codes, which play an important role in
DNA computing and have attracted particular attention in the literature.
Firstly, we introduce a new family of DNA cyclic codes over the ring
$R=\mathbb{F}_2[u]/(u^6)$. Such codes have theoretical advantages as well as
several applications in DNA computing. A direct link between the elements of
such a ring and the $64$ codons used in the amino acids of the living organisms
is established. Such a correspondence allows us to extend the notion of the
edit distance to the ring $R$ which is useful for the correction of the
insertion, deletion and substitution errors. Next, we define the Lee weight,
the Gray map over the ring $R$ as well as the binary image of the cyclic DNA
codes allowing the transfer of studying DNA codes into studying binary codes.
Secondly, we introduce another new family of DNA skew cyclic codes constructed
over the ring $\tilde {R}=\mathbb{F}_2+v\mathbb{F}_2=\{0,1,v,v+1\}$ where
$v^2=v$ and study their property of being reverse-complement. We show that the
obtained code is derived from the cyclic reverse-complement code over the ring
$\tilde {R}$. We shall provide the binary images and present some explicit
examples of such codes.
|
[
{
"version": "v1",
"created": "Sat, 23 May 2015 02:22:04 GMT"
}
] | 2015-05-26T00:00:00 |
[
[
"Bennenni",
"Nabil",
""
],
[
"Guenda",
"Kenza",
""
],
[
"Mesnager",
"Sihem",
""
]
] |
new_dataset
| 0.998949 |
1505.05947
|
Alexander Lavin
|
Alexander Lavin
|
A Pareto Front-Based Multiobjective Path Planning Algorithm
| null | null | null | null |
cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Path planning is one of the most vital elements of mobile robotics. With a
priori knowledge of the environment, global path planning provides a
collision-free route through the workspace. The global path plan can be
calculated with a variety of informed search algorithms, most notably the A*
search method, guaranteed to deliver a complete and optimal solution that
minimizes the path cost. Path planning optimization typically looks to minimize
the distance traversed from start to goal, yet many mobile robot applications
call for additional path planning objectives, presenting a multiobjective
optimization (MOO) problem. Past studies have applied genetic algorithms to MOO
path planning problems, but these may have the disadvantages of computational
complexity and suboptimal solutions. Alternatively, the algorithm in this paper
approaches MOO path planning with the use of Pareto fronts, or finding
non-dominated solutions. The algorithm presented incorporates Pareto optimality
into every step of A* search, thus it is named A*-PO. Results of simulations
show A*-PO outperformed several variations of the standard A* algorithm for MOO
path planning. A planetary exploration rover case study was added to
demonstrate the viability of A*-PO in a real-world application.
|
[
{
"version": "v1",
"created": "Fri, 22 May 2015 04:35:12 GMT"
}
] | 2015-05-25T00:00:00 |
[
[
"Lavin",
"Alexander",
""
]
] |
new_dataset
| 0.995797 |
1505.05958
|
Zhenyu Shen
|
Jingyu Hua, Zhenyu Shen, Sheng Zhong
|
We Can Track You If You Take the Metro: Tracking Metro Riders Using
Accelerometers on Smartphones
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion sensors (e.g., accelerometers) on smartphones have been demonstrated
to be a powerful side channel for attackers to spy on users' inputs on
touchscreen. In this paper, we reveal another motion accelerometer-based attack
which is particularly serious: when a person takes the metro, a malicious
application on her smartphone can easily use accelerometer readings to trace her.
We first propose a basic attack that can automatically extract metro-related
data from a large amount of mixed accelerometer readings, and then use an
ensemble interval classifier built from supervised learning to infer the riding
intervals of the user. While this attack is very effective, the supervised
learning part requires the attacker to collect labeled training data for each
station interval, which is a significant amount of effort. To improve the
efficiency of our attack, we further propose a semi-supervised learning
approach, which only requires the attacker to collect labeled data for a very
small number of station intervals with obvious characteristics. We conduct real
experiments on a metro line in a major city. The results show that the
inferring accuracy could reach 89\% and 92\% if the user takes the metro for 4
and 6 stations, respectively.
|
[
{
"version": "v1",
"created": "Fri, 22 May 2015 05:59:55 GMT"
}
] | 2015-05-25T00:00:00 |
[
[
"Hua",
"Jingyu",
""
],
[
"Shen",
"Zhenyu",
""
],
[
"Zhong",
"Sheng",
""
]
] |
new_dataset
| 0.959 |
1505.06003
|
Julien Ponge
|
Julien Ponge (CITI), Fr\'ed\'eric Le Mou\"el (CITI), Nicolas Stouls
(CITI), Yannick Loiseau (LIMOS)
|
Opportunities for a Truffle-based Golo Interpreter
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Golo is a simple dynamically-typed language for the Java Virtual Machine.
Initially implemented as an ahead-of-time compiler to JVM bytecode, it leverages
invokedynamic and JSR 292 method handles to implement a reasonably efficient
runtime. Truffle is emerging as a framework for building interpreters for JVM
languages with self-specializing AST nodes. Combined with the Graal compiler,
Truffle offers a simple path towards writing efficient interpreters while
keeping the engineering efforts balanced. The Golo project is interested in
experimenting with a Truffle interpreter in the future, as it would provide
an interesting comparison between invokedynamic and Truffle for
building a language runtime.
|
[
{
"version": "v1",
"created": "Fri, 22 May 2015 09:29:05 GMT"
}
] | 2015-05-25T00:00:00 |
[
[
"Ponge",
"Julien",
"",
"CITI"
],
[
"Mouël",
"Frédéric Le",
"",
"CITI"
],
[
"Stouls",
"Nicolas",
"",
"CITI"
],
[
"Loiseau",
"Yannick",
"",
"LIMOS"
]
] |
new_dataset
| 0.998274 |
1407.7073
|
Weinan Zhang
|
Weinan Zhang, Shuai Yuan, Jun Wang, Xuehua Shen
|
Real-Time Bidding Benchmarking with iPinYou Dataset
|
UCL Technical Report 2014
| null | null | null |
cs.GT cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Being an emerging paradigm for display advertising, Real-Time Bidding (RTB)
drives the focus of the bidding strategy from context to users' interest by
computing a bid for each impression in real time. The data mining work and
particularly the bidding strategy development becomes crucial in this
performance-driven business. However, researchers in computational advertising
area have been suffering from lack of publicly available benchmark datasets,
which are essential to compare different algorithms and systems. Fortunately, a
leading Chinese advertising technology company iPinYou decided to release the
dataset used in its global RTB algorithm competition in 2013. The dataset
includes logs of ad auctions, bids, impressions, clicks, and final conversions.
These logs reflect the market environment as well as form a complete path of
users' responses from advertisers' perspective. This dataset directly supports
the experiments of some important research problems such as bid optimisation
and CTR estimation. To the best of our knowledge, this is the first publicly
available dataset on RTB display advertising. Thus, it is valuable for
reproducible research and understanding the whole RTB ecosystem. In this paper,
we first provide the detailed statistical analysis of this dataset. Then we
introduce the research problem of bid optimisation in RTB and the simple yet
comprehensive evaluation protocol. Besides, a series of benchmark experiments
are also conducted, including both click-through rate (CTR) estimation and bid
optimisation.
|
[
{
"version": "v1",
"created": "Fri, 25 Jul 2014 23:20:29 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Aug 2014 11:22:17 GMT"
},
{
"version": "v3",
"created": "Thu, 21 May 2015 18:20:30 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Zhang",
"Weinan",
""
],
[
"Yuan",
"Shuai",
""
],
[
"Wang",
"Jun",
""
],
[
"Shen",
"Xuehua",
""
]
] |
new_dataset
| 0.999743 |
1503.08642
|
Sina Sanjari
|
S. Sanjari, S. Ozgoli
|
Generalized Integral Sliding Mode Manifold Design: A Sum of Squares
Approach
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a general form of integral sliding mode manifold, and
proposes an algorithmic approach based on Sum of Squares (SOS) programming to
design generalized integral sliding mode manifold and controller for nonlinear
systems with both matched and unmatched uncertainties. The approach also gives
a sufficient condition for successful design of controller and manifold
parameters. The results are then verified by several simulation
examples and two practical applications, namely the glucose-insulin regulation
problem and the unicycle dynamics steering problem.
|
[
{
"version": "v1",
"created": "Mon, 30 Mar 2015 11:25:57 GMT"
},
{
"version": "v2",
"created": "Thu, 21 May 2015 08:19:27 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Sanjari",
"S.",
""
],
[
"Ozgoli",
"S.",
""
]
] |
new_dataset
| 0.986596 |
1505.05343
|
H. Paul Keeler Dr
|
Johannes G\"obel, Paul Keeler, Anthony E. Krzesinski, Peter G. Taylor
|
Bitcoin Blockchain Dynamics: the Selfish-Mine Strategy in the Presence
of Propagation Delay
|
14 pages, 13 Figures. Submitted to a peer-reviewed journal
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of the `selfish-mine' strategy proposed by Eyal and Sirer, we
study the effect of propagation delay on the evolution of the Bitcoin
blockchain. First, we use a simplified Markov model that tracks the contrasting
states of belief about the blockchain of a small pool of miners and the `rest
of the community' to establish that the use of block-hiding strategies, such as
selfish-mine, causes the rate of production of orphan blocks to increase. Then
we use a spatial Poisson process model to study values of Eyal and Sirer's
parameter $\gamma$, which denotes the proportion of the honest community that
mine on a previously-secret block released by the pool in response to the
mining of a block by the honest community. Finally, we use discrete-event
simulation to study the behaviour of a network of Bitcoin miners, a proportion
of which is colluding in using the selfish-mine strategy, under the assumption
that there is a propagation delay in the communication of information between
miners.
|
[
{
"version": "v1",
"created": "Wed, 20 May 2015 12:27:20 GMT"
},
{
"version": "v2",
"created": "Thu, 21 May 2015 16:32:04 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Göbel",
"Johannes",
""
],
[
"Keeler",
"Paul",
""
],
[
"Krzesinski",
"Anthony E.",
""
],
[
"Taylor",
"Peter G.",
""
]
] |
new_dataset
| 0.996932 |
1505.05579
|
Ehab Mahmoud Mohamed Dr.
|
Ehab Mahmoud Mohamed, Kei Sakaguchi, and Seiichi Sampei
|
Millimeter Wave Beamforming Based on WiFi Fingerprinting in Indoor
Environment
|
6 pages, 9 Figures, 1 Table, ICC workshops 2015
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter Wave (mm-w), especially the 60 GHz band, has been receiving much
attention as a key enabler for the 5G cellular networks. Beamforming (BF) is
tremendously used with mm-w transmissions to enhance the link quality and
overcome the channel impairments. The current mm-w BF mechanism, proposed by
the IEEE 802.11ad standard, is mainly based on exhaustive searching the best
transmit (TX) and receive (RX) antenna beams. This BF mechanism requires a very
high setup time, which makes it difficult to coordinate multiple
mm-w Access Points (APs) in mobile channel conditions, as 5G requires. In
this paper, we propose a mm-w BF mechanism, which enables a mm-w AP to estimate
the best beam to communicate with a User Equipment (UE) using statistical
learning. In this scheme, the fingerprints of the UE WiFi signal and mm-w best
beam identification (ID) are collected in an offline phase on a grid of
arbitrary learning points (LPs) in target environments. Therefore, by just
comparing the current UE WiFi signal with the pre-stored UE WiFi fingerprints,
the mm-w AP can immediately estimate the best beam to communicate with the UE
at its current position. The proposed mm-w BF estimates the best beam with
a very small setup time and performance comparable to that of exhaustive-search
BF.
|
[
{
"version": "v1",
"created": "Thu, 21 May 2015 01:43:03 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Mohamed",
"Ehab Mahmoud",
""
],
[
"Sakaguchi",
"Kei",
""
],
[
"Sampei",
"Seiichi",
""
]
] |
new_dataset
| 0.998715 |
1505.05601
|
S L Happy
|
S L Happy, Swarnadip Chatterjee, and Debdoot Sheet
|
Unsupervised Segmentation of Overlapping Cervical Cell Cytoplasm
|
2 pages, 2 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Overlapping of cervical cells and poor contrast of cell cytoplasm are the
major issues in accurate detection and segmentation of cervical cells. An
unsupervised cell segmentation approach is presented here. Cell clump
segmentation was carried out using the extended depth of field (EDF) image
created from the images of different focal planes. A modified Otsu method with
prior class weights is proposed for accurate segmentation of nuclei from the
cell clumps. The cell cytoplasm was further segmented from cell clump depending
upon the number of nucleus detected in that cell clump. Level set model was
used for cytoplasm segmentation.
|
[
{
"version": "v1",
"created": "Thu, 21 May 2015 04:24:48 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Happy",
"S L",
""
],
[
"Chatterjee",
"Swarnadip",
""
],
[
"Sheet",
"Debdoot",
""
]
] |
new_dataset
| 0.970755 |
1505.05642
|
Chinnappillai Durairajan
|
J. Mahalakshmi and C. Durairajan
|
On the $Z_q$-MacDonald Code and its Weight Distribution of dimension 3
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we determine the parameters of the $\mathbb{Z}_q$-MacDonald code
of dimension $k$ for any positive integer $q \geq 2.$ Further, we obtain
the weight distribution of the $\mathbb{Z}_q$-MacDonald code of dimension 3 and
give the weight distribution of the $\mathbb{Z}_q$-Simplex
code of dimension 3 for any positive integer $q \geq 2.$
|
[
{
"version": "v1",
"created": "Thu, 21 May 2015 08:19:11 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Mahalakshmi",
"J.",
""
],
[
"Durairajan",
"C.",
""
]
] |
new_dataset
| 0.998857 |
1505.05717
|
Jesper S{\o}rensen
|
Jesper H. S{\o}rensen and Elisabeth de Carvalho
|
Pilot Decontamination Through Pilot Sequence Hopping in Massive MIMO
Systems
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work concerns wireless cellular networks applying massive multiple-input
multiple-output (MIMO) technology. In such a system, the base station in a
given cell is equipped with a very large number (hundreds or even thousands) of
antennas and serves multiple users. Estimation of the channel from the base
station to each user is performed at the base station using an uplink pilot
sequence. Such a channel estimation procedure suffers from pilot contamination.
Orthogonal pilot sequences are used in a given cell but, due to the shortage of
orthogonal sequences, the same pilot sequences must be reused in neighboring
cells, causing pilot contamination. The solution presented in this paper
suppresses pilot contamination, without the need for coordination among cells.
Pilot sequence hopping is performed at each transmission slot, which provides a
randomization of the pilot contamination. Using a modified Kalman filter, it is
shown that such randomized contamination can be significantly suppressed.
Comparisons with conventional estimation methods show that the mean squared
error can be lowered as much as an order of magnitude at low mobility.
|
[
{
"version": "v1",
"created": "Thu, 21 May 2015 13:08:01 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Sørensen",
"Jesper H.",
""
],
[
"de Carvalho",
"Elisabeth",
""
]
] |
new_dataset
| 0.995146 |
1505.05726
|
Jesper S{\o}rensen
|
Jesper H. S{\o}rensen and Elisabeth de Carvalho and Petar Popovski
|
Massive MIMO for Crowd Scenarios: A Solution Based on Random Access
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new approach to intra-cell pilot contamination in
crowded massive MIMO scenarios. The approach relies on two essential properties
of a massive MIMO system, namely near-orthogonality between user channels and
near-stability of channel powers. Signal processing techniques that take
advantage of these properties allow us to view a set of contaminated pilot
signals as a graph code on which iterative belief propagation can be performed.
This makes it possible to decontaminate pilot signals and increase the
throughput of the system. The proposed solution exhibits high performance with
large improvements over the conventional method. The improvements come at the
price of an increased error rate, although this effect is shown to decrease
significantly for increasing number of antennas at the base station.
|
[
{
"version": "v1",
"created": "Thu, 21 May 2015 13:25:14 GMT"
}
] | 2015-05-22T00:00:00 |
[
[
"Sørensen",
"Jesper H.",
""
],
[
"de Carvalho",
"Elisabeth",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.994934 |
1404.6613
|
Shibashis Guha
|
Shibashis Guha, Chinmay Narayan and S. Arun-Kumar
|
Reducing Clocks in Timed Automata while Preserving Bisimulation
|
28 pages including reference, 8 figures, full version of paper
accepted in CONCUR 2014
| null | null | null |
cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Model checking timed automata becomes increasingly complex with the increase
in the number of clocks. Hence it is desirable that one constructs an automaton
with the minimum number of clocks possible. The problem of checking whether
there exists a timed automaton with a smaller number of clocks such that the
timed language accepted by the original automaton is preserved is known to be
undecidable. In this paper, we give a construction, which for any given timed
automaton produces a timed bisimilar automaton with the least number of clocks.
Further, we show that such an automaton with the minimum possible number of
clocks can be constructed in time that is doubly exponential in the number of
clocks of the original automaton.
|
[
{
"version": "v1",
"created": "Sat, 26 Apr 2014 06:00:58 GMT"
},
{
"version": "v2",
"created": "Wed, 20 May 2015 11:51:20 GMT"
}
] | 2015-05-21T00:00:00 |
[
[
"Guha",
"Shibashis",
""
],
[
"Narayan",
"Chinmay",
""
],
[
"Arun-Kumar",
"S.",
""
]
] |
new_dataset
| 0.994177 |
1504.04171
|
Chuangqiang Hu
|
Chuangqiang Hu and Chang-An Zhao
|
Multi-point Codes from Generalized Hermitian Curves
|
16 pages, 4 figures
| null | null | null |
cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate multi-point algebraic geometric codes defined from curves
related to the generalized Hermitian curve introduced by Alp Bassa, Peter
Beelen, Arnaldo Garcia, and Henning Stichtenoth. Our main result is a
basis of the Riemann-Roch space of a series of divisors, which can be used to
construct multi-point codes explicitly. These codes turn out to have nice
properties similar to those of Hermitian codes, for example, they are easy to
describe, to encode and decode. It is shown that the duals are also such codes
and an explicit formula is given. In particular, this formula enables one to
calculate the parameters of these codes. Finally, we apply our results to
obtain linear codes attaining new records on the parameters. A new
record-giving $ [234,141,\geqslant 59] $-code over $ \mathbb{F}_{27} $ is
presented as one of the examples.
|
[
{
"version": "v1",
"created": "Tue, 14 Apr 2015 03:21:23 GMT"
},
{
"version": "v2",
"created": "Wed, 20 May 2015 05:31:44 GMT"
}
] | 2015-05-21T00:00:00 |
[
[
"Hu",
"Chuangqiang",
""
],
[
"Zhao",
"Chang-An",
""
]
] |
new_dataset
| 0.999185 |
1504.06755
|
Pingmei Xu
|
Pingmei Xu, Krista A Ehinger, Yinda Zhang, Adam Finkelstein, Sanjeev
R. Kulkarni, Jianxiong Xiao
|
TurkerGaze: Crowdsourcing Saliency with Webcam based Eye Tracking
|
9 pages, 14 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional eye tracking requires specialized hardware, which means
collecting gaze data from many observers is expensive, tedious and slow.
Therefore, existing saliency prediction datasets are order-of-magnitudes
smaller than typical datasets for other vision recognition tasks. The small
size of these datasets limits the potential for training data intensive
algorithms, and causes overfitting in benchmark evaluation. To address this
deficiency, this paper introduces a webcam-based gaze tracking system that
supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical
Turk (AMTurk). By a combination of careful algorithm and gaming protocol
design, our system obtains eye tracking data for saliency prediction comparable
to data gathered in a traditional lab setting, with relatively lower cost and
less effort on the part of the researchers. Using this tool, we build a
saliency dataset for a large number of natural images. We will open-source our
tool and provide a web server where researchers can upload their images to get
eye tracking results from AMTurk.
|
[
{
"version": "v1",
"created": "Sat, 25 Apr 2015 19:26:47 GMT"
},
{
"version": "v2",
"created": "Wed, 20 May 2015 18:51:23 GMT"
}
] | 2015-05-21T00:00:00 |
[
[
"Xu",
"Pingmei",
""
],
[
"Ehinger",
"Krista A",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Finkelstein",
"Adam",
""
],
[
"Kulkarni",
"Sanjeev R.",
""
],
[
"Xiao",
"Jianxiong",
""
]
] |
new_dataset
| 0.99916 |
1505.05364
|
AlexanderArtikis
|
Alexander Artikis and Marek Sergot and Georgios Paliouras
|
Reactive Reasoning with the Event Calculus
|
International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 9-15, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562. 2014,1
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Systems for symbolic event recognition accept as input a stream of
time-stamped events from sensors and other computational devices, and seek to
identify high-level composite events, collections of events that satisfy some
pattern. RTEC is an Event Calculus dialect with novel implementation and
'windowing' techniques that allow for efficient event recognition, scalable to
large data streams. RTEC can deal with applications where event data arrive
with a (variable) delay from, and are revised by, the underlying sources. RTEC
can update already recognised events and recognise new events when data arrive
with a delay or following data revision. Our evaluation shows that RTEC can
support real-time event recognition and is capable of meeting the performance
requirements identified in a recent survey of event processing use cases.
|
[
{
"version": "v1",
"created": "Wed, 20 May 2015 13:26:36 GMT"
}
] | 2015-05-21T00:00:00 |
[
[
"Artikis",
"Alexander",
""
],
[
"Sergot",
"Marek",
""
],
[
"Paliouras",
"Georgios",
""
]
] |
new_dataset
| 0.99795 |
1505.05366
|
Joerg Puehrer
|
Gerhard Brewka and Stefan Ellmauthaler and J\"org P\"uhrer
|
Multi-Context Systems for Reactive Reasoning in Dynamic Environments
|
International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 23-29, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show in this paper how managed multi-context systems (mMCSs) can be turned
into a reactive formalism suitable for continuous reasoning in dynamic
environments. We extend mMCSs with (abstract) sensors and define the notion of
a run of the extended systems. We then show how typical problems arising in
online reasoning can be addressed: handling potentially inconsistent sensor
input, modeling intelligent forms of forgetting, selective integration of
knowledge, and controlling the reasoning effort spent by contexts, like setting
contexts to an idle mode. We also investigate the complexity of some important
related decision problems and discuss different design choices which are given
to the knowledge engineer.
|
[
{
"version": "v1",
"created": "Wed, 20 May 2015 13:28:11 GMT"
}
] | 2015-05-21T00:00:00 |
[
[
"Brewka",
"Gerhard",
""
],
[
"Ellmauthaler",
"Stefan",
""
],
[
"Pührer",
"Jörg",
""
]
] |
new_dataset
| 0.95293 |
1505.05367
|
Stefan Ellmauthaler
|
Stefan Ellmauthaler and J\"org P\"uhrer
|
Asynchronous Multi-Context Systems
|
International Workshop on Reactive Concepts in Knowledge
Representation (ReactKnow 2014), co-located with the 21st European Conference
on Artificial Intelligence (ECAI 2014). Proceedings of the International
Workshop on Reactive Concepts in Knowledge Representation (ReactKnow 2014),
pages 31-37, technical report, ISSN 1430-3701, Leipzig University, 2014.
http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150562
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present asynchronous multi-context systems (aMCSs), which
provide a framework for loosely coupling different knowledge representation
formalisms that allows for online reasoning in a dynamic environment. Systems
of this kind may interact with the outside world via input and output streams
and may therefore react to a continuous flow of external information. In
contrast to recent proposals, contexts in an aMCS communicate with each other
in an asynchronous way which fits the needs of many application domains and is
beneficial for scalability. The federal semantics of aMCSs renders our
framework an integration approach rather than a knowledge representation
formalism itself. We illustrate the introduced concepts by means of an example
scenario dealing with rescue services. In addition, we compare aMCSs to
reactive multi-context systems and describe how to simulate the latter with our
novel approach.
|
[
{
"version": "v1",
"created": "Wed, 20 May 2015 13:29:45 GMT"
}
] | 2015-05-21T00:00:00 |
[
[
"Ellmauthaler",
"Stefan",
""
],
[
"Pührer",
"Jörg",
""
]
] |
new_dataset
| 0.963386 |
1306.6311
|
Pascal Giard
|
Pascal Giard, Gabi Sarkis, Claude Thibeault, Warren J. Gross
|
Fast Software Polar Decoders
|
5 pages, 3 figures, submitted to ICASSP 2014
| null |
10.1109/ICASSP.2014.6855069
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Among error-correcting codes, polar codes are the first to provably achieve
channel capacity with an explicit construction. In this work, we present
software implementations of a polar decoder that leverage the capabilities of
modern general-purpose processors to achieve an information throughput in
excess of 200 Mbps, a throughput well suited for software-defined-radio
applications. We also show that, for a similar error-correction performance,
the throughput of polar decoders both surpasses that of LDPC decoders targeting
general-purpose processors and is competitive with that of state-of-the-art
software LDPC decoders running on graphic processing units.
|
[
{
"version": "v1",
"created": "Wed, 26 Jun 2013 18:36:26 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jan 2014 17:47:54 GMT"
}
] | 2015-05-20T00:00:00 |
[
[
"Giard",
"Pascal",
""
],
[
"Sarkis",
"Gabi",
""
],
[
"Thibeault",
"Claude",
""
],
[
"Gross",
"Warren J.",
""
]
] |
new_dataset
| 0.996055 |
1307.7154
|
Gabi Sarkis
|
Gabi Sarkis, Pascal Giard, Alexander Vardy, Claude Thibeault, and
Warren J. Gross
|
Fast Polar Decoders: Algorithm and Implementation
|
Submitted to the IEEE Journal on Selected Areas in Communications
(JSAC) on May 15th, 2013. 11 pages, 7 figures, 6 tables
|
IEEE Journal on Selected Areas in Communications, vol. 32, no. 5,
May 2014, pp. 946-957
|
10.1109/JSAC.2014.140514
| null |
cs.AR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polar codes provably achieve the symmetric capacity of a memoryless channel
while having an explicit construction. This work aims to increase the
throughput of polar decoder hardware by an order of magnitude relative to the
state of the art successive-cancellation decoder. We present an algorithm,
architecture, and FPGA implementation of a gigabit-per-second polar decoder.
|
[
{
"version": "v1",
"created": "Fri, 26 Jul 2013 20:07:04 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Dec 2013 23:38:49 GMT"
}
] | 2015-05-20T00:00:00 |
[
[
"Sarkis",
"Gabi",
""
],
[
"Giard",
"Pascal",
""
],
[
"Vardy",
"Alexander",
""
],
[
"Thibeault",
"Claude",
""
],
[
"Gross",
"Warren J.",
""
]
] |
new_dataset
| 0.997575 |
1401.4650
|
Vincent Vajnovszki
|
Antonio Bernini, Stefano Bilotta, Renzo Pinzani, Vincent Vajnovszki
|
A Gray Code for cross-bifix-free sets
| null | null |
10.1017/S0960129515000067
| null |
cs.IT cs.DM math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A cross-bifix-free set of words is a set in which no prefix of any length of
any word is the suffix of any other word in the set. A construction of
cross-bifix-free sets has recently been proposed by Chee {\it et al.} in 2013
within a constant factor of optimality. We propose a \emph{trace partitioned}
Gray code for these cross-bifix-free sets and a CAT algorithm generating it.
|
[
{
"version": "v1",
"created": "Sun, 19 Jan 2014 10:07:55 GMT"
}
] | 2015-05-20T00:00:00 |
[
[
"Bernini",
"Antonio",
""
],
[
"Bilotta",
"Stefano",
""
],
[
"Pinzani",
"Renzo",
""
],
[
"Vajnovszki",
"Vincent",
""
]
] |
new_dataset
| 0.997698 |
1412.6043
|
Pascal Giard
|
Pascal Giard and Gabi Sarkis and Claude Thibeault and Warren J. Gross
|
A 237 Gbps Unrolled Hardware Polar Decoder
|
4 pages, 3 figures
|
Electronics Lett., vol. 51, issue 10, May 2015, pp. 762-763
|
10.1049/el.2014.4432
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter we present a new architecture for a polar decoder using a
reduced complexity successive cancellation decoding algorithm. This novel
fully-unrolled, deeply-pipelined architecture is capable of achieving a coded
throughput of over 237 Gbps for a (1024,512) polar code implemented using an
FPGA. This decoder is two orders of magnitude faster than state-of-the-art
polar decoders.
|
[
{
"version": "v1",
"created": "Thu, 18 Dec 2014 20:07:22 GMT"
}
] | 2015-05-20T00:00:00 |
[
[
"Giard",
"Pascal",
""
],
[
"Sarkis",
"Gabi",
""
],
[
"Thibeault",
"Claude",
""
],
[
"Gross",
"Warren J.",
""
]
] |
new_dataset
| 0.998676 |
1005.5413
|
Irina Kostitsyna
|
Irina Kostitsyna, Valentin Polishchuk
|
Simple Wriggling is Hard unless You Are a Fat Hippo
|
A shorter version is to be presented at FUN 2010
| null |
10.1007/978-3-642-13122-6_27
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove that it is NP-hard to decide whether two points in a polygonal
domain with holes can be connected by a wire. This implies that finding any
approximation to the shortest path for a long snake amidst polygonal obstacles
is NP-hard. On the positive side, we show that the snake's problem is
"length-tractable": if the snake is "fat", i.e., its length/width ratio is
small, the shortest path can be computed in polynomial time.
|
[
{
"version": "v1",
"created": "Fri, 28 May 2010 22:39:23 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Kostitsyna",
"Irina",
""
],
[
"Polishchuk",
"Valentin",
""
]
] |
new_dataset
| 0.994852 |
1007.3353
|
Laurent Hubert
|
Laurent Hubert (INRIA - IRISA), Nicolas Barr\'e (INRIA - IRISA),
Fr\'ed\'eric Besson (INRIA - IRISA), Delphine Demange (INRIA - IRISA), Thomas
Jensen (INRIA - IRISA), Vincent Monfort (INRIA - IRISA), David Pichardie
(INRIA - IRISA), Tiphaine Turpin (INRIA - IRISA)
|
Sawja: Static Analysis Workshop for Java
| null |
The International Conference on Formal Verification of
Object-Oriented Software 2010.13 (2010) 253--267
|
10.1007/978-3-642-18070-5_7
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Static analysis is a powerful technique for automatic verification of
programs but raises major engineering challenges when developing a full-fledged
analyzer for a realistic language such as Java. This paper describes the Sawja
library: a static analysis framework fully compliant with Java 6 which provides
OCaml modules for efficiently manipulating Java bytecode programs. We present
the main features of the library, including (i) efficient functional
data-structures for representing programs with implicit sharing and lazy
parsing, (ii) an intermediate stack-less representation, and (iii) fast
computation and manipulation of complete programs.
|
[
{
"version": "v1",
"created": "Tue, 20 Jul 2010 07:03:59 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Hubert",
"Laurent",
"",
"INRIA - IRISA"
],
[
"Barré",
"Nicolas",
"",
"INRIA - IRISA"
],
[
"Besson",
"Frédéric",
"",
"INRIA - IRISA"
],
[
"Demange",
"Delphine",
"",
"INRIA - IRISA"
],
[
"Jensen",
"Thomas",
"",
"INRIA - IRISA"
],
[
"Monfort",
"Vincent",
"",
"INRIA - IRISA"
],
[
"Pichardie",
"David",
"",
"INRIA - IRISA"
],
[
"Turpin",
"Tiphaine",
"",
"INRIA - IRISA"
]
] |
new_dataset
| 0.986402 |
1008.4420
|
Sandor P. Fekete
|
Sandor P. Fekete, Chris Gray, Alexander Kroeller
|
Evacuation of rectilinear polygons
|
15 pages, 7 figures; to appear in COCOA 2010
| null |
10.1007/978-3-642-17458-2_3
| null |
cs.DS cs.CC cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the problem of creating fast evacuation plans for buildings
that are modeled as grid polygons, possibly containing exponentially many
cells. We study this problem in two contexts: the ``confluent'' context in
which the routes to exits remain fixed over time, and the ``non-confluent''
context in which routes may change. Confluent evacuation plans are simpler to
carry out, as they allocate contiguous regions to exits; non-confluent
allocation can possibly create faster evacuation plans. We give results on the
hardness of creating the evacuation plans and strongly polynomial algorithms
for finding confluent evacuation plans when the building has two exits. We also
give a pseudo-polynomial time algorithm for non-confluent evacuation plans.
Finally, we show that the worst-case bound between confluent and non-confluent
plans is 2-2/(k+1).
|
[
{
"version": "v1",
"created": "Thu, 26 Aug 2010 03:07:28 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Fekete",
"Sandor P.",
""
],
[
"Gray",
"Chris",
""
],
[
"Kroeller",
"Alexander",
""
]
] |
new_dataset
| 0.997966 |
1011.2894
|
Michael Pinsker
|
Manuel Bodirsky and Michael Pinsker
|
Schaefer's theorem for graphs
|
54 pages
| null | null | null |
cs.CC math.CO math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Schaefer's theorem is a complexity classification result for so-called
Boolean constraint satisfaction problems: it states that every Boolean
constraint satisfaction problem is either contained in one out of six classes
and can be solved in polynomial time, or is NP-complete.
We present an analog of this dichotomy result for the propositional logic of
graphs instead of Boolean logic. In this generalization of Schaefer's result,
the input consists of a set W of variables and a conjunction \Phi\ of
statements ("constraints") about these variables in the language of graphs,
where each statement is taken from a fixed finite set \Psi\ of allowed
quantifier-free first-order formulas; the question is whether \Phi\ is
satisfiable in a graph.
We prove that either \Psi\ is contained in one out of 17 classes of graph
formulas and the corresponding problem can be solved in polynomial time, or the
problem is NP-complete. This is achieved by a universal-algebraic approach,
which in turn allows us to use structural Ramsey theory. To apply the
universal-algebraic approach, we formulate the computational problems under
consideration as constraint satisfaction problems (CSPs) whose templates are
first-order definable in the countably infinite random graph. Our method to
classify the computational complexity of those CSPs is based on a
Ramsey-theoretic analysis of functions acting on the random graph, and we
develop general tools suitable for such an analysis which are of independent
mathematical interest.
|
[
{
"version": "v1",
"created": "Fri, 12 Nov 2010 12:15:48 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Oct 2011 17:46:16 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Apr 2013 22:24:07 GMT"
},
{
"version": "v4",
"created": "Tue, 14 Oct 2014 16:19:40 GMT"
},
{
"version": "v5",
"created": "Sun, 17 May 2015 16:42:27 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Bodirsky",
"Manuel",
""
],
[
"Pinsker",
"Michael",
""
]
] |
new_dataset
| 0.9708 |
1302.6814
|
David Heckerman
|
David Heckerman, John S. Breese
|
A New Look at Causal Independence
|
Appears in Proceedings of the Tenth Conference on Uncertainty in
Artificial Intelligence (UAI1994)
| null | null |
UAI-P-1994-PG-286-292
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heckerman (1993) defined causal independence in terms of a set of temporal
conditional independence statements. These statements formalized certain types
of causal interaction where (1) the effect is independent of the order that
causes are introduced and (2) the impact of a single cause on the effect does
not depend on what other causes have previously been applied. In this paper, we
introduce an equivalent atemporal characterization of causal independence
based on a functional representation of the relationship between causes and the
effect. In this representation, the interaction between causes and effect can
be written as a nested decomposition of functions. Causal independence can be
exploited by representing this decomposition in the belief network, resulting
in representations that are more efficient for inference than general causal
models. We present empirical results showing the benefits of a
causal-independence representation for belief-network inference.
|
[
{
"version": "v1",
"created": "Wed, 27 Feb 2013 14:16:44 GMT"
},
{
"version": "v2",
"created": "Sun, 17 May 2015 00:03:17 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Heckerman",
"David",
""
],
[
"Breese",
"John S.",
""
]
] |
new_dataset
| 0.985379 |
1402.5303
|
Andrew Winslow
|
Eli Fox-Epstein, Csaba T\'oth, and Andrew Winslow
|
Diffuse Reflection Radius in a Simple Polygon
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is shown that every simple polygon in general position with $n$ walls can
be illuminated from a single point light source $s$ after at most $\lfloor
(n-2)/4\rfloor$ diffuse reflections, and this bound is the best possible. A
point $s$ with this property can be computed in $O(n\log n)$ time. It is also
shown that the minimum number of diffuse reflections needed to illuminate a
given simple polygon from a single point can be approximated up to an additive
constant in polynomial time.
|
[
{
"version": "v1",
"created": "Fri, 21 Feb 2014 14:33:25 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2014 18:33:16 GMT"
},
{
"version": "v3",
"created": "Sun, 17 May 2015 22:26:29 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Fox-Epstein",
"Eli",
""
],
[
"Tóth",
"Csaba",
""
],
[
"Winslow",
"Andrew",
""
]
] |
new_dataset
| 0.986745 |
1408.2098
|
Jiho Song
|
Jiho Song, Junil Choi, Stephen G. Larew, David J. Love, Timothy A.
Thomas, and Amitava Ghosh
|
Adaptive Millimeter Wave Beam Alignment for Dual-Polarized MIMO Systems
|
12 pages, 9 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fifth generation wireless systems are expected to employ multiple antenna
communication at millimeter wave (mmWave) frequencies using small cells within
heterogeneous cellular networks. The high path loss of mmWave as well as
physical obstructions make communication challenging. To compensate for the
severe path loss, mmWave systems may employ a beam alignment algorithm that
facilitates highly directional transmission by aligning the beam direction of
multiple antenna arrays. This paper discusses a mmWave system employing
dual-polarized antennas. First, we propose a practical soft-decision beam
alignment (soft-alignment) algorithm that exploits orthogonal polarizations. By
sounding the orthogonal polarizations in parallel, the equality criterion of
the Welch bound for training sequences is relaxed. Second, the analog
beamforming system is adapted to the directional characteristics of the mmWave
link assuming a high Ricean K-factor and poor scattering environment. The
soft-alignment algorithm enables the mmWave system to align innumerable narrow
beams to the channel subspace in an attempt to effectively scan the mmWave
channel. Third, we propose a method to efficiently adapt the number of channel sounding
observations to the specific channel environment based on an approximate
probability of beam misalignment. Simulation results show the proposed
soft-alignment algorithm with adaptive sounding time effectively scans the
channel subspace of a mobile user by exploiting polarization diversity.
|
[
{
"version": "v1",
"created": "Sat, 9 Aug 2014 13:49:31 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jan 2015 16:08:49 GMT"
},
{
"version": "v3",
"created": "Mon, 18 May 2015 14:17:37 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Song",
"Jiho",
""
],
[
"Choi",
"Junil",
""
],
[
"Larew",
"Stephen G.",
""
],
[
"Love",
"David J.",
""
],
[
"Thomas",
"Timothy A.",
""
],
[
"Ghosh",
"Amitava",
""
]
] |
new_dataset
| 0.99324 |
1501.04301
|
Heba Abdelnasser
|
Heba Abdelnasser, Moustafa Youssef, Khaled A. Harras
|
WiGest: A Ubiquitous WiFi-based Gesture Recognition System
|
Accepted for publication in INFOCOM 2015
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present WiGest: a system that leverages changes in WiFi signal strength to
sense in-air hand gestures around the user's mobile device. Compared to related
work, WiGest is unique in using standard WiFi equipment, with no
modifications, and no training for gesture recognition. The system identifies
different signal change primitives, from which we construct mutually
independent gesture families. These families can be mapped to distinguishable
application actions. We address various challenges including cleaning the noisy
signals, gesture type and attributes detection, reducing false positives due to
interfering humans, and adapting to changing signal polarity. We implement a
proof-of-concept prototype using off-the-shelf laptops and extensively evaluate
the system in both an office environment and a typical apartment with standard
WiFi access points. Our results show that WiGest detects the basic primitives
with an accuracy of 87.5% using a single AP only, including through-the-wall
non-line-of-sight scenarios. This accuracy increases to 96% using three
overheard APs. In addition, when evaluating the system using a multi-media
player application, we achieve a classification accuracy of 96%. This accuracy
is robust to the presence of other interfering humans, highlighting WiGest's
ability to enable future ubiquitous hands-free gesture-based interaction with
mobile devices.
|
[
{
"version": "v1",
"created": "Sun, 18 Jan 2015 14:08:08 GMT"
},
{
"version": "v2",
"created": "Mon, 18 May 2015 13:36:45 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Abdelnasser",
"Heba",
""
],
[
"Youssef",
"Moustafa",
""
],
[
"Harras",
"Khaled A.",
""
]
] |
new_dataset
| 0.996575 |
1503.07659
|
Andreas Kl\"ockner
|
Andreas Kl\"ockner
|
Loo.py: From Fortran to performance via transformation and substitution
rules
|
ARRAY 2015 - 2nd ACM SIGPLAN International Workshop on Libraries,
Languages and Compilers for Array Programming (ARRAY 2015)
| null |
10.1145/2774959.2774969
| null |
cs.PL cs.CE cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A large amount of numerically-oriented code is written and is being written
in legacy languages. Much of this code could, in principle, make good use of
data-parallel throughput-oriented computer architectures. Loo.py, a
transformation-based programming system targeted at GPUs and general
data-parallel architectures, provides a mechanism for user-controlled
transformation of array programs. This transformation capability is designed to
not just apply to programs written specifically for Loo.py, but also those
imported from other languages such as Fortran. It eases the trade-off between
achieving high performance, portability, and programmability by allowing the
user to apply a large and growing family of transformations to an input
program. These transformations are expressed in and used from Python and may be
applied from a variety of settings, including a pragma-like manner from other
languages.
|
[
{
"version": "v1",
"created": "Thu, 26 Mar 2015 09:40:56 GMT"
},
{
"version": "v2",
"created": "Sat, 16 May 2015 08:14:18 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Klöckner",
"Andreas",
""
]
] |
new_dataset
| 0.999653 |
1505.04134
|
Georgios Rokos
|
Georgios Rokos and Gerard J. Gorman and Paul H. J. Kelly
|
An Interrupt-Driven Work-Sharing For-Loop Scheduler
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a parallel for-loop scheduler which is based on
work-stealing principles but runs under a completely cooperative scheme. POSIX
signals are used by idle threads to interrupt left-behind workers, which in
turn decide what portion of their workload can be given to the requester. We
call this scheme Interrupt-Driven Work-Sharing (IDWS). This article describes
how IDWS works, how it can be integrated into any POSIX-compliant OpenMP
implementation and how a user can manually replace OpenMP parallel for-loops
with IDWS in existing POSIX-compliant C++ applications. Additionally, we
measure its performance using both a synthetic benchmark with varying
distributions of workload across the iteration space and a real-life
application on Sandy Bridge and Xeon Phi systems. Regardless the workload
distribution and the underlying hardware, IDWS is always the best or among the
best-performing strategies, providing a good all-around solution to the
scheduling-choice dilemma.
|
[
{
"version": "v1",
"created": "Fri, 15 May 2015 17:30:17 GMT"
},
{
"version": "v2",
"created": "Mon, 18 May 2015 16:04:00 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Rokos",
"Georgios",
""
],
[
"Gorman",
"Gerard J.",
""
],
[
"Kelly",
"Paul H. J.",
""
]
] |
new_dataset
| 0.998651 |
1505.04141
|
Adriana Kovashka
|
Adriana Kovashka and Devi Parikh and Kristen Grauman
|
WhittleSearch: Interactive Image Search with Relative Attribute Feedback
|
Published in the International Journal of Computer Vision (IJCV),
April 2015. The final publication is available at Springer via
http://dx.doi.org/10.1007/s11263-015-0814-0
|
International Journal of Computer Vision, 1573-1405 (2015,
Springer)
|
10.1007/s11263-015-0814-0
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel mode of feedback for image search, where a user describes
which properties of exemplar images should be adjusted in order to more closely
match his/her mental model of the image sought. For example, perusing image
results for a query "black shoes", the user might state, "Show me shoe images
like these, but sportier." Offline, our approach first learns a set of ranking
functions, each of which predicts the relative strength of a nameable attribute
in an image (e.g., sportiness). At query time, the system presents the user
with a set of exemplar images, and the user relates them to his/her target
image with comparative statements. Using a series of such constraints in the
multi-dimensional attribute space, our method iteratively updates its relevance
function and re-ranks the database of images. To determine which exemplar
images receive feedback from the user, we present two variants of the approach:
one where the feedback is user-initiated and another where the feedback is
actively system-initiated. In either case, our approach allows a user to
efficiently "whittle away" irrelevant portions of the visual feature space,
using semantic language to precisely communicate her preferences to the system.
We demonstrate our technique for refining image search for people, products,
and scenes, and we show that it outperforms traditional binary relevance
feedback in terms of search speed and accuracy. In addition, the ordinal nature
of relative attributes helps make our active approach efficient -- both
computationally for the machine when selecting the reference images, and for
the user by requiring less user interaction than conventional passive and
active methods.
|
[
{
"version": "v1",
"created": "Fri, 15 May 2015 18:03:12 GMT"
},
{
"version": "v2",
"created": "Mon, 18 May 2015 13:52:40 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Kovashka",
"Adriana",
""
],
[
"Parikh",
"Devi",
""
],
[
"Grauman",
"Kristen",
""
]
] |
new_dataset
| 0.954762 |
1505.04197
|
AbdelRahim Elmadany
|
AbdelRahim A. Elmadany, Sherif M. Abdou, Mervat Gheith
|
Arabic Inquiry-Answer Dialogue Acts Annotation Schema
|
IOSR Journal of Engineering (IOSRJEN),Vol. 04, Issue 12 (December
2014),V2. arXiv admin note: text overlap with arXiv:1505.03084
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an annotation schema as part of an effort to create a manually
annotated corpus for Arabic dialogue language understanding including spoken
dialogue and written "chat" dialogue for inquiry-answer domain. The proposed
schema handles mainly the request and response acts that occurs frequently in
inquiry-answer debate conversations expressing request services, suggests, and
offers. We applied the proposed schema on 83 Arabic inquiry-answer dialogues.
|
[
{
"version": "v1",
"created": "Fri, 15 May 2015 20:13:16 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Elmadany",
"AbdelRahim A.",
""
],
[
"Abdou",
"Sherif M.",
""
],
[
"Gheith",
"Mervat",
""
]
] |
new_dataset
| 0.998179 |
1505.04235
|
Therese Biedl
|
Therese Biedl
|
Triangulating planar graphs while keeping the pathwidth small
|
To appear (without the appendix) at WG 2015
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Any simple planar graph can be triangulated, i.e., we can add edges to it,
without adding multi-edges, such that the result is planar and all faces are
triangles. In this paper, we study the problem of triangulating a planar graph
without increasing the pathwidth by much.
We show that if a planar graph has pathwidth $k$, then we can triangulate it
so that the resulting graph has pathwidth $O(k)$ (where the factors are 1, 8
and 16 for 3-connected, 2-connected and arbitrary graphs). With similar
techniques, we also show that any outer-planar graph of pathwidth $k$ can be
turned into a maximal outer-planar graph of pathwidth at most $4k+4$. The
previously best known result here was $16k+15$.
|
[
{
"version": "v1",
"created": "Sat, 16 May 2015 02:36:22 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Biedl",
"Therese",
""
]
] |
new_dataset
| 0.984381 |
1505.04357
|
Gerard Howard
|
Gerard David Howard, Larry Bull, Ben de Lacy Costello, Andrew
Adamatzky, Ella Gale
|
Evolving Spiking Networks with Variable Resistive Memories
|
27 pages
|
Evolutionary Computation, Spring 2014, Vol. 22, No. 1, Pages
79-103 Posted Online February 7, 2014
|
10.1162/EVCO_a_00103
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic computing is a brainlike information processing paradigm that
requires adaptive learning mechanisms. A spiking neuro-evolutionary system is
used for this purpose; plastic resistive memories are implemented as synapses
in spiking neural networks. The evolutionary design process exploits parameter
self-adaptation and allows the topology and synaptic weights to be evolved for
each network in an autonomous manner. Variable resistive memories are the focus
of this research; each synapse has its own conductance profile which modifies
the plastic behaviour of the device and may be altered during evolution. These
variable resistive networks are evaluated on a noisy robotic dynamic-reward
scenario against two static resistive memories and a system containing standard
connections only. Results indicate that the extra behavioural degrees of
freedom available to the networks incorporating variable resistive memories
enable them to outperform the comparative synapse types.
|
[
{
"version": "v1",
"created": "Sun, 17 May 2015 05:23:07 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Howard",
"Gerard David",
""
],
[
"Bull",
"Larry",
""
],
[
"Costello",
"Ben de Lacy",
""
],
[
"Adamatzky",
"Andrew",
""
],
[
"Gale",
"Ella",
""
]
] |
new_dataset
| 0.97318 |
1505.04401
|
Sebastian Fass
|
Sebastian Fass and Kevin Turner
|
The quantitative and qualitative content analysis of marketing
literature for innovative information systems: the Aldrich Archive
|
Published at arXiv on May 17th 2015, 11 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Aldrich Archive is a collection of technical and marketing material
covering the period from 1977 to 2000; the physical documents are in the
process of being digitised and made available on the internet. The Aldrich
Archive includes contemporaneous case studies of end-user computer systems that
were used for marketing purposes. This paper analyses these case studies of
innovative information systems 1980 - 1990 using a quantitative and qualitative
content analysis. The major aim of this research paper is to find out how
innovative information systems were marketed in the decade from 1980 to 1990.
The paper uses a double-step content analysis and does not focus on one method
of content analysis only. The reason for choosing this approach is to combine
the advantages of both quantitative and qualitative content analysis. The
results of the quantitative content analysis indicated that the focus of the
marketing material would be on information management / information supply. But
the qualitative analysis revealed that the focus is on monetary advantages. The
strong focus on monetary advantages of information technology seems typical for
the 1980s and 1990s. In 1987, Robert Solow stated that "you can see the
computer age everywhere but in the productivity statistics". This paradox caused a lot of
discussion: since the introduction of the IT productivity paradox the business
value of information technology has been the topic of many debates by
practitioners as well as by academics.
|
[
{
"version": "v1",
"created": "Sun, 17 May 2015 15:07:04 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Fass",
"Sebastian",
""
],
[
"Turner",
"Kevin",
""
]
] |
new_dataset
| 0.967368 |
1505.04437
|
Norman Gray
|
Norman Gray
|
Xoxa: a lightweight approach to normalizing and signing XML
|
For submission to 'Software: Practice and Experience'
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cryptographically signing XML, and normalizing it prior to signing, are
forbiddingly intricate problems in the general case. This is largely because of
the complexities of the XML Information Set. We can define a more aggressive
normalization, which dispenses with distinctions and features which are
unimportant in a large class of cases, and thus define a straightforwardly
implementable and portable signature framework.
|
[
{
"version": "v1",
"created": "Sun, 17 May 2015 19:18:04 GMT"
}
] | 2015-05-19T00:00:00 |
[
[
"Gray",
"Norman",
""
]
] |
new_dataset
| 0.989094 |
1001.2888
|
Brad Shutters
|
Jack H. Lutz, Brad Shutters
|
Approximate Self-Assembly of the Sierpinski Triangle
| null | null |
10.1007/978-3-642-13962-8_32
| null |
cs.CC cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Tile Assembly Model is a Turing universal model that Winfree introduced
in order to study the nanoscale self-assembly of complex (typically aperiodic)
DNA crystals. Winfree exhibited a self-assembly that tiles the first quadrant
of the Cartesian plane with specially labeled tiles appearing at exactly the
positions of points in the Sierpinski triangle. More recently, Lathrop, Lutz,
and Summers proved that the Sierpinski triangle cannot self-assemble in the
"strict" sense in which tiles are not allowed to appear at positions outside
the target structure. Here we investigate the strict self-assembly of sets that
approximate the Sierpinski triangle. We show that every set that does strictly
self-assemble disagrees with the Sierpinski triangle on a set with fractal
dimension at least that of the Sierpinski triangle (roughly 1.585), and that no
subset of the Sierpinski triangle with fractal dimension greater than 1
strictly self-assembles. We show that our bounds are tight, even when
restricted to supersets of the Sierpinski triangle, by presenting a strict
self-assembly that adds communication fibers to the fractal structure without
disturbing it. To verify this strict self-assembly we develop a generalization
of the local determinism method of Soloveichik and Winfree.
|
[
{
"version": "v1",
"created": "Mon, 18 Jan 2010 08:41:11 GMT"
}
] | 2015-05-18T00:00:00 |
[
[
"Lutz",
"Jack H.",
""
],
[
"Shutters",
"Brad",
""
]
] |
new_dataset
| 0.983385 |
1002.4832
|
Jugal Garg
|
Bharat Adsul, Ch. Sobhan Babu, Jugal Garg, Ruta Mehta, Milind Sohoni
|
Nash equilibria in Fisher market
| null | null |
10.1007/978-3-642-16170-4_4
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Much work has been done on the computation of market equilibria. However due
to strategic play by buyers, it is not clear whether these are actually
observed in the market. Motivated by the observation that a buyer may derive a
better payoff by feigning a different utility function and thereby manipulating
the Fisher market equilibrium, we formulate the {\em Fisher market game} in
which buyers strategize by posing different utility functions. We show that
existence of a {\em conflict-free allocation} is a necessary condition for the
Nash equilibria (NE) and also sufficient for the symmetric NE in this game.
There are many NE with very different payoffs, and the Fisher equilibrium
payoff is captured at a symmetric NE. We provide a complete polyhedral
characterization of all the NE for the two-buyer market game. Surprisingly, all
the NE of this game turn out to be symmetric and the corresponding payoffs
constitute a piecewise linear concave curve. We also study the correlated
equilibria of this game and show that third-party mediation does not help to
achieve a better payoff than NE payoffs.
|
[
{
"version": "v1",
"created": "Thu, 25 Feb 2010 17:16:39 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Mar 2010 16:12:28 GMT"
},
{
"version": "v3",
"created": "Tue, 11 May 2010 05:13:15 GMT"
}
] | 2015-05-18T00:00:00 |
[
[
"Adsul",
"Bharat",
""
],
[
"Babu",
"Ch. Sobhan",
""
],
[
"Garg",
"Jugal",
""
],
[
"Mehta",
"Ruta",
""
],
[
"Sohoni",
"Milind",
""
]
] |
new_dataset
| 0.975238 |
1306.1461
|
Bob Sturm
|
Bob L. Sturm
|
The GTZAN dataset: Its contents, its faults, their effects on
evaluation, and its future use
|
29 pages, 7 figures, 6 tables, 128 references
| null |
10.1080/09298215.2014.894533
| null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The GTZAN dataset appears in at least 100 published works, and is the
most-used public dataset for evaluation in machine listening research for music
genre recognition (MGR). Our recent work, however, shows GTZAN has several
faults (repetitions, mislabelings, and distortions), which challenge the
interpretability of any result derived using it. In this article, we disprove
the claims that all MGR systems are affected in the same ways by these faults,
and that the performances of MGR systems in GTZAN are still meaningfully
comparable since they all face the same faults. We identify and analyze the
contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has
been used in MGR research, and find few indications that its faults have been
known and considered. Finally, we rigorously study the effects of its faults on
evaluating five different MGR systems. The lesson is not to banish GTZAN, but
to use it with consideration of its contents.
|
[
{
"version": "v1",
"created": "Thu, 6 Jun 2013 16:30:44 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jun 2013 16:57:39 GMT"
}
] | 2015-05-18T00:00:00 |
[
[
"Sturm",
"Bob L.",
""
]
] |
new_dataset
| 0.999888 |
1404.4963
|
Daniel Lemire
|
Antonio Badia and Daniel Lemire
|
Functional dependencies with null markers
|
accepted at the Computer Journal (April 2014)
|
Computer Journal 58 (5), 2015
|
10.1093/comjnl/bxu039
| null |
cs.DB
|
http://creativecommons.org/licenses/by/3.0/
|
Functional dependencies are an integral part of database design. However,
they are only defined when we exclude null markers. Yet we commonly use null
markers in practice. To bridge this gap between theory and practice,
researchers have proposed definitions of functional dependencies over relations
with null markers. Though sound, these definitions lack some qualities that we
find desirable. For example, some fail to satisfy Armstrong's axioms---while
these axioms are part of the foundation of common database methodologies. We
propose a set of properties that any extension of functional dependencies over
relations with null markers should possess. We then propose two new extensions
having these properties. These extensions attempt to allow null markers where
they make sense to practitioners.
They both support Armstrong's axioms and provide realizable null markers: at
any time, some or all of the null markers can be replaced by actual values
without causing an anomaly. Our proposals may improve database designs.
|
[
{
"version": "v1",
"created": "Sat, 19 Apr 2014 15:46:01 GMT"
},
{
"version": "v2",
"created": "Thu, 15 May 2014 14:54:17 GMT"
}
] | 2015-05-18T00:00:00 |
[
[
"Badia",
"Antonio",
""
],
[
"Lemire",
"Daniel",
""
]
] |
new_dataset
| 0.990974 |
1408.6182
|
Tomasz Kociumaka
|
Maxim Babenko, Pawe{\l} Gawrychowski, Tomasz Kociumaka, Tatiana
Starikovskaya
|
Wavelet Trees Meet Suffix Trees
|
33 pages, 5 figures; preliminary version published at SODA 2015
| null |
10.1137/1.9781611973730.39
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an improved wavelet tree construction algorithm and discuss its
applications to a number of rank/select problems for integer keys and strings.
Given a string of length n over an alphabet of size $\sigma\leq n$, our
method builds the wavelet tree in $O(n \log \sigma/ \sqrt{\log{n}})$ time,
improving upon the state-of-the-art algorithm by a factor of $\sqrt{\log n}$.
As a consequence, given an array of n integers we can construct in $O(n
\sqrt{\log n})$ time a data structure consisting of $O(n)$ machine words and
capable of answering rank/select queries for the subranges of the array in
$O(\log n / \log \log n)$ time. This is a $\log \log n$-factor improvement in
query time compared to Chan and P\u{a}tra\c{s}cu and a $\sqrt{\log n}$-factor
improvement in construction time compared to Brodal et al.
Next, we switch to stringological context and propose a novel notion of
wavelet suffix trees. For a string w of length n, this data structure occupies
$O(n)$ words, takes $O(n \sqrt{\log n})$ time to construct, and simultaneously
captures the combinatorial structure of substrings of w while enabling
efficient top-down traversal and binary search. In particular, with a wavelet
suffix tree we are able to answer in $O(\log |x|)$ time the following two
natural analogues of rank/select queries for suffixes of substrings: for
substrings x and y of w count the number of suffixes of x that are
lexicographically smaller than y, and for a substring x of w and an integer k,
find the k-th lexicographically smallest suffix of x.
We further show that wavelet suffix trees allow to compute a
run-length-encoded Burrows-Wheeler transform of a substring x of w in $O(s \log
|x|)$ time, where s denotes the length of the resulting run-length encoding.
This answers a question by Cormode and Muthukrishnan, who considered an
analogous problem for Lempel-Ziv compression.
|
[
{
"version": "v1",
"created": "Tue, 26 Aug 2014 16:44:53 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Aug 2014 15:37:54 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Oct 2014 10:58:04 GMT"
},
{
"version": "v4",
"created": "Fri, 15 May 2015 17:17:18 GMT"
}
] | 2015-05-18T00:00:00 |
[
[
"Babenko",
"Maxim",
""
],
[
"Gawrychowski",
"Paweł",
""
],
[
"Kociumaka",
"Tomasz",
""
],
[
"Starikovskaya",
"Tatiana",
""
]
] |
new_dataset
| 0.997451 |
1505.04157
|
Vijaykumar S
|
Vijaykumar S., Saravanakumar S.G., M. Balamurugan
|
Unique Sense: Smart Computing Prototype
|
6 Pages
|
Elsevier Procedia Computer Science 50, 2015 Pages 223-228
|
10.1016/j.procs.2015.04.056
| null |
cs.OH
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Unique sense: Smart computing prototype is a part of unique sense computing
architecture, which delivers an alternative solution for today's computing
architecture. This computing is one step towards future-generation needs, which
brings extended support to the ubiquitous environment. This smart computing
prototype is the light weight compact architecture which is designed to satisfy
all the needs of this society. The proposed solution is based on the hybrid
combination of cutting edge technologies and techniques from the various
layers. In addition, it achieves a low-cost, eco-friendly architecture that meets
all the levels of people's needs.
|
[
{
"version": "v1",
"created": "Sat, 9 May 2015 02:38:57 GMT"
}
] | 2015-05-18T00:00:00 |
[
[
"S.",
"Vijaykumar",
""
],
[
"G.",
"Saravanakumar S.",
""
],
[
"Balamurugan",
"M.",
""
]
] |
new_dataset
| 0.999377 |
1306.4807
|
Xinhua Wang
|
Xinhua Wang, Bijan Shirinzadeh
|
Nonlinear continuous integral-derivative observer
|
21 pages, 12 figures
|
Nonlinear Dynamics, vol. 77, no. 3, 2014, 793-806
|
10.1007/s11071-014-1341-1
| null |
cs.SY math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a high-order nonlinear continuous integral-derivative observer
is presented based on finite-time stability and singular perturbation
technique. The proposed integral-derivative observer can not only obtain the
multiple integrals of a signal, but can also estimate the derivatives.
Conditions are given ensuring finite-time stability for the presented
integral-derivative observer, and the stability and robustness in time domain
are analysed. The merits of the presented integral-derivative observer include
its synchronous estimation of integrals and derivatives, finite-time stability,
ease of parameter selection, sufficient stochastic noise rejection and almost
no drift phenomenon. The theoretical results are confirmed by computational
analysis and simulations.
|
[
{
"version": "v1",
"created": "Thu, 20 Jun 2013 10:22:11 GMT"
},
{
"version": "v2",
"created": "Thu, 14 May 2015 11:34:25 GMT"
}
] | 2015-05-15T00:00:00 |
[
[
"Wang",
"Xinhua",
""
],
[
"Shirinzadeh",
"Bijan",
""
]
] |
new_dataset
| 0.979655 |
1504.02799
|
Michael Menz
|
Michael Menz, Justin Wang, Jiyang Xie
|
Discrete All-Pay Bidding Games
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an all-pay auction, only one bidder wins but all bidders must pay the
auctioneer. All-pay bidding games arise from attaching a similar bidding
structure to traditional combinatorial games to determine which player moves
next. In contrast to the established theory of single-pay bidding games,
optimal play involves choosing bids from some probability distribution that
will guarantee a minimum probability of winning. In this manner, all-pay
bidding games wed the underlying concepts of economic and combinatorial games.
We present several results on the structures of optimal strategies in these
games. We then give a fast algorithm for computing such strategies for a large
class of all-pay bidding games. The methods presented provide a framework for
further development of the theory of all-pay bidding games.
|
[
{
"version": "v1",
"created": "Fri, 3 Apr 2015 03:13:01 GMT"
},
{
"version": "v2",
"created": "Wed, 13 May 2015 23:29:51 GMT"
}
] | 2015-05-15T00:00:00 |
[
[
"Menz",
"Michael",
""
],
[
"Wang",
"Justin",
""
],
[
"Xie",
"Jiyang",
""
]
] |
new_dataset
| 0.985917 |
1505.03578
|
Ali Borji
|
Ali Borji, Mengyang Feng, Huchuan Lu
|
Vanishing Point Attracts Eye Movements in Scene Free-viewing
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Eye movements are crucial in understanding complex scenes. By predicting
where humans look in natural scenes, we can understand how they perceive scenes
and prioritize information for further high-level processing. Here, we study
the effect of a particular type of scene structural information known as
vanishing point and show that human gaze is attracted to vanishing point
regions. We then build a combined model of traditional saliency and vanishing
point channel that outperforms state of the art saliency models.
|
[
{
"version": "v1",
"created": "Thu, 14 May 2015 00:22:35 GMT"
}
] | 2015-05-15T00:00:00 |
[
[
"Borji",
"Ali",
""
],
[
"Feng",
"Mengyang",
""
],
[
"Lu",
"Huchuan",
""
]
] |
new_dataset
| 0.985066 |
1505.03580
|
Francisco Mota
|
Francisco Mota
|
Splitting Root-Locus Plot into Algebraic Plane Curves
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we show how to split the Root Locus plot for an irreducible
rational transfer function into several individual algebraic plane curves, like
lines, circles, conics, etc. To achieve this goal we use results of a previous
paper of the author to represent the Root Locus as an algebraic variety
generated by an ideal over a polynomial ring, and whose primary decomposition
allows us to isolate the plane curves that compose the Root Locus. As a
by-product, using the concept of duality in projective algebraic geometry, we
show how to obtain the dual curve of each plane curve that composes the Root
Locus and unite them to obtain what we denominate the "Algebraic Dual Root
Locus".
|
[
{
"version": "v1",
"created": "Thu, 14 May 2015 00:28:53 GMT"
}
] | 2015-05-15T00:00:00 |
[
[
"Mota",
"Francisco",
""
]
] |
new_dataset
| 0.987129 |
1505.03581
|
Ali Borji
|
Ali Borji, Laurent Itti
|
CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Saliency modeling has been an active research area in computer vision for
about two decades. Existing state of the art models perform very well in
predicting where people look in natural scenes. There is, however, the risk
that these models may have been overfitting themselves to available small scale
biased datasets, thus trapping the progress in a local minimum. To gain a
deeper insight regarding current issues in saliency modeling and to better
gauge progress, we recorded eye movements of 120 observers while they freely
viewed a large number of naturalistic and artificial images. Our stimuli
include 4000 images: 200 from each of 20 categories covering different types
of scenes such as Cartoons, Art, Objects, Low resolution images, Indoor,
Outdoor, Jumbled, Random, and Line drawings. We analyze some basic properties
of this dataset and compare some successful models. We believe that our dataset
opens new challenges for the next generation of saliency models and helps
conduct behavioral studies on bottom-up visual attention.
|
[
{
"version": "v1",
"created": "Thu, 14 May 2015 00:34:43 GMT"
}
] | 2015-05-15T00:00:00 |
[
[
"Borji",
"Ali",
""
],
[
"Itti",
"Laurent",
""
]
] |
new_dataset
| 0.999502 |
1505.03736
|
Ruzanna Chitchyan
|
Ruzanna Chitchyan, Joost Noppen and Iris Groher
|
Sustainability in Software Product Lines: Report on Discussion Panel at
SPLC 2014
|
4 pages, notes on panel held at Software Product Lines Conference -
SPLC 2014
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sustainability (defined as 'the capacity to keep up') encompasses a wide set
of aims: ranging from energy efficient software products (environmental
sustainability), reduction of software development and maintenance costs
(economic sustainability), to employee and end-user wellbeing (social
sustainability). In this report we explore the role that sustainability plays
in software product line engineering (SPL). The report is based on the
'Sustainability in Software Product Lines' panel held at SPLC 2014.
|
[
{
"version": "v1",
"created": "Thu, 14 May 2015 14:29:27 GMT"
}
] | 2015-05-15T00:00:00 |
[
[
"Chitchyan",
"Ruzanna",
""
],
[
"Noppen",
"Joost",
""
],
[
"Groher",
"Iris",
""
]
] |
new_dataset
| 0.998782 |
0909.3091
|
Xin Liu
|
Xin Liu, John Kountouriotis, Athina P. Petropulu and Kapil R. Dandekar
|
ALOHA With Collision Resolution(ALOHA-CR): Theory and Software Defined
Radio Implementation
| null | null |
10.1109/TSP.2010.2048315
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A cross-layer scheme, namely ALOHA With Collision Resolution (ALOHA-CR), is
proposed for high throughput wireless communications in a cellular scenario.
Transmissions occur in a time-slotted ALOHA-type fashion but with an important
difference: simultaneous transmissions of two users can be successful. If more
than two users transmit in the same slot the collision cannot be resolved and
retransmission is required. If only one user transmits, the transmitted packet
is recovered with some probability, depending on the state of the channel. If
two users transmit the collision is resolved and the packets are recovered by
first over-sampling the collision signal and then exploiting independent
information about the two users that is contained in the signal polyphase
components. The ALOHA-CR throughput is derived under the infinite backlog
assumption and also under the assumption of finite backlog. The contention
probability is determined under these two assumptions in order to maximize the
network throughput and maintain stability. Queuing delay analysis for network
users is also conducted. The performance of ALOHA-CR is demonstrated on the
Wireless Open Access Research Platform (WARP) test-bed containing five software
defined radio nodes. Analysis and test-bed results indicate that ALOHA-CR leads
to significant increase in throughput and reduction of service delays.
|
[
{
"version": "v1",
"created": "Wed, 16 Sep 2009 19:46:12 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Liu",
"Xin",
""
],
[
"Kountouriotis",
"John",
""
],
[
"Petropulu",
"Athina P.",
""
],
[
"Dandekar",
"Kapil R.",
""
]
] |
new_dataset
| 0.998571 |
0910.1427
|
Maurice Jansen
|
Maurice Jansen and Jayalal Sarma M.N
|
Balancing Bounded Treewidth Circuits
| null | null |
10.1007/978-3-642-13182-0_21
| null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Algorithmic tools for graphs of small treewidth are used to address questions
in complexity theory. For both arithmetic and Boolean circuits, it is shown
that any circuit of size $n^{O(1)}$ and treewidth $O(\log^i n)$ can be
simulated by a circuit of width $O(\log^{i+1} n)$ and size $n^c$, where $c =
O(1)$, if $i=0$, and $c=O(\log \log n)$ otherwise. For our main construction,
we prove that multiplicatively disjoint arithmetic circuits of size $n^{O(1)}$
and treewidth $k$ can be simulated by bounded fan-in arithmetic formulas of
depth $O(k^2\log n)$. From this we derive the analogous statement for
syntactically multilinear arithmetic circuits, which strengthens a theorem of
Mahajan and Rao. As another application, we derive that constant width
arithmetic circuits of size $n^{O(1)}$ can be balanced to depth $O(\log n)$,
provided certain restrictions are made on the use of iterated multiplication.
Also from our main construction, we derive that Boolean bounded fan-in circuits
of size $n^{O(1)}$ and treewidth $k$ can be simulated by bounded fan-in
formulas of depth $O(k^2\log n)$. This strengthens in the non-uniform setting
the known inclusion that $SC^0 \subseteq NC^1$. Finally, we apply our
construction to show that {\sc reachability} for directed graphs of bounded
treewidth is in $LogDCFL$.
|
[
{
"version": "v1",
"created": "Thu, 8 Oct 2009 06:56:50 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Jansen",
"Maurice",
""
],
[
"N",
"Jayalal Sarma M.",
""
]
] |
new_dataset
| 0.996866 |
0910.5844
|
Tony Tan
|
Tony Tan
|
On Pebble Automata for Data Languages with Decidable Emptiness Problem
|
An extended abstract of this work has been published in the
proceedings of the 34th International Symposium on Mathematical Foundations
of Computer Science (MFCS) 2009}, Springer, Lecture Notes in Computer Science
5734, pages 712-723
| null |
10.1007/978-3-642-03816-7_60
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study a subclass of pebble automata (PA) for data languages
for which the emptiness problem is decidable. Namely, we introduce the
so-called top view weak PA. Roughly speaking, top view weak PA are weak PA
where the equality test is performed only between the data values seen by the
two most recently placed pebbles. The emptiness problem for this model is
decidable. We also show that it is robust: alternating, nondeterministic and
deterministic top view weak PA have the same recognition power. Moreover, this
model is strong enough to accept all data languages expressible in Linear
Temporal Logic with the future-time operators, augmented with one register
freeze quantifier.
|
[
{
"version": "v1",
"created": "Fri, 30 Oct 2009 11:14:03 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Tan",
"Tony",
""
]
] |
new_dataset
| 0.996773 |
0911.3355
|
Zhi Xu
|
Zhi Xu
|
A Minimal Periods Algorithm with Applications
|
14 pages
| null |
10.1007/978-3-642-13509-5_6
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kosaraju in ``Computation of squares in a string'' briefly described a
linear-time algorithm for computing the minimal squares starting at each
position in a word. Using the same construction of suffix trees, we generalize
his result and describe in detail how to compute in O(k|w|)-time the minimal
k-th power, with period of length larger than s, starting at each position in a
word w for arbitrary exponent $k\geq2$ and integer $s\geq0$. We provide the
complete proof of correctness of the algorithm, which is somehow not completely
clear in Kosaraju's original paper. The algorithm can be used as a sub-routine
to detect certain types of pseudo-patterns in words, which is our original
intention to study the generalization.
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2009 17:34:23 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Xu",
"Zhi",
""
]
] |
new_dataset
| 0.998527 |
0911.4752
|
Yao Yu
|
Yao Yu, Athina P. Petropulu and H. Vincent Poor
|
MIMO Radar Using Compressive Sampling
|
39 pages and 14 figures. Y. Yu, A. P. Petropulu and H. V. Poor, "MIMO
Radar Using Compressive Sampling," IEEE Journal of Selected Topics in Signal
Processing, to appear in Feb. 2010
| null |
10.1109/JSTSP.2009.2038973
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A MIMO radar system is proposed for obtaining angle and Doppler information
on potential targets. Transmitters and receivers are nodes of a small scale
wireless network and are assumed to be randomly scattered on a disk. The
transmit nodes transmit uncorrelated waveforms. Each receive node applies
compressive sampling to the received signal to obtain a small number of
samples, which the node subsequently forwards to a fusion center. Assuming that
the targets are sparsely located in the angle- Doppler space, based on the
samples forwarded by the receive nodes the fusion center formulates an
l1-optimization problem, the solution of which yields target angle and Doppler
information. The proposed approach achieves the superior resolution of MIMO
radar with far fewer samples than required by other approaches. This implies
power savings during the communication phase between the receive nodes and the
fusion center. Performance in the presence of a jammer is analyzed for the case
of slowly moving targets. Issues related to forming the basis matrix that spans
the angle-Doppler space, and for selecting a grid for that space are discussed.
Extensive simulation results are provided to demonstrate the performance of the
proposed approach at different jammer and noise levels.
|
[
{
"version": "v1",
"created": "Wed, 25 Nov 2009 03:19:13 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Yu",
"Yao",
""
],
[
"Petropulu",
"Athina P.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.996266 |
0912.1839
|
Vishal Goyal
|
D. Sharma and V. Singh
|
ICT in Universities of the Western Himalayan Region in India: Status,
Performance- An Assessment
|
International Journal of Computer Science Issues, IJCSI Volume 6,
Issue 2, pp44-52, November 2009
| null |
10.5120/1111-1455
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present paper describes a live project study carried out for the
universities located in the western Himalayan region of India in the year 2009.
The objective of this study is to undertake the task of assessment regarding
initiative, utilization of ICT resources, its performance and impact in these
higher educational institutions/universities. In order to answer these,
initially a basic four-tier framework was prepared, followed by a questionnaire
containing different ICT components in 18 different groups, such as vision,
planning, implementation, ICT infrastructure and related activities exhibiting
performance. Primary data, in the form of feedback on the five-point scale of
the questionnaire, was gathered from six universities of the region. A simple
statistical analysis was undertaken using weighted mean, to assess the ICT
initiative, status and performance of various universities. In the process, a
question related to Performance Indicator was identified from each group, whose
Coefficient of Correlation was calculated. This study suggests that a
progressive vision, planning and initiative regarding academic syllabi, ICT
infrastructure, used in training the skilled human resource, is going to have a
favourable impact through actual placement, research and play a dominant role
at the National and International level.
|
[
{
"version": "v1",
"created": "Wed, 9 Dec 2009 19:00:01 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Sharma",
"D.",
""
],
[
"Singh",
"V.",
""
]
] |
new_dataset
| 0.994826 |
1001.0889
|
Arnaud Labourel
|
Jurek Czyzowicz, David Ilcinkas (LaBRI, INRIA Bordeaux - Sud-Ouest),
Arnaud Labourel (LaBRI), Andrzej Pelc
|
Asynchronous deterministic rendezvous in bounded terrains
| null | null |
10.1007/978-3-642-13284-1_7
| null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two mobile agents (robots) have to meet in an a priori unknown bounded
terrain modeled as a polygon, possibly with polygonal obstacles. Agents are
modeled as points, and each of them is equipped with a compass. Compasses of
agents may be incoherent. Agents construct their routes, but the actual walk of
each agent is decided by the adversary: the movement of the agent can be at
arbitrary speed, the agent may sometimes stop or go back and forth, as long as
the walk of the agent in each segment of its route is continuous, does not
leave it and covers all of it. We consider several scenarios, depending on
three factors: (1) obstacles in the terrain are present, or not, (2) compasses
of both agents agree, or not, (3) agents have or do not have a map of the
terrain with their positions marked. The cost of a rendezvous algorithm is the
worst-case sum of lengths of the agents' trajectories until their meeting. For
each scenario we design a deterministic rendezvous algorithm and analyze its
cost. We also prove lower bounds on the cost of any deterministic rendezvous
algorithm in each case. For all scenarios these bounds are tight.
|
[
{
"version": "v1",
"created": "Wed, 6 Jan 2010 13:25:55 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Czyzowicz",
"Jurek",
"",
"LaBRI, INRIA Bordeaux - Sud-Ouest"
],
[
"Ilcinkas",
"David",
"",
"LaBRI, INRIA Bordeaux - Sud-Ouest"
],
[
"Labourel",
"Arnaud",
"",
"LaBRI"
],
[
"Pelc",
"Andrzej",
""
]
] |
new_dataset
| 0.99935 |
1504.07626
|
Asbj{\o}rn Br{\ae}ndeland
|
Asbj{\o}rn Br{\ae}ndeland
|
Split-by-edges trees
|
The definition of 'ordered SBE-tree' has been added. This corrects an
omission in the previous versions but does not change anything essential.
Some changes have been made to accommodate the addition, and others have been
made to correct minor errors and improve wordings
| null | null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A split-by-edges tree of a graph G on n vertices is a binary tree T where the
root = V(G), every leaf is an independent set in G, and for every other node N
in T with children L and R there is a pair of vertices {u, v} in N such that L
= N - v, R = N - u, and uv is an edge in G. It follows from the definition that
every maximal independent set in G is a leaf in T, and the maximum independent
sets of G are the leaves closest to the root of T.
|
[
{
"version": "v1",
"created": "Tue, 28 Apr 2015 18:39:17 GMT"
},
{
"version": "v2",
"created": "Mon, 4 May 2015 10:55:41 GMT"
},
{
"version": "v3",
"created": "Wed, 13 May 2015 18:59:43 GMT"
}
] | 2015-05-14T00:00:00 |
[
[
"Brændeland",
"Asbjørn",
""
]
] |
new_dataset
| 0.989051 |