id: stringlengths 9-10 | submitter: stringlengths 2-52 ⌀ | authors: stringlengths 4-6.51k | title: stringlengths 4-246 | comments: stringlengths 1-523 ⌀ | journal-ref: stringlengths 4-345 ⌀ | doi: stringlengths 11-120 ⌀ | report-no: stringlengths 2-243 ⌀ | categories: stringlengths 5-98 | license: stringclasses 9 values | abstract: stringlengths 33-3.33k | versions: list | update_date: timestamp[s] | authors_parsed: list | prediction: stringclasses 1 value | probability: float64 0.95-1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1609.08716
|
Juan Francisco Saldarriaga
|
Juan Francisco Saldarriaga (Columbia University), David A. King
(Arizona State University)
|
Access to Taxicabs for Unbanked Households: An Exploratory Analysis in
New York City
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Taxicabs are a critical aspect of the public transit system in New York City.
The yellow cabs that are ubiquitous in Manhattan are as iconic as the city's
subway system, and in recent years green taxicabs were introduced by the city
to improve taxi service in areas outside of the central business districts and
airports. Approximately 500,000 taxi trips are taken daily, carrying about
800,000 passengers, not including other livery firms such as Uber, Lyft or
Carmel. Since 2008 yellow taxis have been able to process fare payments with
credit cards, and credit cards are a growing share of total fare payments.
However, the use of credit cards to pay for taxi fares varies widely across
neighborhoods, and there are strong correlations between cash payments for taxi
fares and the presence of unbanked or underbanked populations. These issues are
of concern for policymakers as approximately ten percent of households in the
city are unbanked, and in some neighborhoods the share of unbanked households
is over 50 percent. In this paper we use multiple datasets to explore taxicab
fare payments by neighborhood and examine how access to taxicab services is
associated with use of conventional banking services. There is a clear spatial
dimension to the propensity of riders to pay cash, and we find that both
immigrant status and being 'unbanked' are strong predictors of cash
transactions for taxicabs. These results have implications for local
regulations of the for-hire vehicle industry, particularly in the context of
the rapid growth of services that require credit cards. Without some type of
cash-based payment option taxi services will isolate certain neighborhoods. At
the very least, existing and new providers of transit services must consider
access to mainstream financial products as part of their equity analyses.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 00:34:02 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Saldarriaga",
"Juan Francisco",
"",
"Columbia University"
],
[
"King",
"David A.",
"",
"Arizona State University"
]
] |
new_dataset
| 0.999745 |
1609.08723
|
Takuya Akiba
|
Takuya Akiba, Yoichi Iwata, Yosuke Sameshima, Naoto Mizuno, Yosuke
Yano
|
Cut Tree Construction from Massive Graphs
|
Short version will appear at ICDM'16
| null | null | null |
cs.DS cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The construction of cut trees (also known as Gomory-Hu trees) for a given
graph enables the minimum-cut size of the original graph to be obtained for any
pair of vertices. Cut trees are a powerful back-end for graph management and
mining, as they support various procedures related to the minimum cut, maximum
flow, and connectivity. However, the crucial drawback with cut trees is the
computational cost of their construction. In theory, a cut tree is built by
applying a maximum flow algorithm $n$ times, where $n$ is the number of
vertices. Therefore, naive implementations of this approach result in cubic
time complexity, which is obviously too slow for today's large-scale graphs. To
address this issue, in the present study, we propose a new cut-tree
construction algorithm tailored to real-world networks. Using a series of
experiments, we demonstrate that the proposed algorithm is several orders of
magnitude faster than previous algorithms and it can construct cut trees for
billion-scale graphs.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 01:49:46 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Akiba",
"Takuya",
""
],
[
"Iwata",
"Yoichi",
""
],
[
"Sameshima",
"Yosuke",
""
],
[
"Mizuno",
"Naoto",
""
],
[
"Yano",
"Yosuke",
""
]
] |
new_dataset
| 0.968924 |
1609.08754
|
Simone Brody
|
Simone Brody (What Works Cities | Results for America), Andel Koester
(What Works Cities | Results for America), Zachary Markovits (What Works
Cities | Results for America), Jacob Phillips (What Works Cities | Results
for America)
|
Moving the Needle: What Works Cities and the use of data and evidence
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bloomberg Philanthropies launched What Works Cities (WWC) in 2015 to help
cities better leverage data and evidence to drive decision-making and improve
residents' lives. Over three years, WWC will work with 100 American cities with
populations between 100,000 and 1,000,000 to measure their state of practice
and provide targeted technical assistance. This paper uses the data obtained
through the WWC discovery process to understand how 67 cities are currently
using data to deliver city services. Our analysis confirms that while cities
possess a strong desire to use data and evidence, government leaders are
constrained in their ability to apply these practices. We find that a city's
stated commitment to using data is the strongest predictor of overall
performance and that strong practice in almost any one specific technical area
of using data to inform decisions is an indicator of strong practices in other
areas. The exception is open data; we find larger cities are more adept at
adopting open data policies and programs, independent of their performance
using data overall. This paper seeks to develop a deeper understanding of the
issues underlying these findings and to continue the conversation on how to
best support cities' efforts in this work.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 03:29:08 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Brody",
"Simone",
"",
"What Works Cities | Results for America"
],
[
"Koester",
"Andel",
"",
"What Works Cities | Results for America"
],
[
"Markovits",
"Zachary",
"",
"What Works\n Cities | Results for America"
],
[
"Phillips",
"Jacob",
"",
"What Works Cities | Results\n for America"
]
] |
new_dataset
| 0.977438 |
1609.08756
|
Wessley Merten
|
Wessley Merten (Oceana), Adam Reyer (Oceana), Jackie Savitz (Oceana),
John Amos (SkyTruth), Paul Woods (SkyTruth), Brian Sullivan (Google)
|
Global Fishing Watch: Bringing Transparency to Global Commercial
Fisheries
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Across all major industrial fishing sectors, overfishing due to overcapacity
and lack of compliance in fishery governance has led to a decline in biomass of
many global fish stocks. Overfishing threatens ocean biodiversity, global food
security, and the livelihoods of law-abiding fishermen. To address this issue,
Global Fishing Watch (GFW) was created to bring transparency to global
fisheries using computer science and big data analytics. A product of a
partnership between Oceana, SkyTruth and Google, GFW uses the Automatic
Identification System, or AIS, to analyze the movement of vessels at sea. AIS
provides vessel location data, and GFW uses this information to track global
vessel movement and apply algorithms to classify vessel behavior as "fishing"
or "non-fishing" activity. Now publicly available, anyone with an internet
connection can monitor when and where trackable commercial fishing appears to
be occurring around the world. Hundreds of millions of people around the world
depend on our ocean for their livelihoods, and many more rely on it for food.
Collectively, the various applications of GFW will help reduce overfishing and
illegal fishing, restore the ocean's abundance, and ensure sustainability
through better monitoring and governance of our marine resources.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 03:32:09 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Merten",
"Wessley",
"",
"Oceana"
],
[
"Reyer",
"Adam",
"",
"Oceana"
],
[
"Savitz",
"Jackie",
"",
"Oceana"
],
[
"Amos",
"John",
"",
"SkyTruth"
],
[
"Woods",
"Paul",
"",
"SkyTruth"
],
[
"Sullivan",
"Brian",
"",
"Google"
]
] |
new_dataset
| 0.96005 |
1609.08765
|
Nadine Levick
|
Nadine Levick (EMS Safety Foundation)
|
iRescU - Data for Social Good Saving Lives Bridging the Gaps in Sudden
Cardiac Arrest Survival
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently every day in the USA 1000 people die of sudden cardiac arrest (SCA)
outside of hospitals or ambulances - before emergency medical help arrives - in
the streets, workplaces, schools and homes of our cities, adults and children.
Brain death commences in 3 minutes, and often the ambulance just can't be there
in time. Citizen cardiopulmonary resuscitation (CPR) and automated external
defibrillator (AED) use can save precious minutes and lives. Using public
access AEDs saves lives in SCA; however, AEDs are used in <2% of cardiac
arrests, though they could save lives in 80% if available, findable, functioning,
and used. The systems problem to solve is that there is no comprehensive,
real-time accessible database of AED locations, and it is also not known
whether AEDs are actually being positioned where they are needed. The iRescU
project is designed to bridge this gap in SCA survival by substantially
augmenting the AED database. It utilizes a combination of AED crowdsourcing and
geolocation, integrated with existing 911 services and with recorded and
projected SCA events based on machine learning, to help make the nearest AED
accessible and available in the setting of an SCA emergency and to identify the
areas of greatest need for AEDs to be positioned in the community. The goal is
to help save lives and address preventable death with a social good approach
and applied big data.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 04:45:53 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Levick",
"Nadine",
"",
"EMS Safety Foundation"
]
] |
new_dataset
| 0.992759 |
1609.08813
|
Mao-Ching Chiu
|
Mao-Ching Chiu and Wei-De Wu
|
Reduced-Complexity SCL Decoding of Multi-CRC-Aided Polar Codes
|
9 pages, 7 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyclic redundancy check (CRC) aided polar codes are capable of achieving
better performance than low-density parity-check (LDPC) codes under the
successive cancelation list (SCL) decoding scheme. However, the SCL decoding
scheme suffers from very high space and time complexities. Especially, the high
space complexity is a major concern for adopting polar codes in modern mobile
communication standards. In this paper, we propose a novel reduced-complexity
successive cancelation list (R-SCL) decoding scheme which is effective to
reduce the space complexity. Simulation results show that, with a (2048, 1024)
CRC-aided polar code, R-SCL decoders with a 25% reduction in space complexity
and an 8% reduction in time complexity can still achieve almost the same
performance levels as standard SCL decoders. To further reduce the
complexity, we propose a multi-CRC coding scheme for polar codes. Simulation
results show that, with a (16384, 8192) multi-CRC-aided polar code, an R-SCL
decoder with about an 85% reduction in space complexity and a 20% reduction in
time complexity incurs a worst-case performance loss of only 0.04 dB.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 08:33:18 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Chiu",
"Mao-Ching",
""
],
[
"Wu",
"Wei-De",
""
]
] |
new_dataset
| 0.987286 |
1609.08874
|
Rakhshan Harifi
|
Rakhshan Harifi, Sama Goliaei
|
A Nondeterministic Model for Abstract Geometrical Computation
| null |
https://lipn.univ-paris13.fr/CIE2016/abstract-booklet.pdf
| null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A signal machine is an abstract geometrical model of computation, proposed
as an extension of one-dimensional cellular automata, in which the discrete
time and space of cellular automata are replaced with continuous time and
space. A signal machine is defined as a set of meta-signals and a
set of rules. A signal machine starts from an initial configuration which is a
set of moving signals. Signals move freely in space until a collision. The
rules of a signal machine specify what happens after a collision, or in other
words, the outgoing signals for each set of colliding signals. Originally, a
signal machine is defined by its rules as a deterministic machine. In this
paper, we introduce the concept of non-deterministic signal machine, which may
contain more than one defined rule for each set of colliding signals. We show
that for a specific class of nondeterministic signal machines, called
k-restricted nondeterministic signal machine, there is a deterministic signal
machine computing the same result as the nondeterministic one, on any given
initial configuration. A k-restricted nondeterministic signal machine is a
nondeterministic signal machine which accepts an input iff it produces a special
accepting signal, has at most two nondeterministic rules for each collision, and
undergoes at most k collisions before any acceptance.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 12:05:56 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Harifi",
"Rakhshan",
""
],
[
"Goliaei",
"Sama",
""
]
] |
new_dataset
| 0.997657 |
1609.08935
|
Sreechakra Goparaju
|
Sreechakra Goparaju, Robert Calderbank
|
Binary Cyclic Codes that are Locally Repairable
|
This 5 page paper appeared in the proceedings of the IEEE
International Symposium on Information Theory (ISIT), 2014
|
2014 IEEE International Symposium on Information Theory, pp.
676-680. IEEE, 2014
|
10.1109/ISIT.2014.6874918
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Codes for storage systems aim to minimize the repair locality, which is the
number of disks (or nodes) that participate in the repair of a single failed
disk. Simultaneously, the code must sustain a high rate, operate on a small
finite field to be practically significant and be tolerant to a large number of
erasures. To this end, we construct new families of binary linear codes that
have an optimal dimension (rate) for a given minimum distance and locality.
Specifically, we construct cyclic codes that are locally repairable for
locality 2 and distances 2, 6 and 10. In doing so, we discover new upper bounds
on the code dimension, and prove the optimality of enabling local repair by
provisioning disjoint groups of disks. Finally, we extend our construction to
build codes that have multiple repair sets for each disk.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2016 14:46:06 GMT"
}
] | 2016-09-29T00:00:00 |
[
[
"Goparaju",
"Sreechakra",
""
],
[
"Calderbank",
"Robert",
""
]
] |
new_dataset
| 0.999871 |
1506.08454
|
Vijil Chenthamarakshan
|
Vijil Chenthamarakshan, Prasad M Desphande, Raghu Krishnapuram,
Ramakrishna Varadarajan, Knut Stolze
|
WYSIWYE: An Algebra for Expressing Spatial and Textual Rules for Visual
Information Extraction
| null | null | null | null |
cs.CL cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The visual layout of a webpage can provide valuable clues for certain types
of Information Extraction (IE) tasks. In traditional rule based IE frameworks,
these layout cues are mapped to rules that operate on the HTML source of the
webpages. In contrast, we have developed a framework in which the rules can be
specified directly at the layout level. This has many advantages, since the
higher level of abstraction leads to simpler extraction rules that are largely
independent of the source code of the page, and, therefore, more robust. It can
also enable specification of new types of rules that are not otherwise
possible. To the best of our knowledge, there is no general framework that
allows declarative specification of information extraction rules based on
spatial layout. Our framework is complementary to traditional text-based rule
frameworks and allows a seamless combination of spatial layout based rules with
traditional text-based rules. We describe the algebra that enables such a
system and its efficient implementation using standard relational and text
indexing features of a relational database. We demonstrate the simplicity and
efficiency of this system for a task involving the extraction of software
system requirements from software product pages.
|
[
{
"version": "v1",
"created": "Sun, 28 Jun 2015 21:17:26 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2016 19:49:41 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Chenthamarakshan",
"Vijil",
""
],
[
"Desphande",
"Prasad M",
""
],
[
"Krishnapuram",
"Raghu",
""
],
[
"Varadarajan",
"Ramakrishna",
""
],
[
"Stolze",
"Knut",
""
]
] |
new_dataset
| 0.996888 |
1601.00397
|
Jesper Pedersen
|
Jesper Pedersen, Alexandre Graell i Amat, Iryna Andriyanova, Fredrik
Br\"annstr\"om
|
Distributed Storage in Mobile Wireless Networks with Device-to-Device
Communication
|
After final editing for publication in TCOM
| null |
10.1109/TCOMM.2016.2605681
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the use of distributed storage (DS) to reduce the communication
cost of content delivery in wireless networks. Content is stored (cached) in a
number of mobile devices using an erasure correcting code. Users retrieve
content from other devices using device-to-device communication or from the
base station (BS), at the expense of higher communication cost. We address the
repair problem when a device storing data leaves the cell. We introduce a
repair scheduling where repair is performed periodically and derive analytical
expressions for the overall communication cost of content download and data
repair as a function of the repair interval. The derived expressions are then
used to evaluate the communication cost entailed by DS using several erasure
correcting codes. Our results show that DS can reduce the communication cost
with respect to the case where content is downloaded only from the BS, provided
that repairs are performed frequently enough. If devices storing content arrive
in the cell, the communication cost using DS is further reduced and, for large
enough arrival rate, it is always beneficial. Interestingly, we show that MDS
codes, which do not perform well for classical DS, can yield a low overall
communication cost in wireless DS.
|
[
{
"version": "v1",
"created": "Mon, 4 Jan 2016 07:31:55 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2016 09:20:06 GMT"
},
{
"version": "v3",
"created": "Wed, 31 Aug 2016 19:55:03 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Sep 2016 07:46:30 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Pedersen",
"Jesper",
""
],
[
"Amat",
"Alexandre Graell i",
""
],
[
"Andriyanova",
"Iryna",
""
],
[
"Brännström",
"Fredrik",
""
]
] |
new_dataset
| 0.971222 |
1603.01303
|
Wonjun Yoon
|
Wonjun Yoon and Sol-A Kim and Jaesik Choi
|
An End-to-End Robot Architecture to Manipulate Non-Physical State
Changes of Objects
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With advances in robotic hardware and intelligent software, humanoid robots
play an important role in various tasks, including service for human assistance
and heavy jobs in hazardous industries. Recent advances in task learning enable
humanoid robots to conduct dexterous manipulation tasks such as grasping
objects and assembling parts of furniture. Operating objects without physical
movements is an even more challenging task for a humanoid robot because the
effects of actions may not be clearly seen in the physical configuration space
and meaningful actions can be very complex over a long time horizon. As an
example, playing a mobile game on a smart device has such challenges because it
involves both swipe actions and complex state transitions inside the smart
device over a long time horizon. In this paper, we solve this problem by
introducing an integrated architecture which connects the end-to-end dataflow
from sensors to actuators in a humanoid robot to operate smart devices. We
implement our integrated architecture on the Baxter Research Robot and
experimentally demonstrate that the robot with our architecture can play a
challenging mobile game, the 2048 game, as accurately as in a simulated
environment.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2016 22:36:02 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2016 10:09:48 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Sep 2016 13:33:25 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Yoon",
"Wonjun",
""
],
[
"Kim",
"Sol-A",
""
],
[
"Choi",
"Jaesik",
""
]
] |
new_dataset
| 0.999346 |
1603.06236
|
Nik Ruskuc
|
Tom Bourne and Nik Ruskuc
|
On the star-height of factor counting languages and their relationship
to Rees zero-matrix semigroups
| null | null | null | null |
cs.FL math.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a word $w$ over a finite alphabet, we consider, in three special cases,
the generalised star-height of the languages in which $w$ occurs as a
contiguous subword (factor) an exact number of times and of the languages in
which $w$ occurs as a contiguous subword modulo a fixed number, and prove that
in each case it is at most one. We use these combinatorial results to show that
any language recognised by a Rees (zero-)matrix semigroup over an abelian group
is of generalised star-height at most one.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2016 16:34:03 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2016 17:01:02 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Bourne",
"Tom",
""
],
[
"Ruskuc",
"Nik",
""
]
] |
new_dataset
| 0.995491 |
1609.08004
|
Jose Rodrigues Jr
|
Bruno Machado, Jonatan Orue, Mauro Arruda, Cleidimar Santos, Diogo
Sarath, Wesley Goncalves, Gercina Silva, Hemerson Pistori, Antonia Roel, Jose
Rodrigues-Jr
|
BioLeaf: a professional mobile application to measure foliar damage
caused by insect herbivory
| null |
Computers and Electronics in Agriculture 129: 1. 44-55 (2016)
|
10.1016/j.compag.2016.09.007
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soybean is one of the ten greatest crops in the world, accounting for
billion-dollar businesses every year. This crop suffers from insect herbivory
that costs millions from producers. Hence, constant monitoring of the crop
foliar damage is necessary to guide the application of insecticides. However,
current methods to measure foliar damage are expensive and dependent on
laboratory facilities, in some cases, depending on complex devices. To cope
with these shortcomings, we introduce an image processing methodology to
measure the foliar damage in soybean leaves. We developed a non-destructive
imaging method based on two techniques, Otsu segmentation and Bezier curves, to
estimate the foliar loss in leaves with or without border damage. We
instantiate our methodology in a mobile application named BioLeaf, which is
freely distributed for smartphone users. We experimented with real-world leaves
collected from a soybean crop in Brazil. Our results demonstrated that BioLeaf
achieves foliar damage quantification with precision comparable to that of
human specialists. With these results, our proposal might assist soybean
producers, reducing the time to measure foliar damage, reducing analytical
costs, and defining a commodity application that is applicable not only to soy,
but also to different crops such as cotton, bean, potato, coffee, and
vegetables.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2016 14:59:50 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2016 01:02:57 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Machado",
"Bruno",
""
],
[
"Orue",
"Jonatan",
""
],
[
"Arruda",
"Mauro",
""
],
[
"Santos",
"Cleidimar",
""
],
[
"Sarath",
"Diogo",
""
],
[
"Goncalves",
"Wesley",
""
],
[
"Silva",
"Gercina",
""
],
[
"Pistori",
"Hemerson",
""
],
[
"Roel",
"Antonia",
""
],
[
"Rodrigues-Jr",
"Jose",
""
]
] |
new_dataset
| 0.999464 |
1609.08313
|
Jun Yang
|
Jun Yang, Zhenhua Tian
|
Unsupervised Co-segmentation of 3D Shapes via Functional Maps
|
14 pages, 8 figures
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an unsupervised method for co-segmentation of a set of 3D shapes
from the same class with the aim of segmenting the input shapes into consistent
semantic parts and establishing their correspondence across the set. Starting
from a meaningful pre-segmentation of each given shape individually, we construct
the correspondence between matching candidate parts and obtain the labels via
functional maps. We then use these labels to mark the input shapes and
obtain the co-segmentation results. The core of our algorithm is to seek an
optimal correspondence between semantically similar parts through functional
maps and mark such shape parts. Experimental results on the benchmark datasets
show the efficiency of this method and comparable accuracy to the
state-of-the-art algorithms.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 08:35:14 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Yang",
"Jun",
""
],
[
"Tian",
"Zhenhua",
""
]
] |
new_dataset
| 0.958164 |
1609.08412
|
Zhiyuan Tang
|
Dong Wang, Zhiyuan Tang, Difei Tang and Qing Chen
|
OC16-CE80: A Chinese-English Mixlingual Database and A Speech
Recognition Baseline
|
O-COCOSDA 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the OC16-CE80 Chinese-English mixlingual speech database which was
released as a main resource for training, development and test for the
Chinese-English mixlingual speech recognition (MixASR-CHEN) challenge on
O-COCOSDA 2016. This database consists of 80 hours of speech signals recorded
from more than 1,400 speakers, where the utterances are in Chinese but each
involves one or several English words. Based on the database and another two
free data resources (THCHS30 and the CMU dictionary), a speech recognition
(ASR) baseline was constructed with the deep neural network-hidden Markov model
(DNN-HMM) hybrid system. We then report the baseline results following the
MixASR-CHEN evaluation rules and demonstrate that OC16-CE80 is a reasonable
data resource for mixlingual research.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 13:25:51 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Wang",
"Dong",
""
],
[
"Tang",
"Zhiyuan",
""
],
[
"Tang",
"Difei",
""
],
[
"Chen",
"Qing",
""
]
] |
new_dataset
| 0.999711 |
1609.08445
|
Lantian Li Mr.
|
Dong Wang, Lantian Li, Difei Tang, Qing Chen
|
AP16-OL7: A Multilingual Database for Oriental Languages and A Language
Recognition Baseline
|
APSIPA ASC 2016
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the AP16-OL7 database which was released as the training and test
data for the oriental language recognition (OLR) challenge on APSIPA 2016.
Based on the database, a baseline system was constructed on the basis of the
i-vector model. We report the baseline results evaluated in various metrics
defined by the AP16-OLR evaluation plan and demonstrate that AP16-OL7 is a
reasonable data resource for multilingual research.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2016 13:50:13 GMT"
}
] | 2016-09-28T00:00:00 |
[
[
"Wang",
"Dong",
""
],
[
"Li",
"Lantian",
""
],
[
"Tang",
"Difei",
""
],
[
"Chen",
"Qing",
""
]
] |
new_dataset
| 0.995695 |
1601.08123
|
Ananthanarayanan Chockalingam
|
Swaroop Jacob, T. Lakshmi Narasimhan, and A. Chockalingam
|
Space-Time Index Modulation
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a new multi-antenna modulation scheme, termed as
{\em space-time index modulation (STIM)}. In STIM, information bits are
conveyed through antenna indexing in the spatial domain, slot indexing in the
time domain, and $M$-ary modulation symbols. A time slot in a given frame can
be used or unused, and the choice of the slots used for transmission conveys
slot index bits. In addition, antenna index bits are conveyed in every used
time slot by activating one among the available antennas. $M$-ary symbols are
sent on the active antenna in a used time slot. We study STIM in a
cyclic-prefixed single-carrier (CPSC) system in frequency-selective fading
channels. It is shown that, for the same spectral efficiency, STIM can achieve
better performance compared to conventional orthogonal frequency division
multiplexing (OFDM). Low-complexity iterative algorithms for the detection of
large-dimensional STIM signals are also presented.
|
[
{
"version": "v1",
"created": "Fri, 29 Jan 2016 14:23:57 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Sep 2016 12:46:21 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Jacob",
"Swaroop",
""
],
[
"Narasimhan",
"T. Lakshmi",
""
],
[
"Chockalingam",
"A.",
""
]
] |
new_dataset
| 0.972978 |
1603.08819
|
Nina Luhmann
|
Nina Luhmann, Manuel Lafond, Annelyse Th\'evenin, A\"ida Ouangraoua,
Roland Wittler and Cedric Chauve
|
The SCJ small parsimony problem for weighted gene adjacencies (Extended
version)
| null | null | null | null |
cs.DS q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconstructing ancestral gene orders in a given phylogeny is a classical
problem in comparative genomics. Most existing methods compare conserved
features in extant genomes in the phylogeny to define potential ancestral gene
adjacencies, and either try to reconstruct all ancestral genomes under a global
evolutionary parsimony criterion, or, focusing on a single ancestral genome,
use a scaffolding approach to select a subset of ancestral gene adjacencies,
generally aiming at reducing the fragmentation of the reconstructed ancestral
genome. In this paper, we describe an exact algorithm for the Small Parsimony
Problem that combines both approaches. We consider that gene adjacencies at
internal nodes of the species phylogeny are weighted, and we introduce an
objective function defined as a convex combination of these weights and the
evolutionary cost under the Single-Cut-or-Join (SCJ) model. The weights of
ancestral gene adjacencies can e.g. be obtained through the recent availability
of ancient DNA sequencing data, which provide a direct hint at the genome
structure of the considered ancestor, or through probabilistic analysis of gene
adjacencies evolution. We show the NP-hardness of our problem variant and
propose a Fixed-Parameter Tractable algorithm based on the Sankoff-Rousseau
dynamic programming algorithm that also allows co-optimal solutions to be sampled.
We apply our approach to mammalian and bacterial data providing different
degrees of complexity. We show that including adjacency weights in the
objective has a significant impact in reducing the fragmentation of the
reconstructed ancestral gene orders.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2016 15:47:57 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2016 12:43:55 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Luhmann",
"Nina",
""
],
[
"Lafond",
"Manuel",
""
],
[
"Thévenin",
"Annelyse",
""
],
[
"Ouangraoua",
"Aïda",
""
],
[
"Wittler",
"Roland",
""
],
[
"Chauve",
"Cedric",
""
]
] |
new_dataset
| 0.975055 |
1606.01847
|
Marcus Rohrbach
|
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor
Darrell, and Marcus Rohrbach
|
Multimodal Compact Bilinear Pooling for Visual Question Answering and
Visual Grounding
|
Accepted to EMNLP 2016
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling textual or visual information with vector representations trained
from large language or visual datasets has been successfully explored in recent
years. However, tasks such as visual question answering require combining these
vector representations with each other. Approaches to multimodal pooling
include element-wise product or sum, as well as concatenation of the visual and
textual representations. We hypothesize that these methods are not as
expressive as an outer product of the visual and textual vectors. As the outer
product is typically infeasible due to its high dimensionality, we instead
propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and
expressively combine multimodal features. We extensively evaluate MCB on the
visual question answering and grounding tasks. We consistently show the benefit
of MCB over ablations without MCB. For visual question answering, we present an
architecture which uses MCB twice, once for predicting attention over spatial
features and again to combine the attended representation with the question
representation. This model outperforms the state-of-the-art on the Visual7W
dataset and the VQA challenge.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2016 17:59:56 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jun 2016 19:52:41 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Sep 2016 01:58:59 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Fukui",
"Akira",
""
],
[
"Park",
"Dong Huk",
""
],
[
"Yang",
"Daylen",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Rohrbach",
"Marcus",
""
]
] |
new_dataset
| 0.966351 |
1609.07597
|
Suriya Singh
|
Suriya Singh and Vijay Kumar
|
DimensionApp : android app to estimate object dimensions
|
Project Report 2014
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this project, we develop an Android app that uses computer vision
techniques to estimate the dimensions of an object present in the field of view.
The app, while compact in size, is accurate to within +/- 5 mm and robust to
touch inputs. We use single-view metrology to compute accurate measurements.
Unlike previous approaches, our technique does not rely on line detection and
can be generalized to any object shape easily.
|
[
{
"version": "v1",
"created": "Sat, 24 Sep 2016 10:32:30 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Singh",
"Suriya",
""
],
[
"Kumar",
"Vijay",
""
]
] |
new_dataset
| 0.994027 |
1609.07826
|
Georgios Georgakis
|
Georgios Georgakis, Md Alimoor Reza, Arsalan Mousavian, Phi-Hung Le,
Jana Kosecka
|
Multiview RGB-D Dataset for Object Instance Detection
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new multi-view RGB-D dataset of nine kitchen scenes,
each containing several objects in realistic cluttered environments including a
subset of objects from the BigBird dataset. The viewpoints of the scenes are
densely sampled and objects in the scenes are annotated with bounding boxes and
in the 3D point cloud. Also, an approach for detection and recognition is
presented, which is comprised of two parts: i) a new multi-view 3D proposal
generation method and ii) the development of several recognition baselines
using AlexNet to score our proposals, which is trained either on crops of the
dataset or on synthetically composited training images. Finally, we compare the
performance of the object proposals and a detection baseline to the Washington
RGB-D Scenes (WRGB-D) dataset and demonstrate that our Kitchen scenes dataset
is more challenging for object detection and recognition. The dataset is
available at: http://cs.gmu.edu/~robot/gmu-kitchens.html.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2016 01:18:56 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Georgakis",
"Georgios",
""
],
[
"Reza",
"Md Alimoor",
""
],
[
"Mousavian",
"Arsalan",
""
],
[
"Le",
"Phi-Hung",
""
],
[
"Kosecka",
"Jana",
""
]
] |
new_dataset
| 0.999846 |
1609.07876
|
Taehwan Kim
|
Taehwan Kim, Jonathan Keane, Weiran Wang, Hao Tang, Jason Riggle,
Gregory Shakhnarovich, Diane Brentari, Karen Livescu
|
Lexicon-Free Fingerspelling Recognition from Video: Data, Models, and
Signer Adaptation
|
arXiv admin note: substantial text overlap with arXiv:1608.08339
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of recognizing video sequences of fingerspelled letters
in American Sign Language (ASL). Fingerspelling comprises a significant but
relatively understudied part of ASL. Recognizing fingerspelling is challenging
for a number of reasons: It involves quick, small motions that are often highly
coarticulated; it exhibits significant variation between signers; and there has
been a dearth of continuous fingerspelling data collected. In this work we
collect and annotate a new data set of continuous fingerspelling videos,
compare several types of recognizers, and explore the problem of signer
variation. Our best-performing models are segmental (semi-Markov) conditional
random fields using deep neural network-based features. In the signer-dependent
setting, our recognizers achieve up to about 92% letter accuracy. The
multi-signer setting is much more challenging, but with neural network
adaptation we achieve up to 83% letter accuracies in this setting.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2016 07:34:24 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Kim",
"Taehwan",
""
],
[
"Keane",
"Jonathan",
""
],
[
"Wang",
"Weiran",
""
],
[
"Tang",
"Hao",
""
],
[
"Riggle",
"Jason",
""
],
[
"Shakhnarovich",
"Gregory",
""
],
[
"Brentari",
"Diane",
""
],
[
"Livescu",
"Karen",
""
]
] |
new_dataset
| 0.998651 |
1609.07955
|
Vassallo Christian
|
Christian Vassallo, Anne-H\'el\`ene Olivier, Philippe Sou\`eres, Armel
Cr\'etual, Olivier Stasse, Julien Pettr\'e
|
How do walkers avoid a mobile robot crossing their way?
| null | null | null | null |
cs.RO physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots and humans have to share the same environment more and more often. To
steer robots in a safe and convenient manner among humans, it is necessary to
understand how humans interact with them. This work focuses on
collision avoidance between a human and a robot during locomotion. Having in
mind previous results on human obstacle avoidance, as well as the description
of the main principles which guide collision avoidance strategies, we observe
how humans adapt a goal-directed locomotion task when they have to interfere
with a mobile robot. Our results show differences in the strategy set by humans
to avoid a robot in comparison with avoiding another human. Humans prefer to
give way to the robot even when they are likely to pass first at the
beginning of the interaction.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2016 12:50:53 GMT"
}
] | 2016-09-27T00:00:00 |
[
[
"Vassallo",
"Christian",
""
],
[
"Olivier",
"Anne-Hélène",
""
],
[
"Souères",
"Philippe",
""
],
[
"Crétual",
"Armel",
""
],
[
"Stasse",
"Olivier",
""
],
[
"Pettré",
"Julien",
""
]
] |
new_dataset
| 0.984997 |
1403.6173
|
Anna Senina
|
Anna Senina and Marcus Rohrbach and Wei Qiu and Annemarie Friedrich
and Sikandar Amin and Mykhaylo Andriluka and Manfred Pinkal and Bernt Schiele
|
Coherent Multi-Sentence Video Description with Variable Level of Detail
|
10 pages
| null |
10.1007/978-3-319-11752-2_15
| null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans can easily describe what they see in a coherent way and at varying
level of detail. However, existing approaches for automatic video description
are mainly focused on single sentence generation and produce descriptions at a
fixed level of detail. In this paper, we address both of these limitations: for
a variable level of detail we produce coherent multi-sentence descriptions of
complex videos. We follow a two-step approach where we first learn to predict a
semantic representation (SR) from video and then generate natural language
descriptions from the SR. To produce consistent multi-sentence descriptions, we
model across-sentence consistency at the level of the SR by enforcing a
consistent topic. We also contribute both to the visual recognition of objects,
proposing a hand-centric approach, and to the robust generation of
sentences using a word lattice. Human judges rate our multi-sentence
descriptions as more readable, correct, and relevant than related work. To
understand the difference between more detailed and shorter descriptions, we
collect and analyze a video description corpus of three levels of detail.
|
[
{
"version": "v1",
"created": "Mon, 24 Mar 2014 22:28:38 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Senina",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Qiu",
"Wei",
""
],
[
"Friedrich",
"Annemarie",
""
],
[
"Amin",
"Sikandar",
""
],
[
"Andriluka",
"Mykhaylo",
""
],
[
"Pinkal",
"Manfred",
""
],
[
"Schiele",
"Bernt",
""
]
] |
new_dataset
| 0.998483 |
1501.00305
|
Arman Farhang
|
Arman Farhang, Nicola Marchetti, Fabricio Figueiredo and Joao Paulo
Miranda
|
Massive MIMO and Waveform Design for 5th Generation Wireless
Communication Systems
|
6 pages, 2 figures, 1st International Conference on 5G for Ubiquitous
Connectivity
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article reviews existing related work and identifies the main challenges
in the key 5G area at the intersection of waveform design and large-scale
multiple antenna systems, also known as Massive MIMO. The property of
self-equalization is introduced for Filter Bank Multicarrier (FBMC)-based
Massive MIMO, which can reduce the number of subcarriers required by the
system. It is also shown that the blind channel tracking property of FBMC can
be used to address pilot contamination -- one of the main limiting factors of
Massive MIMO systems. Our findings shed light into and motivate for an entirely
new research line towards a better understanding of waveform design with
emphasis on FBMC-based Massive MIMO networks.
|
[
{
"version": "v1",
"created": "Thu, 1 Jan 2015 20:22:48 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Sep 2016 16:05:01 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Farhang",
"Arman",
""
],
[
"Marchetti",
"Nicola",
""
],
[
"Figueiredo",
"Fabricio",
""
],
[
"Miranda",
"Joao Paulo",
""
]
] |
new_dataset
| 0.979995 |
1606.02975
|
Tobias Denkinger
|
Tobias Denkinger
|
An automata characterisation for multiple context-free languages
|
This is an extended version of a paper with the same title accepted
at the 20th International Conference on Developments in Language Theory (DLT
2016)
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce tree stack automata as a new class of automata with storage and
identify a restricted form of tree stack automata that recognises exactly the
multiple context-free languages.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2016 14:32:41 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2016 14:59:13 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Sep 2016 11:04:45 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Denkinger",
"Tobias",
""
]
] |
new_dataset
| 0.999322 |
1609.06423
|
Mayank Singh
|
Mayank Singh, Barnopriyo Barua, Priyank Palod, Manvi Garg, Sidhartha
Satapathy, Samuel Bushi, Kumar Ayush, Krishna Sai Rohith, Tulasi Gamidi,
Pawan Goyal and Animesh Mukherjee
|
OCR++: A Robust Framework For Information Extraction from Scholarly
Articles
| null | null | null | null |
cs.DL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes OCR++, an open-source framework designed for a variety of
information extraction tasks from scholarly articles including metadata (title,
author names, affiliation and e-mail), structure (section headings and body
text, table and figure headings, URLs and footnotes) and bibliography (citation
instances and references). We analyze a diverse set of scientific articles
written in English language to understand generic writing patterns and
formulate rules to develop this hybrid framework. Extensive evaluations show
that the proposed framework outperforms the existing state-of-the-art tools
by a huge margin in structural information extraction, along with improved
performance in metadata and bibliography extraction tasks, both in terms of
accuracy (around 50% improvement) and processing time (around 52% improvement).
A user experience study conducted with the help of 30 researchers reveals that
the researchers found this system to be very helpful. As an additional
objective, we discuss two novel use cases including automatically extracting
links to public datasets from the proceedings, which would further accelerate
the advancement in digital libraries. The result of the framework can be
exported as a whole into structured TEI-encoded documents. Our framework is
accessible online at http://cnergres.iitkgp.ac.in/OCR++/home/.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 06:12:52 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Sep 2016 10:54:57 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Sep 2016 13:05:27 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Singh",
"Mayank",
""
],
[
"Barua",
"Barnopriyo",
""
],
[
"Palod",
"Priyank",
""
],
[
"Garg",
"Manvi",
""
],
[
"Satapathy",
"Sidhartha",
""
],
[
"Bushi",
"Samuel",
""
],
[
"Ayush",
"Kumar",
""
],
[
"Rohith",
"Krishna Sai",
""
],
[
"Gamidi",
"Tulasi",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.965525 |
1609.07306
|
Helge Rhodin
|
Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov,
Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
|
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras
|
SIGGRAPH Asia 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Marker-based and marker-less optical skeletal motion-capture methods use an
outside-in arrangement of cameras placed around a scene, with viewpoints
converging on the center. They often create discomfort through the marker suits
they may require, and their recording volume is severely restricted and often
constrained to indoor scenes with controlled backgrounds. Alternative
suit-based systems use several inertial measurement units or an exoskeleton to
capture motion. This makes capturing independent of a confined volume, but
requires substantial, often constraining, and hard to set up body
instrumentation. We therefore propose a new method for real-time, marker-less
and egocentric motion capture which estimates the full-body skeleton pose from
a lightweight stereo pair of fisheye cameras that are attached to a helmet or
virtual reality headset. It combines the strength of a new generative pose
estimation framework for fisheye views with a ConvNet-based body-part detector
trained on a large new dataset. Our inside-in method captures full-body motion
in general indoor and outdoor scenes, and also crowded scenes with many people
in close vicinity. The captured user can freely move around, which enables
reconstruction of larger-scale activities and is particularly useful in virtual
reality to freely roam and interact, while seeing the fully motion-captured
virtual body.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2016 10:46:19 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Rhodin",
"Helge",
""
],
[
"Richardt",
"Christian",
""
],
[
"Casas",
"Dan",
""
],
[
"Insafutdinov",
"Eldar",
""
],
[
"Shafiei",
"Mohammad",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.989041 |
1609.07329
|
Andrew Eckford
|
Andrew W. Eckford, Taro Furbayashi, and Tadashi Nakano
|
RNA as a Nanoscale Data Transmission Medium: Error Analysis
|
Accepted for publication in the 2016 IEEE International Conference on
Nanotechnology (IEEE NANO), Sendai, Japan
| null | null | null |
cs.ET q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RNA can be used as a high-density medium for data storage and transmission;
however, an important RNA process -- replication -- is noisy. This paper
presents an error analysis for RNA as a data transmission medium, analyzing how
deletion errors increase in a collection of replicated DNA strands over time.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2016 12:02:30 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Eckford",
"Andrew W.",
""
],
[
"Furbayashi",
"Taro",
""
],
[
"Nakano",
"Tadashi",
""
]
] |
new_dataset
| 0.993708 |
1609.07370
|
Yi Ren
|
Yi Ren, Yaniv Romano, Michael Elad
|
Example-Based Image Synthesis via Randomized Patch-Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image and texture synthesis is a challenging task that has long been drawing
attention in the fields of image processing, graphics, and machine learning.
This problem consists of modelling the desired type of images, either through
training examples or via a parametric modeling, and then generating images that
belong to the same statistical origin.
This work addresses the image synthesis task, focusing on two specific
families of images -- handwritten digits and face images. This paper offers two
main contributions. First, we suggest a simple and intuitive algorithm capable
of generating such images in a unified way. The proposed approach taken is
pyramidal, consisting of upscaling and refining the estimated image several
times. For each upscaling stage, the algorithm randomly draws small patches
from a patch database, and merges these to form a coherent and novel image with
high visual quality. The second contribution is a general framework for the
evaluation of the generation performance, which combines three aspects: the
likelihood, the originality and the spread of the synthesized images. We assess
the proposed synthesis scheme and show that the results are similar in nature,
and yet different from the ones found in the training set, suggesting that a
true synthesis effect has been obtained.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2016 14:08:30 GMT"
}
] | 2016-09-26T00:00:00 |
[
[
"Ren",
"Yi",
""
],
[
"Romano",
"Yaniv",
""
],
[
"Elad",
"Michael",
""
]
] |
new_dataset
| 0.982541 |
1401.3733
|
Ed Bennett
|
Ed Bennett, Luigi Del Debbio, Kirk Jordan, Biagio Lucini, Agostino
Patella, Claudio Pica, Antonio Rago
|
BSMBench: a flexible and scalable supercomputer benchmark from
computational particle physics
|
6 pages, 5 figures; version as presented at High Performance
Computing and Simulation, HPCS 2016
|
2016 International Conference on High Performance Computing &
Simulation (HPCS), Innsbruck, Austria, 2016, pp. 834-839
|
10.1109/HPCSim.2016.7568421
|
CP3-Origins-2014-001 DNRF90 & DIAS-2014-1
|
cs.DC hep-lat
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lattice Quantum ChromoDynamics (QCD), and by extension its parent field,
Lattice Gauge Theory (LGT), make up a significant fraction of supercomputing
cycles worldwide. As such, it would be irresponsible not to evaluate machines'
suitability for such applications. To this end, a benchmark has been developed
to assess the performance of LGT applications on modern HPC platforms. Distinct
from previous QCD-based benchmarks, this allows probing the behaviour of a
variety of theories, which allows varying the ratio of demands between on-node
computations and inter-node communications. The results of testing this
benchmark on various recent HPC platforms are presented, and directions for
future development are discussed.
|
[
{
"version": "v1",
"created": "Wed, 15 Jan 2014 20:33:38 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Sep 2016 12:27:22 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Bennett",
"Ed",
""
],
[
"Del Debbio",
"Luigi",
""
],
[
"Jordan",
"Kirk",
""
],
[
"Lucini",
"Biagio",
""
],
[
"Patella",
"Agostino",
""
],
[
"Pica",
"Claudio",
""
],
[
"Rago",
"Antonio",
""
]
] |
new_dataset
| 0.997977 |
1609.03160
|
Mingming Cai
|
Mingming Cai, Kang Gao, Ding Nie, Bertrand Hochwald, J. Nicholas
Laneman, Huang Huang, Kunpeng Liu
|
Effect of Wideband Beam Squint on Codebook Design in Phased-Array
Wireless Systems
|
6 pages, to be published in Proc. IEEE GLOBECOM 2016, Washington,
D.C., USA
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analog beamforming with phased arrays is a promising technique for 5G
wireless communication at millimeter wave frequencies. Using a discrete
codebook consisting of multiple analog beams, each beam focuses on a certain
range of angles of arrival or departure and corresponds to a set of fixed phase
shifts across frequency due to practical hardware considerations. However, for
sufficiently large bandwidth, the gain provided by the phased array is actually
frequency dependent, which is an effect called beam squint, and this effect
occurs even if the radiation pattern of the antenna elements is frequency
independent. This paper examines the nature of beam squint for a uniform linear
array (ULA) and analyzes its impact on codebook design as a function of the
number of antennas and system bandwidth normalized by the carrier frequency.
The criterion for codebook design is to guarantee that each beam's minimum gain
for a range of angles and for all frequencies in the wideband system exceeds a
target threshold, for example 3 dB below the array's maximum gain. Analysis and
numerical examples suggest that a denser codebook is required to compensate for
beam squint. For example, 54% more beams are needed compared to a codebook
design that ignores beam squint for a ULA with 32 antennas operating at a
carrier frequency of 73 GHz and bandwidth of 2.5 GHz. Furthermore, beam squint
with this design criterion limits the bandwidth or the number of antennas of
the array if the other one is fixed.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2016 13:32:26 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Sep 2016 17:58:12 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Cai",
"Mingming",
""
],
[
"Gao",
"Kang",
""
],
[
"Nie",
"Ding",
""
],
[
"Hochwald",
"Bertrand",
""
],
[
"Laneman",
"J. Nicholas",
""
],
[
"Huang",
"Huang",
""
],
[
"Liu",
"Kunpeng",
""
]
] |
new_dataset
| 0.956825 |
1609.06516
|
Rongkuan Liu
|
Rongkuan Liu, Petar Popovski and Gang Wang
|
Decoupled Uplink and Downlink in a Wireless System with Buffer-Aided
Relaying
|
27 pages, 10 figures, submitted to IEEE Transactions on
Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper treats a multiuser relay scenario where multiple user equipments
(UEs) have a two-way communication with a common Base Station (BS) in the
presence of a buffer-equipped Relay Station (RS). Each of the uplink (UL) and
downlink (DL) transmission can take place over a direct or over a relayed path.
Traditionally, the UL and the DL path of a given two-way link are coupled, that
is, either both are direct links or both are relayed links. By removing the
restriction of coupling, one opens the design space for decoupled two-way
links. Following this, we devise two protocols: orthogonal decoupled UL/DL
buffer-aided (ODBA) relaying protocol and non-orthogonal decoupled UL/DL
buffer-aided (NODBA) relaying protocol. In NODBA, the receiver can use
successive interference cancellation (SIC) to extract the desired signal from a
collision between UL and DL signals. For both protocols, we characterize the
transmission decision policies in terms of maximization of the average two-way
sum rate of the system. The numerical results show that decoupling association
and non-orthogonal radio access lead to significant throughput gains for
two-way traffic.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 12:11:28 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Sep 2016 10:49:21 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Liu",
"Rongkuan",
""
],
[
"Popovski",
"Petar",
""
],
[
"Wang",
"Gang",
""
]
] |
new_dataset
| 0.998159 |
1609.06771
|
Wolfgang Wallner
|
Wolfgang Wallner
|
Simulation of the IEEE 1588 Precision Time Protocol in OMNeT++
|
Published in: A. Foerster, V. Vesely, A. Virdis, M. Kirsche (Eds.),
Proc. of the 3rd OMNeT++ Community Summit, Brno University of Technology -
Czech Republic - September 15-16, 2016
| null | null |
OMNET/2016/09
|
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time systems rely on a distributed global time base. As any physical
clock device suffers from noise, it is necessary to provide some kind of clock
synchronization to establish such a global time base. Different clock
synchronization methods have been invented for individual application domains.
The Precision Time Protocol (PTP), which is specified in IEEE 1588, is another
interesting option. It targets local networks, where it is acceptable to assume
small amounts of hardware support, and promises sub-microsecond precision. PTP
provides many different implementation and configuration options, and thus the
Design Space Exploration (DSE) is challenging. In this paper we discuss the
implementation of realistic clock noise and its synchronization via PTP in
OMNeT++. The components presented in this paper are intended to assist
engineers with the configuration of PTP networks.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 22:43:29 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Wallner",
"Wolfgang",
""
]
] |
new_dataset
| 0.980717 |
1609.06779
|
Jia Pan
|
Yajue Yang and Yuanqing Wu and Jia Pan
|
A Novel GPU-based Parallel Implementation Scheme and Performance
Analysis of Robot Forward Dynamics Algorithms
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel unifying scheme for parallel implementation of articulated
robot dynamics algorithms. It is based on a unified Lie group notation for
deriving the equations of motion of articulated robots, where various
well-known forward algorithms differ only by their joint inertia matrix
inversion strategies. This new scheme leads to a unified abstraction of
state-of-the-art forward dynamics algorithms into combinations of block
bi-diagonal and/or block tri-diagonal systems, which may be efficiently solved
by parallel all-prefix-sum operations (scan) and parallel odd-even elimination
(OEE) respectively. We implement the proposed scheme on a Nvidia CUDA GPU
platform for the comparative study of three algorithms, namely the hybrid
articulated-body inertia algorithm (ABIA), the parallel joint space inertia
inversion algorithm (JSIIA) and the constrained force algorithm (CFA), and their
performance is analyzed.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 23:50:48 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Yang",
"Yajue",
""
],
[
"Wu",
"Yuanqing",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.997961 |
1609.06862
|
Gewu Bu
|
Gewu Bu (LIP6, NPA), Maria Potop-Butucaru (LIP6, NPA)
|
Total Order Reliable Convergecast in WBAN
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is the first extensive work on total order reliable convergecast
in multi-hop Wireless Body Area Networks (WBAN). Convergecast is a many-to-one
cooperative scheme where each node of the network transmits data towards the
same sink. Our contribution is threefold. First, we stress existing WBAN
convergecast strategies with respect to their capacity to be reliable and to
ensure the total order delivery at sink. That is, packets sent in a specific
order should be received in the same order by the sink. When stressed with
transmission rates up to 500 packets per second the performances of these
strategies decrease dramatically (more than 90% of packets lost). Secondly, we
propose a new posture-centric model for WBAN. This model offers a good
characterization of the path availability which is further used to fine tune
the retransmission rate thresholds. Third, based on our model we propose a new
mechanism for reliability and a new converge-cast strategy that outperforms
WBAN dedicated strategies but also strategies adapted from DTN and WSN areas.
Our extensive performance evaluations use essential parameters for WBAN: packet
lost, total order reliability (messages sent in a specific order should be
delivered in that specific order) and various human body postures. In
particular, our strategy ensures zero packet order inversions for various
transmission rates and mobility postures. Interestingly, our strategy respects
this property without the need of additional energy-guzzler mechanisms.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2016 08:23:46 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Bu",
"Gewu",
"",
"LIP6, NPA"
],
[
"Potop-Butucaru",
"Maria",
"",
"LIP6, NPA"
]
] |
new_dataset
| 0.958975 |
1609.06953
|
Azlan Iqbal
|
Azlan Iqbal
|
The Digital Synaptic Neural Substrate: Size and Quality Matters
|
7 pages, 7 Figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the 'Digital Synaptic Neural Substrate' (DSNS) computational
creativity approach further with respect to the size and quality of images that
can be used to seed the process. In previous work we demonstrated how combining
photographs of people and sequences taken from chess games between weak players
can be used to generate chess problems or puzzles of higher aesthetic quality,
on average, compared to alternative approaches. In this work we show
experimentally that using larger images as opposed to smaller ones improves the
output quality even further. The same is also true for using clearer or less
corrupted images. The reasons why these factors influence the DSNS process are
presently not well understood and remain debatable, but the findings are
nevertheless immediately applicable for obtaining better results.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2016 11:26:46 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Iqbal",
"Azlan",
""
]
] |
new_dataset
| 0.984297 |
1609.06978
|
\'Attila Rodrigues
|
\'Attila L. Rodrigues, Jo\~ao Felipe C. L. Costa
|
Gridlan: a Multi-purpose Local Grid Computing Framework
|
6 pages, 3 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In scientific computing, more computational power generally implies faster
and possibly more detailed results. The goal of this study was to develop a
framework to submit computational jobs to powerful workstations that are
underused, running only non-intensive tasks. This is achieved by using a
virtual machine in each of
these workstations, where the computations are done. This group of virtual
machines is called the Gridlan. The Gridlan framework is intermediate between
the cluster and grid computing paradigms. The Gridlan is able to profit from
existing cluster software tools, such as resource managers like Torque, so a
user with previous experience in cluster operation can dispatch jobs
seamlessly. A benchmark test of the Gridlan implementation shows the system's
suitability for computational tasks, principally in embarrassingly parallel
computations.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2016 13:50:16 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Rodrigues",
"Áttila L.",
""
],
[
"Costa",
"João Felipe C. L.",
""
]
] |
new_dataset
| 0.982065 |
1609.07049
|
Matan Sela
|
Matan Sela, Nadav Toledo, Yaron Honen, Ron Kimmel
|
Customized Facial Constant Positive Air Pressure (CPAP) Masks
| null | null | null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sleep apnea is a syndrome that is characterized by sudden breathing halts
while sleeping. One of the common treatments involves wearing a mask that
delivers continuous air flow into the nostrils so as to maintain a steady air
pressure. These masks are designed for an average facial model and are often
difficult to adjust due to poor fit to the actual patient. The incompatibility
is characterized by gaps between the mask and the face, which deteriorate the
seal of the mask and lead to air leakage. We suggest a fully
automatic approach for designing a personalized nasal mask interface using a
facial depth scan. The interfaces generated by the proposed method accurately
fit the geometry of the scanned face, and are easy to manufacture. The proposed
method utilizes cheap commodity depth sensors and 3D printing technologies to
efficiently design and manufacture customized masks for patients suffering from
sleep apnea.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2016 16:11:57 GMT"
}
] | 2016-09-23T00:00:00 |
[
[
"Sela",
"Matan",
""
],
[
"Toledo",
"Nadav",
""
],
[
"Honen",
"Yaron",
""
],
[
"Kimmel",
"Ron",
""
]
] |
new_dataset
| 0.992636 |
1501.01941
|
Daniel Lemire
|
Adina Crainiceanu and Daniel Lemire
|
Bloofi: Multidimensional Bloom Filters
| null |
Information Systems Volume 54, December 2015, Pages 311-324
|
10.1016/j.is.2015.01.002
| null |
cs.DB cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Bloom filters are probabilistic data structures commonly used for approximate
membership problems in many areas of Computer Science (networking, distributed
systems, databases, etc.). With the increase in data size and distribution of
data, problems arise where a large number of Bloom filters are available, and
all them need to be searched for potential matches. As an example, in a
federated cloud environment, each cloud provider could encode the information
using Bloom filters and share the Bloom filters with a central coordinator. The
problem of interest is not only whether a given element is in any of the sets
represented by the Bloom filters, but which of the existing sets contain the
given element. This problem cannot be solved by just constructing a Bloom
filter on the union of all the sets. Instead, we effectively have a
multidimensional Bloom filter problem: given an element, we wish to receive a
list of candidate sets where the element might be.
To solve this problem, we consider 3 alternatives. Firstly, we can naively
check many Bloom filters. Secondly, we propose to organize the Bloom filters in
a hierarchical index structure akin to a B+ tree, which we call Bloofi. Finally,
we propose another data structure that packs the Bloom filters in such a way as
to exploit bit-level parallelism, which we call Flat-Bloofi.
Our theoretical and experimental results show that Bloofi and Flat-Bloofi
provide scalable and efficient alternatives for searching through a large
number of Bloom filters.
|
[
{
"version": "v1",
"created": "Thu, 8 Jan 2015 20:04:46 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Feb 2015 15:31:25 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Sep 2016 18:37:56 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Crainiceanu",
"Adina",
""
],
[
"Lemire",
"Daniel",
""
]
] |
new_dataset
| 0.996913 |
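As an aside to the Bloofi record above: a minimal Python sketch of the
bit-sliced search idea the abstract calls Flat-Bloofi, where per-bit-position
bitmasks over the set of filters are intersected to obtain candidate sets. This
is an illustration written for this note, not the authors' code; the toy
parameters (64-bit filters, 3 hash positions) and all names are assumptions.

    # Toy Flat-Bloofi-style search over many Bloom filters, using Python ints as bit-slices.
    import hashlib

    M = 64          # bits per Bloom filter (toy size, an assumption)
    K = 3           # hash positions per element (an assumption)

    def positions(item, m=M, k=K):
        """Derive k bit positions for an item from a SHA-256 digest."""
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % m for i in range(k)]

    class FlatBloofi:
        def __init__(self, num_filters):
            self.num_filters = num_filters
            # slices[b] is a bitmask over filters: bit f is set iff filter f has bit b set.
            self.slices = [0] * M

        def add(self, filter_id, item):
            for b in positions(item):
                self.slices[b] |= (1 << filter_id)

        def candidates(self, item):
            """Return ids of filters that may contain item (no false negatives)."""
            mask = (1 << self.num_filters) - 1
            for b in positions(item):
                mask &= self.slices[b]
            return [f for f in range(self.num_filters) if (mask >> f) & 1]

    # usage: three sets, ask which may contain "carol"
    fb = FlatBloofi(3)
    for f, items in enumerate([["alice", "bob"], ["carol"], ["dave", "carol"]]):
        for it in items:
            fb.add(f, it)
    print(fb.candidates("carol"))   # e.g. [1, 2], plus possible false positives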
1512.02902
|
Makarand Tapaswi
|
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba,
Raquel Urtasun, Sanja Fidler
|
MovieQA: Understanding Stories in Movies through Question-Answering
|
CVPR 2016, Spotlight presentation. Benchmark @
http://movieqa.cs.toronto.edu/ Code @
https://github.com/makarandtapaswi/MovieQA_CVPR2016/
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the MovieQA dataset which aims to evaluate automatic story
comprehension from both video and text. The dataset consists of 14,944
questions about 408 movies with high semantic diversity. The questions range
from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events
occurred. Each question comes with a set of five possible answers; a correct
one and four deceiving answers provided by human annotators. Our dataset is
unique in that it contains multiple sources of information -- video clips,
plots, subtitles, scripts, and DVS. We analyze our data through various
statistics and methods. We further extend existing QA techniques to show that
question-answering with such open-ended semantics is hard. We make this data
set public along with an evaluation benchmark to encourage inspiring work in
this challenging domain.
|
[
{
"version": "v1",
"created": "Wed, 9 Dec 2015 15:34:31 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Sep 2016 04:52:35 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Tapaswi",
"Makarand",
""
],
[
"Zhu",
"Yukun",
""
],
[
"Stiefelhagen",
"Rainer",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Urtasun",
"Raquel",
""
],
[
"Fidler",
"Sanja",
""
]
] |
new_dataset
| 0.999755 |
1609.06345
|
Christoph Ponikwar
|
Christoph Ponikwar
|
Development of a Reputation System for Cyber-Physical Systems Using the
Example of the inHMotion Research Project
|
Master's thesis, 180 pages, in German
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Through the continued progress and advances of networking technology, and also
through efforts like Industry 4.0, SmartGrid and SmartCities, systems that have
largely been self-contained are now forming networked cyber-physical systems
(CPS). Networking not only has undoubted benefits but also increases the often
overlooked attack surface of these systems. A basic problem in information
technology systems is building and sustaining trust. This work looks at the
usage of reputation systems in the field of CPS and whether they could be used
to assess the trustworthiness of communication partners. This is done in the
context of the research project inHMotion at the Munich University of Applied
Sciences, which is in the field of Intelligent Transportation Systems (ITS). A
risk-based analysis is performed for this project. Based on the identified
threats, mitigations are proposed. A possible solution is the usage of a
reputation system, which is designed here, and a prototype implementation for a
simulation is provided. The concept of a meta reputation system, which combines
different reputation systems and verification systems, can achieve promising
results in a simple simulation. In a concluding discussion, criticism and
possible improvements are given.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2016 20:34:05 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Ponikwar",
"Christoph",
""
]
] |
new_dataset
| 0.963775 |
1609.06404
|
Suwon Shon
|
Suwon Shon, Seongkyu Mun, John H.L. Hansen, Hanseok Ko
|
KU-ISPL Language Recognition System for NIST 2015 i-Vector Machine
Learning Challenge
| null | null | null | null |
cs.SD cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In language recognition, the task of rejecting/differentiating closely spaced
versus acoustically far spaced languages remains a major challenge. For
confusable closely spaced languages, the system needs test material of longer
duration to obtain sufficient information to distinguish between
languages. Alternatively, if languages are distinct and not
acoustically/linguistically similar to others, duration is not a sufficient
remedy. The solution proposed here is to explore duration distribution analysis
for near/far languages based on the Language Recognition i-Vector Machine
Learning Challenge 2015 (LRiMLC15) database. Using this knowledge, we propose a
likelihood ratio based fusion approach that leverages both score and duration
information. The experimental results show that the use of duration and score
fusion improves language recognition performance by 5% relative in LRiMLC15
cost.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 02:14:23 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Shon",
"Suwon",
""
],
[
"Mun",
"Seongkyu",
""
],
[
"Hansen",
"John H. L.",
""
],
[
"Ko",
"Hanseok",
""
]
] |
new_dataset
| 0.989728 |
1609.06438
|
Benjamin Rubinstein
|
Tansu Alpcan, Benjamin I. P. Rubinstein, Christopher Leckie
|
Large-Scale Strategic Games and Adversarial Machine Learning
|
7 pages, 1 figure; CDC'16 to appear
| null | null | null |
cs.GT cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision making in modern large-scale and complex systems such as
communication networks, smart electricity grids, and cyber-physical systems
motivate novel game-theoretic approaches. This paper investigates big strategic
(non-cooperative) games where a finite number of individual players each have a
large number of continuous decision variables and input data points. Such
high-dimensional decision spaces and big data sets lead to computational
challenges, relating to efforts in non-linear optimization scaling up to large
systems of variables. In addition to these computational challenges, real-world
players often have limited information about their preference parameters due to
the prohibitive cost of identifying them or due to operating in dynamic online
settings. The challenge of limited information is exacerbated in high
dimensions and big data sets. Motivated by both computational and information
limitations that constrain the direct solution of big strategic games, our
investigation centers around reductions using linear transformations such as
random projection methods and their effect on Nash equilibrium solutions.
Specific analytical results are presented for quadratic games and
approximations. In addition, an adversarial learning game is presented where
random projection and sampling schemes are investigated.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 07:10:13 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Alpcan",
"Tansu",
""
],
[
"Rubinstein",
"Benjamin I. P.",
""
],
[
"Leckie",
"Christopher",
""
]
] |
new_dataset
| 0.950395 |
1609.06479
|
Ricardo Ribeiro
|
Maria Jo\~ao Pereira and Lu\'isa Coheur and Pedro Fialho and Ricardo
Ribeiro
|
Chatbots' Greetings to Human-Computer Communication
|
22 pages, 1 figure
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Both dialogue systems and chatbots aim at enabling communication between humans
and computers. However, instead of focusing on sophisticated techniques to
perform natural language understanding, as the former usually do, chatbots seek
to mimic conversation. Since Eliza, the first chatbot ever, was developed in
1966, many interesting ideas have been explored by the chatbot community.
Beyond ideas, some chatbot developers also provide free resources, including
tools and large-scale corpora. It is our opinion that this know-how and these
materials should not be neglected, as they might be put to use in the
human-computer communication field (and some authors already do so). Thus, in
this paper we present a historical overview of the
chatbots' developments, we review what we consider to be the main contributions
of this community, and we point to some possible ways of coupling these with
current work in the human-computer communication research line.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 09:39:48 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Pereira",
"Maria João",
""
],
[
"Coheur",
"Luísa",
""
],
[
"Fialho",
"Pedro",
""
],
[
"Ribeiro",
"Ricardo",
""
]
] |
new_dataset
| 0.970432 |
1609.06657
|
Andrew Shin
|
Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada
|
The Color of the Cat is Gray: 1 Million Full-Sentences Visual Question
Answering (FSVQA)
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Visual Question Answering (VQA) task has showcased a new stage of interaction
between language and vision, two of the most pivotal components of artificial
intelligence. However, it has mostly focused on generating short and repetitive
answers, mostly single words, which fall short of the rich linguistic
capabilities of humans. We introduce the Full-Sentence Visual Question Answering (FSVQA)
dataset, consisting of nearly 1 million pairs of questions and full-sentence
answers for images, built by applying a number of rule-based natural language
processing techniques to the original VQA dataset and captions in the MS COCO
dataset. This poses many additional complexities to the conventional VQA task, and
we provide a baseline for approaching and evaluating the task, on top of which
we invite the research community to build further improvements.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 18:12:04 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Shin",
"Andrew",
""
],
[
"Ushiku",
"Yoshitaka",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
new_dataset
| 0.999398 |
1609.06669
|
Manuel Rodriguez-Vallejo
|
Manuel Rodriguez-Vallejo, Clara Llorens-Quintana, Diego Montagud,
Walter D. Furlan and Juan A. Monsoriu
|
Fast and reliable stereopsis measurement at multiple distances with iPad
|
14 pages, 3 figures, 4 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: To present a new fast and reliable application for iPad (ST) for
screening stereopsis at multiple distances.
Methods: A new iPad application (app) based on a random dot stereogram was
designed for screening stereopsis at multiple distances. Sixty-five subjects
with no ocular diseases and wearing their habitual correction were tested at
two different distances: 3 m and 0.4 m. Results were compared with other
commercial tests: TNO (at near) and Howard Dolman (at distance). Subjects were
recalled one week later in order to repeat the same procedures and assess the
reproducibility of the tests.
Results: Stereopsis at near was better with ST (40 arcsec) than with TNO (60
arcsec), but not significantly (p = 0.36). The agreement was good (k = 0.604)
and the reproducibility was better with ST (k = 0.801) than with TNO (k =
0.715); in fact, the median difference between days was significant only with TNO (p
= 0.02). On the other hand, poor agreement was obtained between HD and ST at
far distance (k=0.04), obtaining significant differences in medians (p = 0.001)
and poorer reliability with HD (k = 0.374) than with ST (k = 0.502).
Conclusions: Screening stereopsis at near with the new iPad app proved to be
fast and reliable. Results were in good agreement with conventional tests such
as TNO, but the app could not be compared at far vision with HD due to the
limited resolution of the iPad.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2016 18:34:25 GMT"
}
] | 2016-09-22T00:00:00 |
[
[
"Rodriguez-Vallejo",
"Manuel",
""
],
[
"Llorens-Quintana",
"Clara",
""
],
[
"Montagud",
"Diego",
""
],
[
"Furlan",
"Walter D.",
""
],
[
"Monsoriu",
"Juan A.",
""
]
] |
new_dataset
| 0.99928 |
1505.04870
|
Bryan Plummer
|
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo,
Julia Hockenmaier, and Svetlana Lazebnik
|
Flickr30k Entities: Collecting Region-to-Phrase Correspondences for
Richer Image-to-Sentence Models
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Flickr30k dataset has become a standard benchmark for sentence-based
image description. This paper presents Flickr30k Entities, which augments the
158k captions from Flickr30k with 244k coreference chains, linking mentions of
the same entities across different captions for the same image, and associating
them with 276k manually annotated bounding boxes. Such annotations are
essential for continued progress in automatic image description and grounded
language understanding. They enable us to define a new benchmark for
localization of textual entity mentions in an image. We present a strong
baseline for this task that combines an image-text embedding, detectors for
common objects, a color classifier, and a bias towards selecting larger
objects. While our baseline rivals in accuracy more complex state-of-the-art
models, we show that its gains cannot be easily parlayed into improvements on
such tasks as image-sentence retrieval, thus underlining the limitations of
current methods and the need for further research.
|
[
{
"version": "v1",
"created": "Tue, 19 May 2015 04:46:03 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Oct 2015 22:17:45 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Apr 2016 14:58:37 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Sep 2016 20:20:42 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Plummer",
"Bryan A.",
""
],
[
"Wang",
"Liwei",
""
],
[
"Cervantes",
"Chris M.",
""
],
[
"Caicedo",
"Juan C.",
""
],
[
"Hockenmaier",
"Julia",
""
],
[
"Lazebnik",
"Svetlana",
""
]
] |
new_dataset
| 0.999848 |
1506.05996
|
Rajesh Gandham
|
J.-F. Remacle, R. Gandham, T. Warburton
|
GPU accelerated spectral finite elements on all-hex meshes
|
23 pages, 7 figures
| null |
10.1016/j.jcp.2016.08.005
| null |
cs.CE cs.DC cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a spectral element finite element scheme that efficiently
solves elliptic problems on unstructured hexahedral meshes. The discrete
equations are solved using a matrix-free preconditioned conjugate gradient
algorithm. An additive Schwarz two-scale preconditioner is employed that
allows h-independent convergence. An extensible multi-threading programming
API is used as a common kernel language that allows runtime selection of
different computing devices (GPU and CPU) and different threading interfaces
(CUDA, OpenCL and OpenMP). Performance tests demonstrate that problems with
over 50 million degrees of freedom can be solved in a few seconds on an
off-the-shelf GPU.
|
[
{
"version": "v1",
"created": "Fri, 19 Jun 2015 13:27:05 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Remacle",
"J. -F.",
""
],
[
"Gandham",
"R.",
""
],
[
"Warburton",
"T.",
""
]
] |
new_dataset
| 0.996812 |
1512.02194
|
Derek Groen
|
Derek Groen, Agastya Bhati, James Suter, James Hetherington, Stefan
Zasada, Peter Coveney
|
FabSim: facilitating computational research through automation on
large-scale and distributed e-infrastructures
|
29 pages, 8 figures, 2 tables, submitted
| null |
10.1016/j.cpc.2016.05.020
| null |
cs.DC physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FabSim, a toolkit developed to simplify a range of computational
tasks for researchers in diverse disciplines. FabSim is flexible, adaptable,
and allows users to perform a wide range of tasks with ease. It also provides a
systematic way to automate the use of resources, including HPC and distributed
resources, and to make tasks easier to repeat by recording contextual
information. To demonstrate this, we present three use cases where FabSim has
enhanced our research productivity. These include simulating cerebrovascular
bloodflow, modelling clay-polymer nanocomposites across multiple scales, and
calculating ligand-protein binding affinities.
|
[
{
"version": "v1",
"created": "Mon, 7 Dec 2015 20:31:51 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Groen",
"Derek",
""
],
[
"Bhati",
"Agastya",
""
],
[
"Suter",
"James",
""
],
[
"Hetherington",
"James",
""
],
[
"Zasada",
"Stefan",
""
],
[
"Coveney",
"Peter",
""
]
] |
new_dataset
| 0.99946 |
1601.03022
|
Xavier Navarro-Sune
|
X Navarro-Sune, A.L. Hudson, F. De Vico Fallani, J. Martinerie, A.
Witon, P. Pouget, M. Raux, T. Similowski and M. Chavez
|
Riemannian geometry applied to detection of respiratory states from EEG
signals: the basis for a brain-ventilator interface
|
14 pages, 7 figures
|
IEEE Transactions on Biomedical Engineering, 2016
|
10.1109/TBME.2016.2592820
| null |
cs.HC q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During mechanical ventilation, patient-ventilator disharmony is frequently
observed and may result in increased breathing effort, compromising the
patient's comfort and recovery. This circumstance requires clinical
intervention and becomes challenging when verbal communication is difficult. In
this work, we propose a brain computer interface (BCI) to automatically and
non-invasively detect patient-ventilator disharmony from
electroencephalographic (EEG) signals: a brain-ventilator interface (BVI). Our
framework exploits the cortical activation provoked by the inspiratory
compensation when the subject and the ventilator are desynchronized. Use of a
one-class approach and Riemannian geometry of EEG covariance matrices allows
effective classification of respiratory states. The BVI is validated on nine
healthy subjects that performed different respiratory tasks that mimic a
patient-ventilator disharmony. Classification performances, in terms of areas
under ROC curves, are significantly improved using EEG signals compared to
detection based on air flow. Reduction in the number of electrodes that can
achieve discrimination can often be desirable (e.g. for portable BCI systems).
By using an iterative channel selection technique, the Common Highest Order
Ranking (CHOrRa), we find that a reduced set of electrodes (n=6) can slightly
improve for an intra-subject configuration, and it still provides fairly good
performances for a general inter-subject setting. Results support the
discriminant capacity of our approach to identify anomalous respiratory states,
by learning from a training set containing only normal respiratory epochs. The
proposed framework opens the door to brain-ventilator interfaces for monitoring
patient's breathing comfort and adapting ventilator parameters to patient
respiratory needs.
|
[
{
"version": "v1",
"created": "Tue, 12 Jan 2016 20:32:30 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2016 14:09:22 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Navarro-Sune",
"X",
""
],
[
"Hudson",
"A. L.",
""
],
[
"Fallani",
"F. De Vico",
""
],
[
"Martinerie",
"J.",
""
],
[
"Witon",
"A.",
""
],
[
"Pouget",
"P.",
""
],
[
"Raux",
"M.",
""
],
[
"Similowski",
"T.",
""
],
[
"Chavez",
"M.",
""
]
] |
new_dataset
| 0.994875 |
1605.02097
|
Wojciech Ja\'skowski
|
Micha{\l} Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek and
Wojciech Ja\'skowski
|
ViZDoom: A Doom-based AI Research Platform for Visual Reinforcement
Learning
| null |
Proceedings of IEEE Conference of Computational Intelligence in
Games 2016
| null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent advances in deep neural networks have led to effective
vision-based reinforcement learning methods that have been employed to obtain
human-level controllers in Atari 2600 games from pixel data. Atari 2600 games,
however, do not resemble real-world tasks since they involve non-realistic 2D
environments and the third-person perspective. Here, we propose a novel
test-bed platform for reinforcement learning research from raw visual
information which employs the first-person perspective in a semi-realistic 3D
world. The software, called ViZDoom, is based on the classical first-person
shooter video game, Doom. It allows developing bots that play the game using
the screen buffer. ViZDoom is lightweight, fast, and highly customizable via a
convenient mechanism of user scenarios. In the experimental part, we test the
environment by trying to learn bots for two scenarios: a basic move-and-shoot
task and a more complex maze-navigation problem. Using convolutional deep
neural networks with Q-learning and experience replay, for both scenarios, we
were able to train competent bots, which exhibit human-like behaviors. The
results confirm the utility of ViZDoom as an AI research platform and imply
that visual reinforcement learning in 3D realistic first-person perspective
environments is feasible.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2016 20:46:34 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2016 19:12:49 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Kempka",
"Michał",
""
],
[
"Wydmuch",
"Marek",
""
],
[
"Runc",
"Grzegorz",
""
],
[
"Toczek",
"Jakub",
""
],
[
"Jaśkowski",
"Wojciech",
""
]
] |
new_dataset
| 0.999478 |
1606.01884
|
Enrico Prati
|
Enrico Prati
|
Atomic scale nanoelectronics for quantum neuromorphic devices: comparing
different materials
|
15 pag, 2 fig, in press on International Journal of Nanotechnology
2016
| null |
10.1504/IJNT.2016.078543
| null |
cs.ET cond-mat.mes-hall q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
I review the advancements of atomic scale nanoelectronics towards quantum
neuromorphics. First, I summarize the key properties of elementary combinations
of a few neurons, namely long- and short-term plasticity, spike-timing
dependent plasticity (associative plasticity), quantumness and stochastic
effects, and their potential computational employment. Next, I review several
atomic scale device technologies developed to control electron transport at the
atomic level, including single atom implantation for atomic arrays and CMOS
quantum dots, single atom memories, Ag$_2$S and Cu$_2$S atomic switches,
hafnium based RRAMs, organic material based transistors, Ge$_2$Sb$_2$Te$_5$
synapses. Each material/method proved successful in achieving some of the
properties observed in real neurons. I compare the different methods towards
the creation of a new generation of naturally inspired and biophysically
meaningful artificial neurons, in order to replace the rigid CMOS based
neuromorphic hardware. The most challenging aspect to address appears to be
obtaining both the stochastic/quantum behavior and the associative plasticity,
which are currently observed only below and above the 20 nm length scale
respectively, by employing the same material.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2016 12:27:16 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Prati",
"Enrico",
""
]
] |
new_dataset
| 0.999742 |
1608.00869
|
Daniela Gerz
|
Daniela Gerz, Ivan Vuli\'c, Felix Hill, Roi Reichart, Anna Korhonen
|
SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity
|
EMNLP 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Verbs play a critical role in the meaning of sentences, but these ubiquitous
words have received little attention in recent distributional semantics
research. We introduce SimVerb-3500, an evaluation resource that provides human
ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed
verb types from the USF free-association database, providing at least three
examples for every VerbNet class. This broad coverage facilitates detailed
analyses of how syntactic and semantic phenomena together influence human
understanding of verb meaning. Further, with significantly larger development
and test sets than existing benchmarks, SimVerb-3500 enables more robust
evaluation of representation learning architectures and promotes the
development of methods tailored to verbs. We hope that SimVerb-3500 will enable
a richer understanding of the diversity and complexity of verb semantics and
guide the development of systems that can effectively represent and interpret
this meaning.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2016 15:35:12 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2016 15:39:53 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Aug 2016 06:20:24 GMT"
},
{
"version": "v4",
"created": "Tue, 20 Sep 2016 14:35:14 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Gerz",
"Daniela",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Hill",
"Felix",
""
],
[
"Reichart",
"Roi",
""
],
[
"Korhonen",
"Anna",
""
]
] |
new_dataset
| 0.972934 |
1609.06027
|
Yuyi Mao
|
Yuyi Mao, Jun Zhang, S.H. Song, Khaled B. Letaief
|
Power-Delay Tradeoff in Multi-User Mobile-Edge Computing Systems
|
7 pages, 4 figures, accepted to IEEE Global Communications Conference
(GLOBECOM), Washington DC, December 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile-edge computing (MEC) has recently emerged as a promising paradigm to
liberate mobile devices from increasingly intensive computation workloads, as
well as to improve the quality of computation experience. In this paper, we
investigate the tradeoff between two critical but conflicting objectives in
multi-user MEC systems, namely, the power consumption of mobile devices and the
execution delay of computation tasks. A power consumption minimization problem
with task buffer stability constraints is formulated to investigate the
tradeoff, and an online algorithm that decides the local execution and
computation offloading policy is developed based on Lyapunov optimization.
Specifically, at each time slot, the optimal frequencies of the local CPUs are
obtained in closed forms, while the optimal transmit power and bandwidth
allocation for computation offloading are determined with the Gauss-Seidel
method. Performance analysis is conducted for the proposed algorithm, which
indicates that the power consumption and execution delay obey an [O(1/V),
O(V)] tradeoff with V as a control parameter. Simulation results are provided
to validate the theoretical analysis and demonstrate the impacts of various
parameters on the system performance.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2016 06:00:16 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Mao",
"Yuyi",
""
],
[
"Zhang",
"Jun",
""
],
[
"Song",
"S. H.",
""
],
[
"Letaief",
"Khaled B.",
""
]
] |
new_dataset
| 0.967143 |
1609.06281
|
Sou-Chi Chang
|
Sou-Chi Chang, Sasikanth Manipatruni, Dmitri E. Nikonov, Ian A. Young,
and Azad Naeemi
|
Low-power Spin Valve Logic using Spin-transfer Torque with Automotion of
Domain Walls
|
9 pages
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel scheme for non-volatile digital computation is proposed using
spin-transfer torque (STT) and automotion of magnetic domain walls (DWs). The
basic computing element is composed of a lateral spin valve (SV) with two
ferromagnetic (FM) wires serving as interconnects, where DW automotion is used
to propagate the information from one device to another. The non-reciprocity of
both device and interconnect is realized by sizing different contact areas at
the input and the output as well as enhancing the local damping mechanism. The
proposed logic is suitable for scaling due to a high energy barrier provided by
a long FM wire. Compared to the scheme based on non-local spin valves (NLSVs)
in the previous proposal, the devices can be operated at lower current density
due to utilizing all injected spins for local magnetization reversals, thus
improving both energy efficiency and resistance to electromigration. This device
concept is justified by simulating a buffer, an inverter, and a 3-input
majority gate with comprehensive numerical simulations, including spin
transport through the FM/non-magnetic (NM) interfaces as well as the NM channel
and stochastic magnetization dynamics inside FM wires. In addition to digital
computing, the proposed framework can also be used as a transducer between DWs
and spin currents for higher wiring flexibility in the interconnect network.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2016 18:27:03 GMT"
}
] | 2016-09-21T00:00:00 |
[
[
"Chang",
"Sou-Chi",
""
],
[
"Manipatruni",
"Sasikanth",
""
],
[
"Nikonov",
"Dmitri E.",
""
],
[
"Young",
"Ian A.",
""
],
[
"Naeemi",
"Azad",
""
]
] |
new_dataset
| 0.995685 |
1509.08443
|
Ayush Dubey
|
Ayush Dubey, Greg D. Hill, Robert Escriva, Emin G\"un Sirer
|
Weaver: A High-Performance, Transactional Graph Database Based on
Refinable Timestamps
| null | null |
10.14778/2983200.2983202
| null |
cs.DC cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Graph databases have become an increasingly common infrastructure component.
Yet existing systems either operate on offline snapshots, provide weak
consistency guarantees, or use expensive concurrency control techniques that
limit performance. In this paper, we introduce a new distributed graph
database, called Weaver, which enables efficient, transactional graph analyses
as well as strictly serializable ACID transactions on dynamic graphs. The key
insight that allows Weaver to combine strict serializability with horizontal
scalability and high performance is a novel request ordering mechanism called
refinable timestamps. This technique couples coarse-grained vector timestamps
with a fine-grained timeline oracle to pay the overhead of strong consistency
only when needed. Experiments show that Weaver enables a Bitcoin blockchain
explorer that is 8x faster than Blockchain.info, and achieves 12x higher
throughput than the Titan graph database on social network workloads and 4x
lower latency than GraphLab on offline graph traversal workloads.
|
[
{
"version": "v1",
"created": "Mon, 28 Sep 2015 19:30:30 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jun 2016 03:41:20 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Dubey",
"Ayush",
""
],
[
"Hill",
"Greg D.",
""
],
[
"Escriva",
"Robert",
""
],
[
"Sirer",
"Emin Gün",
""
]
] |
new_dataset
| 0.982575 |
1603.04134
|
Kanzhi Wu
|
Kanzhi Wu and Xiaoyang Li and Ravindra Ranasinghe and Gamini
Dissanayake and Yong Liu
|
RISAS: A Novel Rotation, Illumination, Scale Invariant Appearance and
Shape Feature
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel appearance and shape feature, RISAS, which is
robust to viewpoint, illumination, scale and rotation variations. RISAS
consists of a keypoint detector and a feature descriptor both of which utilise
texture and geometric information present in the appearance and shape channels.
A novel response function based on the surface normals is used in combination
with the Harris corner detector for selecting keypoints in the scene. A
strategy that uses the depth information for scale estimation and background
elimination is proposed to select the neighbourhood around the keypoints in
order to build precise invariant descriptors. The proposed descriptor relies on the
ordering of both grayscale intensity and shape information in the
neighbourhood. Comprehensive experiments which confirm the effectiveness of the
proposed RGB-D feature when compared with CSHOT and LOIND are presented.
Furthermore, we highlight the utility of incorporating texture and shape
information in the design of both the detector and the descriptor by
demonstrating the enhanced performance of CSHOT and LOIND when combined with
RISAS detector.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2016 04:39:49 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2016 09:09:37 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Wu",
"Kanzhi",
""
],
[
"Li",
"Xiaoyang",
""
],
[
"Ranasinghe",
"Ravindra",
""
],
[
"Dissanayake",
"Gamini",
""
],
[
"Liu",
"Yong",
""
]
] |
new_dataset
| 0.999441 |
1606.03893
|
Carsten Bockelmann
|
Carsten Bockelmann, Nuno Pratas, Hosein Nikopour, Kelvin Au, Tommy
Svensson, Cedomir Stefanovic, Petar Popovski, Armin Dekorsy
|
Massive Machine-type Communications in 5G: Physical and MAC-layer
solutions
|
Accepted for publication in IEEE Communications Magazine
| null |
10.1109/MCOM.2016.7565189
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine-type communications (MTC) are expected to play an essential role
within future 5G systems. In the FP7 project METIS, MTC has been further
classified into "massive Machine-Type Communication" (mMTC) and "ultra-reliable
Machine-Type Communication" (uMTC). While mMTC is about wireless connectivity
to tens of billions of machine-type terminals, uMTC is about availability, low
latency, and high reliability. The main challenge in mMTC is scalable and
efficient connectivity for a massive number of devices sending very short
packets, which is not done adequately in cellular systems designed for
human-type communications. Furthermore, mMTC solutions need to enable wide area
coverage and deep indoor penetration while having low cost and being energy
efficient. In this article, we introduce the physical (PHY) and medium access
control (MAC) layer solutions developed within METIS to address this challenge.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2016 10:52:15 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Bockelmann",
"Carsten",
""
],
[
"Pratas",
"Nuno",
""
],
[
"Nikopour",
"Hosein",
""
],
[
"Au",
"Kelvin",
""
],
[
"Svensson",
"Tommy",
""
],
[
"Stefanovic",
"Cedomir",
""
],
[
"Popovski",
"Petar",
""
],
[
"Dekorsy",
"Armin",
""
]
] |
new_dataset
| 0.993734 |
1608.04712
|
Ali-akbar Agha-mohammadi
|
Ali-akbar Agha-mohammadi
|
SMAP: Simultaneous Mapping and Planning on Occupancy Grids
|
Technical report (to be completed)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Occupancy grids are the most common framework when it comes to creating a map
of the environment using a robot. This paper studies occupancy grids from the
motion planning perspective and proposes a mapping method that provides richer
data (map) for the purpose of planning and collision avoidance. Typically, in
occupancy grid mapping, each cell contains a single number representing the
probability of cell being occupied. This leads to conflicts in the map, and
more importantly inconsistency between the map error and reported confidence
values. Such inconsistencies pose challenges for the planner that relies on the
generated map for planning motions. In this work, we store richer data at
each voxel, including an accurate estimate of the variance of occupancy. We
show that in addition to achieving maps that are often more accurate than
traditional methods, the proposed filtering scheme demonstrates a much higher level of
consistency between its error and its reported confidence. This allows the
planner to reason about acquisition of the future sensory information. Such
planning can lead to active perception maneuvers that, while guiding the robot
toward the goal, aim at increasing the confidence in parts of the map that are
relevant to accomplishing the task.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2016 19:15:28 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2016 23:06:38 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Sep 2016 02:49:26 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Agha-mohammadi",
"Ali-akbar",
""
]
] |
new_dataset
| 0.982911 |
1609.03461
|
Hossein Ziaei Nafchi
|
Hossein Ziaei Nafchi, Atena Shahkolaei, Rachid Hedjam, Mohamed Cheriet
|
MUG: A Parameterless No-Reference JPEG Quality Evaluator Robust to Block
Size and Misalignment
|
5 pages, 4 figures, 3 tables
| null |
10.1109/LSP.2016.2608865
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, a very simple no-reference image quality assessment (NR-IQA)
model for JPEG compressed images is proposed. The proposed metric, called
median of unique gradients (MUG), is based on simple facts about the unique
gradient magnitudes of JPEG compressed images. MUG is a parameterless metric
and does not need training. Unlike other NR-IQAs, MUG is independent of block
size and cropping. A more stable index called MUG+ is also introduced. The experimental
results on six benchmark datasets of natural images and a benchmark dataset of
synthetic images show that MUG is comparable to the state-of-the-art indices in
literature. In addition, its performance remains unchanged for the case of the
cropped images in which block boundaries are not known. The MATLAB source code
of the proposed metrics is available at
https://dl.dropboxusercontent.com/u/74505502/MUG.m and
https://dl.dropboxusercontent.com/u/74505502/MUGplus.m.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2016 16:11:26 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2016 16:33:48 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Nafchi",
"Hossein Ziaei",
""
],
[
"Shahkolaei",
"Atena",
""
],
[
"Hedjam",
"Rachid",
""
],
[
"Cheriet",
"Mohamed",
""
]
] |
new_dataset
| 0.972405 |
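A minimal sketch of the median-of-unique-gradients idea described in the MUG
abstract above, assuming simple finite-difference gradients on a grayscale
image; the actual gradient operator, any normalization, and the MUG+ variant
are not specified here, and the function name is hypothetical (the authors'
MATLAB code is at the URLs given in the abstract).

    # Toy MUG-style score: median of the unique gradient magnitudes of an image.
    import numpy as np

    def mug_score(image):
        """image: 2D numpy array of grayscale values (float)."""
        gx = np.diff(image, axis=1)[:-1, :]   # horizontal differences, trimmed to align shapes
        gy = np.diff(image, axis=0)[:, :-1]   # vertical differences, trimmed to align shapes
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        unique_mags = np.unique(grad_mag)     # keep each distinct magnitude once
        return float(np.median(unique_mags))

    # usage with a random test image (values in [0, 255])
    img = np.random.rand(64, 64) * 255.0
    print(mug_score(img))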
1609.05215
|
Vladimir Vesely
|
Vladim\'ir Vesel\'y, V\'it Rek, Ond\v{r}ej Ry\v{s}av\'y
|
Babel Routing Protocol for OMNeT++ - More than just a new simulation
module for INET framework
|
Published in: A. Foerster, V. Vesely, A. Virdis, M. Kirsche (Eds.),
Proc. of the 3rd OMNeT++ Community Summit, Brno University of Technology -
Czech Republic - September 15-16, 2016
| null | null |
OMNET/2016/13
|
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The routing and switching capabilities of computer networks may seem like a
closed environment containing a limited set of deployed protocols that nobody
dares to change. The majority of wired network designs are stuck with OSPF
(guaranteeing dynamic routing exchange on the network layer) and RSTP (securing
a loop-free data-link layer topology). Recently, more use-case-specific routing
protocols, such as Babel, have appeared. These technologies claim to have
better characteristics than current industry standards. Babel is a fresh
contribution to the family of distance-vector routing protocols, which is
gaining momentum for small double-stack (IPv6 and IPv4) networks. This
paper briefly describes Babel behavior and provides details on its
implementation in OMNeT++ discrete event simulator.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 20:00:18 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Veselý",
"Vladimír",
""
],
[
"Rek",
"Vít",
""
],
[
"Ryšavý",
"Ondřej",
""
]
] |
new_dataset
| 0.999794 |
1609.05259
|
Nikola Zlatanov
|
Nikola Zlatanov, Derrick Wing Kwan Ng, and Robert Schober
|
Capacity of the Two-Hop Relay Channel with Wireless Energy Transfer from
Relay to Source and Energy Transmission Cost
|
Submitted to an IEEE Journal
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate a communication system comprised of an energy
harvesting (EH) source which harvests radio frequency (RF) energy from an
out-of-band full-duplex relay node and exploits this energy to transmit data to
a destination node via the relay node. We assume two scenarios for the battery
of the EH source. In the first scenario, we assume that the EH source is not
equipped with a battery and thereby cannot store energy. As a result, the RF
energy harvested during one symbol interval can only be used in the following
symbol interval. In the second scenario, we assume that the EH source is
equipped with a battery having unlimited storage capacity in which it can store
the harvested RF energy. As a result, the RF energy harvested during one symbol
interval can be used in any of the following symbol intervals. For both system
models, we derive the channel capacity subject to an average power constraint
at the relay and an additional energy transmission cost at the EH source. We
compare the derived capacities to the achievable rates of several benchmark
schemes. Our results show that using the optimal input distributions at both
the EH source and the relay is essential for high performance. Moreover, we
demonstrate that neglecting the energy transmission cost at the source can
result in a severe overestimation of the achievable performance.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2016 00:10:00 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Zlatanov",
"Nikola",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Schober",
"Robert",
""
]
] |
new_dataset
| 0.997659 |
1609.05315
|
Jeffrey Georgeson
|
Jeffrey Georgeson
|
NPCs Vote! Changing Voter Reactions Over Time Using the Extreme AI
Personality Engine
|
8 pages, 3 tables, 9 figures
| null | null | null |
cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can non-player characters have human-realistic personalities, changing over
time depending on input from those around them? And can they have different
reactions and thoughts about different people? Using Extreme AI, a
psychology-based personality engine using the Five Factor model of personality,
I answer these questions by creating personalities for 100 voters and allowing
them to react to two politicians to see if the NPC voters' choice of candidate
develops in a realistic-seeming way, based on initial and changing personality
facets and on their differing feelings toward the politicians (in this case,
across liking, trusting, and feeling affiliated with the candidates). After 16
test runs, the voters did indeed change their attitudes and feelings toward the
candidates in different and yet generally realistic ways, and even changed
their attitudes about other issues based on what a candidate extolled.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2016 11:21:17 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Georgeson",
"Jeffrey",
""
]
] |
new_dataset
| 0.984544 |
1609.05337
|
Matthew Hammer
|
Dakota Fisher, Matthew A. Hammer, William Byrd, Matthew Might
|
miniAdapton: A Minimal Implementation of Incremental Computation in
Scheme
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a complete Scheme implementation of miniAdapton, which implements
the core functionality of the Adapton system for incremental computation (also
known as self-adjusting computation). Like Adapton, miniAdapton allows
programmers to safely combine mutation and memoization. miniAdapton is built on
top of an even simpler system, microAdapton. Both miniAdapton and microAdapton
are designed to be easy to understand, extend, and port to host languages other
than Scheme. We also present adapton variables, a new interface in Adapton for
variables intended to represent expressions.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2016 13:53:10 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Fisher",
"Dakota",
""
],
[
"Hammer",
"Matthew A.",
""
],
[
"Byrd",
"William",
""
],
[
"Might",
"Matthew",
""
]
] |
new_dataset
| 0.950862 |
1609.05420
|
Senthil Purushwalkam
|
Senthil Purushwalkam, Abhinav Gupta
|
Pose from Action: Unsupervised Learning of Pose Features based on Motion
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human actions are comprised of a sequence of poses. This makes videos of
humans a rich and dense source of human poses. We propose an unsupervised
method to learn pose features from videos that exploits a signal which is
complementary to appearance and can be used as supervision: motion. The key
idea is that humans go through poses in a predictable manner while performing
actions. Hence, given two poses, it should be possible to model the motion that
caused the change between them. We represent each of the poses as a feature in
a CNN (Appearance ConvNet) and generate a motion encoding from optical flow
maps using a separate CNN (Motion ConvNet). The data for this task is
automatically generated allowing us to train without human supervision. We
demonstrate the strength of the learned representation by finetuning the
trained model for Pose Estimation on the FLIC dataset, for static image action
recognition on PASCAL and for action recognition in videos on UCF101 and
HMDB51.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2016 04:18:42 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Purushwalkam",
"Senthil",
""
],
[
"Gupta",
"Abhinav",
""
]
] |
new_dataset
| 0.990362 |
1609.05512
|
Ivan Dokmanic
|
Miranda Krekovi\'c, Ivan Dokmani\'c, Martin Vetterli
|
Omnidirectional Bats, Point-to-Plane Distances, and the Price of
Uniqueness
|
5 pages, 8 figures, submitted to ICASSP 2017
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study simultaneous localization and mapping with a device that uses
reflections to measure its distance from walls. Such a device can be realized
acoustically with a synchronized collocated source and receiver; it behaves
like a bat with no capacity for directional hearing or vocalizing. In this
paper we generalize our previous work in 2D, and show that the 3D case is not
just a simple extension, but rather a fundamentally different inverse problem.
While generically the 2D problem has a unique solution, in 3D uniqueness is
always absent in rooms with fewer than nine walls. In addition to the complete
characterization of ambiguities which arise due to this non-uniqueness, we
propose a robust solution for inexact measurements similar to analogous results
for Euclidean Distance Matrices. Our theoretical results have important
consequences for the design of collocated range-only SLAM systems, and we
support them with an array of computer experiments.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2016 16:34:07 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Kreković",
"Miranda",
""
],
[
"Dokmanić",
"Ivan",
""
],
[
"Vetterli",
"Martin",
""
]
] |
new_dataset
| 0.994667 |
1609.05561
|
Ricardo Fabbri
|
Anil Usumezbas and Ricardo Fabbri and Benjamin B. Kimia
|
From Multiview Image Curves to 3D Drawings
|
Expanded ECCV 2016 version with tweaked figures and including an
overview of the supplementary material available at
multiview-3d-drawing.sourceforge.net
|
Lecture Notes in Computer Science, 9908, pp 70-87, september 2016
|
10.1007/978-3-319-46493-0_5
| null |
cs.CV cs.CG cs.GR cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Reconstructing 3D scenes from multiple views has made impressive strides in
recent years, chiefly by correlating isolated feature points, intensity
patterns, or curvilinear structures. In the general setting - without
controlled acquisition, abundant texture, curves and surfaces following
specific models or limiting scene complexity - most methods produce unorganized
point clouds, meshes, or voxel representations, with some exceptions producing
unorganized clouds of 3D curve fragments. Ideally, many applications require
structured representations of curves, surfaces and their spatial relationships.
This paper presents a step in this direction by formulating an approach that
combines 2D image curves into a collection of 3D curves, with topological
connectivity between them represented as a 3D graph. This results in a 3D
drawing, which is complementary to surface representations in the same sense as
a 3D scaffold complements a tent taut over it. We evaluate our results against
ground truth on synthetic and real datasets.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2016 22:20:35 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Usumezbas",
"Anil",
""
],
[
"Fabbri",
"Ricardo",
""
],
[
"Kimia",
"Benjamin B.",
""
]
] |
new_dataset
| 0.999097 |
1609.05583
|
Seyed Ali Amirshahi Seyed Ali Amirshahi
|
Seyed Ali Amirshahi, Gregor Uwe Hayn-Leichsenring, Joachim Denzler,
Christoph Redies
|
Color: A Crucial Factor for Aesthetic Quality Assessment in a Subjective
Dataset of Paintings
|
This paper was presented at the AIC 2013 Congress
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational aesthetics is an emerging field of research which has attracted
different research groups in the last few years. In this field, one of the main
approaches to evaluate the aesthetic quality of paintings and photographs is a
feature-based approach. Among the different features proposed to reach this
goal, color plays an important role. In this paper, we introduce a novel dataset
that consists of paintings of Western provenance from 36 well-known painters
from the 15th to the 20th century. As a first step and to assess this dataset,
using a classifier, we investigate the correlation between the subjective
scores and two widely used features that are related to color perception and used in
different aesthetic quality assessment approaches. Results show a
classification rate of up to 73% between the color features and the subjective
scores.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2016 02:17:34 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Amirshahi",
"Seyed Ali",
""
],
[
"Hayn-Leichsenring",
"Gregor Uwe",
""
],
[
"Denzler",
"Joachim",
""
],
[
"Redies",
"Christoph",
""
]
] |
new_dataset
| 0.998314 |
1609.05626
|
Naveen Sivadasan
|
Naveen Sivadasan, Rajgopal Srinivasan, Kshama Goyal
|
Kmerlight: fast and accurate k-mer abundance estimation
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
k-mers (nucleotide strings of length k) form the basis of several algorithms
in computational genomics. In particular, k-mer abundance information in
sequence data is useful in read error correction, parameter estimation for
genome assembly, digital normalization etc. We give a streaming algorithm
Kmerlight for computing the k-mer abundance histogram from sequence data. Our
algorithm is fast and uses a very small memory footprint. We provide analytical
bounds on the error guarantees of our algorithm. Kmerlight can efficiently
process genome scale and metagenome scale data using standard desktop machines.
A few applications of the abundance histograms computed by Kmerlight are also
shown. We use the abundance histogram for de novo estimation of repetitiveness
in the genome based on a simple probabilistic model that we propose. We also show
estimation of the k-mer error rate in the sampling using the abundance histogram. Our
algorithm can also be used for abundance estimation in a general streaming
setting. The Kmerlight tool is written in C++ and is available for download and
use from https://github.com/nsivad/kmerlight.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2016 08:01:16 GMT"
}
] | 2016-09-20T00:00:00 |
[
[
"Sivadasan",
"Naveen",
""
],
[
"Srinivasan",
"Rajgopal",
""
],
[
"Goyal",
"Kshama",
""
]
] |
new_dataset
| 0.976758 |
1504.00337
|
Alejandro Sanchez Guinea
|
Alejandro Sanchez Guinea
|
Understanding SAT is in P
|
10 pages, the paper is completely changed from previous versions
while the main idea is the same, correctness and time complexity proofs are
included
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the idea of an understanding with respect to a set of clauses as
a satisfying truth assignment explained by the contexts of the literals in the
clauses. Following this idea, we present a mechanical process that obtains, if
it exists, an understanding with respect to a 3-SAT problem instance based on
the contexts of each literal in the instance, otherwise it determines that none
exists. We demonstrate that our process is correct and efficient in solving
3-SAT.
|
[
{
"version": "v1",
"created": "Wed, 1 Apr 2015 18:54:44 GMT"
},
{
"version": "v2",
"created": "Sun, 10 May 2015 21:14:53 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Nov 2015 00:39:37 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Sep 2016 13:23:24 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Guinea",
"Alejandro Sanchez",
""
]
] |
new_dataset
| 0.998947 |
1609.04879
|
Jeffrey Georgeson
|
Jeffrey Georgeson and Christopher Child
|
NPCs as People, Too: The Extreme AI Personality Engine
|
9 pages, 3 tables, 3 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PK Dick once asked "Do Androids Dream of Electric Sheep?" In video games, a
similar question could be asked of non-player characters: Do NPCs have dreams?
Can they live and change as humans do? Can NPCs have personalities, and can
these develop through interactions with players, other NPCs, and the world
around them? Despite advances in personality AI for games, most NPCs are still
undeveloped and undeveloping, reacting with flat affect and predictable
routines that make them far less than human--in fact, they become little more
than bits of the scenery that give out parcels of information. This need not be
the case. Extreme AI, a psychology-based personality engine, creates adaptive
NPC personalities. Originally developed as part of the thesis "NPCs as People:
Using Databases and Behaviour Trees to Give Non-Player Characters Personality,"
Extreme AI is now a fully functioning personality engine using all thirty
facets of the Five Factor model of personality and an AI system that is live
throughout gameplay. This paper discusses the research leading to Extreme AI;
develops the ideas found in that thesis; discusses the development of other
personality engines; and provides examples of Extreme AI's use in two game
demos.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2016 22:40:29 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Georgeson",
"Jeffrey",
""
],
[
"Child",
"Christopher",
""
]
] |
new_dataset
| 0.997337 |
1609.04913
|
Macauley Coggins
|
Macauley Coggins
|
Design of an Optoelectronic State Machine with integrated BDD based
Optical logic
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper I demonstrate a novel design for an optoelectronic State
Machine which replaces input/output forming logic found in conventional state
machines with BDD based optical logic while still using solid state memory in
the form of flip-flops in order to store states. This type of logic makes use
of waveguides and ring resonators to create binary switches. These switches in
turn can be used to create combinational logic which can be used as
input/output forming logic for a state machine. Replacing conventional
combinational logic with BDD based optical logic allows for a faster range of
state machines that can certainly outperform conventional state machines as
propagation delays within the logic described are in the order of picoseconds
as opposed to nanoseconds in digital logic.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 06:13:37 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Coggins",
"Macauley",
""
]
] |
new_dataset
| 0.971213 |
1609.04919
|
Alex James Dr
|
Akshay Kumar Maan, Alex Pappachen James
|
Voltage Controlled Memristor Threshold Logic Gates
|
To appear in 2016 IEEEE Asia Pacific Conference on Circuits & Systems
(IEEE APCCAS 2016), Jeju, Korea, October 25-28, 2016
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a resistive switching memristor cell for
implementing universal logic gates. The cell has a weighted control input whose
resistance is set based on a control signal that generalizes the operational
regime from NAND to NOR functionality. We further show how threshold logic in
the voltage-controlled resistive cell can be used to implement a XOR logic.
Building on the same principle we implement a half adder and a 4-bit CLA (Carry
Look-ahead Adder) and show that in comparison with CMOS-only logic, the
proposed system shows significant improvements in terms of device area, power
dissipation and leakage power.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 06:49:35 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Maan",
"Akshay Kumar",
""
],
[
"James",
"Alex Pappachen",
""
]
] |
new_dataset
| 0.999165 |
1609.04921
|
Alex James Dr
|
Askhat Zhanbossinov, Kamilya Smagulova, Alex Pappachen James
|
CMOS-Memristor Dendrite Threshold Circuits
|
Zhanbossinov, K. Smagulova, A. P. James, CMOS-Memristor Dendrite
Threshold Circuits, 2016 IEEE APCCAS, Jeju, Korea, October 25-28, 2016
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-linear neuron models overcome the limitations of linear binary models of
neurons, which are unable to compute linearly non-separable functions such as
XOR. While several biologically plausible models based on dendrite thresholds
have been reported in previous studies, the hardware implementation of such
non-linear neuron models remains an open problem. In this paper, we
propose a circuit design for implementing logical dendrite non-linearity
response of dendrite spike and saturation types. The proposed dendrite cells
are used to build XOR circuit and intensity detection circuit that consists of
different combinations of dendrite cells with saturating and spiking responses.
The dendrite cells are designed using a set of memristors, Zener diodes, and
CMOS NOT gates. The circuits are designed, analyzed and verified on circuit
boards.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 06:55:40 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Zhanbossinov",
"Askhat",
""
],
[
"Smagulova",
"Kamilya",
""
],
[
"James",
"Alex Pappachen",
""
]
] |
new_dataset
| 0.999449 |
1609.04955
|
Benjamin Leiding
|
Benjamin Leiding, Clemens H. Cap, Thomas Mundt, Samaneh Rashidibajgan
|
Authcoin: Validation and Authentication in Decentralized Networks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Authcoin is an alternative approach to the commonly used public key
infrastructures such as central authorities and the PGP web of trust. It
combines a challenge response-based validation and authentication process for
domains, certificates, email accounts and public keys with the advantages of a
block chain-based storage system. As a result, Authcoin does not suffer from
the downsides of existing solutions and is much more resilient to Sybil
attacks.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 08:53:05 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Leiding",
"Benjamin",
""
],
[
"Cap",
"Clemens H.",
""
],
[
"Mundt",
"Thomas",
""
],
[
"Rashidibajgan",
"Samaneh",
""
]
] |
new_dataset
| 0.998132 |
1609.05020
|
Alejandro Vaisman Dr.
|
Bart Kuijpers and Alejandro Vaisman
|
A Formal Algebra for OLAP
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online Analytical Processing (OLAP) comprises tools and algorithms that allow
querying multidimensional databases. It is based on the multidimensional model,
where data can be seen as a cube, where each cell contains one or more measures
that can be aggregated along dimensions. Despite the extensive corpus of work in
the field, a standard language for OLAP is still needed, since there is no
well-defined, accepted semantics for many of the usual OLAP operations. In
this paper, we address this problem, and present a set of operations for
manipulating a data cube. We clearly define the semantics of these operations,
and prove that they can be composed, yielding a language powerful enough to
express complex OLAP queries. We express these operations as a sequence of
atomic transformations over a fixed multidimensional matrix, whose cells
contain a sequence of measures. Each atomic transformation produces a new
measure. When a sequence of transformations defines an OLAP operation, a flag
is produced indicating which cells must be considered as input for the next
operation. In this way, an elegant algebra is defined. Our main contribution,
with respect to other similar efforts in the field is that, for the first time,
a formal proof of the correctness of the operations is given, thus providing a
clear semantics for them. We believe the present work will serve as a basis to
build more solid practical tools for data analysis.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 12:17:34 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Kuijpers",
"Bart",
""
],
[
"Vaisman",
"Alejandro",
""
]
] |
new_dataset
| 0.989602 |
1609.05080
|
Yunyan Chang
|
Yunyan Chang, Peter Jung, Chan Zhou, and Slawomir Stanczak
|
Block Compressed Sensing Based Distributed Device Detection for M2M
Communications
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we utilize the framework of compressed sensing (CS) for
distributed device detection and resource allocation in large-scale
machine-to-machine (M2M) communication networks. The devices deployed in the
network are partitioned into clusters according to some pre-defined criteria.
Moreover, the devices in each cluster are assigned a unique signature of a
particular design that can be used to indicate their active status to the
network. The proposed scheme in this work mainly consists of two essential
steps: (i) The base station (BS) detects the active clusters and the number of
active devices in each cluster using a novel block sketching algorithm, and
then assigns a certain amount of resources accordingly. (ii) Each active device
detects its ranking among all the active devices in its cluster using an
enhanced greedy algorithm and accesses the corresponding resource for
transmission based on the ranking. By exploiting the correlation in the device
behaviors and the sparsity in the activation pattern of the M2M devices, the
device detection problem is thus tackled as a CS support recovery procedure for
a particular binary block-sparse signal $x\in\mathbb{B}^N$ -- with block
sparsity $K_B$ and in-block sparsity $K_I$ over block size $d$. Theoretical
analysis shows that the activation pattern of the M2M devices can be reliably
reconstructed within an acquisition time of $\mathcal{O}(\max\{K_B\log N,
K_BK_I\log d\})$, which achieves a better scaling and less computational
complexity of $\mathcal{O}(N(K_I^2+\log N))$ compared with standard CS
algorithms. Moreover, extensive simulations confirm the robustness of the
proposed scheme in the detection process, especially in terms of higher
detection probability and reduced access delay when compared with conventional
schemes like LTE random access (RA) procedure and classic cluster-based access
approaches.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 14:30:25 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Chang",
"Yunyan",
""
],
[
"Jung",
"Peter",
""
],
[
"Zhou",
"Chan",
""
],
[
"Stanczak",
"Slawomir",
""
]
] |
new_dataset
| 0.994095 |
1609.05118
|
Hamid Tizhoosh
|
Mina Nouredanesh, H.R. Tizhoosh, Ershad Banijamali, James Tung
|
Radon-Gabor Barcodes for Medical Image Retrieval
|
To appear in proceedings of the 23rd International Conference on
Pattern Recognition (ICPR 2016), Cancun, Mexico, December 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, with the explosion of digital images on the Web,
content-based retrieval has emerged as a significant research area. Shapes,
textures, edges and segments may play a key role in describing the content of
an image. Radon and Gabor transforms are both powerful techniques that have
been widely studied to extract shape-texture-based information. The combined
Radon-Gabor features may be more robust against scale/rotation variations,
presence of noise, and illumination changes. The objective of this paper is to
harness the potentials of both Gabor and Radon transforms in order to introduce
expressive binary features, called barcodes, for image annotation/tagging
tasks. We propose two different techniques: Gabor-of-Radon-Image Barcodes
(GRIBCs), and Guided-Radon-of-Gabor Barcodes (GRGBCs). For validation, we
employ the IRMA x-ray dataset with 193 classes, containing 12,677 training
images and 1,733 test images. A total error score as low as 322 and 330 were
achieved for GRGBCs and GRIBCs, respectively. This corresponds to $\approx
81\%$ retrieval accuracy for the first hit.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2016 16:01:43 GMT"
}
] | 2016-09-19T00:00:00 |
[
[
"Nouredanesh",
"Mina",
""
],
[
"Tizhoosh",
"H. R.",
""
],
[
"Banijamali",
"Ershad",
""
],
[
"Tung",
"James",
""
]
] |
new_dataset
| 0.999388 |
1609.04453
|
Terrell Mundhenk
|
T. Nathan Mundhenk, Goran Konjevod, Wesam A. Sakla, Kofi Boakye
|
A Large Contextual Dataset for Classification, Detection and Counting of
Cars with Deep Learning
|
ECCV 2016 Pre-press revision
| null | null | null |
cs.CV cs.DC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have created a large diverse set of cars from overhead images, which are
useful for training a deep learner to binary classify, detect and count them.
The dataset and all related material will be made publicly available. The set
contains contextual matter to aid in identification of difficult targets. We
demonstrate classification and detection on this dataset using a neural network
we call ResCeption. This network combines residual learning with
Inception-style layers and is used to count cars in one look. This is a new way
to count objects rather than by localization or density estimation. It is
fairly accurate, fast and easy to implement. Additionally, the counting method
is not car or scene specific. It would be easy to train this method to count
other kinds of objects, and counting over new scenes requires no extra setup or
assumptions about object locations.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 21:44:58 GMT"
}
] | 2016-09-16T00:00:00 |
[
[
"Mundhenk",
"T. Nathan",
""
],
[
"Konjevod",
"Goran",
""
],
[
"Sakla",
"Wesam A.",
""
],
[
"Boakye",
"Kofi",
""
]
] |
new_dataset
| 0.99974 |
1609.04499
|
Mohammed Eltayeb
|
Mohammed E. Eltayeb, Junil Choi, Tareq Y. Al-Naffouri, and Robert W.
Heath Jr
|
On the Security of Millimeter Wave Vehicular Communication Systems using
Random Antenna Subsets
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter wave (mmWave) vehicular communication systems have the potential
to improve traffic efficiency and safety. Lack of secure communication links,
however, may lead to a formidable set of abuses and attacks. To secure
communication links, a physical layer precoding technique for mmWave vehicular
communication systems is proposed in this paper. The proposed technique
exploits the large dimensional antenna arrays available at mmWave systems to
produce direction dependent transmission. This results in coherent transmission
to the legitimate receiver and artificial noise that jams eavesdroppers with
sensitive receivers. Theoretical and numerical results demonstrate the validity
and effectiveness of the proposed technique and show that the proposed
technique provides high secrecy throughput when compared to conventional array
and switched array transmission techniques.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2016 03:16:50 GMT"
}
] | 2016-09-16T00:00:00 |
[
[
"Eltayeb",
"Mohammed E.",
""
],
[
"Choi",
"Junil",
""
],
[
"Al-Naffouri",
"Tareq Y.",
""
],
[
"Heath",
"Robert W.",
"Jr"
]
] |
new_dataset
| 0.997344 |
1609.04602
|
Hongxi Tong
|
Hongxi Tong, Xiaoqing Wang
|
New MDS or near MDS self-dual codes over finite fields
|
12 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The study of MDS self-dual codes has attracted a lot of attention in recent
years. There are many papers on determining the existence of $q$-ary MDS
self-dual codes for various lengths. However, $q$-ary MDS self-dual codes do
not exist for some lengths, even for lengths $< q$. We generalize MDS Euclidean
self-dual codes to near MDS Euclidean self-dual codes and near MDS isodual
codes, and we obtain many new near MDS isodual codes from extended negacyclic
duadic codes as well as many new MDS Euclidean self-dual codes from known MDS
Euclidean self-dual codes. We also generalize MDS Hermitian self-dual codes to
near MDS Hermitian self-dual codes, and we obtain near MDS Hermitian self-dual
codes from extended negacyclic duadic codes and from MDS Hermitian self-dual
codes.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2016 12:40:30 GMT"
}
] | 2016-09-16T00:00:00 |
[
[
"Tong",
"Hongxi",
""
],
[
"Wang",
"Xiaoqing",
""
]
] |
new_dataset
| 0.985033 |
1609.04730
|
Daniel Pickem
|
Daniel Pickem, Paul Glotfelter, Li Wang, Mark Mote, Aaron Ames, Eric
Feron, Magnus Egerstedt
|
The Robotarium: A remotely accessible swarm robotics research testbed
|
8 pages, 5 figures, 21 references. arXiv admin note: text overlap
with arXiv:1604.00640
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the Robotarium -- a remotely accessible, multi-robot
research facility. The impetus behind the Robotarium is that multi-robot
testbeds constitute an integral and essential part of the multi-robot research
cycle, yet they are expensive, complex, and time-consuming to develop, operate,
and maintain. These resource constraints, in turn, limit access for large
groups of researchers and students, which is what the Robotarium is remedying
by providing users with remote access to a state-of-the-art multi-robot test
facility. This paper details the design and operation of the Robotarium and
discusses the considerations one must take when making complex hardware
remotely accessible. In particular, safety must be built into the system
already at the design phase without overly constraining what coordinated
control programs users can upload and execute, which calls for minimally
invasive safety routines with provable performance guarantees.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2016 16:45:24 GMT"
}
] | 2016-09-16T00:00:00 |
[
[
"Pickem",
"Daniel",
""
],
[
"Glotfelter",
"Paul",
""
],
[
"Wang",
"Li",
""
],
[
"Mote",
"Mark",
""
],
[
"Ames",
"Aaron",
""
],
[
"Feron",
"Eric",
""
],
[
"Egerstedt",
"Magnus",
""
]
] |
new_dataset
| 0.997378 |
1605.07224
|
Cewei Cui
|
Cewei Cui and Zhe Dang
|
A Free Energy Foundation of Semantic Similarity in Automata and
Languages
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper develops a free energy theory from physics including the
variational principles for automata and languages and also provides algorithms
to compute the energy as well as efficient algorithms for estimating the
nondeterminism in a nondeterministic finite automaton. This theory is then used
as a foundation to define a semantic similarity metric for automata and
languages. Since automata are a fundamental model for all modern programs while
languages are a fundamental model for the programs' behaviors, we believe that
the theory and the metric developed in this paper can be further used for
real-word programs as well.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2016 22:13:21 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2016 09:21:30 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Cui",
"Cewei",
""
],
[
"Dang",
"Zhe",
""
]
] |
new_dataset
| 0.967494 |
1607.00226
|
George MacCartney Jr
|
George R. MacCartney Jr., Sijia Deng, Shu Sun, and Theodore S.
Rappaport
|
Millimeter-Wave Human Blockage at 73 GHz with a Simple Double Knife-Edge
Diffraction Model and Extension for Directional Antennas
|
To be published in 2016 IEEE 84th Vehicular Technology Conference
(VTC2016-Fall), Montreal, Canada, Sept. 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents 73 GHz human blockage measurements for a point-to-point
link with a 5 m transmitter-receiver separation distance in an indoor
environment, with a human that walked at a speed of approximately 1 m/s at a
perpendicular orientation to the line between the transmitter and receiver, at
various distances between them. The experiment measures the shadowing effect of
a moving human body when using directional antennas at the transmitter and
receiver for millimeter-wave radio communications. The measurements were
conducted using a 500 Megachips-per-second wideband correlator channel sounder
with a 1 GHz first null-to-null RF bandwidth. Results indicate high shadowing
attenuation is not just due to the human blocker but also is due to the static
directional nature of the antennas used, leading to the need for phased-array
antennas to switch beam directions in the presence of obstructions and
blockages at millimeter-waves. A simple model for human blockage is provided
based on the double knife-edge diffraction (DKED) model where humans are
approximated by a rectangular screen with infinite vertical height, similar to
the human blockage model given by the METIS project.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 12:53:36 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2016 15:50:50 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Sep 2016 15:55:01 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"MacCartney",
"George R.",
"Jr."
],
[
"Deng",
"Sijia",
""
],
[
"Sun",
"Shu",
""
],
[
"Rappaport",
"Theodore S.",
""
]
] |
new_dataset
| 0.999285 |
1608.08334
|
Shervin Ardeshir
|
Shervin Ardeshir and Ali Borji
|
Egocentric Meets Top-view
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thanks to the availability and increasing popularity of egocentric cameras such
as GoPro cameras and glasses, we have been provided with a plethora of videos
captured from the first-person perspective. Surveillance cameras and Unmanned
Aerial Vehicles (also known as drones) also offer a tremendous amount of
videos, mostly with a top-down or oblique viewpoint. Egocentric vision and
top-view surveillance videos have been studied extensively in the past in the
computer vision community. However, the relationship between the two has yet to
be explored thoroughly. In this effort, we attempt to explore this relationship
by approaching two questions. First, having a set of egocentric videos and a
top-view video, can we verify if the top-view video contains all, or some of
the egocentric viewers present in the egocentric set? And second, can we
identify the egocentric viewers in the content of the top-view video? In other
words, can we find the cameramen in the surveillance videos? These problems can
become more challenging when the videos are not time-synchronous. Thus we
formalize the problem in a way which handles and also estimates the unknown
relative time-delays between the egocentric videos and the top-view video. We
formulate the problem as a spectral graph matching instance, and jointly seek
the optimal assignments and relative time-delays of the videos. As a result, we
spatiotemporally localize the egocentric observers in the top-view video. We
model each view (egocentric or top) using a graph, and compute the assignment
and time-delays in an iterative-alternative fashion.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2016 05:42:07 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2016 18:51:14 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Ardeshir",
"Shervin",
""
],
[
"Borji",
"Ali",
""
]
] |
new_dataset
| 0.966514 |
1609.03986
|
Tal Hassner
|
Christopher Parker, Matthew Daiter, Kareem Omar, Gil Levi and Tal
Hassner
|
The CUDA LATCH Binary Descriptor: Because Sometimes Faster Means Better
|
Accepted to ECCV'16 workshops
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accuracy, descriptor size, and the time required for extraction and matching
are all important factors when selecting local image descriptors. To optimize
over all these requirements, this paper presents a CUDA port for the recent
Learned Arrangement of Three Patches (LATCH) binary descriptors to the GPU
platform. The design of LATCH makes it well suited for GPU processing. Owing to
its small size and binary nature, the GPU can further be used to efficiently
match LATCH features. Taken together, this leads to breakneck descriptor
extraction and matching speeds. We evaluate the trade off between these speeds
and the quality of results in a feature matching intensive application. To this
end, we use our proposed CUDA LATCH (CLATCH) to recover structure from motion
(SfM), comparing 3D reconstructions and speed using different representations.
Our results show that CLATCH provides high quality 3D reconstructions at
fractions of the time required by other representations, with little, if any,
loss of reconstruction quality.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2016 19:24:02 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2016 00:51:19 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Parker",
"Christopher",
""
],
[
"Daiter",
"Matthew",
""
],
[
"Omar",
"Kareem",
""
],
[
"Levi",
"Gil",
""
],
[
"Hassner",
"Tal",
""
]
] |
new_dataset
| 0.993898 |
1609.04079
|
Ayan Chakrabarti
|
Ayan Chakrabarti, Kalyan Sunkavalli
|
Single-image RGB Photometric Stereo With Spatially-varying Albedo
|
3DV 2016. Project page at http://www.ttic.edu/chakrabarti/rgbps/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a single-shot system to recover surface geometry of objects with
spatially-varying albedos, from images captured under a calibrated RGB
photometric stereo setup---with three light directions multiplexed across
different color channels in the observed RGB image. Since the problem is
ill-posed point-wise, we assume that the albedo map can be modeled as
piece-wise constant with a restricted number of distinct albedo values. We show
that under ideal conditions, the shape of a non-degenerate local constant
albedo surface patch can theoretically be recovered exactly. Moreover, we
present a practical and efficient algorithm that uses this model to robustly
recover shape from real images. Our method first reasons about shape locally in
a dense set of patches in the observed image, producing shape distributions for
every patch. These local distributions are then combined to produce a single
consistent surface normal map. We demonstrate the efficacy of the approach
through experiments on both synthetic renderings as well as real captured
images.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 00:39:58 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Chakrabarti",
"Ayan",
""
],
[
"Sunkavalli",
"Kalyan",
""
]
] |
new_dataset
| 0.974706 |
1609.04083
|
Yuan Cao
|
Yonglin Cao, Yuan Cao and Fang-Wei Fu
|
Left dihedral codes over Galois rings ${\rm GR}(p^2,m)$
| null | null | null | null |
cs.IT math.IT math.RA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $D_{2n}=\langle x,y\mid x^n=1, y^2=1, yxy=x^{-1}\rangle$ be a dihedral
group, and $R={\rm GR}(p^2,m)$ be a Galois ring of characteristic $p^2$ and
cardinality $p^{2m}$ where $p$ is a prime. Left ideals of the group ring
$R[D_{2n}]$ are called left dihedral codes over $R$ of length $2n$, and
abbreviated as left $D_{2n}$-codes over $R$. Let ${\rm gcd}(n,p)=1$ in this
paper. Then any left $D_{2n}$-code over $R$ is uniquely decomposed into a
direct sum of concatenated codes with inner codes ${\cal A}_i$ and outer codes
$C_i$, where ${\cal A}_i$ is a cyclic code over $R$ of length $n$ and $C_i$ is
a skew cyclic code of length $2$ over an extension Galois ring or principal
ideal ring of $R$, and a generator matrix and basic parameters for each outer
code $C_i$ are given. Moreover, a formula to count the number of these codes is
obtained, the dual code for each left $D_{2n}$-code is determined and all
self-dual left $D_{2n}$-codes and self-orthogonal left $D_{2n}$-codes over $R$
are presented, respectively.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 00:54:06 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Cao",
"Yonglin",
""
],
[
"Cao",
"Yuan",
""
],
[
"Fu",
"Fang-Wei",
""
]
] |
new_dataset
| 0.995135 |
1609.04085
|
EPTCS
|
Patrick Ah-Fat (Imperial College London), Michael Huth (Imperial
College London)
|
Partial Solvers for Parity Games: Effective Polynomial-Time Composition
|
In Proceedings GandALF 2016, arXiv:1609.03648
|
EPTCS 226, 2016, pp. 1-15
|
10.4204/EPTCS.226.1
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Partial methods play an important role in formal methods and beyond. Recently
such methods were developed for parity games, where polynomial-time partial
solvers decide the winners of a subset of nodes. We investigate here how
effective polynomial-time partial solvers can be by studying interactions of
partial solvers based on generic composition patterns that preserve
polynomial-time computability. We show that use of such composition patterns
discovers new partial solvers - including those that merge node sets that have
the same but unknown winner - by studying games that composed partial solvers
can neither solve nor simplify. We experimentally validate that this
data-driven approach to refinement leads to polynomial-time partial solvers
that can solve all standard benchmarks of structured games. For one of these
polynomial-time partial solvers, not a single game among a few billion random
games of varying configuration was found that it could not solve completely.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 00:57:34 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Ah-Fat",
"Patrick",
"",
"Imperial College London"
],
[
"Huth",
"Michael",
"",
"Imperial\n College London"
]
] |
new_dataset
| 0.972532 |
1609.04088
|
EPTCS
|
Nick Bezhanishvili (ILLC, University of Amsterdam), Clemens Kupke
(University of Strathclyde)
|
Games for Topological Fixpoint Logic
|
In Proceedings GandALF 2016, arXiv:1609.03648
|
EPTCS 226, 2016, pp. 46-60
|
10.4204/EPTCS.226.4
| null |
cs.LO cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Topological fixpoint logics are a family of logics that admits topological
models and where the fixpoint operators are defined with respect to the
topological interpretations. Here we consider a topological fixpoint logic for
relational structures based on Stone spaces, where the fixpoint operators are
interpreted via clopen sets. We develop a game-theoretic semantics for this
logic. First we introduce games characterising clopen fixpoints of monotone
operators on Stone spaces. These fixpoint games allow us to characterise the
semantics for our topological fixpoint logic using a two-player graph game.
Adequacy of this game is the main result of our paper. Finally, we define
bisimulations for the topological structures under consideration and use our
game semantics to prove that the truth of a formula of our topological fixpoint
logic is bisimulation-invariant.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 00:58:01 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Bezhanishvili",
"Nick",
"",
"ILLC, University of Amsterdam"
],
[
"Kupke",
"Clemens",
"",
"University of Strathclyde"
]
] |
new_dataset
| 0.999239 |
1609.04091
|
EPTCS
|
Davide Bresolin (University of Bologna), Emilio Mu\~noz-Velasco
(University of Malaga), Guido Sciavicco (University of Ferrara)
|
On the Expressive Power of Sub-Propositional Fragments of Modal Logic
|
In Proceedings GandALF 2016, arXiv:1609.03648
|
EPTCS 226, 2016, pp. 91-104
|
10.4204/EPTCS.226.7
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modal logic is a paradigm for several useful and applicable formal systems in
computer science. It generally retains the low complexity of classical
propositional logic, but notable exceptions exist in the domains of
description, temporal, and spatial logic, where the most expressive formalisms
have a very high complexity or are even undecidable. In search of
computationally well-behaved fragments, clausal forms and other
sub-propositional restrictions of temporal and description logics have been
recently studied. This renewed interest in sub-propositional logics, which
mainly focuses on the complexity of the various fragments, raises natural
questions about their relative expressive power, which we try to answer here
for the basic multi-modal logic Kn. We consider the Horn and the Krom
restrictions, as well as the combined restriction (known as the core fragment)
of modal logic, and, orthogonally, the fragments that emerge by disallowing
boxes or diamonds from positive literals. We study the problem in a very
general setting, to ease transferring our results to other meaningful cases.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 00:58:29 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Bresolin",
"Davide",
"",
"University of Bologna"
],
[
"Muñoz-Velasco",
"Emilio",
"",
"University of Malaga"
],
[
"Sciavicco",
"Guido",
"",
"University of Ferrara"
]
] |
new_dataset
| 0.990799 |
1609.04096
|
EPTCS
|
Pierre Ganty (IMDEA Software Institute, Madrid, Spain), Damir Valput
(IMDEA Software Institute, Madrid, Spain)
|
Bounded-oscillation Pushdown Automata
|
In Proceedings GandALF 2016, arXiv:1609.03648
|
EPTCS 226, 2016, pp. 178-197
|
10.4204/EPTCS.226.13
| null |
cs.FL cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an underapproximation for context-free languages by filtering out
runs of the underlying pushdown automaton depending on how the stack height
evolves over time. In particular, we assign to each run a number quantifying
the oscillating behavior of the stack along the run. We study languages
accepted by pushdown automata restricted to k-oscillating runs. We relate
oscillation on pushdown automata with a counterpart restriction on context-free
grammars. We also provide a way to filter all but the k-oscillating runs from a
given PDA by annotating stack symbols with information about the oscillation.
Finally, we study closure properties of the defined class of languages and the
complexity of the k-emptiness problem asking, given a pushdown automaton P and
k >= 0, whether P has a k-oscillating run. We show that, when k is not part of
the input, the k-emptiness problem is NLOGSPACE-complete.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 00:59:26 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Ganty",
"Pierre",
"",
"IMDEA Software Institute, Madrid, Spain"
],
[
"Valput",
"Damir",
"",
"IMDEA Software Institute, Madrid, Spain"
]
] |
new_dataset
| 0.984087 |
1609.04100
|
EPTCS
|
Tomer Libal (Inria), Marco Volpe (Inria)
|
Certification of Prefixed Tableau Proofs for Modal Logic
|
In Proceedings GandALF 2016, arXiv:1609.03648
|
EPTCS 226, 2016, pp. 257-271
|
10.4204/EPTCS.226.18
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different theorem provers tend to produce proof objects in different formats
and this is especially the case for modal logics, where several deductive
formalisms (and provers based on them) have been presented. This work falls
within the general project of establishing a common specification language in
order to certify proofs given in a wide range of deductive formalisms. In
particular, by using a translation from the modal language into a first-order
polarized language and a checker whose small kernel is based on a classical
focused sequent calculus, we are able to certify modal proofs given in labeled
sequent calculi, prefixed tableaux and free-variable prefixed tableaux. We
describe the general method for the logic K, present its implementation in a
prolog-like language, provide some examples and discuss how to extend the
approach to other normal modal logics.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 01:00:16 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Libal",
"Tomer",
"",
"Inria"
],
[
"Volpe",
"Marco",
"",
"Inria"
]
] |
new_dataset
| 0.956232 |
1609.04147
|
Abhishek Sawarkar
|
Abhishek Sawarkar, Vishal Chaudhari, Rahul Chavan, Varun Zope, Akshay
Budale, Faruk Kazi
|
HMD Vision-based Teleoperating UGV and UAV for Hostile Environment using
Deep Learning
|
6 pages, 9 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maintaining a robust antiterrorist task force has become imperative in recent
times with the resurgence of rogue elements in society. A well-equipped combat
force warrants the safety and security of citizens and the
integrity of the sovereign state. In this paper we propose a novel
teleoperating robot which can play a major role in combat, rescue and
reconnaissance missions by substantially reducing the loss of human soldiers in
such hostile environments. The proposed robotic solution consists of an
unmanned ground vehicle equipped with an IP camera visual system broadcasting
real-time video data to a remote cloud server. With the advancement in machine
learning algorithms in the field of computer vision, we incorporate state of
the art deep convolutional neural networks to identify and predict individuals
with malevolent intent. The classification is performed on every frame of the
video stream by the trained network in the cloud server. The predicted output
of the network is overlaid on the video stream with specific colour marks and
prediction percentage. Finally the data is resized into half-side by side
format and streamed to the head mount display worn by the human controller
which facilitates first person view of the scenario. The ground vehicle is also
coupled with an unmanned aerial vehicle for aerial surveillance. The proposed
scheme is an assistive system and the final decision evidently lies with the
human handler.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 07:03:15 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Sawarkar",
"Abhishek",
""
],
[
"Chaudhari",
"Vishal",
""
],
[
"Chavan",
"Rahul",
""
],
[
"Zope",
"Varun",
""
],
[
"Budale",
"Akshay",
""
],
[
"Kazi",
"Faruk",
""
]
] |
new_dataset
| 0.999178 |
1609.04173
|
Kasun Samarasinghe
|
Pierre Leone and Kasun Samarasinghe
|
Every Schnyder Drawing is a Greedy Embedding
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geographic routing is a routing paradigm, which uses geographic coordinates
of network nodes to determine routes. Greedy routing, the simplest form of
geographic routing, forwards a packet to the closest neighbor towards the
destination. A greedy embedding is an embedding of a graph on a geometric space
such that greedy routing always guarantees delivery. A Schnyder drawing is a
classical way to draw a planar graph. In this manuscript, we show that every
Schnyder drawing is a greedy embedding, based on a generalized definition of
greedy routing.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 08:45:50 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Leone",
"Pierre",
""
],
[
"Samarasinghe",
"Kasun",
""
]
] |
new_dataset
| 0.998375 |
1609.04197
|
Albert Sunny
|
Albert Sunny, Sumankumar Panchal, Nikhil Vidhani, Subhashini
Krishnasamy, S.V.R. Anand, Malati Hegde, Joy Kuri, Anurag Kumar
|
ADWISERv2: A Plug-and-play Controller for Managing TCP Transfers in
IEEE~802.11 Infrastructure WLANs with Multiple Access Points
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a generic plug-and-play controller that ensures
fair and efficient operation of IEEE~802.11 infrastructure wireless local area
networks with multiple co-channel access points, without any change to
hardware/firmware of the network devices. Our controller addresses performance
issues of TCP transfers in multi-AP WLANs, by overlaying a coarse time-slicing
scheduler on top of a cascaded fair queuing scheduler. The time slices and
queue weights, used in our controller, are obtained from the solution of a
constrained utility optimization formulation. A study of the impact of coarse
time-slicing on TCP is also presented in this paper. We present an improved
algorithm for adaptation of the service rate of the fair queuing scheduler and
provide experimental results to illustrate its efficacy. We also present the
changes that need to be incorporated to the proposed approach, to handle
short-lived and interactive TCP flows. Finally, we report the results of
experiments performed on a real testbed, demonstrating the efficacy of our
controller.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 10:07:06 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Sunny",
"Albert",
""
],
[
"Panchal",
"Sumankumar",
""
],
[
"Vidhani",
"Nikhil",
""
],
[
"Krishnasamy",
"Subhashini",
""
],
[
"Anand",
"S. V. R.",
""
],
[
"Hegde",
"Malati",
""
],
[
"Kuri",
"Joy",
""
],
[
"Kumar",
"Anurag",
""
]
] |
new_dataset
| 0.999616 |
1609.04216
|
Marko Angjelichinoski
|
Marko Angjelichinoski, Cedomir Stefanovic, Petar Popovski
|
Modemless Multiple Access Communications over Powerlines for DC
Microgrid Control
|
Submitted to MACOM 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a communication solution tailored specifically for DC microgrids
(MGs) that exploits: (i) the communication potential residing in power
electronic converters interfacing distributed generators to powerlines and (ii)
the multiple access nature of the communication channel presented by
powerlines. The communication is achieved by modulating the parameters of the
primary control loop implemented by the converters, fostering execution of the
upper layer control applications. We present the proposed solution in the
context of the distributed optimal economic dispatch, where the generators
periodically transmit information about their local generation capacity, and,
simultaneously, using the properties of the multiple access channel, detect the
aggregate generation capacity of the remote peers, with an aim of distributed
computation of the optimal dispatch policy. We evaluate the potential of the
proposed solution and illustrate its inherent trade-offs.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2016 10:57:51 GMT"
}
] | 2016-09-15T00:00:00 |
[
[
"Angjelichinoski",
"Marko",
""
],
[
"Stefanovic",
"Cedomir",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.963816 |
1609.02453
|
Francisco Couto
|
Diogo Goncalves and Miguel Costa and Francisco M. Couto
|
A Large-Scale Characterization of User Behaviour in Cable TV
|
in 3rd Workshop on Recommendation Systems for Television and online
Video (RecSysTV), At Boston, MA, USA, 2016
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, Cable TV operators provide their users multiple ways to watch TV
content, such as Live TV and Video on Demand (VOD) services. In the last years,
Catch-up TV has been introduced, allowing users to watch recent broadcast
content whenever they want to. Understanding how the users interact with such
services is important to develop solutions that may increase user satisfaction,
user engagement and user consumption. In this paper, we characterize, for the
first time, how users interact with a large European Cable TV operator that
provides Live TV, Catch-up TV and VOD services. We analyzed many
characteristics, such as the service usage, user engagement, program type,
program genres and time periods. This characterization will help us to have a
deeper understanding of how users interact with these different services, which
may be used to enhance the recommendation systems of Cable TV providers.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2016 14:59:49 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2016 10:16:10 GMT"
}
] | 2016-09-14T00:00:00 |
[
[
"Goncalves",
"Diogo",
""
],
[
"Costa",
"Miguel",
""
],
[
"Couto",
"Francisco M.",
""
]
] |
new_dataset
| 0.977344 |
1609.03193
|
Gabriel Synnaeve
|
Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve
|
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
|
8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars)
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
forced alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2016 18:56:53 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2016 02:49:05 GMT"
}
] | 2016-09-14T00:00:00 |
[
[
"Collobert",
"Ronan",
""
],
[
"Puhrsch",
"Christian",
""
],
[
"Synnaeve",
"Gabriel",
""
]
] |
new_dataset
| 0.998939 |