id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2107.05566
|
Philipp Seifer
|
Philipp Seifer, Ralf L\"ammel, Steffen Staab
|
ProGS: Property Graph Shapes Language (Extended Version)
| null |
ISWC 2021 - 20th International Semantic Web Conference. Vol.
12922. LNCS. Springer, 2021, pp. 392-409
|
10.1007/978-3-030-88361-4_23
| null |
cs.DB cs.AI cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Property graphs constitute data models for representing knowledge graphs.
They allow for the convenient representation of facts, including facts about
facts, represented by triples in subject or object position of other triples.
Knowledge graphs such as Wikidata are created by a diversity of contributors
and a range of sources, leaving them prone to two types of errors. The first
type of error, falsity of facts, is addressed by property graphs through the
representation of provenance and validity, making triples occur as first-order
objects in subject position of metadata triples. The second type of error,
violation of domain constraints, has not been addressed with regard to property
graphs so far. In RDF representations, this error can be addressed by shape
languages such as SHACL or ShEx, which allow for checking whether graphs are
valid with respect to a set of domain constraints. Borrowing ideas from the
syntax and semantics definitions of SHACL, we design a shape language for
property graphs, ProGS, which allows for formulating shape constraints on
property graphs including their specific constructs, such as edges with
identities and key-value annotations to both nodes and edges. We define a
formal semantics of ProGS, investigate the resulting complexity of validating
property graphs against sets of ProGS shapes, compare with corresponding
results for SHACL, and implement a prototypical validator that utilizes answer
set programming.
|
[
{
"version": "v1",
"created": "Mon, 12 Jul 2021 16:44:21 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Seifer",
"Philipp",
""
],
[
"Lämmel",
"Ralf",
""
],
[
"Staab",
"Steffen",
""
]
] |
new_dataset
| 0.995003 |
2206.02883
|
Mitchell Jones
|
Mitchell Jones and Maximilian Haas-Heger and Jur van den Berg
|
Lane-Level Route Planning for Autonomous Vehicles
|
Appeared at the 15th International Workshop on the Algorithmic
Foundations of Robotics (WAFR) 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present an algorithm that, given a representation of a road network in
lane-level detail, computes a route that minimizes the expected cost to reach a
given destination. In doing so, our algorithm allows us to solve for the
complex trade-offs encountered when trying to decide not just which roads to
follow, but also when to change between the lanes making up these roads, in
order to -- for example -- reduce the likelihood of missing a left exit while
not unnecessarily driving in the leftmost lane. This routing problem can
naturally be formulated as a Markov Decision Process (MDP), in which lane
change actions have stochastic outcomes. However, MDPs are known to be
time-consuming to solve in general. In this paper, we show that -- under
reasonable assumptions -- we can use a Dijkstra-like approach to solve this
stochastic problem, and benefit from its efficient $O(n \log n)$ running time.
This enables an autonomous vehicle to exhibit lane-selection behavior as it
efficiently plans an optimal route to its destination.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 20:19:32 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 16:07:17 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Jones",
"Mitchell",
""
],
[
"Haas-Heger",
"Maximilian",
""
],
[
"Berg",
"Jur van den",
""
]
] |
new_dataset
| 0.998147 |
2209.13397
|
Thomas Wiemann
|
Alexander Mock, Thomas Wiemann, Joachim Hertzberg
|
Rmagine: 3D Range Sensor Simulation in Polygonal Maps via Raytracing for
Embedded Hardware on Mobile Robots
| null | null |
10.1109/ICRA48891.2023.10161388
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensor simulation has emerged as a promising and powerful technique for
solving many real-world robotic tasks such as localization and pose tracking.
However, commonly used simulators have high hardware requirements and are
therefore used mostly on high-end computers. In this paper, we present an
approach to simulate range sensors directly on the embedded hardware of mobile
robots that use triangle meshes as environment maps. This library, called
Rmagine, allows a robot to simulate sensor data for arbitrary range sensors
directly on board via raytracing. Since robots typically have only limited
computational resources, Rmagine aims to be flexible and lightweight, while
scaling well even to large environment maps. It runs on several platforms, from
laptops to embedded computing boards such as the Nvidia Jetson, by putting a
unified API over the specific proprietary libraries provided by the hardware
manufacturers. This work is designed to support the future development of
robotic applications that depend on simulated range data which previously could
not be computed in reasonable time on mobile systems.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 14:00:23 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Mock",
"Alexander",
""
],
[
"Wiemann",
"Thomas",
""
],
[
"Hertzberg",
"Joachim",
""
]
] |
new_dataset
| 0.999691 |
2302.04529
|
Martijn Goorden
|
Martijn A. Goorden, Kim G. Larsen, Axel Legay, Florian Lorber, Ulrik
Nyman, Andrzej Wasowski
|
Timed I/O Automata: It is never too late to complete your timed
specification theory
|
Version submitted for review
| null | null | null |
cs.FL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
A specification theory combines notions of specifications and implementations
with a satisfaction relation, a refinement relation and a set of operators
supporting stepwise design. We develop a complete specification framework for
real-time systems using Timed I/O Automata as the specification formalism, with
the semantics expressed in terms of Timed I/O Transition Systems. We provide
constructs for refinement, consistency checking, logical and structural
composition, and quotient of specifications -- all indispensable ingredients of
a compositional design methodology. The theory is backed by rigorous proofs and
is being implemented in the open-source tool ECDAR.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 09:41:48 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 07:50:12 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Goorden",
"Martijn A.",
""
],
[
"Larsen",
"Kim G.",
""
],
[
"Legay",
"Axel",
""
],
[
"Lorber",
"Florian",
""
],
[
"Nyman",
"Ulrik",
""
],
[
"Wasowski",
"Andrzej",
""
]
] |
new_dataset
| 0.999488 |
2304.10498
|
Xiaohang Tang
|
Xiaohang Tang, Le Cong Dinh, Stephen Marcus McAleer, Yaodong Yang
|
Regret-Minimizing Double Oracle for Extensive-Form Games
|
Accepted at ICML, 2023
| null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
By incorporating regret minimization, double oracle methods have demonstrated
rapid convergence to Nash Equilibrium (NE) in normal-form games and
extensive-form games, through algorithms such as online double oracle (ODO) and
extensive-form double oracle (XDO), respectively. In this study, we further
examine the theoretical convergence rate and sample complexity of such regret
minimization-based double oracle methods, utilizing a unified framework called
Regret-Minimizing Double Oracle. Based on this framework, we extend ODO to
extensive-form games and determine its sample complexity. Moreover, we
demonstrate that the sample complexity of XDO can be exponential in the number
of information sets $|S|$, owing to the exponentially decaying stopping
threshold of restricted games. To solve this problem, we propose the Periodic
Double Oracle (PDO) method, which has the lowest sample complexity among regret
minimization-based double oracle methods, being only polynomial in $|S|$.
Empirical evaluations on multiple poker and board games show that PDO achieves
significantly faster convergence than previous double oracle algorithms and
reaches a competitive level with state-of-the-art regret minimization methods.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 17:39:02 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 11:30:07 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Tang",
"Xiaohang",
""
],
[
"Dinh",
"Le Cong",
""
],
[
"McAleer",
"Stephen Marcus",
""
],
[
"Yang",
"Yaodong",
""
]
] |
new_dataset
| 0.980987 |
2305.07457
|
Tu Anh Dinh
|
Tu Anh Dinh, Jan Niehues
|
Perturbation-based QE: An Explainable, Unsupervised Word-level Quality
Estimation Method for Blackbox Machine Translation
|
Accepted to MT Summit 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Quality Estimation (QE) is the task of predicting the quality of Machine
Translation (MT) system output, without using any gold-standard translation
references. State-of-the-art QE models are supervised: they require
human-labeled quality of some MT system output on some datasets for training,
making them domain-dependent and MT-system-dependent. There has been research
on unsupervised QE, which requires glass-box access to the MT systems, or
parallel MT data to generate synthetic errors for training QE models. In this
paper, we present Perturbation-based QE - a word-level Quality Estimation
approach that works simply by analyzing MT system output on perturbed input
source sentences. Our approach is unsupervised, explainable, and can evaluate
any type of blackbox MT system, including the currently prominent large
language models (LLMs) with opaque internal processes. For language directions
with no labeled QE data, our approach has similar or better performance than
the zero-shot supervised approach on the WMT21 shared task. Our approach is
better at detecting gender bias and word-sense-disambiguation errors in
translation than supervised QE, indicating its robustness to out-of-domain
usage. The performance gap is larger when detecting errors on a nontraditional
translation-prompting LLM, indicating that our approach is more generalizable
to different MT systems. We give examples demonstrating our approach's
explainability power, where it shows which input source words have influence on
a certain MT output word.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 13:10:57 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 07:35:09 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Dinh",
"Tu Anh",
""
],
[
"Niehues",
"Jan",
""
]
] |
new_dataset
| 0.996028 |
2306.08249
|
Qingbo Kang
|
Qingbo Kang, Jun Gao, Kang Li, Qicheng Lao
|
Deblurring Masked Autoencoder is Better Recipe for Ultrasound Image
Recognition
|
Accepted by MICCAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Masked autoencoder (MAE) has attracted unprecedented attention and achieves
remarkable performance in many vision tasks. It reconstructs random masked
image patches (known as proxy task) during pretraining and learns meaningful
semantic representations that can be transferred to downstream tasks. However,
MAE has not been thoroughly explored in ultrasound imaging. In this work, we
investigate the potential of MAE for ultrasound image recognition. Motivated by
the characteristically high noise-to-signal ratio of ultrasound imaging, we
propose a novel deblurring MAE approach that incorporates deblurring into the
proxy task during pretraining. The addition of deblurring helps the pretraining
better recover the subtle details present in the ultrasound images, thus
improving the performance of the downstream classification task.
Our experimental results demonstrate the effectiveness of our deblurring MAE,
achieving state-of-the-art performance in ultrasound image classification.
Overall, our work highlights the potential of MAE for ultrasound image
recognition and presents a novel approach that incorporates deblurring to
further improve its effectiveness.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 05:29:44 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 06:26:39 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jul 2023 08:33:08 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Kang",
"Qingbo",
""
],
[
"Gao",
"Jun",
""
],
[
"Li",
"Kang",
""
],
[
"Lao",
"Qicheng",
""
]
] |
new_dataset
| 0.998551 |
2306.14824
|
Li Dong
|
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming
Ma, Furu Wei
|
Kosmos-2: Grounding Multimodal Large Language Models to the World
|
20 pages
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent referring
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 16:32:47 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jun 2023 09:11:34 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jul 2023 05:41:34 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Peng",
"Zhiliang",
""
],
[
"Wang",
"Wenhui",
""
],
[
"Dong",
"Li",
""
],
[
"Hao",
"Yaru",
""
],
[
"Huang",
"Shaohan",
""
],
[
"Ma",
"Shuming",
""
],
[
"Wei",
"Furu",
""
]
] |
new_dataset
| 0.998545 |
2307.03847
|
Vaibhav Vavilala
|
Vaibhav Vavilala, Seemandhar Jain, Rahul Vasanth, Anand Bhattad, David
Forsyth
|
Blocks2World: Controlling Realistic Scenes with Editable Primitives
|
16 pages, 15 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present Blocks2World, a novel method for 3D scene rendering and editing
that leverages a two-step process: convex decomposition of images and
conditioned synthesis. Our technique begins by extracting 3D parallelepipeds
from various objects in a given scene using convex decomposition, thus
obtaining a primitive representation of the scene. These primitives are then
utilized to generate paired data through simple ray-traced depth maps. The next
stage involves training a conditioned model that learns to generate images from
the 2D-rendered convex primitives. This step establishes a direct mapping
between the 3D model and its 2D representation, effectively learning the
transition from a 3D model to an image. Once the model is fully trained, it
offers remarkable control over the synthesis of novel and edited scenes. This
is achieved by manipulating the primitives at test time, including translating
or adding them, thereby enabling a highly customizable scene rendering process.
Our method provides a fresh perspective on 3D scene rendering and editing,
offering control and flexibility. It opens up new avenues for research and
applications in the field, including authoring and data augmentation.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 21:38:50 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 16:39:42 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Vavilala",
"Vaibhav",
""
],
[
"Jain",
"Seemandhar",
""
],
[
"Vasanth",
"Rahul",
""
],
[
"Bhattad",
"Anand",
""
],
[
"Forsyth",
"David",
""
]
] |
new_dataset
| 0.99821 |
2307.06342
|
Ahmed Ghorbel
|
Ahmed Ghorbel, Wassim Hamidouche and Luce Morin
|
ConvNeXt-ChARM: ConvNeXt-based Transform for Efficient Neural Image
Compression
|
arXiv admin note: substantial text overlap with arXiv:2307.02273.
text overlap with arXiv:2307.06091
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Over the last few years, neural image compression has gained wide attention
from research and industry, yielding promising end-to-end deep neural codecs
outperforming their conventional counterparts in rate-distortion performance.
Despite significant advancement, current methods, including attention-based
transform coding, still need to be improved in reducing the coding rate while
preserving the reconstruction fidelity, especially in non-homogeneous textured
image areas. Those models also require more parameters and a higher decoding
time. To tackle the above challenges, we propose ConvNeXt-ChARM, an efficient
ConvNeXt-based transform coding framework, paired with a compute-efficient
channel-wise auto-regressive prior that captures both global and local contexts
from the hyper and quantized latent representations. The proposed architecture
can be optimized end-to-end to fully exploit the context information and
extract compact latent representation while reconstructing higher-quality
images. Experimental results on four widely-used datasets showed that
ConvNeXt-ChARM brings consistent and significant BD-rate (PSNR) reductions
estimated on average to 5.24% and 1.22% over the versatile video coding (VVC)
reference encoder (VTM-18.0) and the state-of-the-art learned image compression
method SwinT-ChARM, respectively. Moreover, we provide model scaling studies to
verify the computational efficiency of our approach and conduct several
objective and subjective analyses to bring to the fore the performance gap
between the next generation ConvNet, namely ConvNeXt, and Swin Transformer.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 11:45:54 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Ghorbel",
"Ahmed",
""
],
[
"Hamidouche",
"Wassim",
""
],
[
"Morin",
"Luce",
""
]
] |
new_dataset
| 0.998481 |
2307.06350
|
Kaiyi Huang
|
Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, Xihui Liu
|
T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional
Text-to-image Generation
|
Project page: https://karine-h.github.io/T2I-CompBench/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the stunning ability to generate high-quality images by recent
text-to-image models, current approaches often struggle to effectively compose
objects with different attributes and relationships into a complex and coherent
scene. We propose T2I-CompBench, a comprehensive benchmark for open-world
compositional text-to-image generation, consisting of 6,000 compositional text
prompts from 3 categories (attribute binding, object relationships, and complex
compositions) and 6 sub-categories (color binding, shape binding, texture
binding, spatial relationships, non-spatial relationships, and complex
compositions). We further propose several evaluation metrics specifically
designed to evaluate compositional text-to-image generation. We introduce a new
approach, Generative mOdel fine-tuning with Reward-driven Sample selection
(GORS), to boost the compositional text-to-image generation abilities of
pretrained text-to-image models. Extensive experiments and evaluations are
conducted to benchmark previous methods on T2I-CompBench, and to validate the
effectiveness of our proposed evaluation metrics and GORS approach. Project
page is available at https://karine-h.github.io/T2I-CompBench/.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 17:59:42 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Huang",
"Kaiyi",
""
],
[
"Sun",
"Kaiyue",
""
],
[
"Xie",
"Enze",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Liu",
"Xihui",
""
]
] |
new_dataset
| 0.999849 |
2307.06423
|
Yijiong Lin
|
Yijiong Lin, Alex Church, Max Yang, Haoran Li, John Lloyd, Dandan
Zhang, Nathan F. Lepora
|
Bi-Touch: Bimanual Tactile Manipulation with Sim-to-Real Deep
Reinforcement Learning
|
Accepted by IEEE Robotics and Automation Letters (RA-L)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Bimanual manipulation with tactile feedback will be key to human-level robot
dexterity. However, this topic is less explored than single-arm settings,
partly due to the availability of suitable hardware along with the complexity
of designing effective controllers for tasks with relatively large state-action
spaces. Here we introduce a dual-arm tactile robotic system (Bi-Touch) based on
the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot
arms with low-cost high-resolution tactile sensors (TacTips). We present a
suite of bimanual manipulation tasks tailored towards tactile feedback:
bi-pushing, bi-reorienting and bi-gathering. To learn effective policies, we
introduce appropriate reward functions for these tasks and propose a novel
goal-update mechanism with deep reinforcement learning. We also apply these
policies to real-world settings with a tactile sim-to-real approach. Our
analysis highlights and addresses some challenges met during the sim-to-real
application, e.g. the learned policy tended to squeeze an object in the
bi-reorienting task due to the sim-to-real gap. Finally, we demonstrate the
generalizability and robustness of this system by experimenting with different
unseen objects with applied perturbations in the real world. Code and videos
are available at https://sites.google.com/view/bi-touch/.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 19:29:37 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Lin",
"Yijiong",
""
],
[
"Church",
"Alex",
""
],
[
"Yang",
"Max",
""
],
[
"Li",
"Haoran",
""
],
[
"Lloyd",
"John",
""
],
[
"Zhang",
"Dandan",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.994338 |
2307.06456
|
Renan Alves
|
Renan C. A. Alves, Bruno C. Albertini, Marcos A. Simplicio Jr
|
Benchmarking the Security Protocol and Data Model (SPDM) for component
authentication
|
10 pages, 8 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Efforts to secure computing systems via software traditionally focus on the
operating system and application levels. In contrast, the Security Protocol and
Data Model (SPDM) tackles firmware level security challenges, which are much
harder (if at all possible) to detect with regular protection software. SPDM
includes key features like enabling peripheral authentication, authenticated
hardware measurements retrieval, and secure session establishment. Since SPDM
is a relatively recent proposal, there is a lack of studies evaluating its
performance impact on real-world applications. In this article, we address this
gap by: (1) implementing the protocol on a simple virtual device, and then
investigating the overhead introduced by each SPDM message; and (2) creating an
SPDM-capable virtual hard drive based on VirtIO, and comparing the resulting
read/write performance with a regular, unsecured implementation. Our results
suggest that SPDM bootstrap time takes the order of tens of milliseconds, while
the toll of introducing SPDM on hard drive communication highly depends on
specific workload patterns. For example, for mixed random read/write
operations, the slowdown is negligible in comparison to the baseline unsecured
setup. Conversely, for sequential read or write operations, the data encryption
process becomes the bottleneck, reducing the performance indicators by several
orders of magnitude.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 21:15:02 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Alves",
"Renan C. A.",
""
],
[
"Albertini",
"Bruno C.",
""
],
[
"Simplicio",
"Marcos A.",
"Jr"
]
] |
new_dataset
| 0.984727 |
2307.06476
|
Vinay Banakar
|
Vinay Banakar, Kan Wu, Yuvraj Patel, Kimberly Keeton, Andrea C.
Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
|
WiscSort: External Sorting For Byte-Addressable Storage
| null | null |
10.14778/3598581.3598585
| null |
cs.DB cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present WiscSort, a new approach to high-performance concurrent sorting
for existing and future byte-addressable storage (BAS) devices. WiscSort
carefully reduces writes, exploits random reads by splitting keys and values
during sorting, and performs interference-aware scheduling with thread pool
sizing to avoid I/O bandwidth degradation. We introduce the BRAID model which
encompasses the unique characteristics of BAS devices. Many state-of-the-art
sorting systems do not comply with the BRAID model and deliver sub-optimal
performance, whereas WiscSort demonstrates the effectiveness of complying with
BRAID. We show that WiscSort is 2-7x faster than competing approaches on a
standard sort benchmark. We evaluate the effectiveness of key-value separation
on different key-value sizes and compare our concurrency optimizations with
various other concurrency models. Finally, we emulate generic BAS devices and
show how our techniques perform well with various combinations of hardware
properties.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 22:16:44 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Banakar",
"Vinay",
""
],
[
"Wu",
"Kan",
""
],
[
"Patel",
"Yuvraj",
""
],
[
"Keeton",
"Kimberly",
""
],
[
"Arpaci-Dusseau",
"Andrea C.",
""
],
[
"Arpaci-Dusseau",
"Remzi H.",
""
]
] |
new_dataset
| 0.999366 |
2307.06577
|
Hu Zhang
|
MD Wahiduzzaman Khan, Hongwei Sheng, Hu Zhang, Heming Du, Sen Wang,
Minas Theodore Coroneo, Farshid Hajati, Sahar Shariflou, Michael Kalloniatis,
Jack Phu, Ashish Agar, Zi Huang, Mojtaba Golzan, Xin Yu
|
RVD: A Handheld Device-Based Fundus Video Dataset for Retinal Vessel
Segmentation
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Retinal vessel segmentation is generally grounded in image-based datasets
collected with bench-top devices. The static images naturally lose the dynamic
characteristics of retina fluctuation, resulting in diminished dataset
richness, and the usage of bench-top devices further restricts dataset
scalability due to its limited accessibility. Considering these limitations, we
introduce the first video-based retinal dataset by employing handheld devices
for data acquisition. The dataset comprises 635 smartphone-based fundus videos
collected from four different clinics, involving 415 patients from 50 to 75
years old. It delivers comprehensive and precise annotations of retinal
structures in both spatial and temporal dimensions, aiming to advance the
landscape of vasculature segmentation. Specifically, the dataset provides three
levels of spatial annotations: binary vessel masks for overall retinal
structure delineation, general vein-artery masks for distinguishing the vein
and artery, and fine-grained vein-artery masks for further characterizing the
granularities of each artery and vein. In addition, the dataset offers temporal
annotations that capture the vessel pulsation characteristics, assisting in
detecting ocular diseases that require fine-grained recognition of hemodynamic
fluctuation. In application, our dataset exhibits a significant domain shift
with respect to data captured by bench-top devices, thus posing great
challenges to existing methods. In the experiments, we provide evaluation
metrics and benchmark results on our dataset, reflecting both the potential and
challenges it offers for vessel segmentation tasks. We hope this challenging
dataset would significantly contribute to the development of eye disease
diagnosis and early prevention.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 06:30:09 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Khan",
"MD Wahiduzzaman",
""
],
[
"Sheng",
"Hongwei",
""
],
[
"Zhang",
"Hu",
""
],
[
"Du",
"Heming",
""
],
[
"Wang",
"Sen",
""
],
[
"Coroneo",
"Minas Theodore",
""
],
[
"Hajati",
"Farshid",
""
],
[
"Shariflou",
"Sahar",
""
],
[
"Kalloniatis",
"Michael",
""
],
[
"Phu",
"Jack",
""
],
[
"Agar",
"Ashish",
""
],
[
"Huang",
"Zi",
""
],
[
"Golzan",
"Mojtaba",
""
],
[
"Yu",
"Xin",
""
]
] |
new_dataset
| 0.999863 |
2307.06595
|
Elisa Gorla
|
Elisa Gorla, Elisa Lorenzo Garc\'ia, Umberto Mart\'inez-Pe\~nas,
Flavio Salizzoni
|
Integer sequences that are generalized weights of a linear code
|
19 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Which integer sequences are sequences of generalized weights of a linear
code? In this paper, we answer this question for linear block codes,
rank-metric codes, and more generally for sum-rank metric codes. We do so under
an existence assumption for MDS and MSRD codes. We also prove that the same
integer sequences appear as sequences of greedy weights of linear block codes,
rank-metric codes, and sum-rank metric codes. Finally, we characterize the
integer sequences which appear as sequences of relative generalized weights
(respectively, relative greedy weights) of linear block codes.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 07:33:47 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Gorla",
"Elisa",
""
],
[
"García",
"Elisa Lorenzo",
""
],
[
"Martínez-Peñas",
"Umberto",
""
],
[
"Salizzoni",
"Flavio",
""
]
] |
new_dataset
| 0.999275 |
2307.06616
|
Mohamed Amine Ferrag
|
Mohamed Amine Ferrag, Ammar Battah, Norbert Tihanyi, Merouane Debbah,
Thierry Lestable, Lucas C. Cordeiro
|
SecureFalcon: The Next Cyber Reasoning System for Cyber Security
| null | null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Software vulnerabilities, which lead to various detriments such as crashes,
data loss, and security breaches, significantly hinder software quality and
affect the market adoption of software applications and systems. Although
traditional methods such as automated software testing, fault localization, and
repair have been intensively studied, static analysis tools remain the most
commonly used and carry an inherent false-positive rate, posing a solid
challenge to developer productivity. Large Language Models (LLMs) offer a
promising solution to these persistent issues. Among these, FalconLLM has shown
substantial potential in identifying intricate patterns and complex
vulnerabilities, making it crucial for software vulnerability detection. In
this paper, for the first time, FalconLLM is fine-tuned for cybersecurity
applications, introducing SecureFalcon, an innovative model architecture built
upon FalconLLM. SecureFalcon is trained to differentiate between vulnerable and
non-vulnerable C code samples. To evaluate its performance, we build a new
training dataset, FormAI, constructed with the help of generative artificial
intelligence (AI) and formal verification. SecureFalcon achieved an impressive
94% accuracy in detecting software vulnerabilities, emphasizing its significant
potential to redefine software vulnerability detection methods in cybersecurity.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 08:34:09 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Ferrag",
"Mohamed Amine",
""
],
[
"Battah",
"Ammar",
""
],
[
"Tihanyi",
"Norbert",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Lestable",
"Thierry",
""
],
[
"Cordeiro",
"Lucas C.",
""
]
] |
new_dataset
| 0.996153 |
2307.06621
|
Hugo Ledoux
|
Leon Powa{\l}ka and Chris Poon and Yitong Xia and Siebren Meines and
Lan Yan and Yuduan Cai and Gina Stavropoulou and Bal\'azs Dukai and Hugo
Ledoux
|
cjdb: a simple, fast, and lean database solution for the CityGML data
model
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
When it comes to storing 3D city models in a database, the implementation of
the CityGML data model can be quite demanding and often results in complicated
schemas. As an example, 3DCityDB, a widely used solution, depends on a schema
having 66 tables, mapping closely the CityGML architecture. In this paper, we
propose an alternative (called cjdb) for storing CityGML models efficiently in
PostgreSQL with a much simpler table structure and data model design (only 3
tables are necessary). This is achieved by storing the attributes and
geometries of the objects directly in JSON. In the case of the geometries we
thus adopt the Simple Feature paradigm and we use the structure of CityJSON. We
compare our solution against 3DCityDB with large real-world 3D city models, and
we find that cjdb has significantly lower demands in storage space (around a
factor of 10), allows for faster import/export of data, and has a comparable
data retrieval speed with some queries being faster and some slower. The
accompanying software (importer and exporter) is available at
https://github.com/cityjson/cjdb/ under a permissive open-source license.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 08:36:36 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Powałka",
"Leon",
""
],
[
"Poon",
"Chris",
""
],
[
"Xia",
"Yitong",
""
],
[
"Meines",
"Siebren",
""
],
[
"Yan",
"Lan",
""
],
[
"Cai",
"Yuduan",
""
],
[
"Stavropoulou",
"Gina",
""
],
[
"Dukai",
"Balázs",
""
],
[
"Ledoux",
"Hugo",
""
]
] |
new_dataset
| 0.999037 |
2307.06688
|
Andrew Vekinis
|
Andrew Alexander Vekinis, Stavros Perantonis
|
Aeolus Ocean -- A simulation environment for the autonomous
COLREG-compliant navigation of Unmanned Surface Vehicles using Deep
Reinforcement Learning and Maritime Object Detection
|
22 pages, last blank page, 17 figures, 1 table, color, high
resolution figures
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Progress towards navigational autonomy of unmanned surface vehicles (USVs) in
the maritime sector can fundamentally lead to safer waters as well as
reduced operating costs, while also providing a range of exciting new
capabilities for oceanic research, exploration and monitoring. However,
achieving such a goal is challenging. USV control systems must, safely and
reliably, be able to adhere to the international regulations for preventing
collisions at sea (COLREGs) in encounters with other vessels as they navigate
to a given waypoint while being affected by realistic weather conditions,
either during the day or at night. To deal with the multitude of possible
scenarios, it is critical to have a virtual environment that is able to
replicate the realistic operating conditions USVs will encounter, before they
can be implemented in the real world. Such "digital twins" form the foundations
upon which Deep Reinforcement Learning (DRL) and Computer Vision (CV)
algorithms can be used to develop and guide USV control systems. In this paper
we describe the novel development of a COLREG-compliant DRL-based collision
avoidant navigational system with CV-based awareness in a realistic ocean
simulation environment. The performance of the trained autonomous Agents
resulting from this approach is evaluated in several successful navigations to
set waypoints in both open sea and coastal encounters with other vessels. A
binary executable version of the simulator with trained agents is available at
https://github.com/aavek/Aeolus-Ocean
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 11:20:18 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Vekinis",
"Andrew Alexander",
""
],
[
"Perantonis",
"Stavros",
""
]
] |
new_dataset
| 0.962317 |
2307.06724
|
Minh-Tan Pham
|
Abdelbadie Belmouhcine, Jean-Christophe Burnel, Luc Courtrai, Minh-Tan
Pham and S\'ebastien Lef\`evre
|
Multimodal Object Detection in Remote Sensing
|
4 pages, accepted to IGARSS 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection in remote sensing is a crucial computer vision task that has
seen significant advancements with deep learning techniques. However, most
existing works in this area focus on the use of generic object detection and do
not leverage the potential of multimodal data fusion. In this paper, we present
a comparison of methods for multimodal object detection in remote sensing,
survey available multimodal datasets suitable for evaluation, and discuss
future directions.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 12:37:14 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Belmouhcine",
"Abdelbadie",
""
],
[
"Burnel",
"Jean-Christophe",
""
],
[
"Courtrai",
"Luc",
""
],
[
"Pham",
"Minh-Tan",
""
],
[
"Lefèvre",
"Sébastien",
""
]
] |
new_dataset
| 0.998868 |
2307.06756
|
Lang Feng
|
Luyi Li, Jiayi Huang, Lang Feng, Zhongfeng Wang
|
PREFENDER: A Prefetching Defender against Cache Side Channel Attacks as
A Pretender
|
Submitting to a journal
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cache side channel attacks are increasingly alarming in modern processors due
to the recent emergence of Spectre and Meltdown attacks. A typical attack
performs intentional cache access and manipulates cache states to leak secrets
by observing the victim's cache access patterns. Different countermeasures have
been proposed to defend against both general and transient execution based
attacks. Despite their effectiveness, they mostly trade some level of
performance for security, or have restricted security scope. In this paper, we
seek an approach to enforcing security while maintaining performance. We
leverage the insight that attackers need to access cache in order to manipulate
and observe cache state changes for information leakage. Specifically, we
propose Prefender, a secure prefetcher that learns and predicts attack-related
accesses for prefetching the cachelines to simultaneously help security and
performance. Our results show that Prefender is effective against several cache
side channel attacks while maintaining or even improving performance for SPEC
CPU 2006 and 2017 benchmarks.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 13:52:07 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Li",
"Luyi",
""
],
[
"Huang",
"Jiayi",
""
],
[
"Feng",
"Lang",
""
],
[
"Wang",
"Zhongfeng",
""
]
] |
new_dataset
| 0.99841 |
2307.06784
|
Francesca Palermo
|
Francesca Palermo, Bukeikhan Omarali, Changae Oh, Kaspar Althoefer,
Ildar Farkhatdinov
|
Robotic surface exploration with vision and tactile sensing for cracks
detection and characterisation
|
12 pages
| null | null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel algorithm for crack localisation and detection
based on visual and tactile analysis via fibre-optics. A finger-shaped sensor
based on fibre-optics is employed for the data acquisition to collect data for
the analysis and the experiments. To detect the possible locations of cracks a
camera is used to scan an environment while running an object detection
algorithm. Once the crack is detected, a fully-connected graph is created from
a skeletonised version of the crack. A minimum spanning tree is then employed
for calculating the shortest path to explore the crack which is then used to
develop the motion planner for the robotic manipulator. The motion planner
divides the crack into multiple nodes which are then explored individually.
Then, the manipulator starts the exploration and performs the tactile data
classification to confirm if there is indeed a crack in that location or just a
false positive from the vision algorithm. If a crack is detected, the length,
width, orientation and number of branches are also calculated. This is
repeated until all the nodes of the crack are explored.
In order to validate the complete algorithm, various experiments are
performed: comparison of exploration of cracks through full scan and motion
planning algorithm, implementation of frequency-based features for crack
classification and geometry analysis using a combination of vision and tactile
data. From the results of the experiments, it is shown that the proposed
algorithm is able to detect cracks and improve the results obtained from vision
to correctly classify cracks and their geometry with minimal cost thanks to the
motion planning algorithm.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 14:50:38 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Palermo",
"Francesca",
""
],
[
"Omarali",
"Bukeikhan",
""
],
[
"Oh",
"Changae",
""
],
[
"Althoefer",
"Kaspar",
""
],
[
"Farkhatdinov",
"Ildar",
""
]
] |
new_dataset
| 0.979847 |
2307.06789
|
Tim Griesbach
|
Tim Griesbach (1), Carsten Burstedde (1) ((1) INS, Rheinische
Friedrich-Wilhelms-Universit\"at Bonn, Bonn, Germany)
|
scda: A Minimal, Serial-Equivalent Format for Parallel I/O
|
17 pages, 7 figures and 2 tables
| null | null | null |
cs.DC cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We specify a file-oriented data format suitable for parallel,
partition-independent disk I/O. Here, a partition refers to a disjoint and
ordered distribution of the data elements between one or more processes. The
format is designed such that the file contents are invariant under linear
(i.e., unpermuted), parallel repartition of the data prior to writing. The file
contents are indistinguishable from writing in serial. In the same vein, the
file can be read on any number of processes that agree on any partition of the
number of elements stored.
In addition to the format specification we propose an optional convention to
implement transparent per-element data compression. The compressed data and
metadata is layered inside ordinary format elements. Overall, we pay special
attention to both human and machine readability. If pure ASCII data is written,
or compressed data is reencoded to ASCII, the entire file including its header
and sectioning metadata remains entirely in ASCII. If binary data is written,
the metadata stays easy on the human eye.
We refer to this format as scda. Conceptually, it lies one layer below and is
oblivious to the definition of variables, the binary representation of numbers,
considerations of endianness, and self-describing headers, which may all be
specified on top of scda. The main purpose of the format is to abstract any
parallelism and provide sufficient structure as a foundation for a generic and
flexible archival and checkpoint/restart. A documented reference implementation
is available as part of the general-purpose libsc free software library.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 14:59:22 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Griesbach",
"Tim",
""
],
[
"Burstedde",
"Carsten",
""
]
] |
new_dataset
| 0.999641 |
2307.06860
|
Hernan Dario Benitez Restrepo Mr
|
Juan Sebasti\'an Ca\~nas, Maria Paula Toro-G\'omez, Larissa Sayuri
Moreira Sugai, Hern\'an Dar\'io Ben\'itez Restrepo, Jorge Rudas, Breyner
Posso Bautista, Lu\'is Felipe Toledo, Simone Dena, Ad\~ao Henrique Rosa
Domingos, Franco Leandro de Souza, Selvino Neckel-Oliveira, Anderson da Rosa,
V\'itor Carvalho-Rocha, Jos\'e Vin\'icius Bernardy, Jos\'e Luiz Massao
Moreira Sugai, Carolina Em\'ilia dos Santos, Rog\'erio Pereira Bastos, Diego
Llusia, Juan Sebasti\'an Ulloa
|
AnuraSet: A dataset for benchmarking Neotropical anuran calls
identification in passive acoustic monitoring
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Global change is predicted to induce shifts in anuran acoustic behavior,
which can be studied through passive acoustic monitoring (PAM). Understanding
changes in calling behavior requires the identification of anuran species,
which is challenging due to the particular characteristics of neotropical
soundscapes. In this paper, we introduce a large-scale multi-species dataset of
anuran amphibian calls recorded by PAM, comprising 27 hours of expert
annotations for 42 different species from two Brazilian biomes. We provide open
access to the dataset, including the raw recordings, experimental setup code,
and a benchmark with a baseline model of the fine-grained categorization
problem. Additionally, we highlight the challenges of the dataset to encourage
machine learning researchers to solve the problem of anuran call identification
towards conservation policy. All our experiments and resources can be found on
our GitHub repository https://github.com/soundclim/anuraset.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 22:25:21 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Cañas",
"Juan Sebastián",
""
],
[
"Toro-Gómez",
"Maria Paula",
""
],
[
"Sugai",
"Larissa Sayuri Moreira",
""
],
[
"Restrepo",
"Hernán Darío Benítez",
""
],
[
"Rudas",
"Jorge",
""
],
[
"Bautista",
"Breyner Posso",
""
],
[
"Toledo",
"Luís Felipe",
""
],
[
"Dena",
"Simone",
""
],
[
"Domingos",
"Adão Henrique Rosa",
""
],
[
"de Souza",
"Franco Leandro",
""
],
[
"Neckel-Oliveira",
"Selvino",
""
],
[
"da Rosa",
"Anderson",
""
],
[
"Carvalho-Rocha",
"Vítor",
""
],
[
"Bernardy",
"José Vinícius",
""
],
[
"Sugai",
"José Luiz Massao Moreira",
""
],
[
"Santos",
"Carolina Emília dos",
""
],
[
"Bastos",
"Rogério Pereira",
""
],
[
"Llusia",
"Diego",
""
],
[
"Ulloa",
"Juan Sebastián",
""
]
] |
new_dataset
| 0.999856 |
2307.06863
|
Jianping Pan
|
Jianping Pan, Jinwei Zhao and Lin Cai
|
Measuring a Low-Earth-Orbit Satellite Network
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Starlink and the like have attracted a lot of attention recently; however, the
inner workings of these low-earth-orbit (LEO) satellite networks are still
largely unknown. This paper presents an ongoing measurement campaign focusing
on Starlink, including its satellite access networks, gateway and
point-of-presence structures, and backbone and Internet connections, revealing
insights applicable to other LEO satellite providers. It also highlights the
challenges and research opportunities of the integrated space-air-ground-aqua
network envisioned by 6G mobile communication systems, and calls for a
concerted community effort from practical and experimentation aspects.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 15:14:53 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Pan",
"Jianping",
""
],
[
"Zhao",
"Jinwei",
""
],
[
"Cai",
"Lin",
""
]
] |
new_dataset
| 0.950176 |
2307.06898
|
Marcus Krellner
|
Marcus Krellner and The Anh Han
|
Words are not Wind -- How Joint Commitment and Reputation Solve Social
Dilemmas, without Repeated Interactions or Enforcement by Third Parties
|
13 pages (without ref and supp), 8 figures
| null | null | null |
cs.GT cs.MA cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Joint commitment was argued to "make our social world" (Gilbert, 2014) and to
separate us from other primates. 'Joint' entails that neither of us promises
anything, unless the other promises as well. When we need to coordinate for the
best mutual outcome, any commitment is beneficial. However, when we are tempted
to free-ride (i.e. in social dilemmas), commitment serves no obvious purpose.
We show that a reputation system, which judges action in social dilemmas only
after joint commitment, can prevent free-riding. Keeping commitments builds
trust. We can selectively enter joint commitments with trustworthy individuals
to ensure their cooperation (since they will now be judged). We simply do not
commit to cooperate with those we do not trust, and hence can freely defect
without losing the trust of others. This principle might be the reason for
pointedly public joint commitments, such as marriage. It is especially relevant
to our evolutionary past, in which no mechanisms existed to enforce commitments
reliably and impartially (e.g. via a powerful and accountable government). Much
research from anthropology, philosophy and psychology has assumed that past
collaborations were mutually beneficial and offered few possibilities to
free-ride, an assumption for which there is little support. Our evolutionary
game theory approach proves that this assumption is not necessary, because
free-riding could have been dealt with through joint commitments and reputation.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 16:50:38 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Krellner",
"Marcus",
""
],
[
"Han",
"The Anh",
""
]
] |
new_dataset
| 0.957721 |
2307.06912
|
Guillaume Ricard
|
Ulrich Dah-Achinanon, Emir Khaled Belhaddad, Guillaume Ricard,
Giovanni Beltrame
|
BittyBuzz: A Swarm Robotics Runtime for Tiny Systems
|
6 pages
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Swarm robotics is an emerging field of research which is increasingly
attracting attention thanks to the advances in robotics and its potential
applications. However, despite the enthusiasm surrounding this area of
research, software development for swarm robotics is still a tedious task. That
fact is partly due to the lack of dedicated solutions, in particular for
low-cost systems to be produced in large numbers and that can have important
resource constraints. To address this issue, we introduce BittyBuzz, a novel
runtime platform: it allows Buzz, a domain-specific language, to run on
microcontrollers while maintaining dynamic memory management. BittyBuzz is
designed to fit in a flash memory as small as 32 kB (with usable space for
scripts) and to work with as little as 2 kB of RAM. In this work, we introduce
the BittyBuzz implementation, its differences from the original Buzz virtual
machine, and its advantages for swarm robotics systems. We show that BittyBuzz
integrates successfully with three robotic platforms with a minimal memory
footprint, and we conduct experiments to evaluate its computational performance.
Results show that BittyBuzz can be effectively used to implement common swarm
behaviors on microcontroller-based systems.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 17:20:36 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Dah-Achinanon",
"Ulrich",
""
],
[
"Belhaddad",
"Emir Khaled",
""
],
[
"Ricard",
"Guillaume",
""
],
[
"Beltrame",
"Giovanni",
""
]
] |
new_dataset
| 0.999715 |
2307.06924
|
Shuijing Liu
|
Shuijing Liu, Aamir Hasan, Kaiwen Hong, Runxuan Wang, Peixin Chang,
Zachary Mizrachi, Justin Lin, D. Livingston McPherson, Wendy A. Rogers, and
Katherine Driggs-Campbell
|
DRAGON: A Dialogue-Based Robot for Assistive Navigation with Visual
Language Grounding
|
Webpage and videos are at
https://sites.google.com/view/dragon-wayfinding/home
| null | null | null |
cs.RO cs.AI cs.CL cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Persons with visual impairments (PwVI) have difficulties understanding and
navigating spaces around them. Current wayfinding technologies either focus
solely on navigation or provide limited communication about the environment.
Motivated by recent advances in visual-language grounding and semantic
navigation, we propose DRAGON, a guiding robot powered by a dialogue system and
the ability to associate the environment with natural language. By
understanding the commands from the user, DRAGON is able to guide the user to
the desired landmarks on the map, describe the environment, and answer
questions from visual observations. Through effective utilization of dialogue,
the robot can ground the user's free-form descriptions to landmarks in the
environment, and give the user semantic information through spoken language. We
conduct a user study with blindfolded participants in an everyday indoor
environment. Our results demonstrate that DRAGON is able to communicate with
the user smoothly, provide a good guiding experience, and connect users with
their surrounding environment in an intuitive manner.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 17:46:15 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Liu",
"Shuijing",
""
],
[
"Hasan",
"Aamir",
""
],
[
"Hong",
"Kaiwen",
""
],
[
"Wang",
"Runxuan",
""
],
[
"Chang",
"Peixin",
""
],
[
"Mizrachi",
"Zachary",
""
],
[
"Lin",
"Justin",
""
],
[
"McPherson",
"D. Livingston",
""
],
[
"Rogers",
"Wendy A.",
""
],
[
"Driggs-Campbell",
"Katherine",
""
]
] |
new_dataset
| 0.997143 |
2307.06940
|
Yingqing He
|
Yingqing He, Menghan Xia, Haoxin Chen, Xiaodong Cun, Yuan Gong, Jinbo
Xing, Yong Zhang, Xintao Wang, Chao Weng, Ying Shan, Qifeng Chen
|
Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation
|
Github: https://github.com/VideoCrafter/Animate-A-Story Project page:
https://videocrafter.github.io/Animate-A-Story
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating videos for visual storytelling can be a tedious and complex
process that typically requires either live-action filming or graphics
animation rendering. To bypass these challenges, our key idea is to utilize the
abundance of existing video clips and synthesize a coherent storytelling video
by customizing their appearances. We achieve this by developing a framework
comprised of two functional modules: (i) Motion Structure Retrieval, which
provides video candidates with desired scene or motion context described by
query texts, and (ii) Structure-Guided Text-to-Video Synthesis, which generates
plot-aligned videos under the guidance of motion structure and text prompts.
For the first module, we leverage an off-the-shelf video retrieval system and
extract video depths as motion structure. For the second module, we propose a
controllable video generation model that offers flexible controls over
structure and characters. The videos are synthesized by following the
structural guidance and appearance instruction. To ensure visual consistency
across clips, we propose an effective concept personalization approach, which
allows the specification of the desired character identities through text
prompts. Extensive experiments demonstrate that our approach exhibits
significant advantages over various existing baselines.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 17:57:13 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"He",
"Yingqing",
""
],
[
"Xia",
"Menghan",
""
],
[
"Chen",
"Haoxin",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Gong",
"Yuan",
""
],
[
"Xing",
"Jinbo",
""
],
[
"Zhang",
"Yong",
""
],
[
"Wang",
"Xintao",
""
],
[
"Weng",
"Chao",
""
],
[
"Shan",
"Ying",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.998802 |
2307.06942
|
Yi Wang
|
Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinyuan
Chen, Yaohui Wang, Ping Luo, Ziwei Liu, Yali Wang, Limin Wang, Yu Qiao
|
InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding
and Generation
|
Data and Code:
https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces InternVid, a large-scale video-centric multimodal
dataset that enables learning powerful and transferable video-text
representations for multimodal understanding and generation. The InternVid
dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M
video clips accompanied by detailed descriptions totaling 4.1B words. Our core
contribution is to develop a scalable approach to autonomously build a
high-quality video-text dataset with large language models (LLM), thereby
showcasing its efficacy in learning video-language representation at scale.
Specifically, we utilize a multi-scale approach to generate video-related
descriptions. Furthermore, we introduce ViCLIP, a video-text representation
learning model based on ViT-L. Learned on InternVid via contrastive learning,
this model demonstrates leading zero-shot action recognition and competitive
video retrieval performance. Beyond basic video understanding tasks like
recognition and retrieval, our dataset and model have broad applications. They
are particularly beneficial for generating interleaved video-text data for
learning a video-centric dialogue system, advancing video-to-text and
text-to-video generation research. These proposed resources provide a tool for
researchers and practitioners interested in multimodal video understanding and
generation.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 17:58:32 GMT"
}
] | 2023-07-14T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"He",
"Yinan",
""
],
[
"Li",
"Yizhuo",
""
],
[
"Li",
"Kunchang",
""
],
[
"Yu",
"Jiashuo",
""
],
[
"Ma",
"Xin",
""
],
[
"Chen",
"Xinyuan",
""
],
[
"Wang",
"Yaohui",
""
],
[
"Luo",
"Ping",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Wang",
"Yali",
""
],
[
"Wang",
"Limin",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.999827 |
2001.02299
|
G\'abor Sz\'arnyas
|
Renzo Angles, J\'anos Benjamin Antal, Alex Averbuch, Altan Birler,
Peter Boncz, M\'arton B\'ur, Orri Erling, Andrey Gubichev, Vlad Haprian,
Moritz Kaufmann, Josep Llu\'is Larriba Pey, Norbert Mart\'inez, J\'ozsef
Marton, Marcus Paradies, Minh-Duc Pham, Arnau Prat-P\'erez, David P\"uroja,
Mirko Spasi\'c, Benjamin A. Steer, D\'avid Szak\'allas, G\'abor Sz\'arnyas,
Jack Waudby, Mingxi Wu, Yuchen Zhang
|
The LDBC Social Network Benchmark
|
For the repository containing the source code of this technical
report, see https://github.com/ldbc/ldbc_snb_docs
| null | null | null |
cs.DB cs.PF cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Linked Data Benchmark Council's Social Network Benchmark (LDBC SNB) is an
effort intended to test various functionalities of systems used for graph-like
data management. For this, LDBC SNB uses the recognizable scenario of operating
a social network, characterized by its graph-shaped data. LDBC SNB consists of
two workloads that focus on different functionalities: the Interactive workload
(interactive transactional queries) and the Business Intelligence workload
(analytical queries). This document contains the definition of both workloads.
This includes a detailed explanation of the data used in the LDBC SNB, a
detailed description for all queries, and instructions on how to generate the
data and run the benchmark with the provided software.
|
[
{
"version": "v1",
"created": "Tue, 7 Jan 2020 22:12:35 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jan 2021 15:45:29 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Mar 2022 09:40:12 GMT"
},
{
"version": "v4",
"created": "Sat, 4 Jun 2022 23:53:08 GMT"
},
{
"version": "v5",
"created": "Mon, 19 Sep 2022 20:57:05 GMT"
},
{
"version": "v6",
"created": "Wed, 21 Sep 2022 21:16:34 GMT"
},
{
"version": "v7",
"created": "Thu, 6 Oct 2022 13:44:46 GMT"
},
{
"version": "v8",
"created": "Wed, 9 Nov 2022 13:42:09 GMT"
},
{
"version": "v9",
"created": "Wed, 12 Jul 2023 07:01:53 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Angles",
"Renzo",
""
],
[
"Antal",
"János Benjamin",
""
],
[
"Averbuch",
"Alex",
""
],
[
"Birler",
"Altan",
""
],
[
"Boncz",
"Peter",
""
],
[
"Búr",
"Márton",
""
],
[
"Erling",
"Orri",
""
],
[
"Gubichev",
"Andrey",
""
],
[
"Haprian",
"Vlad",
""
],
[
"Kaufmann",
"Moritz",
""
],
[
"Pey",
"Josep Lluís Larriba",
""
],
[
"Martínez",
"Norbert",
""
],
[
"Marton",
"József",
""
],
[
"Paradies",
"Marcus",
""
],
[
"Pham",
"Minh-Duc",
""
],
[
"Prat-Pérez",
"Arnau",
""
],
[
"Püroja",
"David",
""
],
[
"Spasić",
"Mirko",
""
],
[
"Steer",
"Benjamin A.",
""
],
[
"Szakállas",
"Dávid",
""
],
[
"Szárnyas",
"Gábor",
""
],
[
"Waudby",
"Jack",
""
],
[
"Wu",
"Mingxi",
""
],
[
"Zhang",
"Yuchen",
""
]
] |
new_dataset
| 0.995083 |
2207.08533
|
Dongcheng Zhao
|
Yi Zeng, Dongcheng Zhao, Feifei Zhao, Guobin Shen, Yiting Dong, Enmeng
Lu, Qian Zhang, Yinqian Sun, Qian Liang, Yuxuan Zhao, Zhuoya Zhao, Hongjian
Fang, Yuwei Wang, Yang Li, Xin Liu, Chengcheng Du, Qingqun Kong, Zizhe Ruan,
Weida Bi
|
BrainCog: A Spiking Neural Network based Brain-inspired Cognitive
Intelligence Engine for Brain-inspired AI and Brain Simulation
|
This paper was accepted by Patterns. The accepted version can be seen
at https://www.cell.com/patterns/fulltext/S2666-3899(23)00144-7
| null |
10.1016/j.patter.2023.100789
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking neural networks (SNNs) have attracted extensive attention in
Brain-inspired Artificial Intelligence and computational neuroscience. They can
be used to simulate biological information processing in the brain at multiple
scales. More importantly, SNNs serve as an appropriate level of abstraction to
bring inspiration from the brain and cognition to Artificial Intelligence. In this
paper, we present the Brain-inspired Cognitive Intelligence Engine (BrainCog)
for creating brain-inspired AI and brain simulation models. BrainCog
incorporates different types of spiking neuron models, learning rules, brain
areas, etc., as essential modules provided by the platform. Based on these
easy-to-use modules, BrainCog supports various brain-inspired cognitive
functions, including Perception and Learning, Decision Making, Knowledge
Representation and Reasoning, Motor Control, and Social Cognition. These
brain-inspired AI models have been effectively validated on various supervised,
unsupervised, and reinforcement learning tasks, and they can be used to endow
AI models with multiple brain-inspired cognitive functions. For brain
simulation, BrainCog realizes functional simulation of decision-making and
working memory, structural simulation of neural circuits, and whole-brain
structure simulation of the mouse, macaque, and human brains. An AI
engine named BORN is developed based on BrainCog, and it demonstrates how the
components of BrainCog can be integrated and used to build AI models and
applications. To enable the scientific quest to decode the nature of biological
intelligence and create AI, BrainCog aims to provide essential and easy-to-use
building blocks, and infrastructural support to develop brain-inspired spiking
neural network based AI, and to simulate the cognitive brains at multiple
scales. The online repository of BrainCog can be found at
https://github.com/braincog-x.
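For illustration only, a minimal leaky integrate-and-fire (LIF) neuron of the kind such SNN platforms build on; the time constant, threshold, and reset below are assumed values, and this is not BrainCog's actual API.

import numpy as np

def simulate_lif(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    # Single LIF neuron driven by a current trace; returns a binary spike train.
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)       # leaky integration of the membrane potential
        if v >= v_thresh:                # fire and reset when the threshold is crossed
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = simulate_lif(np.full(100, 1.2))  # constant supra-threshold input current
print(spike_train.sum(), "spikes over 100 steps")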
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 11:53:31 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 02:03:03 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Zeng",
"Yi",
""
],
[
"Zhao",
"Dongcheng",
""
],
[
"Zhao",
"Feifei",
""
],
[
"Shen",
"Guobin",
""
],
[
"Dong",
"Yiting",
""
],
[
"Lu",
"Enmeng",
""
],
[
"Zhang",
"Qian",
""
],
[
"Sun",
"Yinqian",
""
],
[
"Liang",
"Qian",
""
],
[
"Zhao",
"Yuxuan",
""
],
[
"Zhao",
"Zhuoya",
""
],
[
"Fang",
"Hongjian",
""
],
[
"Wang",
"Yuwei",
""
],
[
"Li",
"Yang",
""
],
[
"Liu",
"Xin",
""
],
[
"Du",
"Chengcheng",
""
],
[
"Kong",
"Qingqun",
""
],
[
"Ruan",
"Zizhe",
""
],
[
"Bi",
"Weida",
""
]
] |
new_dataset
| 0.997808 |
2208.12081
|
Barack Wanjawa Mr.
|
Barack Wanjawa, Lilian Wanzare, Florence Indede, Owen McOnyango,
Edward Ombui, Lawrence Muchemi
|
Kencorpus: A Kenyan Language Corpus of Swahili, Dholuo and Luhya for
Natural Language Processing Tasks
|
24 pages, 6 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Indigenous African languages are categorized as under-served in Natural
Language Processing. They therefore experience poor digital inclusivity and
information access. The processing challenge with such languages has been how
to use machine learning and deep learning models without the requisite data.
The Kencorpus project intends to bridge this gap by collecting and storing text
and speech data that is good enough for data-driven solutions in applications
such as machine translation, question answering and transcription in
multilingual communities. The Kencorpus dataset is a text and speech corpus for
three languages predominantly spoken in Kenya: Swahili, Dholuo and Luhya. Data
collection was done by researchers from communities, schools, media, and
publishers. The Kencorpus dataset is a collection of 5,594 items - 4,442
texts (5.6M words) and 1,152 speech files (177hrs). Based on this data,
Part-of-Speech tagging sets for Dholuo and Luhya (50,000 and 93,000 words respectively)
were developed. We developed 7,537 Question-Answer pairs for Swahili and
created a text translation set of 13,400 sentences from Dholuo and Luhya into
Swahili. The datasets are useful for downstream machine learning tasks such as
model training and translation. We also developed two proof-of-concept systems:
a Kiswahili speech-to-text system and a machine learning system for the Question
Answering task, achieving an 18.87% word error rate and 80% Exact Match (EM),
respectively. These initial results show great promise for the usability of
Kencorpus in the machine learning community. Kencorpus is one of the few
public-domain corpora for these three low-resource languages and forms a basis
for learning and sharing experience in similar work, especially on low-resource
languages.
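The word error rate quoted above is the standard edit-distance metric; a small reference implementation (not the project's own evaluation code) for readers who want to reproduce such numbers:

def word_error_rate(reference, hypothesis):
    # Levenshtein distance over words, normalized by the reference length.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("habari ya leo", "habari za leo"))  # one substitution in three words -> 0.33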
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 13:27:14 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jul 2023 20:37:28 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Wanjawa",
"Barack",
""
],
[
"Wanzare",
"Lilian",
""
],
[
"Indede",
"Florence",
""
],
[
"McOnyango",
"Owen",
""
],
[
"Ombui",
"Edward",
""
],
[
"Muchemi",
"Lawrence",
""
]
] |
new_dataset
| 0.999506 |
2210.01361
|
Dimity Miller
|
Keita Mason, Joshua Knights, Milad Ramezani, Peyman Moghadam and
Dimity Miller
|
Uncertainty-Aware Lidar Place Recognition in Novel Environments
|
8 pages, 4 figures. Accepted for publication at IEEE IROS 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art lidar place recognition models exhibit unreliable
performance when tested on environments different from their training dataset,
which limits their use in complex and evolving environments. To address this
issue, we investigate the task of uncertainty-aware lidar place recognition,
where each predicted place must have an associated uncertainty that can be used
to identify and reject incorrect predictions. We introduce a novel evaluation
protocol and present the first comprehensive benchmark for this task, testing
across five uncertainty estimation techniques and three large-scale datasets.
Our results show that an Ensembles approach is the highest performing
technique, consistently improving the performance of lidar place recognition
and uncertainty estimation in novel environments, though it incurs a
computational cost. Code is publicly available at
https://github.com/csiro-robotics/Uncertainty-LPR.
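A schematic of the Ensembles idea described above: descriptors from independently trained models are compared against the map, their mean similarity ranks candidate places, and their disagreement serves as the uncertainty; the aggregation below is an assumption, not the paper's exact protocol.

import numpy as np

def ensemble_place_match(query_desc, map_desc):
    # query_desc: (n_models, dim); map_desc: (n_models, n_places, dim)
    sims = np.einsum('md,mpd->mp', query_desc, map_desc)  # per-model similarity to each place
    mean_sim = sims.mean(axis=0)                          # consensus score per candidate place
    uncertainty = sims.std(axis=0)                        # disagreement across ensemble members
    best = int(mean_sim.argmax())
    return best, mean_sim[best], uncertainty[best]        # reject the match if uncertainty is high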
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 04:06:44 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 05:30:46 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Jul 2023 03:44:59 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Mason",
"Keita",
""
],
[
"Knights",
"Joshua",
""
],
[
"Ramezani",
"Milad",
""
],
[
"Moghadam",
"Peyman",
""
],
[
"Miller",
"Dimity",
""
]
] |
new_dataset
| 0.997003 |
2210.15628
|
Zhi Yan Dr.
|
Iaroslav Okunevich, Vincent Hilaire, Stephane Galland, Olivier
Lamotte, Liubov Shilova, Yassine Ruichek, Zhi Yan
|
Human-centered Benchmarking for Socially-compliant Robot Navigation
|
7 pages, 3 figures, 3 tables, accepted at ECMR 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social compatibility is one of the most important parameters for service
robots. It characterizes the quality of interaction between a robot and a
human. In this paper, a human-centered benchmarking framework is proposed for
socially-compliant robot navigation. In an end-to-end manner, four open-source
robot navigation methods are benchmarked, two of which are socially-compliant.
All aspects of the benchmarking are clarified to ensure the reproducibility and
replicability of the experiments. The social compatibility of robot navigation
methods with the Robotic Social Attributes Scale (RoSAS) is measured. After
that, the correspondence between RoSAS and the robot-centered metrics is
validated. Based on experiments, the extra robot time ratio and the extra
distance ratio are the most suitable metrics for judging social compatibility.
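The two metrics singled out above can be read as the relative overhead of the robot's run against a baseline run (e.g., the same route without pedestrians); the exact baseline definition is an assumption in this sketch.

def extra_ratio(actual, baseline):
    # Relative overhead; used for both the extra robot time ratio and the extra distance ratio.
    return (actual - baseline) / baseline

print(extra_ratio(actual=48.0, baseline=40.0))  # the robot needed 20% more time than the baseline
print(extra_ratio(actual=13.2, baseline=12.0))  # and travelled 10% more distance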
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 17:20:08 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 08:58:08 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Okunevich",
"Iaroslav",
""
],
[
"Hilaire",
"Vincent",
""
],
[
"Galland",
"Stephane",
""
],
[
"Lamotte",
"Olivier",
""
],
[
"Shilova",
"Liubov",
""
],
[
"Ruichek",
"Yassine",
""
],
[
"Yan",
"Zhi",
""
]
] |
new_dataset
| 0.994045 |
2211.10962
|
Dominik Tomaszuk
|
Renzo Angles, Angela Bonifati, Stefania Dumbrava, George Fletcher,
Alastair Green, Jan Hidders, Bei Li, Leonid Libkin, Victor Marsault, Wim
Martens, Filip Murlak, Stefan Plantikow, Ognjen Savkovi\'c, Michael Schmidt,
Juan Sequeda, S{\l}awek Staworko, Dominik Tomaszuk, Hannes Voigt, Domagoj
Vrgo\v{c}, Mingxi Wu, Du\v{s}an \v{Z}ivkovi\'c
|
PG-Schema: Schemas for Property Graphs
|
26 pages
|
Proc. ACM Manag. Data (2023)
|
10.1145/3589778
| null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Property graphs have reached a high level of maturity, witnessed by multiple
robust graph database systems as well as the ongoing ISO standardization effort
aiming at creating a new standard Graph Query Language (GQL). Yet, despite
documented demand, schema support is limited both in existing systems and in
the first version of the GQL Standard. It is anticipated that the second
version of the GQL Standard will include a rich DDL. Aiming to inspire the
development of GQL and enhance the capabilities of graph database systems, we
propose PG-Schema, a simple yet powerful formalism for specifying property
graph schemas. It features PG-Types with flexible type definitions supporting
multi-inheritance, as well as expressive constraints based on the recently
proposed PG-Keys formalism. We provide the formal syntax and semantics of
PG-Schema, which meet principled design requirements grounded in contemporary
property graph management scenarios, and offer a detailed comparison of its
features with those of existing schema languages and graph database systems.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 12:12:05 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2022 16:37:11 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 19:48:15 GMT"
},
{
"version": "v4",
"created": "Sat, 8 Jul 2023 15:19:34 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Angles",
"Renzo",
""
],
[
"Bonifati",
"Angela",
""
],
[
"Dumbrava",
"Stefania",
""
],
[
"Fletcher",
"George",
""
],
[
"Green",
"Alastair",
""
],
[
"Hidders",
"Jan",
""
],
[
"Li",
"Bei",
""
],
[
"Libkin",
"Leonid",
""
],
[
"Marsault",
"Victor",
""
],
[
"Martens",
"Wim",
""
],
[
"Murlak",
"Filip",
""
],
[
"Plantikow",
"Stefan",
""
],
[
"Savković",
"Ognjen",
""
],
[
"Schmidt",
"Michael",
""
],
[
"Sequeda",
"Juan",
""
],
[
"Staworko",
"Sławek",
""
],
[
"Tomaszuk",
"Dominik",
""
],
[
"Voigt",
"Hannes",
""
],
[
"Vrgoč",
"Domagoj",
""
],
[
"Wu",
"Mingxi",
""
],
[
"Živković",
"Dušan",
""
]
] |
new_dataset
| 0.954147 |
2301.03198
|
Alessandro Gifford
|
A. T. Gifford, B. Lahner, S. Saba-Sadiya, M. G. Vilas, A. Lascelles,
A. Oliva, K. Kay, G. Roig, R. M. Cichy
|
The Algonauts Project 2023 Challenge: How the Human Brain Makes Sense of
Natural Scenes
|
5 pages, 2 figures
| null | null | null |
cs.CV q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
The sciences of biological and artificial intelligence are ever more
intertwined. Neural computational principles inspire new intelligent machines,
which are in turn used to advance theoretical understanding of the brain. To
promote further exchange of ideas and collaboration between biological and
artificial intelligence researchers, we introduce the 2023 installment of the
Algonauts Project challenge: How the Human Brain Makes Sense of Natural Scenes
(http://algonauts.csail.mit.edu). This installment prompts the fields of
artificial and biological intelligence to come together towards building
computational models of the visual brain using the largest and richest dataset
of fMRI responses to visual scenes, the Natural Scenes Dataset (NSD). NSD
provides high-quality fMRI responses to ~73,000 different naturalistic colored
scenes, making it the ideal candidate for data-driven model building approaches
promoted by the 2023 challenge. The challenge is open to all and makes results
directly comparable and transparent through a public leaderboard automatically
updated after each submission, thus allowing for rapid model development. We
believe that the 2023 installment will spark symbiotic collaborations between
biological and artificial intelligence scientists, leading to a deeper
understanding of the brain through cutting-edge computational models and to
novel ways of engineering artificial intelligent agents through inductive
biases from biological systems.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 08:27:36 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 16:11:33 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 19:47:21 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Jul 2023 20:27:04 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Gifford",
"A. T.",
""
],
[
"Lahner",
"B.",
""
],
[
"Saba-Sadiya",
"S.",
""
],
[
"Vilas",
"M. G.",
""
],
[
"Lascelles",
"A.",
""
],
[
"Oliva",
"A.",
""
],
[
"Kay",
"K.",
""
],
[
"Roig",
"G.",
""
],
[
"Cichy",
"R. M.",
""
]
] |
new_dataset
| 0.999246 |
2302.08992
|
Federico Turrin
|
Marco Alecci and Luca Attanasio and Alessandro Brighente and Mauro
Conti and Eleonora Losiouk and Hideki Ochiai and Federico Turrin
|
Beware of Pickpockets: A Practical Attack against Blocking Cards
| null |
The 26th International Symposium on Research in Attacks,
Intrusions and Defenses (RAID '23), October 16--18, 2023, Hong Kong, Hong
Kong
|
10.1145/3607199.3607243
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, we rely on contactless smart cards to perform several critical
operations (e.g., payments and accessing buildings). Attacking smart cards can
have severe consequences, such as losing money or leaking sensitive
information. Although the security protections embedded in smart cards have
evolved over the years, those with weak security properties are still commonly
used. Among the different solutions, blocking cards are affordable devices to
protect smart cards. These devices are placed close to the smart cards,
generating a noisy jamming signal or shielding them. Whereas vendors claim the
reliability of their blocking cards, no previous study has ever focused on
evaluating their effectiveness. In this paper, we shed light on the security
threats on smart cards in the presence of blocking cards, showing the
possibility of being bypassed by an attacker. We analyze blocking cards by
inspecting their emitted signal and assessing a vulnerability in their internal
design. We propose a novel attack that bypasses the jamming signal emitted by a
blocking card and reads the content of the smart card. We evaluate the
effectiveness of 11 blocking cards when protecting a MIFARE Ultralight smart
card and a MIFARE Classic card. Of these 11 cards, we managed to bypass 8 and
successfully dump the content of a smart card despite the presence of the
blocking card. Our findings highlight that the noise type implemented by the
blocking cards strongly affects the protection level they achieve. Based
on this observation, we propose a countermeasure that may lead to the design of
effective blocking cards. To further improve security, we released the tool we
developed to inspect the spectrum emitted by blocking cards and set up our
attack.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 16:50:31 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 09:39:29 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Alecci",
"Marco",
""
],
[
"Attanasio",
"Luca",
""
],
[
"Brighente",
"Alessandro",
""
],
[
"Conti",
"Mauro",
""
],
[
"Losiouk",
"Eleonora",
""
],
[
"Ochiai",
"Hideki",
""
],
[
"Turrin",
"Federico",
""
]
] |
new_dataset
| 0.997386 |
2302.10873
|
Pei Xu
|
Pei Xu, Jean-Bernard Hayet and Ioannis Karamouzas
|
Context-Aware Timewise VAEs for Real-Time Vehicle Trajectory Prediction
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time, accurate prediction of human steering behaviors has wide
applications, from developing intelligent traffic systems to deploying
autonomous driving systems in both real and simulated worlds. In this paper, we
present ContextVAE, a context-aware approach for multi-modal vehicle trajectory
prediction. Built upon the backbone architecture of a timewise variational
autoencoder, ContextVAE's observation encoding employs a dual attention mechanism
that accounts for the environmental context and the dynamic agents' states, in
a unified way. By utilizing features extracted from semantic maps during agent
state encoding, our approach takes into account both the social features
exhibited by agents on the scene and the physical environment constraints to
generate map-compliant and socially-aware trajectories. We perform extensive
testing on the nuScenes prediction challenge, Lyft Level 5 dataset and Waymo
Open Motion Dataset to show the effectiveness of our approach and its
state-of-the-art performance. In all tested datasets, ContextVAE models are
fast to train and provide high-quality multi-modal predictions in real-time.
Our code is available at: https://github.com/xupei0610/ContextVAE.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 18:42:24 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 00:02:34 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jul 2023 18:15:18 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Xu",
"Pei",
""
],
[
"Hayet",
"Jean-Bernard",
""
],
[
"Karamouzas",
"Ioannis",
""
]
] |
new_dataset
| 0.980748 |
2303.10703
|
Srikar Yellapragada
|
Srikar Yellapragada, Zhenghong Li, Kevin Bhadresh Doshi, Purva
Makarand Mhasakar, Heng Fan, Jie Wei, Erik Blasch, Bin Zhang, Haibin Ling
|
CCTV-Gun: Benchmarking Handgun Detection in CCTV Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gun violence is a critical security problem, and it is imperative for the
computer vision community to develop effective gun detection algorithms for
real-world scenarios, particularly in Closed Circuit Television (CCTV)
surveillance data. Despite significant progress in visual object detection,
detecting guns in real-world CCTV images remains a challenging and
under-explored task. Firearms, especially handguns, are typically very small in
size, non-salient in appearance, and often severely occluded or
indistinguishable from other small objects. Additionally, the lack of
principled benchmarks and difficulty collecting relevant datasets further
hinder algorithmic development. In this paper, we present a meticulously
crafted and annotated benchmark, called \textbf{CCTV-Gun}, which addresses the
challenges of detecting handguns in real-world CCTV images. Our contribution is
three-fold. Firstly, we carefully select and analyze real-world CCTV images
from three datasets, manually annotate handguns and their holders, and assign
each image with relevant challenge factors such as blur and occlusion.
Secondly, we propose a new cross-dataset evaluation protocol in addition to the
standard intra-dataset protocol, which is vital for gun detection in practical
settings. Finally, we comprehensively evaluate both classical and
state-of-the-art object detection algorithms, providing an in-depth analysis of
their generalizing abilities. The benchmark will facilitate further research
and development on this topic and ultimately enhance security. Code,
annotations, and trained models are available at
https://github.com/srikarym/CCTV-Gun.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 16:17:35 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Apr 2023 18:18:23 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jul 2023 15:33:09 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Yellapragada",
"Srikar",
""
],
[
"Li",
"Zhenghong",
""
],
[
"Doshi",
"Kevin Bhadresh",
""
],
[
"Mhasakar",
"Purva Makarand",
""
],
[
"Fan",
"Heng",
""
],
[
"Wei",
"Jie",
""
],
[
"Blasch",
"Erik",
""
],
[
"Zhang",
"Bin",
""
],
[
"Ling",
"Haibin",
""
]
] |
new_dataset
| 0.999481 |
2303.14618
|
Minghan Li
|
Minghan Li and Lei Zhang
|
BoxVIS: Video Instance Segmentation with Box Annotations
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
It is expensive and labour-intensive to label the pixel-wise object masks in
a video. As a result, the amount of pixel-wise annotations in existing video
instance segmentation (VIS) datasets is small, limiting the generalization
capability of trained VIS models. An alternative but much cheaper solution is
to use bounding boxes to label instances in videos. Inspired by the recent
success of box-supervised image instance segmentation, we adapt the
state-of-the-art pixel-supervised VIS models to a box-supervised VIS (BoxVIS)
baseline, and observe slight performance degradation. We consequently propose
to improve the BoxVIS performance from two aspects. First, we propose a
box-center guided spatial-temporal pairwise affinity (STPA) loss to predict
instance masks for better spatial and temporal consistency. Second, we collect
a larger scale box-annotated VIS dataset (BVISD) by consolidating the videos
from current VIS benchmarks and converting images from the COCO dataset to
short pseudo video clips. With the proposed BVISD and the STPA loss, our
trained BoxVIS model achieves 43.2\% and 29.0\% mask AP on the YouTube-VIS 2021
and OVIS valid sets, respectively. It exhibits comparable instance mask
prediction performance and better generalization ability than state-of-the-art
pixel-supervised VIS models by using only 16\% of their annotation time and
cost. Codes and data can be found at \url{https://github.com/MinghanLi/BoxVIS}.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 04:04:58 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 10:44:51 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Li",
"Minghan",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.999728 |
2304.06718
|
Hao Zhang
|
Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng
Wang, Lijuan Wang, Jianfeng Gao, Yong Jae Lee
|
Segment Everything Everywhere All at Once
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present SEEM, a promptable and interactive model for
segmenting everything everywhere all at once in an image, as shown in Fig.1. In
SEEM, we propose a novel decoding mechanism that enables diverse prompting for
all types of segmentation tasks, aiming at a universal segmentation interface
that behaves like large language models (LLMs). More specifically, SEEM is
designed with four desiderata: i) Versatility. We introduce a new visual prompt
to unify different spatial queries including points, boxes, scribbles and
masks, which can further generalize to a different referring image; ii)
Compositionality. We learn a joint visual-semantic space between text and
visual prompts, which facilitates the dynamic composition of two prompt types
required for various segmentation tasks; iii) Interactivity. We further
incorporate learnable memory prompts into the decoder to retain segmentation
history through mask-guided cross-attention from decoder to image features; and
iv) Semantic-awareness. We use a text encoder to encode text queries and mask
labels into the same semantic space for open-vocabulary segmentation. We
conduct a comprehensive empirical study to validate the effectiveness of SEEM
across diverse segmentation tasks. Notably, our single SEEM model achieves
competitive performance across interactive segmentation, generic segmentation,
referring segmentation, and video object segmentation on 9 datasets with
minimum 1/100 supervision. Furthermore, SEEM showcases a remarkable capacity
for generalization to novel prompts or their combinations, rendering it a
readily universal image segmentation interface.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 17:59:40 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 17:43:56 GMT"
},
{
"version": "v3",
"created": "Mon, 1 May 2023 17:57:19 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Jul 2023 18:13:14 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Zou",
"Xueyan",
""
],
[
"Yang",
"Jianwei",
""
],
[
"Zhang",
"Hao",
""
],
[
"Li",
"Feng",
""
],
[
"Li",
"Linjie",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Wang",
"Lijuan",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Lee",
"Yong Jae",
""
]
] |
new_dataset
| 0.966175 |
2305.03175
|
Ankush Meshram
|
Ankush Meshram, Markus Karch, Christian Haas, J\"urgen Beyerer
|
POET: A Self-learning Framework for PROFINET Industrial Operations
Behaviour
|
To be published in the proceedings of EAI TRIDENTCOM 2022
| null |
10.1007/978-3-031-33458-0_1
| null |
cs.CR cs.AI cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Since 2010, multiple cyber incidents on industrial infrastructure, such as
Stuxnet and CrashOverride, have exposed the vulnerability of Industrial Control
Systems (ICS) to cyber threats. Industrial systems are commissioned for long
durations, often amounting to decades, which frequently leaves them lagging
behind technological advancements in industrial cybersecurity mechanisms. The
unavailability of network infrastructure information makes designing the
security policies or configuring the cybersecurity countermeasures such as
Network Intrusion Detection Systems (NIDS) challenging. An empirical solution
is to self-learn the network infrastructure information of an industrial system
from its monitored network traffic to make the network transparent for
downstream analysis tasks such as anomaly detection. In this work, we report a
Python-based, industrial-communication-paradigm-aware framework, named PROFINET
Operations Enumeration and Tracking (POET), that enumerates the different
industrial operations executed in a deterministic order by a PROFINET-based
industrial system. The operation-driving industrial network
protocol frames are dissected for enumeration of the operations. For the
requirements of capturing the transitions between industrial operations
triggered by the communication events, the Finite State Machines (FSM) are
modelled to enumerate the PROFINET operations of the device, connection and
system. POET extracts the network information from network traffic to
instantiate appropriate FSM models (Device, Connection or System) and track the
industrial operations. It successfully detects and reports the anomalies
triggered by a network attack in a miniaturized PROFINET-based industrial
system, executed through valid network protocol exchanges and resulting in
invalid PROFINET operation transition for the device.
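A toy version of the finite state machines POET instantiates per device or connection; the states and transition table below are invented for illustration and do not reproduce PROFINET semantics.

class OperationFSM:
    # Track operation transitions; any event not in the table is flagged as anomalous.
    def __init__(self, transitions, start="IDLE"):
        self.transitions = transitions
        self.state = start

    def step(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            raise ValueError(f"invalid transition: {event!r} while in {self.state!r}")
        self.state = nxt
        return nxt

fsm = OperationFSM({("IDLE", "connect_req"): "CONNECTING",
                    ("CONNECTING", "connect_ack"): "RUNNING",
                    ("RUNNING", "release_req"): "IDLE"})
fsm.step("connect_req"); fsm.step("connect_ack")  # a valid operation sequence
# fsm.step("connect_req")  # would raise: a syntactically valid frame arriving in the wrong state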
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 19:41:27 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Meshram",
"Ankush",
""
],
[
"Karch",
"Markus",
""
],
[
"Haas",
"Christian",
""
],
[
"Beyerer",
"Jürgen",
""
]
] |
new_dataset
| 0.96794 |
2305.08844
|
Afra Feyza Aky\"urek
|
Afra Feyza Aky\"urek, Ekin Aky\"urek, Aman Madaan, Ashwin Kalyan,
Peter Clark, Derry Wijaya, Niket Tandon
|
RL4F: Generating Natural Language Feedback with Reinforcement Learning
for Repairing Model Outputs
|
ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite their unprecedented success, even the largest language models make
mistakes. Similar to how humans learn and improve using feedback, previous work
proposed providing language models with natural language feedback to guide them
in repairing their outputs. Because human-generated critiques are expensive to
obtain, researchers have devised learned critique generators in lieu of human
critics while assuming one can train downstream models to utilize generated
feedback. However, this approach does not apply to black-box or limited access
models such as ChatGPT, as they cannot be fine-tuned. Moreover, in the era of
large general-purpose language agents, fine-tuning is neither computationally
nor spatially efficient as it results in multiple copies of the network. In
this work, we introduce RL4F (Reinforcement Learning for Feedback), a
multi-agent collaborative framework where the critique generator is trained to
maximize end-task performance of GPT-3, a fixed model more than 200 times its
size. RL4F produces critiques that help GPT-3 revise its outputs. We study
three datasets for action planning, summarization and alphabetization and show
relative improvements up to 10% in multiple text similarity metrics over other
learned, retrieval-augmented or prompting-based critique generators.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 17:57:16 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 18:29:12 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Akyürek",
"Afra Feyza",
""
],
[
"Akyürek",
"Ekin",
""
],
[
"Madaan",
"Aman",
""
],
[
"Kalyan",
"Ashwin",
""
],
[
"Clark",
"Peter",
""
],
[
"Wijaya",
"Derry",
""
],
[
"Tandon",
"Niket",
""
]
] |
new_dataset
| 0.970704 |
2305.12529
|
Yukun Huang
|
Yukun Huang, Jianan Wang, Ailing Zeng, He Cao, Xianbiao Qi, Yukai Shi,
Zheng-Jun Zha, Lei Zhang
|
DreamWaltz: Make a Scene with Complex 3D Animatable Avatars
|
project page at https://dreamwaltz3d.github.io/
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present DreamWaltz, a novel framework for generating and animating complex
3D avatars given text guidance and parametric human body prior. While recent
methods have shown encouraging results for text-to-3D generation of common
objects, creating high-quality and animatable 3D avatars remains challenging.
To create high-quality 3D avatars, DreamWaltz proposes 3D-consistent
occlusion-aware Score Distillation Sampling (SDS) to optimize implicit neural
representations with canonical poses. It provides view-aligned supervision via
3D-aware skeleton conditioning which enables complex avatar generation without
artifacts and multiple faces. For animation, our method learns an animatable
and generalizable avatar representation which could map arbitrary poses to the
canonical pose representation. Extensive evaluations demonstrate that
DreamWaltz is an effective and robust approach for creating 3D avatars that can
take on complex shapes and appearances as well as novel poses for animation.
The proposed framework further enables the creation of complex scenes with
diverse compositions, including avatar-avatar, avatar-object and avatar-scene
interactions. See https://dreamwaltz3d.github.io/ for more vivid 3D avatar and
animation results.
|
[
{
"version": "v1",
"created": "Sun, 21 May 2023 17:59:39 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 17:58:59 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Huang",
"Yukun",
""
],
[
"Wang",
"Jianan",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Cao",
"He",
""
],
[
"Qi",
"Xianbiao",
""
],
[
"Shi",
"Yukai",
""
],
[
"Zha",
"Zheng-Jun",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.993987 |
2306.00001
|
Julian Moosmann
|
Julian Moosmann, Marco Giordano, Christian Vogt, Michele Magno
|
TinyissimoYOLO: A Quantized, Low-Memory Footprint, TinyML Object
Detection Network for Low Power Microcontrollers
|
Published In: 2023 IEEE 5th International Conference on Artificial
Intelligence Circuits and Systems (AICAS)
| null |
10.1109/AICAS57966.2023.10168657
| null |
cs.CV cs.AR eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a highly flexible, quantized, memory-efficient, and
ultra-lightweight object detection network, called TinyissimoYOLO. It aims to
enable object detection on microcontrollers in the power domain of milliwatts,
with less than 0.5MB memory available for storing convolutional neural network
(CNN) weights. The proposed quantized network architecture with 422k
parameters, enables real-time object detection on embedded microcontrollers,
and it has been evaluated to exploit CNN accelerators. In particular, the
proposed network has been deployed on the MAX78000 microcontroller, achieving a
high frame rate of up to 180 fps and an ultra-low energy consumption of only
196{\mu}J per inference with an inference efficiency of more than 106
MAC/Cycle. TinyissimoYOLO can be trained for any multi-object detection task.
However, considering the small network size, adding object detection classes
will increase the size and memory consumption of the network, thus object
detection with up to 3 classes is demonstrated. Furthermore, the network is
trained using quantization-aware training and deployed with 8-bit quantization
on different microcontrollers, such as STM32H7A3, STM32L4R9, Apollo4b and on
the MAX78000's CNN accelerator. Performance evaluations are presented in this
paper.
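A quick sanity check of the reported figures, assuming the 180 fps and 196 {\mu}J numbers refer to the same operating point on the MAX78000:

fps = 180                       # frames per second
energy_per_inference = 196e-6   # joules per inference
latency_ms = 1000 / fps         # ~5.6 ms per frame
avg_power_mw = energy_per_inference * fps * 1000
print(f"latency ~{latency_ms:.1f} ms, average power ~{avg_power_mw:.1f} mW")  # ~5.6 ms, ~35.3 mW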
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 12:57:38 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 06:10:52 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Moosmann",
"Julian",
""
],
[
"Giordano",
"Marco",
""
],
[
"Vogt",
"Christian",
""
],
[
"Magno",
"Michele",
""
]
] |
new_dataset
| 0.992078 |
2306.15745
|
Michael Yoder
|
Michael Miller Yoder, Chloe Perry, David West Brown, Kathleen M.
Carley, Meredith L. Pruden
|
Identity Construction in a Misogynist Incels Forum
|
Workshop on Online Abuse and Harms (WOAH) 2023; Minor edits to author
names and abstracts in most recent version
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Online communities of involuntary celibates (incels) are a prominent source
of misogynist hate speech. In this paper, we use quantitative text and network
analysis approaches to examine how identity groups are discussed on
incels-dot-is, the largest black-pilled incels forum. We find that this
community produces a wide range of novel identity terms and, while terms for
women are most common, mentions of other minoritized identities are increasing.
An analysis of the associations made with identity groups suggests an
essentialist ideology where physical appearance, as well as gender and racial
hierarchies, determine human value. We discuss implications for research into
automated misogynist hate speech detection.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 18:56:28 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 14:00:04 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Jul 2023 21:15:36 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Yoder",
"Michael Miller",
""
],
[
"Perry",
"Chloe",
""
],
[
"Brown",
"David West",
""
],
[
"Carley",
"Kathleen M.",
""
],
[
"Pruden",
"Meredith L.",
""
]
] |
new_dataset
| 0.976564 |
2307.00721
|
Koji Hashimoto
|
Koji Hashimoto, Tomoya Naito, Hisashi Naito
|
Neural Polytopes
|
5 pages, 9 figures. v2: References added. Accepted at the 1st
Workshop on the Synergy of Scientific and Machine Learning Modeling at
International Conference on Machine Learning (ICML), Honolulu, Hawaii, USA.
2023
| null | null |
KUNS-2972, RIKEN-iTHEMS-Report-23
|
cs.LG cs.GR hep-th math.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We find that simple neural networks with ReLU activation generate polytopes
as an approximation of a unit sphere in various dimensions. The species of
polytopes are regulated by the network architecture, such as the number of
units and layers. For a variety of activation functions, generalization of
polytopes is obtained, which we call neural polytopes. They are a smooth
analogue of polytopes, exhibiting geometric duality. This finding initiates
research on generative discrete geometry to approximate surfaces by machine
learning.
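A minimal experiment in the spirit of this result for the two-dimensional case: regress the Euclidean norm with a small ReLU network, so that the learned unit level set is piecewise linear, i.e. a polygon whose facet count is governed by the hidden width; the architecture and training settings are illustrative assumptions.

import math
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 6), nn.ReLU(), nn.Linear(6, 1))  # 6 hidden ReLU units
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(3000):
    x = 2 * torch.rand(256, 2) - 1                        # training points in [-1, 1]^2
    loss = ((net(x).squeeze(-1) - x.norm(dim=-1)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

theta = torch.linspace(0, 2 * math.pi, steps=360)
circle = torch.stack([theta.cos(), theta.sin()], dim=-1)
pred = net(circle).squeeze(-1)                            # piecewise linear in the angle, so the
print(pred.min().item(), pred.max().item())               # level set net(x)=1 traces a polygon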
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 03:00:22 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jul 2023 03:00:48 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Hashimoto",
"Koji",
""
],
[
"Naito",
"Tomoya",
""
],
[
"Naito",
"Hisashi",
""
]
] |
new_dataset
| 0.969791 |
2307.05034
|
Sushma Anand Akoju
|
Sushma Anand Akoju, Robert Vacareanu, Haris Riaz, Eduardo Blanco,
Mihai Surdeanu
|
Synthetic Dataset for Evaluating Complex Compositional Knowledge for
Natural Language Inference
|
Accepted to Natural Language Reasoning and Structured Explanations
(NLRSE) Workshop, ACL 2023. For dataset, please refer
https://github.com/clulab/releases/tree/master/acl2023-nlrse-sicck and
https://github.com/sushmaakoju/natural-logic
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce a synthetic dataset called Sentences Involving Complex
Compositional Knowledge (SICCK) and a novel analysis that investigates the
performance of Natural Language Inference (NLI) models to understand
compositionality in logic. We produce 1,304 sentence pairs by modifying 15
examples from the SICK dataset (Marelli et al., 2014). To this end, we modify
the original texts using a set of phrases - modifiers that correspond to
universal quantifiers, existential quantifiers, negation, and other concept
modifiers in Natural Logic (NL) (MacCartney, 2009). We use these phrases to
modify the subject, verb, and object parts of the premise and hypothesis.
Lastly, we annotate these modified texts with the corresponding entailment
labels following NL rules. We conduct a preliminary verification of how well
the change in the structural and semantic composition is captured by neural NLI
models, in both zero-shot and fine-tuned scenarios. We found that the
performance of NLI models under the zero-shot setting is poor, especially for
modified sentences with negation and existential quantifiers. After fine-tuning
this dataset, we observe that models continue to perform poorly over negation,
existential and universal modifiers.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 06:18:07 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 00:52:15 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Akoju",
"Sushma Anand",
""
],
[
"Vacareanu",
"Robert",
""
],
[
"Riaz",
"Haris",
""
],
[
"Blanco",
"Eduardo",
""
],
[
"Surdeanu",
"Mihai",
""
]
] |
new_dataset
| 0.999384 |
2307.05468
|
Luchao Qi
|
Luchao Qi, Jiaye Wu, Shengze Wang, Soumyadip Sengupta
|
My3DGen: Building Lightweight Personalized 3D Generative Model
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Our paper presents My3DGen, a practical system for creating a personalized
and lightweight 3D generative prior using as few as 10 images. My3DGen can
reconstruct multi-view consistent images from an input test image, and generate
novel appearances by interpolating between any two images of the same
individual. While recent studies have demonstrated the effectiveness of
personalized generative priors in producing high-quality 2D portrait
reconstructions and syntheses, to the best of our knowledge, we are the first
to develop a personalized 3D generative prior. Instead of fine-tuning a large
pre-trained generative model with millions of parameters to achieve
personalization, we propose a parameter-efficient approach. Our method involves
utilizing a pre-trained model with fixed weights as a generic prior, while
training a separate personalized prior through low-rank decomposition of the
weights in each convolution and fully connected layer. However,
parameter-efficient few-shot fine-tuning on its own often leads to overfitting.
To address this, we introduce a regularization technique based on symmetry of
human faces. This regularization enforces that novel view renderings of a
training sample, rendered from symmetric poses, exhibit the same identity. By
incorporating this symmetry prior, we enhance the quality of reconstruction and
synthesis, particularly for non-frontal (profile) faces. Our final system
combines low-rank fine-tuning with symmetry regularization and significantly
surpasses the performance of pre-trained models, e.g. EG3D. It introduces only
approximately 0.6 million additional parameters per identity compared to 31
million for full finetuning of the original model. As a result, our system
achieves a 50-fold reduction in model size without sacrificing the quality of
the generated 3D faces. Code will be available at our project page:
https://luchaoqi.github.io/my3dgen.
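The low-rank weight decomposition described above is in the spirit of LoRA-style adapters; a generic sketch of such a layer (the rank, initialization, and wrapping are assumptions, not My3DGen's actual code):

import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    # Frozen pretrained weight plus a trainable low-rank personalized update B @ A.
    def __init__(self, linear: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = linear
        for p in self.base.parameters():
            p.requires_grad_(False)                       # the generic prior stays fixed
        self.A = nn.Parameter(torch.randn(rank, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ (self.B @ self.A).t()   # personalized low-rank correction

layer = LowRankLinear(nn.Linear(512, 512), rank=4)        # ~4k trainable vs ~262k frozen parameters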
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 17:53:43 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 05:11:23 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Qi",
"Luchao",
""
],
[
"Wu",
"Jiaye",
""
],
[
"Wang",
"Shengze",
""
],
[
"Sengupta",
"Soumyadip",
""
]
] |
new_dataset
| 0.988771 |
2307.05501
|
Ruslan Isaev Dr.
|
Ruslan Isaev, Radmir Gumerov, Gulzada Esenalieva, Remudin Reshid
Mekuria, Ermek Doszhanov
|
HIVA: Holographic Intellectual Voice Assistant
|
6 pages, 6 figures
| null |
10.1109/ICECCO58239.2023.10146600
| null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Holographic Intellectual Voice Assistant (HIVA) aims to facilitate
human-computer interaction using audiovisual effects and a 3D avatar. HIVA
provides complete information about the university, handling requests of various
kinds: admission, study issues, fees, departments, university structure and
history, canteen, human resources, library, student life and events,
information about the country and the city, etc. There are other ways of
receiving the data listed above: the university's official website and other
supporting apps, HEI (Higher Education Institution) official social media,
directly asking the HEI staff, and other channels. However, HIVA provides the
unique experience of "face-to-face" interaction with an animated 3D mascot,
helping to get a sense of 'real-life' communication. The system includes many
sub-modules and connects a family of applications such as mobile applications,
Telegram chatbot, suggestion categorization, and entertainment services. The
Voice assistant uses Russian language NLP models and tools, which are pipelined
for the best user experience.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 03:29:32 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Isaev",
"Ruslan",
""
],
[
"Gumerov",
"Radmir",
""
],
[
"Esenalieva",
"Gulzada",
""
],
[
"Mekuria",
"Remudin Reshid",
""
],
[
"Doszhanov",
"Ermek",
""
]
] |
new_dataset
| 0.990272 |
2307.05528
|
Eric Ruzomberka
|
Eric Ruzomberka and Homa Nikbakht and Christopher G. Brinton and H.
Vincent Poor
|
On Pseudolinear Codes for Correcting Adversarial Errors
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider error-correction coding schemes for adversarial wiretap channels
(AWTCs) in which the channel can a) read a fraction of the codeword bits up to
a bound $r$ and b) flip a fraction of the bits up to a bound $p$. The channel
can freely choose the locations of the bit reads and bit flips via a process
with unbounded computational power. Codes for the AWTC are of broad interest in
the area of information security, as they can provide data resiliency in
settings where an attacker has limited access to a storage or transmission
medium.
We investigate a family of non-linear codes known as pseudolinear codes,
which were first proposed by Guruswami and Indyk (FOCS 2001) for constructing
list-decodable codes independent of the AWTC setting. Unlike general non-linear
codes, pseudolinear codes admit efficient encoders and have succinct
representations. We focus on unique decoding and show that random pseudolinear
codes can achieve rates up to the binary symmetric channel (BSC) capacity
$1-H_2(p)$ for any $p,r$ in the less noisy region: $p<1/2$ and $r<1-H_2(p)$
where $H_2(\cdot)$ is the binary entropy function. Thus, pseudolinear codes are
the first known optimal-rate binary code family for the less noisy AWTC that
admit efficient encoders. The above result can be viewed as a derandomization
result of random general codes in the AWTC setting, which in turn opens new
avenues for applying derandomization techniques to randomized constructions of
AWTC codes. Our proof applies a novel concentration inequality for sums of
random variables with limited independence which may be of interest as an
analysis tool more generally.
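The "less noisy" condition above is easy to evaluate numerically; a small helper, for illustration only:

import math

def h2(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def less_noisy(p, r):
    # Region p < 1/2 and r < 1 - H2(p); the achievable rate there is 1 - H2(p).
    return p < 0.5 and r < 1 - h2(p)

p, r = 0.11, 0.4
print(less_noisy(p, r), 1 - h2(p))  # True, ~0.50: rates up to ~0.50 are achievable at this (p, r)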
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 22:31:19 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Ruzomberka",
"Eric",
""
],
[
"Nikbakht",
"Homa",
""
],
[
"Brinton",
"Christopher G.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.957363 |
2307.05537
|
Andrew Gao
|
Andrew Kean Gao
|
NLP Meets RNA: Unsupervised Embedding Learning for Ribozymes with
Word2Vec
| null | null | null | null |
cs.LG q-bio.BM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Ribozymes, RNA molecules with distinct 3D structures and catalytic activity,
have widespread applications in synthetic biology and therapeutics. However,
relatively little research has focused on leveraging deep learning to enhance
our understanding of ribozymes. This study implements Word2Vec, an unsupervised
learning technique for natural language processing, to learn ribozyme
embeddings. Ribo2Vec was trained on over 9,000 diverse ribozymes, learning to
map sequences to 128 and 256-dimensional vector spaces. Using Ribo2Vec,
sequence embeddings for five classes of ribozymes (hatchet, pistol, hairpin,
hovlinc, and twister sister) were calculated. Principal component analysis
demonstrated the ability of these embeddings to distinguish between ribozyme
classes. Furthermore, a simple SVM classifier trained on ribozyme embeddings
showed promising results in accurately classifying ribozyme types. Our results
suggest that the embedding vectors contained meaningful information about
ribozymes. Interestingly, 256-dimensional embeddings behaved similarly to
128-dimensional embeddings, suggesting that a lower dimension vector space is
generally sufficient to capture ribozyme features. This approach demonstrates
the potential of Word2Vec for bioinformatics, opening new avenues for ribozyme
research. Future research includes using a Transformer-based method to learn
RNA embeddings, which can capture long-range interactions between nucleotides.
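A sketch of the pipeline described above using gensim and scikit-learn; the k-mer tokenization, vector size, and classifier settings are assumptions for illustration rather than the paper's exact configuration, and the two toy sequences are made up.

import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

def kmers(seq, k=4):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequences = ["GGCUAGCUAGGCAUCG", "AUGCGGAUCCGAUCGA"]       # toy stand-ins for ribozyme sequences
labels = ["hatchet", "pistol"]

w2v = Word2Vec([kmers(s) for s in sequences], vector_size=128, window=5, min_count=1, sg=1, epochs=50)

def embed(seq):
    # Sequence embedding as the mean of its k-mer vectors.
    return np.mean([w2v.wv[km] for km in kmers(seq) if km in w2v.wv], axis=0)

X = np.stack([embed(s) for s in sequences])
clf = SVC().fit(X, labels)                                 # simple class-level classifier on embeddings
print(clf.predict([embed("GGCUAGCUAGGCAUCG")]))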
|
[
{
"version": "v1",
"created": "Sat, 8 Jul 2023 15:06:48 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Gao",
"Andrew Kean",
""
]
] |
new_dataset
| 0.997691 |
2307.05563
|
Bhavin Jawade
|
Bhavin Jawade, Deen Dayal Mohan, Srirangaraj Setlur, Nalini Ratha and
Venu Govindaraju
|
RidgeBase: A Cross-Sensor Multi-Finger Contactless Fingerprint Dataset
|
Paper accepted at IJCB 2022
|
2022 IEEE International Joint Conference on Biometrics (IJCB), Abu
Dhabi, United Arab Emirates, 2022, pp. 1-9
|
10.1109/IJCB54206.2022.10007936
| null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contactless fingerprint matching using smartphone cameras can alleviate major
challenges of traditional fingerprint systems including hygienic acquisition,
portability and presentation attacks. However, development of practical and
robust contactless fingerprint matching techniques is constrained by the
limited availability of large scale real-world datasets. To motivate further
advances in contactless fingerprint matching across sensors, we introduce the
RidgeBase benchmark dataset. RidgeBase consists of more than 15,000 contactless
and contact-based fingerprint image pairs acquired from 88 individuals under
different background and lighting conditions using two smartphone cameras and
one flatbed contact sensor. Unlike existing datasets, RidgeBase is designed to
promote research under different matching scenarios that include Single Finger
Matching and Multi-Finger Matching for both contactless-to-contactless (CL2CL)
and contact-to-contactless (C2CL) verification and identification. Furthermore,
due to the high intra-sample variance in contactless fingerprints belonging to
the same finger, we propose a set-based matching protocol inspired by the
advances in facial recognition datasets. This protocol is specifically designed
for pragmatic contactless fingerprint matching that can account for variances
in focus, polarity and finger-angles. We report qualitative and quantitative
baseline results for different protocols using a COTS fingerprint matcher
(Verifinger) and a Deep CNN based approach on the RidgeBase dataset. The
dataset can be downloaded here:
https://www.buffalo.edu/cubs/research/datasets/ridgebase-benchmark-dataset.html
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 22:09:15 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Jawade",
"Bhavin",
""
],
[
"Mohan",
"Deen Dayal",
""
],
[
"Setlur",
"Srirangaraj",
""
],
[
"Ratha",
"Nalini",
""
],
[
"Govindaraju",
"Venu",
""
]
] |
new_dataset
| 0.999759 |
2307.05591
|
Fabian Paischer
|
Fabian Paischer, Thomas Adler, Markus Hofmarcher, Sepp Hochreiter
|
SITTA: A Semantic Image-Text Alignment for Image Captioning
|
10 pages (+ references and appendix), Code:
https://github.com/ml-jku/semantic-image-text-alignment
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Textual and semantic comprehension of images is essential for generating
proper captions. The comprehension requires detection of objects, modeling of
relations between them, an assessment of the semantics of the scene and,
finally, representing the extracted knowledge in a language space. To achieve
rich language capabilities while ensuring good image-language mappings,
pretrained language models (LMs) were conditioned on pretrained multi-modal
(image-text) models that allow for image inputs. This requires an alignment of
the image representation of the multi-modal model with the language
representations of a generative LM. However, it is not clear how to best
transfer semantics detected by the vision encoder of the multi-modal model to
the LM. We introduce two novel ways of constructing a linear mapping that
successfully transfers semantics between the embedding spaces of the two
pretrained models. The first aligns the embedding space of the multi-modal
language encoder with the embedding space of the pretrained LM via token
correspondences. The second leverages additional data that consists of
image-text pairs to construct the mapping directly from vision to language
space. Using our semantic mappings, we unlock image captioning for LMs without
access to gradient information. By using different sources of data we achieve
strong captioning performance on MS-COCO and Flickr30k datasets. Even in the
face of limited data, our method partly exceeds the performance of other
zero-shot and even finetuned competitors. Our ablation studies show that even
LMs at a scale of merely 250M parameters can generate decent captions employing
our semantic mappings. Our approach makes image captioning more accessible for
institutions with restricted computational resources.
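The second mapping construction described above amounts to a linear regression from vision embeddings to language embeddings over paired data; a least-squares sketch in which the dimensions and random data are placeholders:

import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(1000, 768))        # vision embeddings of paired images (placeholder data)
T = rng.normal(size=(1000, 1024))       # pooled language embeddings of the paired captions

# Solve min_W ||V @ W - T||_F^2: W maps vision space into the LM's embedding space.
W, *_ = np.linalg.lstsq(V, T, rcond=None)

v_new = rng.normal(size=(1, 768))
soft_prompt = v_new @ W                 # fed to the frozen LM in place of token embeddings
print(soft_prompt.shape)                # (1, 1024)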
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 17:59:21 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Paischer",
"Fabian",
""
],
[
"Adler",
"Thomas",
""
],
[
"Hofmarcher",
"Markus",
""
],
[
"Hochreiter",
"Sepp",
""
]
] |
new_dataset
| 0.998523 |
2307.05609
|
Jiangnan Cheng
|
Jiangnan Cheng, Yingjie Bi, Ao Tang
|
Virtual Network Embedding without Explicit Virtual Network Specification
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Network virtualization enables Internet service providers to run multiple
heterogeneous and dedicated network architectures for different customers on a
shared substrate. In existing works on virtual network embedding (VNE), each
customer formulates a virtual network request (VNR) where a virtual network
(VN) is required. Motivated by a concrete example where VN is not a proper VNR
formulation to reflect the traffic demand of a customer, we propose a new VNR
formulation described by the traffic demand between several access node pairs
to complement the existing VNR formulation. Moreover, three different groups of
VNE variants are systematically examined. Simulations demonstrate that shared
channel embedding, as a new embedding variant under the proposed VNR
formulation, improves the acceptance rate and reduces cost and link utility
compared to traditional independent channel embedding.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 22:37:54 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Cheng",
"Jiangnan",
""
],
[
"Bi",
"Yingjie",
""
],
[
"Tang",
"Ao",
""
]
] |
new_dataset
| 0.96073 |
2307.05646
|
Dhruv Mullick
|
Dhruv Mullick, Bilal Ghanem, Alona Fyshe
|
Better Handling Coreference Resolution in Aspect Level Sentiment
Classification by Fine-Tuning Language Models
|
Work done up till December 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Customer feedback is invaluable to companies as they refine their products.
Monitoring customer feedback can be automated with Aspect Level Sentiment
Classification (ALSC) which allows us to analyse specific aspects of the
products in reviews. Large Language Models (LLMs) are the heart of many
state-of-the-art ALSC solutions, but they perform poorly in some scenarios
requiring Coreference Resolution (CR). In this work, we propose a framework to
improve an LLM's performance on CR-containing reviews by fine-tuning on highly
inferential tasks. We show that the performance improvement is likely
attributed to the improved model CR ability. We also release a new dataset that
focuses on CR in ALSC.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 12:43:28 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Mullick",
"Dhruv",
""
],
[
"Ghanem",
"Bilal",
""
],
[
"Fyshe",
"Alona",
""
]
] |
new_dataset
| 0.97194 |
2307.05663
|
Matt Deitke
|
Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel,
Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak
Gadre, Eli VanderBilt, Aniruddha Kembhavi, Carl Vondrick, Georgia Gkioxari,
Kiana Ehsani, Ludwig Schmidt, Ali Farhadi
|
Objaverse-XL: A Universe of 10M+ 3D Objects
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Natural language processing and 2D vision models have attained remarkable
proficiency on many tasks primarily by escalating the scale of training data.
However, 3D vision tasks have not seen the same progress, in part due to the
challenges of acquiring high-quality 3D data. In this work, we present
Objaverse-XL, a dataset of over 10 million 3D objects. Our dataset comprises
deduplicated 3D objects from a diverse set of sources, including manually
designed objects, photogrammetry scans of landmarks and everyday items, and
professional scans of historic and antique artifacts. Representing the largest
scale and diversity in the realm of 3D datasets, Objaverse-XL enables
significant new possibilities for 3D vision. Our experiments demonstrate the
improvements enabled with the scale provided by Objaverse-XL. We show that by
training Zero123 on novel view synthesis, utilizing over 100 million multi-view
rendered images, we achieve strong zero-shot generalization abilities. We hope
that releasing Objaverse-XL will enable further innovations in the field of 3D
vision at scale.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 17:57:40 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Deitke",
"Matt",
""
],
[
"Liu",
"Ruoshi",
""
],
[
"Wallingford",
"Matthew",
""
],
[
"Ngo",
"Huong",
""
],
[
"Michel",
"Oscar",
""
],
[
"Kusupati",
"Aditya",
""
],
[
"Fan",
"Alan",
""
],
[
"Laforte",
"Christian",
""
],
[
"Voleti",
"Vikram",
""
],
[
"Gadre",
"Samir Yitzhak",
""
],
[
"VanderBilt",
"Eli",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Gkioxari",
"Georgia",
""
],
[
"Ehsani",
"Kiana",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Farhadi",
"Ali",
""
]
] |
new_dataset
| 0.999885 |
2307.05700
|
Adway Mitra
|
Priyanka Goyal, Sohan Patnaik, Adway Mitra, Manjira Sinha
|
SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing
imagery using HRNet with Separable Convolution
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The accurate mapping of crop production is crucial for ensuring food
security, effective resource management, and sustainable agricultural
practices. One way to achieve this is by analyzing high-resolution satellite
imagery. Deep Learning has been successful in analyzing images, including
remote sensing imagery. However, capturing intricate crop patterns is
challenging due to their complexity and variability. In this paper, we propose
a novel Deep learning approach that integrates HRNet with Separable
Convolutional layers to capture spatial patterns and Self-attention to capture
temporal patterns of the data. The HRNet model acts as a backbone and extracts
high-resolution features from crop images. Spatially separable convolution in
the shallow layers of the HRNet model captures intricate crop patterns more
effectively while reducing the computational cost. The multi-head attention
mechanism captures long-term temporal dependencies from the encoded vector
representation of the images. Finally, a CNN decoder generates a crop map from
the aggregated representation. Adaboost is used on top of this to further
improve accuracy. The proposed algorithm achieves a high classification
accuracy of 97.5\% and IoU of 55.2\% in generating crop maps. We evaluate the
performance of our pipeline on the Zuericrop dataset and demonstrate that our
results outperform state-of-the-art models such as U-Net++, ResNet50, VGG19,
InceptionV3, DenseNet, and EfficientNet. This research showcases the potential
of Deep Learning for Earth Observation Systems.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 18:07:25 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Goyal",
"Priyanka",
""
],
[
"Patnaik",
"Sohan",
""
],
[
"Mitra",
"Adway",
""
],
[
"Sinha",
"Manjira",
""
]
] |
new_dataset
| 0.964897 |
2307.05721
|
Hao Zheng
|
Hao Zheng, Regina Lee, Yuqian Lu
|
HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly
Knowledge Understanding
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding comprehensive assembly knowledge from videos is critical for
futuristic ultra-intelligent industry. To enable technological breakthroughs, we
present HA-ViD - the first human assembly video dataset that features
representative industrial assembly scenarios, natural procedural knowledge
acquisition process, and consistent human-robot shared annotations.
Specifically, HA-ViD captures diverse collaboration patterns of real-world
assembly, natural human behaviors and learning progression during assembly, and
granulate action annotations to subject, action verb, manipulated object,
target object, and tool. We provide 3222 multi-view, multi-modality videos
(each video contains one assembly task), 1.5M frames, 96K temporal labels and
2M spatial labels. We benchmark four foundational video understanding tasks:
action recognition, action segmentation, object detection and multi-object
tracking. Importantly, we analyze their performance for comprehending knowledge
in assembly progress, process efficiency, task collaboration, skill parameters
and human intention. Details of HA-ViD is available at:
https://iai-hrc.github.io/ha-vid.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 08:44:46 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Zheng",
"Hao",
""
],
[
"Lee",
"Regina",
""
],
[
"Lu",
"Yuqian",
""
]
] |
new_dataset
| 0.999849 |
2307.05740
|
Raghavendra Kanakagiri
|
Raghavendra Kanakagiri and Edgar Solomonik
|
Minimum Cost Loop Nests for Contraction of a Sparse Tensor with a Tensor
Network
|
17 pages, 13 figures
| null | null | null |
cs.DC cs.MS cs.PF cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Sparse tensor decomposition and completion are common in numerous
applications, ranging from machine learning to computational quantum chemistry.
Typically, the main bottleneck in the optimization of these models is the contraction
of a single large sparse tensor with a network of several dense matrices or
tensors (SpTTN). Prior works on high-performance tensor decomposition and
completion have focused on performance and scalability optimizations for
specific SpTTN kernels. We present algorithms and a runtime system for
identifying and executing the most efficient loop nest for any SpTTN kernel. We
consider both enumeration of such loop nests for autotuning and efficient
algorithms for finding the lowest cost loop-nest for simpler metrics, such as
buffer size or cache miss models. Our runtime system identifies the best choice
of loop nest without user guidance, and also provides a distributed-memory
parallelization of SpTTN kernels. We evaluate our framework using both
real-world and synthetic tensors. Our results demonstrate that our approach
outperforms available generalized state-of-the-art libraries and matches the
performance of specialized codes.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 19:08:06 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Kanakagiri",
"Raghavendra",
""
],
[
"Solomonik",
"Edgar",
""
]
] |
new_dataset
| 0.967457 |
2307.05797
|
Nafees Mansoor PhD
|
Tasfia Rahman, Sumaiya Islam Mouno, Arunangshu Mojumder Raatul, Abul
Kalam Al Azad, and Nafees Mansoor
|
Verifi-Chain: A Credentials Verifier using Blockchain and IPFS
|
Presented at International Conference on Inventive Communication and
 Computational Technologies 2023
| null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Submitting fake certificates is a common problem in Southeast Asia, which
prevents qualified candidates from getting the jobs they deserve. When applying
for a job, students must provide academic credentials as proof of their
qualifications, acquired both inside and outside the classroom. Verifying
academic documents before hiring is crucial to prevent fraud. Employing
blockchain technology has the potential to address this issue. Blockchain
provides an electronic certificate that is tamper-proof and non-repudiable,
making it difficult for students to manipulate their academic credentials. This
paper presents a prototype for an academic credential verification model that
leverages the security features of blockchain and IPFS (Interplanetary File
System). Certificates are temporarily stored in a database before being
transferred to IPFS, where a unique hash code is generated using a hashing
algorithm. This hash code serves as the certificate's unique identity and is
stored in the blockchain nodes. Companies can verify an applicant's credentials
by searching for the applicant and accessing their already verified
certificates. Utilizing IPFS as a middleman storage platform lowers the
expenses of directly storing massive data on the blockchain. To sum it up, the
proposed solution would make the process of certificate verification more
efficient, secure, and cost-effective. It would save time and resources that
would otherwise be used to manually verify certificates.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 20:42:28 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Rahman",
"Tasfia",
""
],
[
"Mouno",
"Sumaiya Islam",
""
],
[
"Raatul",
"Arunangshu Mojumder",
""
],
[
"Azad",
"Abul Kalam Al",
""
],
[
"Mansoor",
"Nafees",
""
]
] |
new_dataset
| 0.998578 |
2307.05815
|
Dipal Halder
|
Dipal Halder, Maneesh Merugu, Sandip Ray
|
ObNoCs: Protecting Network-on-Chip Fabrics Against Reverse-Engineering
Attacks
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern System-on-Chip designs typically use Network-on-Chip (NoC) fabrics to
implement coordination among integrated hardware blocks. An important class of
security vulnerabilities involves a rogue foundry reverse-engineering the NoC
topology and routing logic. In this paper, we develop an infrastructure,
$\obnocs$, for protecting NoC fabrics against such attacks. $\obnocs$
systematically replaces router connections with switches that can be programmed
after fabrication to induce the desired topology. Our approach provides
provable redaction of NoC functionality: switch configurations induce a large
number of legal topologies, only one of which corresponds to the intended
topology. We implement the $\obnocs$ methodology on Intel
Quartus\texttrademark\ Platform, and experimental results on realistic SoC
designs show that the architecture incurs minimal overhead in power, resource
utilization, and system latency.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 21:49:45 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Halder",
"Dipal",
""
],
[
"Merugu",
"Maneesh",
""
],
[
"Ray",
"Sandip",
""
]
] |
new_dataset
| 0.991034 |
2307.05830
|
Eric Easthope
|
Eric Easthope
|
SnakeSynth: New Interactions for Generative Audio Synthesis
| null | null | null | null |
cs.HC cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
I present "SnakeSynth," a web-based lightweight audio synthesizer that
combines audio generated by a deep generative model and real-time continuous
two-dimensional (2D) input to create and control variable-length generative
sounds through 2D interaction gestures. Interaction gestures are touch and
mobile-compatible with analogies to strummed, bowed, and plucked musical
instrument controls. Point-and-click and drag-and-drop gestures directly
control audio playback length and I show that sound length and intensity are
modulated by interactions with a programmable 2D coordinate grid. Leveraging
the speed and ubiquity of browser-based audio and hardware acceleration in
Google's TensorFlow.js we generate time-varying high-fidelity sounds with
real-time interactivity. SnakeSynth adaptively reproduces and interpolates
between sounds encountered during model training, notably without long training
times, and I briefly discuss possible futures for deep generative models as an
interactive paradigm for musical expression.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 22:51:54 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Easthope",
"Eric",
""
]
] |
new_dataset
| 0.999277 |
2307.05871
|
Wei Zhang
|
Wei Zhang
|
A Novel SCL Bit-Flipping Decoding Of Polarization-Adjusted Convolutional
(PAC) Codes
|
arXiv admin note: text overlap with arXiv:2306.02629
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Polar codes have attracted the attention of numerous researchers in the past
decade due to their excellent performance. However, their performance at short
block lengths under standard successive cancellation decoding is far from
desirable. An effective method to improve the performance at short lengths is
CRC precoding followed by successive-cancellation list decoding. Later, Arikan
presented polarization-adjusted convolutional (PAC) codes, which further
improve the performance of polar codes. In fact, bit-flipping is another
post-processing method that can improve decoding performance. In this paper, we
propose a novel SCL bit-flipping decoding of PAC codes. We show that better performance
can be achieved using list decoding when the list size is the same for PAC
codes (N=128, K=64). The decoding performance of our newly proposed PAC-SCLF
with a list size of 32 is 0.3 dB better than that of the traditional PAC-SCL
with a list size of 32. We set the maximum number of bit flips to 5. The
performance of the list size (L=32) for PAC-SCLF is almost the same as the
performance of the list size (L=128) for PAC-SCL.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 01:56:24 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.996515 |
2307.05874
|
Hiroshi Fukui
|
Hiroshi Fukui and Taiki Miyagawa and Yusuke Morishita
|
Multi-Object Tracking as Attention Mechanism
|
Accepted to IEEE International Conference on Image Processing (IEEE
ICIP) 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a conceptually simple and thus fast multi-object tracking (MOT)
model that does not require any attached modules, such as the Kalman filter,
Hungarian algorithm, transformer blocks, or graph networks. Conventional MOT
models are built upon the multi-step modules listed above, and thus the
computational cost is high. Our proposed end-to-end MOT model,
\textit{TicrossNet}, is composed of a base detector and a cross-attention
module only. As a result, the overhead of tracking does not increase
significantly even when the number of instances ($N_t$) increases. We show that
TicrossNet runs \textit{in real-time}; specifically, it achieves 32.6 FPS on
MOT17 and 31.0 FPS on MOT20 (Tesla V100), which includes as many as $>$100
instances per frame. We also demonstrate that TicrossNet is robust to $N_t$;
thus, it does not have to change the size of the base detector, depending on
$N_t$, as is often done by other models for real-time processing.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 02:02:18 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Fukui",
"Hiroshi",
""
],
[
"Miyagawa",
"Taiki",
""
],
[
"Morishita",
"Yusuke",
""
]
] |
new_dataset
| 0.958436 |
2307.05914
|
Weipeng Zhuo
|
Weipeng Zhuo, Ka Ho Chiu, Jierun Chen, Ziqi Zhao, S.-H. Gary Chan,
Sangtae Ha, Chul-Ho Lee
|
FIS-ONE: Floor Identification System with One Label for Crowdsourced RF
Signals
|
Accepted by IEEE ICDCS 2023
| null | null | null |
cs.NI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Floor labels of crowdsourced RF signals are crucial for many smart-city
applications, such as multi-floor indoor localization, geofencing, and robot
surveillance. To build a prediction model to identify the floor number of a new
RF signal upon its measurement, conventional approaches using the crowdsourced
RF signals assume that at least a few labeled signal samples are available on
each floor. In this work, we push the envelope further and demonstrate that it
is technically feasible to enable such floor identification with only one
floor-labeled signal sample on the bottom floor while having the rest of signal
samples unlabeled.
We propose FIS-ONE, a novel floor identification system with only one labeled
sample. FIS-ONE consists of two steps, namely signal clustering and cluster
indexing. We first build a bipartite graph to model the RF signal samples and
obtain a latent representation of each node (each signal sample) using our
attention-based graph neural network model so that the RF signal samples can be
clustered more accurately. Then, we tackle the problem of indexing the clusters
with proper floor labels, by leveraging the observation that signals from an
access point can be detected on different floors, i.e., signal spillover.
Specifically, we formulate a cluster indexing problem as a combinatorial
optimization problem and show that it is equivalent to solving a traveling
salesman problem, whose (near-)optimal solution can be found efficiently. We
have implemented FIS-ONE and validated its effectiveness on the Microsoft
dataset and in three large shopping malls. Our results show that FIS-ONE
outperforms other baseline algorithms significantly, with up to 23% improvement
in adjusted rand index and 25% improvement in normalized mutual information
using only one floor-labeled signal sample.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 04:43:59 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Zhuo",
"Weipeng",
""
],
[
"Chiu",
"Ka Ho",
""
],
[
"Chen",
"Jierun",
""
],
[
"Zhao",
"Ziqi",
""
],
[
"Chan",
"S. -H. Gary",
""
],
[
"Ha",
"Sangtae",
""
],
[
"Lee",
"Chul-Ho",
""
]
] |
new_dataset
| 0.957429 |
2307.05916
|
Peter Kim
|
Peter Yongho Kim, Junbeom Kwon, Sunghwan Joo, Sangyoon Bae, Donggyu
Lee, Yoonho Jung, Shinjae Yoo, Jiook Cha, Taesup Moon
|
SwiFT: Swin 4D fMRI Transformer
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The modeling of spatiotemporal brain dynamics from high-dimensional data,
such as 4D functional MRI, is a formidable task in neuroscience. To address
this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer
architecture that can learn brain dynamics directly from 4D functional brain
MRI data in a memory and computation-efficient manner. SwiFT achieves this by
implementing a 4D window multi-head self-attention mechanism and absolute
positional embeddings. We evaluate SwiFT using multiple largest-scale human
functional brain imaging datasets in tasks such as predicting sex, age, and
cognitive intelligence. Our experimental outcomes reveal that SwiFT
consistently outperforms recent state-of-the-art models. To the best of our
knowledge, SwiFT is the first Swin Transformer architecture that can process
dimensional spatiotemporal brain functional data in an end-to-end fashion.
Furthermore, due to the end-to-end learning capability, we also show that
contrastive loss-based self-supervised pre-training of SwiFT is also feasible
for achieving improved performance on a downstream task. We believe that our
work holds substantial potential in facilitating scalable learning of
functional brain imaging in neuroscience research by reducing the hurdles
associated with applying Transformer models to high-dimensional fMRI.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 04:53:36 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Kim",
"Peter Yongho",
""
],
[
"Kwon",
"Junbeom",
""
],
[
"Joo",
"Sunghwan",
""
],
[
"Bae",
"Sangyoon",
""
],
[
"Lee",
"Donggyu",
""
],
[
"Jung",
"Yoonho",
""
],
[
"Yoo",
"Shinjae",
""
],
[
"Cha",
"Jiook",
""
],
[
"Moon",
"Taesup",
""
]
] |
new_dataset
| 0.995385 |
2307.05929
|
Richard Wang
|
Tianxiao Zhang, Kaidong Li, Xiangyu Chen, Cuncong Zhong, Bo Luo, Ivan
Grijalva Teran, Brian McCornack, Daniel Flippo, Ajay Sharda, Guanghui Wang
|
A New Dataset and Comparative Study for Aphid Cluster Detection
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Aphids are one of the main threats to crops, rural families, and global food
security. Chemical pest control is a necessary component of crop production for
maximizing yields; however, it is unnecessary to apply chemical treatments
to entire fields, considering the environmental pollution and the
cost. Thus, accurately localizing the aphid and estimating the infestation
level is crucial to the precise local application of pesticides. Aphid
detection is very challenging as each individual aphid is really small and all
aphids are crowded together as clusters. In this paper, we propose to estimate
the infection level by detecting aphid clusters. We have taken millions of
images in the sorghum fields, manually selected 5,447 images that contain
aphids, and annotated each aphid cluster in the image. To use these images for
machine learning models, we crop the images into patches and create a labeled
dataset with over 151,000 image patches. Then, we implement and compare the
performance of four state-of-the-art object detection models.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 05:49:21 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Zhang",
"Tianxiao",
""
],
[
"Li",
"Kaidong",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Zhong",
"Cuncong",
""
],
[
"Luo",
"Bo",
""
],
[
"Teran",
"Ivan Grijalva",
""
],
[
"McCornack",
"Brian",
""
],
[
"Flippo",
"Daniel",
""
],
[
"Sharda",
"Ajay",
""
],
[
"Wang",
"Guanghui",
""
]
] |
new_dataset
| 0.999827 |
2307.05992
|
Ruichao Jiang
|
Ze Chen and Ruichao Jiang and Javad Tavakoli and Yiqiang Zhao
|
Robbed withdrawal
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this article we show that Theorem 2 in Lie et al. (2023) is incorrect.
Since Wombat Exchange, a decentralized exchange, is built upon Lie et al.
(2023) and Theorem 2 is fundamental to Wombat Finance, we show that an
undesirable phenomenon, which we call the robbed withdrawal, can happen as a
consequence.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 08:14:23 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Chen",
"Ze",
""
],
[
"Jiang",
"Ruichao",
""
],
[
"Tavakoli",
"Javad",
""
],
[
"Zhao",
"Yiqiang",
""
]
] |
new_dataset
| 0.993308 |
2307.06006
|
Gabriele Merlin
|
Gabriele Merlin, Vedant Nanda, Ruchit Rawal, Mariya Toneva
|
What Happens During Finetuning of Vision Transformers: An Invariance
Based Investigation
|
Accepted to CoLLAs 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The pretrain-finetune paradigm usually improves downstream performance over
training a model from scratch on the same task, becoming commonplace across
many areas of machine learning. While pretraining is empirically observed to be
beneficial for a range of tasks, there is not a clear understanding yet of the
reasons for this effect. In this work, we examine the relationship between
pretrained vision transformers and the corresponding finetuned versions on
several benchmark datasets and tasks. We present new metrics that specifically
investigate the degree to which invariances learned by a pretrained model are
retained or forgotten during finetuning. Using these metrics, we present a
suite of empirical findings, including that pretraining induces transferable
invariances in shallow layers and that invariances from deeper pretrained
layers are compressed towards shallower layers during finetuning. Together,
these findings contribute to understanding some of the reasons for the
successes of pretrained models and the changes that a pretrained model
undergoes when finetuned on a downstream task.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 08:35:24 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Merlin",
"Gabriele",
""
],
[
"Nanda",
"Vedant",
""
],
[
"Rawal",
"Ruchit",
""
],
[
"Toneva",
"Mariya",
""
]
] |
new_dataset
| 0.994112 |
2307.06023
|
Xuesong Pan
|
Xuesong Pan, Zhong Zheng, Xueqing Huang, Zesong Fei
|
On the Uplink Distributed Detection in UAV-enabled Aerial Cell-Free
mMIMO Systems
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate uplink signal detection approaches in
cell-free massive MIMO systems with unmanned aerial vehicles (UAVs) serving as
aerial access points (APs). The ground users are equipped with multiple
antennas and the ground-to-air propagation channels are subject to correlated
Rician fading. To overcome huge signaling overhead in the fully-centralized
detection, we propose a two-layer distributed uplink detection scheme, where
the uplink signals are first detected in the AP-UAVs by using the minimum
mean-squared error (MMSE) detector depending on local channel state information
(CSI), and then collected and weighted combined at the CPU-UAV to obtain the
refined detection. By using the operator-valued free probability theory, the
asymptotic expressions of the combining weights are obtained, which only depend
on the statistical CSI and show excellent accuracy. Based on the proposed
distributed scheme, we further investigate the impacts of different distributed
deployments on the achieved spectral efficiency (SE). Numerical results show
that in urban and dense urban environments, it is more beneficial to deploy
more AP-UAVs to achieve higher SE. On the other hand, in suburban environment,
an optimal ratio between the number of deployed UAVs and the number of antennas
per UAV exists to maximize the SE.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 09:05:07 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Pan",
"Xuesong",
""
],
[
"Zheng",
"Zhong",
""
],
[
"Huang",
"Xueqing",
""
],
[
"Fei",
"Zesong",
""
]
] |
new_dataset
| 0.973487 |
2307.06066
|
Irum Rauf Dr.
|
Irum Rauf and Tamara Lopez and Thein Tun and Marian Petre and Bashar
Nuseibeh
|
Security in Online Freelance Software Development: A case for
Distributed Security Responsibility
| null | null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Secure software is a cornerstone of safe and resilient digital ecosystems. It
offers a strong foundation to protect users' sensitive data and guard against
cyber-threats. The rapidly expanding landscape of the digital economy has
encouraged developers from different socio-technical and socio-economic
backgrounds to join online freelance marketplaces. While secure software
practices facilitate software developers in developing secure software, there
is a paucity of research on how freelance developers adhere to security practices
and how they can be facilitated to improve their security behavior in
under-resourced environments. Moreover, freelance developers are often held
responsible for producing insecure code. In this position paper, we review
existing literature and argue for the case of distributed security
responsibilities in online freelance environment. We propose a research agenda
aimed at offering an organized and systematic effort by researchers to address
security needs and challenges of online freelance marketplaces. These include:
characterising software security and defining separation of responsibilities,
building trust in online freelance development communities, leveraging the
potential of online freelancing platforms in the promotion of secure software
development and building adaptive security interventions for online freelance
software development. The research has the potential to bring forth existing
security solutions to wider developer community and deliver substantial
benefits to the broader security ecosystem.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 10:35:27 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Rauf",
"Irum",
""
],
[
"Lopez",
"Tamara",
""
],
[
"Tun",
"Thein",
""
],
[
"Petre",
"Marian",
""
],
[
"Nuseibeh",
"Bashar",
""
]
] |
new_dataset
| 0.958667 |
2307.06079
|
Jessica Bariffi
|
Jessica Bariffi, Violetta Weger
|
Better bounds on the minimal Lee distance
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides new and improved Singleton-like bounds for Lee metric
codes over integer residue rings. We derive the bounds using various novel
definitions of generalized Lee weights based on different notions of a support
of a linear code. In this regard, we introduce three main different support
types for codes in the Lee metric and analyze their utility to derive bounds on
the minimum Lee distance. Finally, we propose a new point of view on
generalized weights and give an improved bound on the minimum distance of codes
in the Lee metric for which we discuss the density of maximum Lee distance
codes with respect to this novel Singleton-like bound.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 11:01:08 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Bariffi",
"Jessica",
""
],
[
"Weger",
"Violetta",
""
]
] |
new_dataset
| 0.984752 |
2307.06084
|
Matteo Cartiglia
|
Arianna Rubino, Matteo Cartiglia, Melika Payvand and Giacomo Indiveri
|
Neuromorphic analog circuits for robust on-chip always-on learning in
spiking neural networks
| null |
2023 IEEE 5th International Conference on Artificial Intelligence
Circuits and Systems (AICAS)
|
10.1109/AICAS57966.2023.10168620
| null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Mixed-signal neuromorphic systems represent a promising solution for solving
extreme-edge computing tasks without relying on external computing resources.
Their spiking neural network circuits are optimized for processing sensory data
on-line in continuous-time. However, their low precision and high variability
can severely limit their performance. To address this issue and improve their
robustness to inhomogeneities and noise in both their internal state variables
and external input signals, we designed on-chip learning circuits with
short-term analog dynamics and long-term tristate discretization mechanisms. An
additional hysteretic stop-learning mechanism is included to improve stability
and automatically disable weight updates when necessary, to enable continuous
always-on learning. We designed a spiking neural network with these learning
circuits in a prototype chip using a 180 nm CMOS technology. Simulation and
silicon measurement results from the prototype chip are presented. These
circuits enable the construction of large-scale spiking neural networks with
online learning capabilities for real-world edge computing tasks.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 11:14:25 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Rubino",
"Arianna",
""
],
[
"Cartiglia",
"Matteo",
""
],
[
"Payvand",
"Melika",
""
],
[
"Indiveri",
"Giacomo",
""
]
] |
new_dataset
| 0.974075 |
2307.06165
|
Manuel Hetzel
|
Manuel Hetzel, Hannes Reichert, G\"unther Reitberger, Erich Fuchs,
Konrad Doll, Bernhard Sick
|
The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and
Context Dataset
|
IEEE Intelligent Vehicles Conference (IV) 2023
| null | null | null |
cs.CV cs.DB
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Inner-city intersections are among the most critical traffic areas for injury
and fatal accidents. Automated vehicles struggle with the complex and hectic
everyday life within those areas. Sensor-equipped smart infrastructures, which
can cooperate with vehicles, can benefit automated traffic by extending the
perception capabilities of drivers and vehicle perception systems.
Additionally, they offer the opportunity to gather reproducible and precise
data of a holistic scene understanding, including context information as a
basis for training algorithms for various applications in automated traffic.
Therefore, we introduce the Infrastructural Multi-Person Trajectory and Context
Dataset (IMPTC). We use an intelligent public inner-city intersection in
Germany with visual sensor technology. A multi-view camera and LiDAR system
perceives traffic situations and road users' behavior. Additional sensors
monitor contextual information like weather, lighting, and traffic light signal
status. The data acquisition system focuses on Vulnerable Road Users (VRUs) and
multi-agent interaction. The resulting dataset consists of eight hours of
measurement data. It contains over 2,500 VRU trajectories, including
pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users, and
over 20,000 vehicle trajectories at different day times, weather conditions,
and seasons. In addition, to enable the entire stack of research capabilities,
the dataset includes all data, from the raw sensor, calibration, and
detection data to the trajectory and context data. The dataset is continuously
expanded and is available online for non-commercial research at
https://github.com/kav-institute/imptc-dataset.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 13:46:20 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Hetzel",
"Manuel",
""
],
[
"Reichert",
"Hannes",
""
],
[
"Reitberger",
"Günther",
""
],
[
"Fuchs",
"Erich",
""
],
[
"Doll",
"Konrad",
""
],
[
"Sick",
"Bernhard",
""
]
] |
new_dataset
| 0.9998 |
2307.06177
|
Manuel Hetzel
|
Manuel Hetzel, Hannes Reichert, Konrad Doll, Bernhard Sick
|
Smart Infrastructure: A Research Junction
|
IEEE International Smart Cities Conference (ISC2) 2021
| null |
10.1109/ISC253183.2021.9562809
| null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Complex inner-city junctions are among the most critical traffic areas for
injury and fatal accidents. The development of highly automated driving (HAD)
systems struggles with the complex and hectic everyday life within those areas.
Sensor-equipped smart infrastructures, which can communicate and cooperate with
vehicles, are essential to enable a holistic scene understanding and to resolve
occlusions that drivers and vehicle perception systems cannot cover on their own.
We introduce an intelligent research infrastructure equipped with visual sensor
technology, located at a public inner-city junction in Aschaffenburg, Germany.
A multiple-view camera system monitors the traffic situation to perceive road
users' behavior. Both motorized and non-motorized traffic is considered. The
system is used for research in data generation, evaluating new HAD sensor
systems, algorithms, and Artificial Intelligence (AI) training strategies using
real-, synthetic- and augmented data. In addition, the junction features a
highly accurate digital twin. Real-world data can be taken into the digital
twin for simulation purposes and synthetic data generation.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 14:04:12 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Hetzel",
"Manuel",
""
],
[
"Reichert",
"Hannes",
""
],
[
"Doll",
"Konrad",
""
],
[
"Sick",
"Bernhard",
""
]
] |
new_dataset
| 0.999418 |
2307.06206
|
Robin Louiset
|
Robin Louiset, Edouard Duchesnay, Antoine Grigis, Benoit Dufumier,
Pietro Gori
|
SepVAE: a contrastive VAE to separate pathological patterns from healthy
ones
|
Workshop on Interpretable ML in Healthcare at International
Conference on Machine Learning (ICML), Honolulu, Hawaii, USA. 2023
| null | null | null |
cs.CV stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Contrastive Analysis VAEs (CA-VAEs) are a family of Variational auto-encoders
(VAEs) that aim at separating the common factors of variation between a
background dataset (BG) (i.e., healthy subjects) and a target dataset (TG)
(i.e., patients) from the ones that only exist in the target dataset. To do so,
these methods separate the latent space into a set of salient features (i.e.,
proper to the target dataset) and a set of common features (i.e., exist in both
datasets). Currently, all models fail to prevent the sharing of information
between latent spaces effectively and to capture all salient factors of
variation. To this end, we introduce two crucial regularization losses: a
disentangling term between common and salient representations and a
classification term between background and target samples in the salient space.
We show better performance than previous CA-VAE methods on three medical
applications and a natural images dataset (CelebA). Code and datasets are
available on GitHub https://github.com/neurospin-projects/2023_rlouiset_sepvae.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 14:52:21 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Louiset",
"Robin",
""
],
[
"Duchesnay",
"Edouard",
""
],
[
"Grigis",
"Antoine",
""
],
[
"Dufumier",
"Benoit",
""
],
[
"Gori",
"Pietro",
""
]
] |
new_dataset
| 0.99974 |
2307.06218
|
Zaid Alyafeai Mr
|
Zaid Alyafeai and Maged S. Al-Shaibani and Moataz Ahmed
|
Ashaar: Automatic Analysis and Generation of Arabic Poetry Using Deep
Learning Approaches
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Poetry holds immense significance within the cultural and traditional fabric
of any nation. It serves as a vehicle for poets to articulate their emotions,
preserve customs, and convey the essence of their culture. Arabic poetry is no
exception, having played a cherished role in the heritage of the Arabic
community throughout history and maintaining its relevance in the present era.
Typically, comprehending Arabic poetry necessitates the expertise of a linguist
who can analyze its content and assess its quality. This paper introduces a
framework called \textit{Ashaar} (https://github.com/ARBML/Ashaar), which
encompasses a collection of datasets and
pre-trained models designed specifically for the analysis and generation of
Arabic poetry. The pipeline established within our proposed approach
encompasses various aspects of poetry, such as meter, theme, and era
classification. It also incorporates automatic poetry diacritization, enabling
more intricate analyses like automated extraction of the \textit{Arudi} style.
Additionally, we explore the feasibility of generating conditional poetry
through the pre-training of a character-based GPT model. Furthermore, as part
of this endeavor, we provide four datasets: one for poetry generation, another
for diacritization, and two for Arudi-style prediction. These datasets aim to
facilitate research and development in the field of Arabic poetry by enabling
researchers and enthusiasts to delve into the nuances of this rich literary
tradition.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 15:07:16 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Alyafeai",
"Zaid",
""
],
[
"Al-Shaibani",
"Maged S.",
""
],
[
"Ahmed",
"Moataz",
""
]
] |
new_dataset
| 0.999487 |
2307.06240
|
Fabricio Barth
|
Manuel Castanares and Luis F. S. Carrete and Enrico F. Damiani and
Leonardo D. M. de Abreu and Jos\'e Fernando B. Brancalion and Fabr\'icio J.
Barth
|
DSSE: a drone swarm search environment
|
6 pages
| null | null | null |
cs.LG cs.AI cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The Drone Swarm Search project is an environment, based on PettingZoo, that
is to be used in conjunction with multi-agent (or single-agent) reinforcement
learning algorithms. It is an environment in which the agents (drones) have to
find the targets (shipwrecked people). The agents do not know the position of
the target and do not receive rewards related to their own distance to the
target(s). However, the agents receive the probabilities of the target(s) being
in a certain cell of the map. The aim of this project is to aid in the study of
reinforcement learning algorithms that require dynamic probabilities as inputs.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 15:28:26 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Castanares",
"Manuel",
""
],
[
"Carrete",
"Luis F. S.",
""
],
[
"Damiani",
"Enrico F.",
""
],
[
"de Abreu",
"Leonardo D. M.",
""
],
[
"Brancalion",
"José Fernando B.",
""
],
[
"Barth",
"Fabrício J.",
""
]
] |
new_dataset
| 0.997926 |
2307.06260
|
Sang Dinh
|
Pham Vu Hung, Nguyen Duy Manh, Nguyen Thi Oanh, Nguyen Thi Thuy, Dinh
Viet Sang
|
UGCANet: A Unified Global Context-Aware Transformer-based Network with
Feature Alignment for Endoscopic Image Analysis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Gastrointestinal endoscopy is a medical procedure that utilizes a flexible
tube equipped with a camera and other instruments to examine the digestive
tract. This minimally invasive technique allows for diagnosing and managing
various gastrointestinal conditions, including inflammatory bowel disease,
gastrointestinal bleeding, and colon cancer. The early detection and
identification of lesions in the upper gastrointestinal tract and the
identification of malignant polyps that may pose a risk of cancer development
are critical components of gastrointestinal endoscopy's diagnostic and
therapeutic applications. Therefore, enhancing the detection rates of
gastrointestinal disorders can significantly improve a patient's prognosis by
increasing the likelihood of timely medical intervention, which may prolong the
patient's lifespan and improve overall health outcomes. This paper presents a
novel Transformer-based deep neural network designed to perform multiple tasks
simultaneously, thereby enabling accurate identification of both upper
gastrointestinal tract lesions and colon polyps. Our approach proposes a unique
global context-aware module and leverages the powerful MiT backbone, along with
a feature alignment block, to enhance the network's representation capability.
This novel design leads to a significant improvement in performance across
various endoscopic diagnosis tasks. Extensive experiments demonstrate the
superior performance of our method compared to other state-of-the-art
approaches.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 16:01:56 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Hung",
"Pham Vu",
""
],
[
"Manh",
"Nguyen Duy",
""
],
[
"Oanh",
"Nguyen Thi",
""
],
[
"Thuy",
"Nguyen Thi",
""
],
[
"Sang",
"Dinh Viet",
""
]
] |
new_dataset
| 0.999669 |
2307.06304
|
Mostafa Dehghani
|
Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek,
Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert
Geirhos, Ibrahim Alabdulmohsin, Avital Oliver, Piotr Padlewski, Alexey
Gritsenko, Mario Lu\v{c}i\'c, Neil Houlsby
|
Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and
Resolution
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ubiquitous and demonstrably suboptimal choice of resizing images to a
fixed resolution before processing them with computer vision models has not yet
been successfully challenged. However, models such as the Vision Transformer
(ViT) offer flexible sequence-based modeling, and hence varying input sequence
lengths. We take advantage of this with NaViT (Native Resolution ViT) which
uses sequence packing during training to process inputs of arbitrary
resolutions and aspect ratios. Alongside flexible model usage, we demonstrate
improved training efficiency for large-scale supervised and contrastive
image-text pretraining. NaViT can be efficiently transferred to standard tasks
such as image and video classification, object detection, and semantic
segmentation and leads to improved results on robustness and fairness
benchmarks. At inference time, the input resolution flexibility can be used to
smoothly navigate the test-time cost-performance trade-off. We believe that
NaViT marks a departure from the standard, CNN-designed, input and modelling
pipeline used by most computer vision models, and represents a promising
direction for ViTs.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 17:01:03 GMT"
}
] | 2023-07-13T00:00:00 |
[
[
"Dehghani",
"Mostafa",
""
],
[
"Mustafa",
"Basil",
""
],
[
"Djolonga",
"Josip",
""
],
[
"Heek",
"Jonathan",
""
],
[
"Minderer",
"Matthias",
""
],
[
"Caron",
"Mathilde",
""
],
[
"Steiner",
"Andreas",
""
],
[
"Puigcerver",
"Joan",
""
],
[
"Geirhos",
"Robert",
""
],
[
"Alabdulmohsin",
"Ibrahim",
""
],
[
"Oliver",
"Avital",
""
],
[
"Padlewski",
"Piotr",
""
],
[
"Gritsenko",
"Alexey",
""
],
[
"Lučić",
"Mario",
""
],
[
"Houlsby",
"Neil",
""
]
] |
new_dataset
| 0.992178 |
2206.03318
|
Siddharth Dalmia
|
Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji
Watanabe, Florian Metze, Luke Zettlemoyer, and Abdelrahman Mohamed
|
LegoNN: Building Modular Encoder-Decoder Models
|
IEEE/ACM Transactions on Audio, Speech, and Language Processing
(TASLP)
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art encoder-decoder models (e.g. for machine translation (MT) or
automatic speech recognition (ASR)) are constructed and trained end-to-end as
an atomic unit. No component of the model can be (re-)used without the others,
making it impossible to share parts, e.g. a high resourced decoder, across
tasks. We describe LegoNN, a procedure for building encoder-decoder
architectures in a way so that its parts can be applied to other tasks without
the need for any fine-tuning. To achieve this reusability, the interface
between encoder and decoder modules is grounded to a sequence of marginal
distributions over a pre-defined discrete vocabulary. We present two approaches
for ingesting these marginals; one is differentiable, allowing the flow of
gradients across the entire network, and the other is gradient-isolating. To
enable the portability of decoder modules between MT tasks for different source
languages and across other tasks like ASR, we introduce a modality agnostic
encoder which consists of a length control mechanism to dynamically adapt
encoders' output lengths in order to match the expected input length range of
pre-trained decoders. We present several experiments to demonstrate the
effectiveness of LegoNN models: a trained language generation LegoNN decoder
module from German-English (De-En) MT task can be reused without any
fine-tuning for the Europarl English ASR and the Romanian-English (Ro-En) MT
tasks, matching or beating the performance of baseline. After fine-tuning,
LegoNN models improve the Ro-En MT task by 1.5 BLEU points and achieve 12.5%
relative WER reduction on the Europarl ASR task. To show how the approach
generalizes, we compose a LegoNN ASR model from three modules -- each has been
learned within different end-to-end trained models on three different datasets
-- achieving an overall WER reduction of 19.5%.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 14:08:07 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 17:43:57 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Dalmia",
"Siddharth",
""
],
[
"Okhonko",
"Dmytro",
""
],
[
"Lewis",
"Mike",
""
],
[
"Edunov",
"Sergey",
""
],
[
"Watanabe",
"Shinji",
""
],
[
"Metze",
"Florian",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Mohamed",
"Abdelrahman",
""
]
] |
new_dataset
| 0.996204 |
2206.14071
|
Yan Kai Lai
|
Yan Kai Lai, Prahlad Vadakkepat, Abdullah Al Mamun, Cheng Xiang, Tong
Heng Lee
|
R2: Heuristic Bug-Based Any-angle Path-Planning using Lazy Searches
|
Rejected, and replaced with new prototype with same name
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
R2 is a novel online any-angle path planner that uses heuristic bug-based or
ray casting approaches to find optimal paths in 2D maps with non-convex,
polygonal obstacles. R2 is competitive to traditional free-space planners,
finding paths quickly if queries have direct line-of-sight. On large sparse
maps with few obstacle contours, which are likely to occur in practice, R2
outperforms free-space planners, and can be much faster than state-of-the-art
free-space expansion planner Anya. On maps with many contours, Anya performs
faster than R2. R2 is built on RayScan, introducing lazy-searches and a
source-pledge counter to find successors optimistically on contiguous contours.
The novel approach bypasses most successors on jagged contours to reduce
expensive line-of-sight checks, therefore requiring no pre-processing to be a
competitive online any-angle planner.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 15:14:42 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 17:43:14 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Lai",
"Yan Kai",
""
],
[
"Vadakkepat",
"Prahlad",
""
],
[
"Mamun",
"Abdullah Al",
""
],
[
"Xiang",
"Cheng",
""
],
[
"Lee",
"Tong Heng",
""
]
] |
new_dataset
| 0.998959 |
2208.03781
|
Yohei Watanabe
|
Yohei Watanabe, Naoto Yanai, Junji Shikata
|
IoT-REX: A Secure Remote-Control System for IoT Devices from Centralized
Multi-Designated Verifier Signatures
|
Updated as a whole. 25 pages
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
IoT technology has been developing rapidly, while at the same time, notorious
IoT malware such as Mirai is a severe and inherent threat. We believe it is
essential to consider systems that enable us to remotely control infected
devices in order to prevent or limit malicious behaviors of infected devices.
In this paper, we design a promising candidate for such remote-control systems,
called IoT-REX (REmote-Control System for IoT devices). IoT-REX allows a
systems manager to designate an arbitrary subset of all IoT devices in the
system, and every device can confirm whether or not the device itself was
designated; if so, the device executes a command given by the systems manager.
Towards realizing IoT-REX, we introduce a novel cryptographic primitive called
centralized multi-designated verifier signatures (CMDVS). Although CMDVS works
under a restricted condition compared to conventional MDVS, it is sufficient
for realizing IoT-REX. We provide an efficient CMDVS construction from any
approximate membership query structures and digital signatures, yielding
compact communication sizes and efficient verification procedures for IoT-REX.
We then discuss the feasibility of IoT-REX through the cryptographic
implementation of the CMDVS construction on a Raspberry Pi. Our promising
results demonstrate that the CMDVS construction can compress communication size
to about 30% compared to a trivial construction, and thus its resulting IoT-REX
becomes three times faster than a trivial construction over typical low-power
wide area networks with an IoT device.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 18:01:48 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 11:13:14 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jul 2023 14:41:12 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Watanabe",
"Yohei",
""
],
[
"Yanai",
"Naoto",
""
],
[
"Shikata",
"Junji",
""
]
] |
new_dataset
| 0.998596 |
2211.11030
|
Christopher Lu
|
Chris Lu, Timon Willi, Alistair Letcher, Jakob Foerster
|
Adversarial Cheap Talk
|
To be published at ICML 2023. Project video and code are available at
https://sites.google.com/view/adversarial-cheap-talk
| null | null | null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial attacks in reinforcement learning (RL) often assume
highly-privileged access to the victim's parameters, environment, or data.
Instead, this paper proposes a novel adversarial setting called a Cheap Talk
MDP in which an Adversary can merely append deterministic messages to the
Victim's observation, resulting in a minimal range of influence. The Adversary
cannot occlude ground truth, influence underlying environment dynamics or
reward signals, introduce non-stationarity, add stochasticity, see the Victim's
actions, or access their parameters. Additionally, we present a simple
meta-learning algorithm called Adversarial Cheap Talk (ACT) to train
Adversaries in this setting. We demonstrate that an Adversary trained with ACT
still significantly influences the Victim's training and testing performance,
despite the highly constrained setting. Affecting train-time performance
reveals a new attack vector and provides insight into the success and failure
modes of existing RL algorithms. More specifically, we show that an ACT
Adversary is capable of harming performance by interfering with the learner's
function approximation, or instead helping the Victim's performance by
outputting useful features. Finally, we show that an ACT Adversary can
manipulate messages during train-time to directly and arbitrarily control the
Victim at test-time. Project video and code are available at
https://sites.google.com/view/adversarial-cheap-talk
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 17:17:56 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2023 16:37:16 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Jun 2023 16:00:04 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Jul 2023 17:31:34 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Lu",
"Chris",
""
],
[
"Willi",
"Timon",
""
],
[
"Letcher",
"Alistair",
""
],
[
"Foerster",
"Jakob",
""
]
] |
new_dataset
| 0.988642 |
2211.16211
|
Yuting Xiao
|
Yuting Xiao, Yiqun Zhao, Yanyu Xu, Shenghua Gao
|
ResNeRF: Geometry-Guided Residual Neural Radiance Field for Indoor Scene
Novel View Synthesis
|
This is an incomplete paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ResNeRF, a novel geometry-guided two-stage framework for
indoor scene novel view synthesis. Aware that good geometry would
greatly boost the performance of novel view synthesis, and to avoid the
geometry ambiguity issue, we propose to characterize the density distribution
of the scene based on a base density estimated from scene geometry and a
residual density parameterized by the geometry. In the first stage, we focus on
geometry reconstruction based on SDF representation, which would lead to a good
geometry surface of the scene and also a sharp density. In the second stage,
the residual density is learned based on the SDF learned in the first stage for
encoding more details about the appearance. In this way, our method can better
learn the density distribution with the geometry prior for high-fidelity novel
view synthesis while preserving the 3D structures. Experiments on large-scale
indoor scenes with many less-observed and textureless areas show that with the
good 3D surface, our method achieves state-of-the-art performance for novel
view synthesis.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 08:48:44 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2022 09:06:08 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jul 2023 08:49:38 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Xiao",
"Yuting",
""
],
[
"Zhao",
"Yiqun",
""
],
[
"Xu",
"Yanyu",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.955008 |
2302.14725
|
Matthias Pfretzschner
|
Jakob Baumann, Matthias Pfretzschner, Ignaz Rutter
|
Parameterized Complexity of Vertex Splitting to Pathwidth at most 1
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Motivated by the planarization of 2-layered straight-line drawings, we
consider the problem of modifying a graph such that the resulting graph has
pathwidth at most 1. The problem Pathwidth-One Vertex Explosion (POVE) asks
whether such a graph can be obtained using at most $k$ vertex explosions, where
a vertex explosion replaces a vertex $v$ by deg$(v)$ degree-1 vertices, each
incident to exactly one edge that was originally incident to $v$. For POVE, we
give an FPT algorithm with running time $O(4^k \cdot m)$ and an $O(k^2)$
kernel, thereby improving over the $O(k^6)$-kernel by Ahmed et al. [GD 22] in a
more general setting. Similarly, a vertex split replaces a vertex $v$ by two
distinct vertices $v_1$ and $v_2$ and distributes the edges originally incident
to $v$ arbitrarily to $v_1$ and $v_2$. Analogously to POVE, we define the
problem variant Pathwidth-One Vertex Splitting (POVS) that uses the split
operation instead of vertex explosions. Here we obtain a linear kernel and an
algorithm with running time $O((6k+12)^k \cdot m)$. This answers an open
question by Ahmed et al. [GD22].
Finally, we consider the problem $\Pi$ Vertex Splitting ($\Pi$-VS), which
generalizes the problem POVS and asks whether a given graph can be turned into
a graph of a specific graph class $\Pi$ using at most $k$ vertex splits. For
graph classes $\Pi$ that can be tested in monadic second-order graph logic
(MSO$_2$), we show that the problem $\Pi$-VS can be expressed as an MSO$_2$
formula, resulting in an FPT algorithm for $\Pi$-VS parameterized by $k$ if
$\Pi$ additionally has bounded treewidth. We obtain the same result for the
problem variant using vertex explosions.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 16:33:18 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 08:47:32 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Baumann",
"Jakob",
""
],
[
"Pfretzschner",
"Matthias",
""
],
[
"Rutter",
"Ignaz",
""
]
] |
new_dataset
| 0.995263 |
2304.00389
|
EPTCS
|
Thomas Schl\"ogl (TU Wien, Vienna, Austria), Ulrich Schmid (TU Wien,
Vienna, Austria)
|
A Sufficient Condition for Gaining Belief in Byzantine Fault-Tolerant
Distributed Systems
|
In Proceedings TARK 2023, arXiv:2307.04005
|
EPTCS 379, 2023, pp. 487-506
|
10.4204/EPTCS.379.37
| null |
cs.DC cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Existing protocols for byzantine fault tolerant distributed systems usually
rely on the correct agents' ability to detect faulty agents and/or to detect
the occurrence of some event or action on some correct agent. In this paper, we
provide sufficient conditions that allow an agent to infer the appropriate
beliefs from its history, and a procedure that allows these conditions to be
checked in finite time. Our results thus provide essential stepping stones for
developing efficient protocols and proving them correct.
|
[
{
"version": "v1",
"created": "Sat, 1 Apr 2023 21:14:02 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 07:14:30 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Schlögl",
"Thomas",
"",
"TU Wien, Vienna, Austria"
],
[
"Schmid",
"Ulrich",
"",
"TU Wien,\n Vienna, Austria"
]
] |
new_dataset
| 0.997951 |
2304.04794
|
Thomas Leonard
|
Thomas Leonard, Samuel Liu, Harrison Jin, and Jean Anne C. Incorvia
|
Stochastic Domain Wall-Magnetic Tunnel Junction Artificial Neurons for
Noise-Resilient Spiking Neural Networks
|
10 pages, 4 figures
| null |
10.1063/5.0152211
| null |
cs.NE cond-mat.mes-hall
|
http://creativecommons.org/licenses/by/4.0/
|
The spatiotemporal nature of neuronal behavior in spiking neural networks
(SNNs) makes SNNs promising for edge applications that require high energy
efficiency. To realize SNNs in hardware, spintronic neuron implementations can
bring advantages of scalability and energy efficiency. Domain wall (DW) based
magnetic tunnel junction (MTJ) devices are well suited for probabilistic neural
networks given their intrinsic integrate-and-fire behavior with tunable
stochasticity. Here, we present a scaled DW-MTJ neuron with voltage-dependent
firing probability. The measured behavior was used to simulate an SNN that
attains learning accuracy comparable to that of an equivalent, but more
complicated, multi-weight (MW) DW-MTJ device. The validation accuracy during
training was also shown to be comparable to that of an ideal leaky integrate-and-fire
(LIF) device. However, during inference, the binary DW-MTJ neuron outperformed
the other devices after Gaussian noise was introduced to the Fashion-MNIST
classification task. This work shows that DW-MTJ devices can be used to
construct noise-resilient networks suitable for neuromorphic computing on the
edge.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 18:00:26 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Leonard",
"Thomas",
""
],
[
"Liu",
"Samuel",
""
],
[
"Jin",
"Harrison",
""
],
[
"Incorvia",
"Jean Anne C.",
""
]
] |
new_dataset
| 0.963222 |
2305.09300
|
Boming Xia
|
Sung Une Lee, Harsha Perera, Boming Xia, Yue Liu, Qinghua Lu, Liming
Zhu, Olivier Salvado, Jon Whittle
|
QB4AIRA: A Question Bank for AI Risk Assessment
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid advancement of Artificial Intelligence (AI), represented by
ChatGPT, has raised concerns about responsible AI development and utilization.
Existing frameworks lack a comprehensive synthesis of AI risk assessment
questions. To address this, we introduce QB4AIRA, a novel question bank
developed by refining questions from five globally recognized AI risk
frameworks, categorized according to Australia's AI ethics principles. QB4AIRA
comprises 293 prioritized questions covering a wide range of AI risk areas,
facilitating effective risk assessment. It serves as a valuable resource for
stakeholders in assessing and managing AI risks, while paving the way for new
risk frameworks and guidelines. By promoting responsible AI practices, QB4AIRA
contributes to responsible AI deployment, mitigating potential risks and harms.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 09:18:44 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 01:57:28 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Lee",
"Sung Une",
""
],
[
"Perera",
"Harsha",
""
],
[
"Xia",
"Boming",
""
],
[
"Liu",
"Yue",
""
],
[
"Lu",
"Qinghua",
""
],
[
"Zhu",
"Liming",
""
],
[
"Salvado",
"Olivier",
""
],
[
"Whittle",
"Jon",
""
]
] |
new_dataset
| 0.999608 |
2306.07532
|
Xuying Zhang
|
Xuying Zhang, Bowen Yin, Zheng Lin, Qibin Hou, Deng-Ping Fan,
Ming-Ming Cheng
|
Referring Camouflaged Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of referring camouflaged object detection (Ref-COD),
a new task that aims to segment specified camouflaged objects based on a small
set of referring images with salient target objects. We first assemble a
large-scale dataset, called R2C7K, which consists of 7K images covering 64
object categories in real-world scenarios. Then, we develop a simple but strong
dual-branch framework, dubbed R2CNet, with a reference branch embedding the
common representations of target objects from referring images and a
segmentation branch identifying and segmenting camouflaged objects under the
guidance of the common representations. In particular, we design a Referring
Mask Generation module to generate a pixel-level prior mask and a Referring
Feature Enrichment module to enhance the capability of identifying specified
camouflaged objects. Extensive experiments show the superiority of our Ref-COD
methods over their COD counterparts in segmenting specified camouflaged objects
and identifying the main body of target objects. Our code and dataset are
publicly available at https://github.com/zhangxuying1004/RefCOD.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 04:15:37 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 05:15:34 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Zhang",
"Xuying",
""
],
[
"Yin",
"Bowen",
""
],
[
"Lin",
"Zheng",
""
],
[
"Hou",
"Qibin",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] |
new_dataset
| 0.999818 |
2306.15975
|
Shipeng Qi
|
Shipeng Qi, Heng Lin, Zhihui Guo, G\'abor Sz\'arnyas, Bing Tong, Yan
Zhou, Bin Yang, Jiansong Zhang, Zheng Wang, Youren Shen, Changyuan Wang,
Parviz Peiravi, Henry Gabb, Ben Steer
|
The LDBC Financial Benchmark
|
For the source code of this specification, see the ldbc_finbench_docs
repository on Github. arXiv admin note: substantial text overlap with
arXiv:2001.02299
| null | null | null |
cs.DB cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
The Linked Data Benchmark Council's Financial Benchmark (LDBC FinBench) is a
new effort that defines a graph database benchmark targeting financial
scenarios such as anti-fraud and risk control. The benchmark has one workload,
the Transaction Workload, currently. It captures OLTP scenario with complex,
simple read queries and write queries that continuously insert or delete data
in the graph. Compared to the LDBC SNB, the LDBC FinBench differs in
application scenarios, data patterns, and query patterns. This document
contains a detailed explanation of the data used in the LDBC FinBench, the
definition of the transaction workload, a detailed description of all queries, and
instructions on how to use the benchmark suite.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 07:24:46 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 10:54:35 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Qi",
"Shipeng",
""
],
[
"Lin",
"Heng",
""
],
[
"Guo",
"Zhihui",
""
],
[
"Szárnyas",
"Gábor",
""
],
[
"Tong",
"Bing",
""
],
[
"Zhou",
"Yan",
""
],
[
"Yang",
"Bin",
""
],
[
"Zhang",
"Jiansong",
""
],
[
"Wang",
"Zheng",
""
],
[
"Shen",
"Youren",
""
],
[
"Wang",
"Changyuan",
""
],
[
"Peiravi",
"Parviz",
""
],
[
"Gabb",
"Henry",
""
],
[
"Steer",
"Ben",
""
]
] |
new_dataset
| 0.999125 |
2307.03790
|
Karthika Venkatesan
|
Karthika Venkatesan, Sujit Kumar Chakrabarti
|
ConStaBL -- A Fresh Look at Software Engineering with State Machines
|
24 pages
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Statechart is a visual modelling language for systems. In this paper, we
extend our earlier work on modular statecharts with local variables and present
an updated operational semantics for statecharts with concurrency. Our variant
of the statechart has local variables, which interact significantly with the
remainder of the language semantics. Our semantics does not allow transition
conflicts in simulations and is stricter than most other available semantics of
statecharts in that sense. It allows arbitrary interleaving of concurrently
executing action code, which allows more precise modelling of systems and
upstream analysis of the same. We present the operational semantics in the form
of the simulation algorithm. We also establish the criteria based on our
semantics for defining conflicting transitions and valid simulations. Our
semantics is executable and can be used to simulate statechart models and
verify their correctness. We present a preliminary setup to carry out fuzz
testing of Statechart models, an idea that does not seem to have a precedent in
literature. We have used our simulator in conjunction with a well-known fuzzer
to do fuzz testing of statechart models of non-trivial sizes and have found
issues in them that would have been hard to find through inspection.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 18:29:35 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 06:21:44 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Venkatesan",
"Karthika",
""
],
[
"Chakrabarti",
"Sujit Kumar",
""
]
] |
new_dataset
| 0.996689 |
2307.04344
|
Kaiyuan Yang
|
Yan He, Dai Li, Zhanghao Yu, Kaiyuan Yang
|
ASCH-PUF: A "Zero" Bit Error Rate CMOS Physically Unclonable Function
with Dual-Mode Low-Cost Stabilization
|
This paper has been accepted to IEEE Journal of Solid-State Circuits
(JSSC)
| null |
10.1109/JSSC.2022.3233373
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physically unclonable functions (PUFs) are increasingly adopted for low-cost
and secure secret key and chip ID generations for embedded and IoT devices.
Achieving 100% reproducible keys across wide temperature and voltage variations
over the lifetime of a device is critical and conventionally requires large
masking or Error Correction Code (ECC) overhead to guarantee. This paper
presents an Automatic Self Checking and Healing (ASCH) stabilization technique
for a state-of-the-art PUF cell design based on sub-threshold inverter chains.
The ASCH system successfully removes all unstable PUF cells without the need
for expensive temperature sweeps during unstable bit detection. By accurately
finding all unstable bits without such sweeps, ASCH achieves an ultra-low bit
error rate (BER), thus significantly
reducing the costs of using ECC and enrollment. Our ASCH can operate in two
modes, a static mode (S-ASCH) with a conventional pre-enrolled unstable bit
mask and a dynamic mode (D-ASCH) that further eliminates the need for
non-volatile memories (NVMs) for storing masks. The proposed ASCH-PUF is
fabricated and evaluated in 65nm CMOS. The ASCH system achieves "0" Bit Error
Rate (BER, < 1.77E-9) across temperature variations of -20{\deg}C to
125{\deg}C, and voltage variations of 0.7V to 1.4V, by masking 31% and 35% of
all fabricated PUF bits in S-ASCH and D-ASCH modes, respectively. The prototype
achieves a measured throughput of 11.4 Gbps with 0.057 fJ/b core energy
efficiency at 1.2V, 25{\deg}C.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 05:01:30 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jul 2023 16:11:44 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"He",
"Yan",
""
],
[
"Li",
"Dai",
""
],
[
"Yu",
"Zhanghao",
""
],
[
"Yang",
"Kaiyuan",
""
]
] |
new_dataset
| 0.996292 |
2307.04907
|
Supun Bhathiya Hemanthage
|
Bhathiya Hemanthage, Christian Dondrup, Phil Bartie, Oliver Lemon
|
SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented
Dialogue with Symbolic Scene Representation
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
SimpleMTOD is a simple language model which recasts several sub-tasks in
multimodal task-oriented dialogues as sequence prediction tasks. SimpleMTOD is
built on a large-scale transformer-based auto-regressive architecture, which
has already proven to be successful in uni-modal task-oriented dialogues, and
effectively leverages transfer learning from pre-trained GPT-2. In order to
capture the semantics of visual scenes, we introduce both local and
de-localized tokens for objects within a scene. De-localized tokens represent
the type of an object rather than the specific object itself and so possess a
consistent meaning across the dataset. SimpleMTOD achieves a state-of-the-art
BLEU score (0.327) in the Response Generation sub-task of the SIMMC 2.0
test-std dataset while performing on par in other multimodal sub-tasks:
Disambiguation, Coreference Resolution, and Dialog State Tracking. This is
despite taking a minimalist approach to extracting visual (and non-visual)
information. In addition, the model does not rely on task-specific architectural
changes such as classification heads.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 21:16:46 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Hemanthage",
"Bhathiya",
""
],
[
"Dondrup",
"Christian",
""
],
[
"Bartie",
"Phil",
""
],
[
"Lemon",
"Oliver",
""
]
] |
new_dataset
| 0.984223 |
2307.04916
|
Marcos V. Conde
|
Gabor Fodor, Marcos V. Conde
|
Rapid Deforestation and Burned Area Detection using Deep Multimodal
Learning on Satellite Imagery
|
CVPR 2023 Workshop on Multimodal Learning for Earth and Environment
(MultiEarth)
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deforestation estimation and fire detection in the Amazon forest pose a
significant challenge due to the vast size of the area and the limited
accessibility. However, these are crucial problems that lead to severe
environmental consequences, including climate change, global warming, and
biodiversity loss. To effectively address this problem, multimodal satellite
imagery and remote sensing offer a promising solution for estimating
deforestation and detecting wildfires in the Amazonia region. This research
paper introduces a new curated dataset and a deep learning-based approach to
solve these problems using convolutional neural networks (CNNs) and
comprehensive data processing techniques. Our dataset includes curated images
and diverse channel bands from Sentinel, Landsat, VIIRS, and MODIS satellites.
We design the dataset considering different spatial and temporal resolution
requirements. Our method successfully achieves high-precision deforestation
estimation and burned area detection on unseen images from the region. Our
code, models and dataset are open source:
https://github.com/h2oai/cvpr-multiearth-deforestation-segmentation
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 21:49:30 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Fodor",
"Gabor",
""
],
[
"Conde",
"Marcos V.",
""
]
] |
new_dataset
| 0.998474 |
2307.04941
|
Zheng Wu
|
Zheng Wu
|
MG3MConv: Multi-Grained Matrix-Multiplication-Mapping Convolution
Algorithm toward the SW26010 Processor
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
As a core operation in artificial intelligence applications, convolution has
become a hot research topic in high performance computing. With the
rapid development of the emerging SW26010 processor in artificial intelligence,
there is an urgent need for high-performance convolution algorithms on the
processor. However, the current support of convolution on SW26010 is still
rudimentary. The only studies provide sufficient runtime peak performance but
lack the adaptability to various convolution scenes. To perfect convolution
algorithms on SW26010, we propose a multi-grained matrix-multiplication-mapping
convolution algorithm called MG3MConv, which targets the architectural features
of SW26010. MG3MConv supports diversified mapping schemes of convolution tasks
based on the concept of the thread block proposed in this paper. All the
architecture-oriented optimization methods are elaborately designed from four
levels to fully exploit the hardware efficiency of SW26010. The experiments
show that the hardware efficiency of MG3MConv can reach up to 84.78%, which is
1.75 times that of cuDNN on an NVIDIA K80m GPU. Moreover,
MG3MConv can outperform cuDNN in most convolution scenes. We also use six
representative CNNs as real-world cases, and the hardware efficiency of
MG3MConv reaches up to 67.04% on the VGG network model, which is 1.37 times and
1.96 times that of cuDNN and swDNN, respectively.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 00:03:28 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Wu",
"Zheng",
""
]
] |
new_dataset
| 0.980739 |
2307.04973
|
Guoyao Deng
|
Guoyao Deng, Ke Zou, Kai Ren, Meng Wang, Xuedong Yuan, Sancong Ying
and Huazhu Fu
|
SAM-U: Multi-box prompts triggered uncertainty estimation for reliable
SAM in medical image
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the Segment Anything Model (SAM) has taken an important step towards
general artificial intelligence. At the same time, its reliability and fairness
have also attracted great attention, especially in the field of health care. In
this study, we propose multi-box prompts triggered uncertainty estimation for
SAM to demonstrate the reliability of segmented lesions or tissues. We estimate
the distribution of SAM predictions via Monte Carlo sampling with prior
distribution parameters, employing different prompts as a form of test-time
augmentation. Our experimental results show that multi-box prompt augmentation
improves SAM performance and endows each pixel with an uncertainty estimate.
This provides the first paradigm for a reliable SAM.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 02:27:45 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Deng",
"Guoyao",
""
],
[
"Zou",
"Ke",
""
],
[
"Ren",
"Kai",
""
],
[
"Wang",
"Meng",
""
],
[
"Yuan",
"Xuedong",
""
],
[
"Ying",
"Sancong",
""
],
[
"Fu",
"Huazhu",
""
]
] |
new_dataset
| 0.989232 |
2307.05038
|
Guanzhou Lan
|
Guanzhou Lan, Bin Zhao, Xuelong Li
|
Disentangled Contrastive Image Translation for Nighttime Surveillance
|
Submitted to TIP
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nighttime surveillance suffers from degradation due to poor illumination and
arduous human annotation. It remains challenging and a security risk at night.
Existing methods rely on multi-spectral images to perceive objects in the dark,
but these images suffer from low resolution and the absence of color. We argue that
the ultimate solution for nighttime surveillance is night-to-day translation,
or Night2Day, which aims to translate a surveillance scene from nighttime to
the daytime while maintaining semantic consistency. To achieve this, this paper
presents a Disentangled Contrastive (DiCo) learning method. Specifically, to
address the poor and complex illumination in the nighttime scenes, we propose a
learnable physical prior, i.e., the color invariant, which provides a stable
perception of a highly dynamic night environment and can be incorporated into
the learning pipeline of neural networks. Targeting surveillance scenes, we
develop a disentangled representation, which is an auxiliary pretext task that
separates surveillance scenes into the foreground and background with
contrastive learning. Such a strategy can extract the semantics without
supervision and boost our model to achieve instance-aware translation. Finally,
we incorporate all the modules above into generative adversarial networks and
achieve high-fidelity translation. This paper also contributes a new
surveillance dataset called NightSuR. It includes six scenes to support the
study on nighttime surveillance. This dataset collects nighttime images with
different properties of nighttime environments, such as flare and extreme
darkness. Extensive experiments demonstrate that our method outperforms
existing works significantly. The dataset and source code will be released on
GitHub soon.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 06:40:27 GMT"
}
] | 2023-07-12T00:00:00 |
[
[
"Lan",
"Guanzhou",
""
],
[
"Zhao",
"Bin",
""
],
[
"Li",
"Xuelong",
""
]
] |
new_dataset
| 0.995602 |