id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2307.01302
|
Marek Szyku{\l}a
|
Igor Rystsov, Marek Szyku{\l}a
|
Primitive Automata that are Synchronizing
|
Note: The weak variant of our conjecture in a stronger form has been
recently solved by Mikhail Volkov arXiv:2306.13317, together with several new
results concerning our problem
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
A deterministic finite (semi)automaton is primitive if its transition monoid
(semigroup) acting on the set of states has no non-trivial congruences. It is
synchronizing if it contains a constant map (transformation). In analogy to
synchronizing groups, we study the possibility of characterizing automata that
are synchronizing if primitive. We prove that the implication holds for several
classes of automata. In particular, we show it for automata in which every
letter induces either a permutation or a semiconstant transformation (an
idempotent with one point of contraction), unless all letters are of the first type. We
propose and discuss two conjectures about possible more general
characterizations.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 19:12:48 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Rystsov",
"Igor",
""
],
[
"Szykuła",
"Marek",
""
]
] |
new_dataset
| 0.998768 |
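The abstract above defines a synchronizing automaton as one whose transition monoid contains a constant map. A minimal sketch of the classical pair-merging test for this property (a standard textbook algorithm, not the paper's contribution; the Černý example automaton is our own choice):

```python
from itertools import combinations

def is_synchronizing(states, letters, delta):
    """delta maps (state, letter) -> state. A complete DFA is synchronizing
    iff every pair of states can be merged by some word (pair-merging test)."""
    for p, q in combinations(states, 2):
        frontier, seen, merged = [(p, q)], {(p, q)}, False
        while frontier and not merged:
            nxt = []
            for a, b in frontier:
                for x in letters:
                    c, d = delta[(a, x)], delta[(b, x)]
                    if c == d:            # this pair collapses to one state
                        merged = True
                        break
                    pair = (min(c, d), max(c, d))
                    if pair not in seen:
                        seen.add(pair)
                        nxt.append(pair)
                if merged:
                    break
            frontier = nxt
        if not merged:
            return False
    return True

# Cerny automaton on 4 states: 'a' is a cyclic shift, 'b' moves only state 0.
states = list(range(4))
delta = {(s, 'a'): (s + 1) % 4 for s in states}
delta.update({(s, 'b'): (1 if s == 0 else s) for s in states})
print(is_synchronizing(states, 'ab', delta))  # True
```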
2307.01327
|
Gun Pinyo
|
Gun Pinyo
|
Twisted Cubes and their Applications in Type Theory
|
PhD thesis (accepted at the University of Nottingham), 162 pages
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This thesis captures the ongoing development of twisted cubes, a modification
of cubes (in the topological sense) whose homotopy type theory does not
require paths or higher paths to be invertible. My original motivation for
developing the twisted cubes was to resolve the incompatibility between cubical
type theory and directed type theory. The development of twisted cubes is still
in the early stages and the intermediate goal, for now, is to define a twisted
cube category and its twisted cubical sets that can be used to construct a
potential definition of (infinity, n)-categories. The intermediate goal above
leads me to discover a novel framework that uses graph theory to transform
convex polytopes, such as simplices and (standard) cubes, into base categories.
Intuitively, an n-dimensional polytope is transformed into a directed graph
whose nodes are the 0-faces (extreme points) of the polytope and whose edges
are its 1-faces. Then, we define the base category as the full
subcategory of the graph category induced by the family of these graphs from
all n-dimensional cases. With this framework, the modification from cubes to
twisted cubes can formally be done by reversing some edges of cube graphs.
Equivalently, the twisted n-cube graph is the result of a certain endofunctor
being applied n times to the singleton graph; this endofunctor (called twisted
prism functor) duplicates the input, reverses all edges in the first copy, and
then links nodes pairwise from the first copy to the second copy. The core
feature of a twisted graph is its unique Hamiltonian path, which is useful to
prove many properties of twisted cubes. In particular, the reflexive transitive
closure of a twisted graph is isomorphic to the simplex graph counterpart,
which remarkably suggests that twisted cubes not only relate to (standard)
cubes but also to simplices.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 20:01:10 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Pinyo",
"Gun",
""
]
] |
new_dataset
| 0.996096 |
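The twisted prism functor described in the abstract above (duplicate the graph, reverse the edges of the first copy, link the copies pairwise) is concrete enough to sketch. A minimal illustration under our own graph encoding, not the thesis's formalization:

```python
# Nodes are tuples of '0'/'1' tags; edges are (source, target) pairs.
def twisted_prism(nodes, edges):
    n0 = [('0',) + v for v in nodes]                     # first copy
    n1 = [('1',) + v for v in nodes]                     # second copy
    e = [(('0',) + b, ('0',) + a) for (a, b) in edges]   # reversed edges
    e += [(('1',) + a, ('1',) + b) for (a, b) in edges]  # unchanged edges
    e += [(('0',) + v, ('1',) + v) for v in nodes]       # pairwise links
    return n0 + n1, e

# Start from the singleton graph and iterate to get twisted n-cube graphs.
nodes, edges = [()], []
for dim in range(1, 4):
    nodes, edges = twisted_prism(nodes, edges)
    print(f"twisted {dim}-cube graph: {len(nodes)} nodes, {len(edges)} edges")
# Node/edge counts (2^n and n * 2^(n-1)) match the standard n-cube graph.
```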
2307.01350
|
Amartya Purushottam
|
Amartya Purushottam, Yeongtae Jung, Christopher Xu, and Joao Ramos
|
Dynamic Mobile Manipulation via Whole-Body Bilateral Teleoperation of a
Wheeled Humanoid
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humanoid robots have the potential to help human workers by realizing
physically demanding manipulation tasks such as moving large boxes within
warehouses. We define such tasks as Dynamic Mobile Manipulation (DMM). This
paper presents a framework for DMM via whole-body teleoperation, built upon
three key contributions: Firstly, a teleoperation framework employing a Human
Machine Interface (HMI) and a bi-wheeled humanoid, SATYRR, is proposed.
Secondly, the study introduces a dynamic locomotion mapping, utilizing
human-robot reduced order models, and a kinematic retargeting strategy for
manipulation tasks. Additionally, the paper discusses the role of whole-body
haptic feedback for wheeled humanoid control. Finally, the system's
effectiveness and mappings for DMM are validated through locomanipulation
experiments and heavy box pushing tasks. Here we show two forms of DMM:
grasping a target moving at an average speed of 0.4 m/s, and pushing boxes
weighing up to 105\% of the robot's weight. By simultaneously adjusting their
pitch and using their arms, the pilot adjusts the robot pose to apply larger
contact forces and move a heavy box at a constant velocity of 0.2 m/s.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 20:50:58 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Purushottam",
"Amartya",
""
],
[
"Jung",
"Yeongtae",
""
],
[
"Xu",
"Christopher",
""
],
[
"Ramos",
"Joao",
""
]
] |
new_dataset
| 0.994435 |
2307.01387
|
Javier De La Rosa
|
Javier de la Rosa, \'Alvaro P\'erez Pozo, Salvador Ros, Elena
Gonz\'alez-Blanco
|
ALBERTI, a Multilingual Domain Specific Language Model for Poetry
Analysis
|
Accepted for publication at SEPLN 2023: 39th International Conference
of the Spanish Society for Natural Language Processing
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The computational analysis of poetry is limited by the scarcity of tools to
automatically analyze and scan poems. In multilingual settings, the problem
is exacerbated as scansion and rhyme systems only exist for individual
languages, making comparative studies very challenging and time-consuming. In
this work, we present \textsc{Alberti}, the first multilingual pre-trained
large language model for poetry. Through domain-specific pre-training (DSP), we
further trained multilingual BERT on a corpus of over 12 million verses from 12
languages. We evaluated its performance on two structural poetry tasks: Spanish
stanza type classification, and metrical pattern prediction for Spanish,
English and German. In both cases, \textsc{Alberti} outperforms multilingual
BERT and other transformers-based models of similar sizes, and even achieves
state-of-the-art results for German when compared to rule-based systems,
demonstrating the feasibility and effectiveness of DSP in the poetry domain.
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 22:50:53 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"de la Rosa",
"Javier",
""
],
[
"Pozo",
"Álvaro Pérez",
""
],
[
"Ros",
"Salvador",
""
],
[
"González-Blanco",
"Elena",
""
]
] |
new_dataset
| 0.99531 |
2307.01502
|
Philipp L\"osel
|
Jacob J. Relle, Samuel Vo{\ss}, Ramesch Raschidi, Regine Nessel,
Johannes G\"orich, Mark O. Wielp\"utz, Thorsten L\"offler, Vincent Heuveline,
Friedrich Kallinowski, Philipp D. L\"osel
|
HEDI: First-Time Clinical Application and Results of a Biomechanical
Evaluation and Visualisation Tool for Incisional Hernia Repair
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Abdominal wall defects often lead to pain, discomfort, and recurrence of
incisional hernias, resulting in significant morbidity and repeated surgical
repairs worldwide. Mesh repair for large hernias is usually based on the defect
area with a fixed overlap, without considering biomechanical aspects such as
muscle activation, intra-abdominal pressure, tissue elasticity, and abdominal
wall distention. To address this issue, we present a biomechanical approach to
incisional hernia repair that takes into account the unstable abdominal wall.
Additionally, we introduce HEDI, a tool that uses dynamic computed tomography
with Valsalva maneuver to automatically detect and assess hernia size, volume,
and abdominal wall instability. Our first clinical application of HEDI in the
preoperative evaluation of 31 patients shows significantly improved success
rates compared to reported rates, with all patients remaining pain-free and
showing no hernia recurrence after three years of follow-up.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 06:15:06 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Relle",
"Jacob J.",
""
],
[
"Voß",
"Samuel",
""
],
[
"Raschidi",
"Ramesch",
""
],
[
"Nessel",
"Regine",
""
],
[
"Görich",
"Johannes",
""
],
[
"Wielpütz",
"Mark O.",
""
],
[
"Löffler",
"Thorsten",
""
],
[
"Heuveline",
"Vincent",
""
],
[
"Kallinowski",
"Friedrich",
""
],
[
"Lösel",
"Philipp D.",
""
]
] |
new_dataset
| 0.998773 |
2307.01512
|
Yanshi Sun
|
Yanshi Sun and Zhiguo Ding
|
A Fine Grained Stochastic Geometry Based Analysis on LEO Satellite
Communication Systems
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, stochastic geometry has been applied to provide tractable
performance analysis for low earth orbit (LEO) satellite networks. However,
existing works mainly focus on analyzing the ``coverage probability'', which
provides limited information. To provide more insights, this paper presents a
finer-grained analysis of LEO satellite networks modeled by a homogeneous
Poisson point process (HPPP). Specifically, the distribution and moments of the
conditional coverage probability given the point process are studied. The
developed analytical results can provide characterizations on LEO satellite
networks, which are not available in existing literature, such as ``user
fairness'' and ``what fraction of users can achieve a given transmission
reliability''. Simulation results are provided to verify the developed
analysis. Numerical results show that, in a dense satellite network, it is
beneficial to deploy satellites at low altitude, for the sake of both
coverage probability and user fairness.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 06:49:07 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Sun",
"Yanshi",
""
],
[
"Ding",
"Zhiguo",
""
]
] |
new_dataset
| 0.994219 |
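The "conditional coverage probability given the point process" studied above is closely related to the meta distribution of the SIR. A toy Monte Carlo sketch of the idea on a planar Poisson field (a simplified terrestrial stand-in, not the paper's LEO model; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, radius = 1e-4, 1000.0    # interferer density and region radius
theta = 1.0                   # SIR threshold (0 dB)
n_real, n_fade = 300, 500     # PPP realizations x fading draws

cond_cov = []
for _ in range(n_real):
    # One realization of a Poisson field of interferers in a disk.
    n = rng.poisson(lam * np.pi * radius ** 2)
    r = radius * np.sqrt(rng.random(n))       # uniform-in-disk distances
    h_s = rng.exponential(size=n_fade)        # Rayleigh fading to server
    h_i = rng.exponential(size=(n_fade, n))   # fading to interferers
    # Serving link at fixed distance 10, path-loss exponent 4.
    sir = (h_s * 10.0 ** -4) / ((h_i * r ** -4).sum(axis=1) + 1e-12)
    cond_cov.append((sir > theta).mean())     # coverage given this PPP

cond_cov = np.array(cond_cov)
print("mean coverage:", cond_cov.mean())
# The spread of cond_cov across realizations is the "fine grained" view:
print("fraction of users with >= 90% reliability:", (cond_cov >= 0.9).mean())
```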
2307.01557
|
Yuanxian Huang
|
Mingjie Lu, Yuanxian Huang, Ji Liu, Jinzhang Peng, Lu Tian, Ashish
Sirasao
|
Separated RoadTopoFormer
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding driving scenarios is crucial to realizing autonomous driving.
Previous works such as map learning and BEV lane detection neglect the
connection relationship between lane instances, and traffic element detection
tasks usually neglect the relationship with lane lines. To address these
issues, a task is presented that includes four sub-tasks: the detection of
traffic elements, the detection of lane centerlines, reasoning about connection
relationships among lanes, and reasoning about assignment relationships between
lanes and traffic elements. We present Separated RoadTopoFormer to tackle these
sub-tasks: an end-to-end framework that detects lane centerlines and traffic
elements while reasoning about the relationships among them. We optimize each
module separately to prevent interference between modules, then aggregate them
with a small amount of fine-tuning. For the two detection heads, we adopt a
DETR-like architecture to detect objects; for the relationship head, we
concatenate two instance features from the upstream detectors and feed them to
a classifier to obtain a relationship probability. Our final submission
achieves 0.445 OLS, which is competitive in both sub-task and combined scores.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 08:21:39 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Lu",
"Mingjie",
""
],
[
"Huang",
"Yuanxian",
""
],
[
"Liu",
"Ji",
""
],
[
"Peng",
"Jinzhang",
""
],
[
"Tian",
"Lu",
""
],
[
"Sirasao",
"Ashish",
""
]
] |
new_dataset
| 0.998154 |
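The relationship head described in the abstract above (concatenate two instance features, feed them to a classifier) can be sketched directly. An illustrative PyTorch reading of that sentence, with dimensions of our own choosing, not the authors' code:

```python
import torch
import torch.nn as nn

class RelationshipHead(nn.Module):
    """Scores all (lane, traffic-element) pairs from two feature sets."""
    def __init__(self, feat_dim=256, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats_a, feats_b):
        # (Na, D) x (Nb, D) -> (Na, Nb) pairwise relationship probabilities
        na, nb = feats_a.size(0), feats_b.size(0)
        pairs = torch.cat(
            [feats_a.unsqueeze(1).expand(na, nb, -1),
             feats_b.unsqueeze(0).expand(na, nb, -1)], dim=-1)
        return torch.sigmoid(self.mlp(pairs)).squeeze(-1)

head = RelationshipHead()
probs = head(torch.randn(5, 256), torch.randn(7, 256))  # shape (5, 7)
```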
2307.01618
|
Olivier Lindamulage De Silva
|
Olivier Lindamulage De Silva, Vineeth Satheeskumar Varma, Ming Cao,
Irinel-Constantin Morarescu, Samson Lasaulce
|
A Stackelberg viral marketing design for two competing players
|
This paper appears in: IEEE Control Systems Letters
|
IEEE Control Systems Letters 2023
|
10.1109/LCSYS.2023.3291421
| null |
cs.GT cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
A Stackelberg duopoly model in which two firms compete to maximize their
market share is considered. The firms offer a service/product to customers that
are spread over several geographical regions (e.g., countries, provinces, or
states). Each region has its own characteristics (spreading and recovery rates)
of each service propagation. We consider that the spreading rate can be
controlled by each firm and is subject to some investment that the firm does in
each region. One of the main objectives of this work is to characterize the
advertising budget allocation strategy for each firm across regions to maximize
its market share when competing. To achieve this goal we propose a Stackelberg
game model that is relatively simple while capturing the main effects of the
competition for market share. By characterizing the strong/weak Stackelberg
equilibria of the game, we provide the associated budget allocation strategy.
In this setting, it is established under which conditions the solution of the
game is the so-called ``winner takes all''. Numerical results expand upon our
theoretical findings and we provide the equilibrium characterization for an
example.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 10:14:02 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"De Silva",
"Olivier Lindamulage",
""
],
[
"Varma",
"Vineeth Satheeskumar",
""
],
[
"Cao",
"Ming",
""
],
[
"Morarescu",
"Irinel-Constantin",
""
],
[
"Lasaulce",
"Samson",
""
]
] |
new_dataset
| 0.997458 |
2307.01630
|
Anshul Gupta
|
Samy Tafasca, Anshul Gupta, Jean-Marc Odobez
|
ChildPlay: A New Benchmark for Understanding Children's Gaze Behaviour
|
First submitted for CVPR 2022. Current draft is in review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gaze behaviors such as eye-contact or shared attention are important markers
for diagnosing developmental disorders in children. While previous studies have
looked at some of these elements, the analysis is usually performed on private
datasets and is restricted to lab settings. Furthermore, all publicly available
gaze target prediction benchmarks mostly contain instances of adults, which
makes models trained on them less applicable to scenarios with young children.
In this paper, we propose the first study for predicting the gaze target of
children and interacting adults. To this end, we introduce the ChildPlay
dataset: a curated collection of short video clips featuring children playing
and interacting with adults in uncontrolled environments (e.g. kindergarten,
therapy centers, preschools etc.), which we annotate with rich gaze
information. We further propose a new model for gaze target prediction that is
geometrically grounded by explicitly identifying the scene parts in the 3D
field of view (3DFoV) of the person, leveraging recent geometry preserving
depth inference methods. Our model achieves state-of-the-art results on
benchmark datasets and ChildPlay. Furthermore, results show that
looking-at-faces prediction performance on children is much worse than on
adults, and can be significantly improved by fine-tuning models using child
gaze annotations.
Our dataset and models will be made publicly available.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 10:26:53 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Tafasca",
"Samy",
""
],
[
"Gupta",
"Anshul",
""
],
[
"Odobez",
"Jean-Marc",
""
]
] |
new_dataset
| 0.997828 |
2307.01658
|
Farhad Rezazadeh
|
Farhad Rezazadeh, Hatim Chergui, Luis Alonso, Christos Verikoukis
|
SliceOps: Explainable MLOps for Streamlined Automation-Native 6G
Networks
|
8 pages, 6 Figures
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sixth-generation (6G) network slicing is the backbone of future
communications systems. It inaugurates the era of extreme ultra-reliable and
low-latency communication (xURLLC) and pervades the digitalization of the
various vertical immersive use cases. Since 6G inherently underpins artificial
intelligence (AI), we propose a systematic and standalone slice termed SliceOps
that is natively embedded in the 6G architecture, which gathers and manages the
whole AI lifecycle through monitoring, re-training, and deploying the machine
learning (ML) models as a service for the 6G slices. By leveraging machine
learning operations (MLOps) in conjunction with eXplainable AI (XAI), SliceOps
strives to cope with the opaqueness of black-box AI using explanation-guided
reinforcement learning (XRL) to fulfill transparency, trustworthiness, and
interpretability in the network slicing ecosystem. This article starts by
elaborating on the architectural and algorithmic aspects of SliceOps. Then, the
operation of the deployed cloud-native SliceOps is exemplified via a
latency-aware resource allocation problem. The deep RL (DRL)-based SliceOps
agents within slices provide AI services aiming to allocate optimal radio
resources and impede service quality degradation. Simulation results
demonstrate the effectiveness of SliceOps-driven slicing. The article then
discusses SliceOps challenges and limitations. Finally, the key open research
directions corresponding to the proposed approach are identified.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 11:36:30 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Rezazadeh",
"Farhad",
""
],
[
"Chergui",
"Hatim",
""
],
[
"Alonso",
"Luis",
""
],
[
"Verikoukis",
"Christos",
""
]
] |
new_dataset
| 0.968424 |
2307.01718
|
Ivan Heibi
|
Elia Rizzetto, Arcangelo Massari, Ivan Heibi, and Silvio Peroni
|
A Prototype for a Controlled and Valid RDF Data Production Using SHACL
| null | null | null | null |
cs.DB cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
The paper introduces a tool prototype that combines SHACL's capabilities with
ad-hoc validation functions to create a controlled and user-friendly form
interface for producing valid RDF data. The proposed tool is developed within
the context of the OpenCitations Data Model (OCDM) use case. The paper
discusses the current status of the tool, outlines the future steps required
for achieving full functionality, and explores the potential applications and
benefits of the tool.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 13:45:04 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Rizzetto",
"Elia",
""
],
[
"Massari",
"Arcangelo",
""
],
[
"Heibi",
"Ivan",
""
],
[
"Peroni",
"Silvio",
""
]
] |
new_dataset
| 0.994808 |
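For readers unfamiliar with the SHACL validation used in the record above, a minimal sketch with rdflib and pySHACL; the shape and data here are hypothetical stand-ins, not the OCDM shapes:

```python
from rdflib import Graph
from pyshacl import validate

# Hypothetical data graph: one bibliographic resource with a title.
data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:br1 a ex:BibResource ; ex:title "A title" .
""", format="turtle")

# Hypothetical shapes graph: every BibResource needs a string title.
shapes = Graph().parse(data="""
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
ex:BibShape a sh:NodeShape ;
    sh:targetClass ex:BibResource ;
    sh:property [ sh:path ex:title ; sh:minCount 1 ;
                  sh:datatype <http://www.w3.org/2001/XMLSchema#string> ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # True when the RDF data satisfies the shape
```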
2307.01741
|
Michael Mommert
|
Michael Mommert, Nicolas Kesseli, Jo\"elle Hanna, Linus Scheibenreif,
Damian Borth, Beg\"um Demir
|
Ben-ge: Extending BigEarthNet with Geographical and Environmental Data
|
Accepted for presentation at the IEEE International Geoscience and
Remote Sensing Symposium 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning methods have proven to be a powerful tool in the analysis of
large amounts of complex Earth observation data. However, while Earth
observation data are multi-modal in most cases, only single or few modalities
are typically considered. In this work, we present the ben-ge dataset, which
supplements the BigEarthNet-MM dataset by compiling freely and globally
available geographical and environmental data. Based on this dataset, we
showcase the value of combining different data modalities for the downstream
tasks of patch-based land-use/land-cover classification and land-use/land-cover
segmentation. ben-ge is freely available and expected to serve as a test bed
for fully supervised and self-supervised Earth observation applications.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 14:17:54 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Mommert",
"Michael",
""
],
[
"Kesseli",
"Nicolas",
""
],
[
"Hanna",
"Joëlle",
""
],
[
"Scheibenreif",
"Linus",
""
],
[
"Borth",
"Damian",
""
],
[
"Demir",
"Begüm",
""
]
] |
new_dataset
| 0.983814 |
2307.01778
|
Zhanhao Hu
|
Zhanhao Hu, Wenda Chu, Xiaopei Zhu, Hui Zhang, Bo Zhang, Xiaolin Hu
|
Physically Realizable Natural-Looking Clothing Textures Evade Person
Detectors via 3D Modeling
|
Accepted by CVPR 2023
| null | null | null |
cs.CV cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works have proposed to craft adversarial clothes for evading person
detectors, while they are either only effective at limited viewing angles or
very conspicuous to humans. We aim to craft adversarial texture for clothes
based on 3D modeling, an idea that has been used to craft rigid adversarial
objects such as a 3D-printed turtle. Unlike rigid objects, humans and clothes
are non-rigid, leading to difficulties in physical realization. In order to
craft natural-looking adversarial clothes that can evade person detectors at
multiple viewing angles, we propose adversarial camouflage textures (AdvCaT)
that resemble one typical kind of texture on daily clothes: camouflage
textures. We leverage the Voronoi diagram and the Gumbel-softmax trick to
parameterize the camouflage textures and optimize the parameters via 3D
modeling. Moreover, we propose an efficient augmentation pipeline on 3D meshes
combining topologically plausible projection (TopoProj) and Thin Plate Spline
(TPS) to narrow the gap between digital and real-world objects. We printed the
developed 3D texture pieces on fabric materials and tailored them into T-shirts
and trousers. Experiments show high attack success rates of these clothes
against multiple detectors.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 15:31:03 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Hu",
"Zhanhao",
""
],
[
"Chu",
"Wenda",
""
],
[
"Zhu",
"Xiaopei",
""
],
[
"Zhang",
"Hui",
""
],
[
"Zhang",
"Bo",
""
],
[
"Hu",
"Xiaolin",
""
]
] |
new_dataset
| 0.966336 |
2307.01905
|
Sina Labbaf
|
Sina Labbaf, Mahyar Abbasian, Iman Azimi, Nikil Dutt, and Amir M.
Rahmani
|
ZotCare: A Flexible, Personalizable, and Affordable mHealth Service
Provider
|
23 pages, 5 figures, 6 tables, journal paper
| null | null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of Internet-connected health devices and the widespread
availability of mobile connectivity have resulted in a wealth of reliable
digital health data and the potential for delivering just-in-time
interventions. However, leveraging these opportunities for health research
requires the development and deployment of mobile health (mHealth)
applications, which present significant technical challenges for researchers.
While existing mHealth solutions have made progress in addressing some of these
challenges, they often fall short in terms of time-to-use, affordability, and
flexibility for personalization and adaptation. ZotCare aims to address these
limitations by offering ready-to-use and flexible services, providing
researchers with an accessible, cost-effective, and adaptable solution for
their mHealth studies. This article focuses on ZotCare's service orchestration
and highlights its capabilities in creating a programmable environment for
mHealth research. Additionally, we showcase several successful research use
cases that have utilized ZotCare, both in the past and in ongoing projects.
Furthermore, we provide resources and information for researchers who are
considering ZotCare as their mHealth research solution.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 20:27:16 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Labbaf",
"Sina",
""
],
[
"Abbasian",
"Mahyar",
""
],
[
"Azimi",
"Iman",
""
],
[
"Dutt",
"Nikil",
""
],
[
"Rahmani",
"Amir M.",
""
]
] |
new_dataset
| 0.999463 |
2307.01952
|
Robin Rombach
|
Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim
Dockhorn, Jonas M\"uller, Joe Penna, Robin Rombach
|
SDXL: Improving Latent Diffusion Models for High-Resolution Image
Synthesis
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present SDXL, a latent diffusion model for text-to-image synthesis.
Compared to previous versions of Stable Diffusion, SDXL leverages a three times
larger UNet backbone: The increase of model parameters is mainly due to more
attention blocks and a larger cross-attention context as SDXL uses a second
text encoder. We design multiple novel conditioning schemes and train SDXL on
multiple aspect ratios. We also introduce a refinement model which is used to
improve the visual fidelity of samples generated by SDXL using a post-hoc
image-to-image technique. We demonstrate that SDXL shows drastically improved
performance compared to previous versions of Stable Diffusion and achieves
results competitive with those of black-box state-of-the-art image generators.
In the spirit of promoting open research and fostering transparency in large
model training and evaluation, we provide access to code and model weights at
https://github.com/Stability-AI/generative-models
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 23:04:57 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Podell",
"Dustin",
""
],
[
"English",
"Zion",
""
],
[
"Lacey",
"Kyle",
""
],
[
"Blattmann",
"Andreas",
""
],
[
"Dockhorn",
"Tim",
""
],
[
"Müller",
"Jonas",
""
],
[
"Penna",
"Joe",
""
],
[
"Rombach",
"Robin",
""
]
] |
new_dataset
| 0.966433 |
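The released SDXL weights can also be driven through the Hugging Face diffusers wrapper rather than the linked repository. A hedged usage sketch, assuming a current diffusers release, the `stabilityai/stable-diffusion-xl-base-1.0` model id, and a CUDA device:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the base SDXL pipeline in half precision and move it to the GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=30).images[0]
image.save("sdxl_sample.png")
```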
2307.01956
|
Ramviyas Parasuraman
|
Ehsan Latif and Ramviyas Parasuraman
|
Instantaneous Wireless Robotic Node Localization Using Collaborative
Direction of Arrival
|
Accepted to the IEEE Internet of Things Journal. arXiv admin note:
text overlap with arXiv:2201.05105
| null | null | null |
cs.RO cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Localizing mobile robotic nodes in indoor and GPS-denied environments is a
complex problem, particularly in dynamic, unstructured scenarios where
traditional cameras and LIDAR-based sensing and localization modalities may
fail. Alternatively, wireless signal-based localization has been extensively
studied in the literature yet primarily focuses on fingerprinting and
feature-matching paradigms, requiring dedicated environment-specific offline
data collection. We propose an online robot localization algorithm enabled by
collaborative wireless sensor nodes to remedy these limitations. Our approach's
core novelty lies in obtaining the Collaborative Direction of Arrival (CDOA) of
wireless signals by exploiting the geometric features and collaboration between
wireless nodes. The CDOA is combined with the Expectation Maximization (EM) and
Particle Filter (PF) algorithms to calculate the Gaussian probability of the
node's location with high efficiency and accuracy. The algorithm relies on
RSSI-only data, making it suitable even for resource-constrained devices. We
theoretically analyze the approach and extensively validate the proposed
method's consistency, accuracy, and computational efficiency in simulations,
real-world public datasets, as well as real robot demonstrations. The results
validate the method's real-time computational capability and demonstrate
considerably high, centimeter-level localization accuracy, outperforming
relevant state-of-the-art localization approaches.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 23:27:07 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Latif",
"Ehsan",
""
],
[
"Parasuraman",
"Ramviyas",
""
]
] |
new_dataset
| 0.970265 |
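The abstract above combines CDOA with Expectation Maximization and a Particle Filter. A toy particle-filter update against a single direction-of-arrival measurement (our own simplified illustration, not the paper's CDOA algorithm; positions and noise levels are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n_p = 2000
particles = rng.uniform(0, 10, size=(n_p, 2))   # candidate robot positions
weights = np.full(n_p, 1.0 / n_p)

sensor = np.array([2.0, 3.0])                   # known wireless node
true_pos = np.array([7.0, 6.0])                 # unknown robot position
sigma = np.deg2rad(5.0)                         # DOA noise std (radians)

# Noisy direction-of-arrival from the sensor toward the robot.
dy, dx = (true_pos - sensor)[1], (true_pos - sensor)[0]
doa_meas = np.arctan2(dy, dx) + rng.normal(0, sigma)

# Weight particles by how well they explain the measured bearing.
pred = np.arctan2(particles[:, 1] - sensor[1], particles[:, 0] - sensor[0])
err = np.angle(np.exp(1j * (doa_meas - pred)))  # wrap to [-pi, pi]
weights *= np.exp(-0.5 * (err / sigma) ** 2)
weights /= weights.sum()

print("estimated position:", weights @ particles)

# Systematic-style resampling keeps the particle set well conditioned.
particles = particles[rng.choice(n_p, size=n_p, p=weights)]
```

With several sensors, the same weighting step is repeated per bearing, which is roughly where the collaborative aspect of CDOA would enter.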
2307.02003
|
Yuhuan Yang
|
Yuhuan Yang, Chaofan Ma, Chen Ju, Ya Zhang, Yanfeng Wang
|
Multi-Modal Prototypes for Open-Set Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In semantic segmentation, adapting a visual system to novel object categories
at inference time has always been both valuable and challenging. To enable such
generalization, existing methods rely on either providing several support
examples as visual cues or class names as textual cues. Though development has
been relatively optimistic, these two lines have been studied in isolation,
neglecting the complementary nature of low-level visual and high-level
language information. In this paper, we define a unified setting termed
open-set semantic segmentation (O3S), which aims to learn seen and unseen
semantics from both visual examples and textual names. Our pipeline extracts
multi-modal prototypes for the segmentation task by first performing
single-modal self-enhancement and aggregation, and then multi-modal
complementary fusion. To be specific, we aggregate visual features into several
tokens as visual prototypes, and enhance the class name with detailed
descriptions for textual prototype generation. The two modalities are then
fused to generate multi-modal prototypes for final segmentation. On both the
PASCAL and COCO datasets, we conduct extensive experiments to evaluate the
framework's effectiveness.
State-of-the-art results are achieved even on more detailed part-segmentation,
Pascal-Animals, by only training on coarse-grained datasets. Thorough ablation
studies are performed to dissect each component, both quantitatively and
qualitatively.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 03:27:31 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Yang",
"Yuhuan",
""
],
[
"Ma",
"Chaofan",
""
],
[
"Ju",
"Chen",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yanfeng",
""
]
] |
new_dataset
| 0.9836 |
2307.02006
|
Viktor Schlegel
|
Viktor Schlegel, Hao Li, Yuping Wu, Anand Subramanian, Thanh-Tung
Nguyen, Abhinav Ramesh Kashyap, Daniel Beck, Xiaojun Zeng, Riza Theresa
Batista-Navarro, Stefan Winkler, Goran Nenadic
|
PULSAR at MEDIQA-Sum 2023: Large Language Models Augmented by Synthetic
Dialogue Convert Patient Dialogues to Medical Records
|
8 pages. ImageClef 2023 MediQA-Sum
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes PULSAR, our system submission at the ImageClef 2023
MediQA-Sum task on summarising patient-doctor dialogues into clinical records.
The proposed framework relies on domain-specific pre-training to produce a
specialised language model which is trained on task-specific natural data
augmented by synthetic data generated by a black-box LLM. We find limited
evidence towards the efficacy of domain-specific pre-training and data
augmentation, while scaling up the language model yields the best performance
gains. Our approach was ranked second and third among 13 submissions on task B
of the challenge. Our code is available at https://github.com/yuping-wu/PULSAR.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 03:31:12 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Schlegel",
"Viktor",
""
],
[
"Li",
"Hao",
""
],
[
"Wu",
"Yuping",
""
],
[
"Subramanian",
"Anand",
""
],
[
"Nguyen",
"Thanh-Tung",
""
],
[
"Kashyap",
"Abhinav Ramesh",
""
],
[
"Beck",
"Daniel",
""
],
[
"Zeng",
"Xiaojun",
""
],
[
"Batista-Navarro",
"Riza Theresa",
""
],
[
"Winkler",
"Stefan",
""
],
[
"Nenadic",
"Goran",
""
]
] |
new_dataset
| 0.997842 |
2307.02028
|
Michael Wornow
|
Michael Wornow, Rahul Thapa, Ethan Steinberg, Jason Fries, Nigam Shah
|
EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While the general machine learning (ML) community has benefited from public
datasets, tasks, and models, the progress of ML in healthcare has been hampered
by a lack of such shared assets. The success of foundation models creates new
challenges for healthcare ML by requiring access to shared pretrained models to
validate performance benefits. We help address these challenges through three
contributions. First, we publish a new dataset, EHRSHOT, containing
de-identified structured data from the electronic health records (EHRs) of
6,712 patients from Stanford Medicine. Unlike MIMIC-III/IV and other popular
EHR datasets, EHRSHOT is longitudinal and not restricted to ICU/ED patients.
Second, we publish the weights of a 141M parameter clinical foundation model
pretrained on the structured EHR data of 2.57M patients. We are one of the
first to fully release such a model for coded EHR data; in contrast, most prior
models released for clinical data (e.g. GatorTron, ClinicalBERT) only work with
unstructured text and cannot process the rich, structured data within an EHR.
We provide an end-to-end pipeline for the community to validate and build upon
its performance. Third, we define 15 few-shot clinical prediction tasks,
enabling evaluation of foundation models on benefits such as sample efficiency
and task adaptation. The code to reproduce our results, as well as the model and
dataset (via a research data use agreement), are available at our Github repo
here: https://github.com/som-shahlab/ehrshot-benchmark
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 05:24:59 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Wornow",
"Michael",
""
],
[
"Thapa",
"Rahul",
""
],
[
"Steinberg",
"Ethan",
""
],
[
"Fries",
"Jason",
""
],
[
"Shah",
"Nigam",
""
]
] |
new_dataset
| 0.998983 |
2307.02032
|
Ali Shoker
|
Ali Shoker, Fernando Alves, Paulo Esteves-Verissimo
|
ScalOTA: Scalable Secure Over-the-Air Software Updates for Vehicles
| null | null | null | null |
cs.CR cs.DC cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Over-the-Air (OTA) software updates are becoming essential for
electric/electronic vehicle architectures in order to reduce recalls amid the
increasing software bugs and vulnerabilities. Current OTA update architectures
rely heavily on direct cellular repository-to-vehicle links, which makes the
repository a communication bottleneck, and increases the cellular bandwidth
utilization cost as well as the software download latency. In this paper, we
introduce ScalOTA, an end-to-end scalable OTA software update architecture and
secure protocol for modern vehicles. For the first time, we propose using a
network of update stations, as part of Electric Vehicle charging stations, to
boost the download speed through these stations, and reduce the cellular
bandwidth overhead significantly. Our formalized OTA update protocol ensures
proven end-to-end chain-of-trust including all stakeholders: manufacturer,
suppliers, update stations, and all layers of in-vehicle Electronic Control Units
(ECUs). The empirical evaluation shows that ScalOTA reduces the bandwidth
utilization and download latency up to an order of magnitude compared with
current OTA update systems.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 05:30:22 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Shoker",
"Ali",
""
],
[
"Alves",
"Fernando",
""
],
[
"Esteves-Verissimo",
"Paulo",
""
]
] |
new_dataset
| 0.999289 |
2307.02055
|
Jaydip Sen Prof
|
Jaydip Sen and Subhasis Dasgupta
|
Adversarial Attacks on Image Classification Models: FGSM and Patch
Attacks and their Impact
|
This is the preprint of the chapter titled "Adversarial Attacks on
Image Classification Models: FGSM and Patch Attacks and their Impact" which
will be published in the volume titled "Information Security and Privacy in
the Digital World - Some Selected Cases", edited by Jaydip Sen. The book will
be published by IntechOpen, London, UK, in 2023. This is not the final
version of the chapter
| null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This chapter introduces the concept of adversarial attacks on image
classification models built on convolutional neural networks (CNN). CNNs are
very popular deep-learning models which are used in image classification tasks.
However, very powerful and pre-trained CNN models working very accurately on
image datasets for image classification tasks may perform disastrously when the
networks are under adversarial attacks. In this work, two very well-known
adversarial attacks are discussed and their impact on the performance of image
classifiers is analyzed. These two adversarial attacks are the fast gradient
sign method (FGSM) and adversarial patch attack. These attacks are launched on
three powerful pre-trained image classifier architectures, ResNet-34,
GoogleNet, and DenseNet-161. The classification accuracy of the models in the
absence and presence of the two attacks are computed on images from the
publicly accessible ImageNet dataset. The results are analyzed to evaluate the
impact of the attacks on the image classification task.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 06:40:08 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Sen",
"Jaydip",
""
],
[
"Dasgupta",
"Subhasis",
""
]
] |
new_dataset
| 0.986393 |
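The fast gradient sign method analyzed in the chapter above is compact enough to state in code. A standard PyTorch sketch (the model and data are placeholders; the commented torchvision lines are a usage hint, not part of the chapter):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Return adversarial examples x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()   # keep pixels in the valid range

# Example with one of the pretrained classifiers named in the abstract:
# import torchvision.models as models
# model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT).eval()
# x_adv = fgsm(model, images, labels, eps=8 / 255)
```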
2307.02144
|
Nithin Nagaraj
|
Tulasi Bharathi, Shailaja D Sharma, Nithin Nagaraj
|
Kolam Simulation using Angles at Lattice Points
|
19 pages, 31 figures
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kolam is a ritual art form practised by people in South India and consists of
rule-bound geometric patterns of dots and lines. Single loop Kolams are
mathematical closed loop patterns drawn over a grid of dots and conforming to
certain heuristics. In this work, we propose a novel encoding scheme where we
map the angular movements of Kolam at lattice points into sequences containing
$4$ distinct symbols. This is then used to simulate single loop Kolam procedure
via turtle moves in accordance with the desired angular direction at specific
points. We thus obtain sequential codes for Kolams, unique up to cyclic
permutations. We specify the requirements for the algorithm and indicate the
general methodology. We demonstrate a sample of Kolams using our algorithm with
a software implementation in Python.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 09:40:36 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Bharathi",
"Tulasi",
""
],
[
"Sharma",
"Shailaja D",
""
],
[
"Nagaraj",
"Nithin",
""
]
] |
new_dataset
| 0.99938 |
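The encoding above maps angular movements at lattice points to a 4-symbol alphabet traced out by turtle moves. An illustrative tracer where the symbol-to-angle mapping is our own guess, not the paper's scheme:

```python
import numpy as np

# Hypothetical 4-symbol alphabet: turn by a fixed angle, then step forward.
TURNS = {"F": 0.0, "L": 90.0, "R": -90.0, "U": 180.0}

def trace(code, step=1.0):
    """Return the lattice points visited by the turtle for a symbol string."""
    pos, heading = np.zeros(2), 0.0
    points = [pos.copy()]
    for sym in code:
        heading = (heading + TURNS[sym]) % 360.0
        rad = np.deg2rad(heading)
        pos = pos + step * np.array([np.cos(rad), np.sin(rad)])
        points.append(pos.copy())
    return np.array(points)

pts = trace("FLFLFLFL")                # a small square circuit
print(np.allclose(pts[0], pts[-1]))    # True: the loop closes on itself
```

A single-loop Kolam corresponds to a code whose trace is a closed curve, which is the closure check printed at the end.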
2307.02146
|
Longshen Ou
|
Longshen Ou, Xichu Ma, Ye Wang
|
LOAF-M2L: Joint Learning of Wording and Formatting for Singable
Melody-to-Lyric Generation
|
An extension of our previous work arXiv:2305.16816 [cs.CL]
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite previous efforts in melody-to-lyric generation research, there is
still a significant compatibility gap between generated lyrics and melodies,
negatively impacting the singability of the outputs. This paper bridges the
singability gap with a novel approach to generating singable lyrics by jointly
Learning wOrding And Formatting during Melody-to-Lyric training (LOAF-M2L).
After general-domain pretraining, our proposed model acquires length awareness
first from a large text-only lyric corpus. Then, we introduce a new objective
informed by musicological research on the relationship between melody and
lyrics during melody-to-lyric training, which enables the model to learn the
fine-grained format requirements of the melody. Our model achieves 3.75% and
21.44% absolute accuracy gains in the outputs' number-of-line and
syllable-per-line requirements compared to naive fine-tuning, without
sacrificing text fluency. Furthermore, our model demonstrates a 63.92% and
74.18% relative improvement of music-lyric compatibility and overall quality in
the subjective evaluation, compared to the state-of-the-art melody-to-lyric
generation model, highlighting the significance of formatting learning.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 09:42:47 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Ou",
"Longshen",
""
],
[
"Ma",
"Xichu",
""
],
[
"Wang",
"Ye",
""
]
] |
new_dataset
| 0.996544 |
2307.02211
|
Slimane Larabi
|
Souayah Abdelkader, Mokretar Kraroubi Abderrahmene, Slimane Larabi
|
Object Recognition System on a Tactile Device for Visually Impaired
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
People with visual impairments face numerous challenges when interacting with
their environment. Our objective is to develop a device that facilitates
communication between individuals with visual impairments and their
surroundings. The device will convert visual information into auditory
feedback, enabling users to understand their environment in a way that suits
their sensory needs. Initially, an object detection model is selected from
existing machine learning models based on its accuracy and cost considerations,
including time and power consumption. The chosen model is then implemented on a
Raspberry Pi, which is connected to a specifically designed tactile device.
When the device is touched at a specific position, it provides an audio signal
that communicates the identity of the object present in the scene at the
corresponding position to the visually impaired individual. Conducted tests
have demonstrated the effectiveness of this device in scene understanding,
encompassing static or dynamic objects, as well as screen contents such as TVs,
computers, and mobile phones.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 11:37:17 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Abdelkader",
"Souayah",
""
],
[
"Abderrahmene",
"Mokretar Kraroubi",
""
],
[
"Larabi",
"Slimane",
""
]
] |
new_dataset
| 0.99421 |
2307.02242
|
Yuan Fang
|
Yuan Fang, Siyao Zhang, Xinmin Li, Jie Xu, and Shuguang Cui
|
Multi-IRS-Enabled Integrated Sensing and Communications
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies a multi-intelligent-reflecting-surface-(IRS)-enabled
integrated sensing and communications (ISAC) system, in which multiple IRSs are
installed to help the base station (BS) provide ISAC services at separate
line-of-sight (LoS) blocked areas. We focus on the scenario with semi-passive
uniform linear array (ULA) IRSs for sensing, in which each IRS is integrated
with dedicated sensors for processing echo signals, and each IRS simultaneously
serves one sensing target and one communication user (CU) in its coverage area.
In particular, we suppose that the BS sends combined information and dedicated
sensing signals for ISAC, and we consider two cases with point and extended
targets, in which each IRS aims to estimate the direction-of-arrival (DoA) of
the corresponding target and the complete target response matrix, respectively.
Under this setup, we first derive the closed-form Cramér-Rao bounds (CRBs)
for parameter estimation under the two target models. For the point target
case, the CRB for DoA estimation is shown to be inversely proportional to the
cubic of the number of sensors at each IRS, while for the extended target case,
the CRB for target response matrix estimation is proportional to the number of
IRS sensors. Next, we consider two different types of CU receivers that can and
cannot cancel the interference from dedicated sensing signals prior to
information decoding. To achieve fair and optimized sensing performance, we
minimize the maximum CRB at all IRSs for the two target cases, via jointly
optimizing the transmit beamformers at the BS and the reflective beamformers at
the multiple IRSs, subject to the minimum signal-to-interference-plus-noise
ratio (SINR) constraints at individual CUs, the maximum transmit power
constraint at the BS, and the unit-modulus constraints at the multiple IRSs.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 12:35:14 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Fang",
"Yuan",
""
],
[
"Zhang",
"Siyao",
""
],
[
"Li",
"Xinmin",
""
],
[
"Xu",
"Jie",
""
],
[
"Cui",
"Shuguang",
""
]
] |
new_dataset
| 0.99879 |
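The claim above that the point-target CRB for DoA estimation scales with the inverse cube of the number of sensors can be checked numerically with the standard single-source ULA CRB (a textbook formula, not the paper's multi-IRS derivation; SNR and angle are illustrative):

```python
import numpy as np

def doa_crb(n, theta=np.deg2rad(20), snr_db=10, d=0.5):
    """CRB for one source on a half-wavelength ULA, unknown complex amplitude."""
    k = np.arange(n)
    a = np.exp(2j * np.pi * d * k * np.sin(theta))    # steering vector
    da = 2j * np.pi * d * k * np.cos(theta) * a       # d a / d theta
    # Project da orthogonally to a, then apply the standard CRB expression.
    da_perp = da - a * (a.conj() @ da) / (a.conj() @ a)
    snr = 10 ** (snr_db / 10)
    return 1.0 / (2 * snr * np.linalg.norm(da_perp) ** 2)

for n in (8, 16, 32):
    print(n, doa_crb(n))
# Doubling n shrinks the CRB by roughly 8x, i.e. the 1/n^3 behavior.
```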
2307.02269
|
Lasha Abzianidze
|
Lasha Abzianidze, Joost Zwarts, Yoad Winter
|
SpaceNLI: Evaluating the Consistency of Predicting Inferences in Space
|
Accepted and presented at the NALOMA (Natural Logic Meets Machine
Learning) workshop. The paper repository is at
https://github.com/kovvalsky/SpaceNLI
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While many natural language inference (NLI) datasets target certain semantic
phenomena, e.g., negation, tense & aspect, monotonicity, and presupposition, to
the best of our knowledge, there is no NLI dataset that involves diverse types
of spatial expressions and reasoning. We fill this gap by semi-automatically
creating an NLI dataset for spatial reasoning, called SpaceNLI. The data
samples are automatically generated from a curated set of reasoning patterns,
where the patterns are annotated with inference labels by experts. We test
several SOTA NLI systems on SpaceNLI to gauge the complexity of the dataset and
the system's capacity for spatial reasoning. Moreover, we introduce Pattern
Accuracy and argue that it is a more reliable and stricter measure than plain
accuracy for evaluating a system's performance on pattern-based generated data
samples. Based on the evaluation results we find that the systems obtain
moderate results on the spatial NLI problems but lack consistency per inference
pattern. The results also reveal that non-projective spatial inferences
(especially due to the "between" preposition) are the most challenging ones.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 13:08:18 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Abzianidze",
"Lasha",
""
],
[
"Zwarts",
"Joost",
""
],
[
"Winter",
"Yoad",
""
]
] |
new_dataset
| 0.992032 |
2307.02308
|
Saisai Ding
|
Saisai Ding, Jun Wang, Juncheng Li, and Jun Shi
|
Multi-Scale Prototypical Transformer for Whole Slide Image
Classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Whole slide image (WSI) classification is an essential task in computational
pathology. Despite the recent advances in multiple instance learning (MIL) for
WSI classification, accurate classification of WSIs remains challenging due to
the extreme imbalance between the positive and negative instances in bags, and
the complicated pre-processing to fuse multi-scale information of WSI. To this
end, we propose a novel multi-scale prototypical Transformer (MSPT) for WSI
classification, which includes a prototypical Transformer (PT) module and a
multi-scale feature fusion module (MFFM). The PT is developed to reduce
redundant instances in bags by integrating prototypical learning into the
Transformer architecture. It substitutes all instances with cluster prototypes,
which are then re-calibrated through the self-attention mechanism of the
Transformer. Thereafter, an MFFM is proposed to fuse the clustered prototypes
of different scales, which employs MLP-Mixer to enhance the information
communication between prototypes. The experimental results on two public WSI
datasets demonstrate that the proposed MSPT outperforms all the compared
algorithms, suggesting its potential applications.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 14:10:29 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Ding",
"Saisai",
""
],
[
"Wang",
"Jun",
""
],
[
"Li",
"Juncheng",
""
],
[
"Shi",
"Jun",
""
]
] |
new_dataset
| 0.998355 |
2307.02340
|
Timo Pierre Schrader
|
Timo Pierre Schrader, Teresa B\"urkle, Sophie Henning, Sherry Tan,
Matteo Finco, Stefan Gr\"unewald, Maira Indrikova, Felix Hildebrand,
Annemarie Friedrich
|
MuLMS-AZ: An Argumentative Zoning Dataset for the Materials Science
Domain
|
15 pages, 2 figures, 14 tables, to be published in "Proceedings of
the 4th Workshop on Computational Approaches to Discourse"
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Scientific publications follow conventionalized rhetorical structures.
Classifying the Argumentative Zone (AZ), e.g., identifying whether a sentence
states a Motivation, a Result or Background information, has been proposed to
improve processing of scholarly documents. In this work, we adapt and extend
this idea to the domain of materials science research. We present and release a
new dataset of 50 manually annotated research articles. The dataset spans seven
sub-topics and is annotated with a materials-science focused multi-label
annotation scheme for AZ. We detail corpus statistics and demonstrate high
inter-annotator agreement. Our computational experiments show that using
domain-specific pre-trained transformer-based text encoders is key to high
classification performance. We also find that AZ categories from existing
datasets in other domains are transferable to varying degrees.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 14:55:18 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Schrader",
"Timo Pierre",
""
],
[
"Bürkle",
"Teresa",
""
],
[
"Henning",
"Sophie",
""
],
[
"Tan",
"Sherry",
""
],
[
"Finco",
"Matteo",
""
],
[
"Grünewald",
"Stefan",
""
],
[
"Indrikova",
"Maira",
""
],
[
"Hildebrand",
"Felix",
""
],
[
"Friedrich",
"Annemarie",
""
]
] |
new_dataset
| 0.997867 |
2307.02383
|
Brian Bittner
|
Brian A. Bittner, Jason Reid, Kevin C. Wolfe
|
Floating-base manipulation on zero-perturbation manifolds
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
To achieve high-dexterity motion planning on floating-base systems, the base
dynamics induced by arm motions must be treated carefully. In general, it is a
significant challenge to establish a fixed-base frame during tasking due to
forces and torques on the base that arise directly from arm motions (e.g. arm
drag in low Reynolds environments and arm momentum in high Reynolds
environments). While thrusters can in theory be used to regulate the vehicle
pose, they are often insufficient to establish a stable pose for precise tasking,
whether due to underactuation, modeling inaccuracy, suboptimal
control parameters, or insufficient power. We propose a solution that asks the
thrusters to do less high bandwidth perturbation correction by planning arm
motions that induce zero perturbation on the base. We are able to cast our
motion planner as a nonholonomic rapidly-exploring random tree (RRT) by
representing the floating-base dynamics as Pfaffian constraints on joint
velocity. These constraints guide the manipulators to move on zero-perturbation
manifolds (which inhabit a subspace of the tangent space of the internal
configuration space). To invoke this representation (termed a
\textit{perturbation map}) we assume the body velocity (perturbation) of the
base to be a joint-defined linear mapping of joint velocity and describe
situations where this assumption is realistic (including underwater, aerial,
and orbital environments). The core insight of this work is that when
perturbation of the floating-base has affine structure with respect to joint
velocity, it provides the system a class of kinematic reduction that permits
the use of sample-based motion planners (specifically a nonholonomic RRT). We
show that this allows rapid, exploration-geared motion planning for high degree
of freedom systems in obstacle rich environments, even on floating-base systems
with nontrivial dynamics.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 15:50:50 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Bittner",
"Brian A.",
""
],
[
"Reid",
"Jason",
""
],
[
"Wolfe",
"Kevin C.",
""
]
] |
new_dataset
| 0.991633 |
2307.02413
|
Filippos Christou
|
Filippos Christou
|
MINDFul.jl: A Framework for Intent-driven Multi-Domain Network
coordination
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network coordination across multiple domains is a complex task requiring
seamless communication between network entities. Network operators aim to
minimize costs while meeting the requirements of user requests. Such
efforts are highly challenging in decentralized environments with diverse
network operators, where only partial knowledge of the complete network is
available. Intent-driven multi-domain coordination offers various benefits,
some inherent to Intent-Based Networking (IBN) and others stemming from the
standardization of the Northbound Interface (NBI). As standardization is still
missing, there has not been a substantial initiative to develop tools that
leverage this paradigm. MINDFul.jl is a Julia library that fills this gap and
provides the means to accelerate research in this area, both at the
architectural and the algorithmic level. It provides a stateful, modular
representation of common metro/core IP-optical network equipment as well as the
common intent operations. Finally, it facilitates event-based simulations with
a hackable interface and offers visualization support.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 06:53:24 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Christou",
"Filippos",
""
]
] |
new_dataset
| 0.969547 |
2307.02416
|
Satyajit Ghosh
|
Satyajit Ghosh and Mousumi Dutta
|
Indriya: Building a Secure and Transparent Organ Donation System with
Hyperledger Fabric
|
13 pages, 4 figures, 4 tables
| null |
10.36227/techrxiv.22225999.v1
| null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent technological advancements have led to the development of new methods
for managing organ donation systems, which aim to overcome the limitations of
traditional centralized systems. To achieve increased transparency, security,
and efficiency in the organ donation process, blockchain technology is being
proposed as a replacement for these centralized systems. However, most previous
works on organ donation systems have focused on using Ethereum-based blockchain
solutions, which offer limited control, a fixed set of consensus protocols, and
no support for concurrent executions. In contrast, our work has utilized the
Hyperledger Fabric framework to develop a network model of the organ donation
system. We have designed and deployed a prototype system with smart contracts
using Amazon Managed Blockchain Service. Additionally, we have built a client
application that uses the Fabric SDK to interact with the network and perform
various actions. To evaluate the performance of our system, we conducted
extensive testing using the Hyperledger Caliper benchmarking tool. In our test
bench, the system achieved a peak actual send rate of 389.1 transactions per
second (TPS) for creating new records and 508.4 TPS for reading records. At a
send rate of 800 TPS, the system took an average of 12.16 seconds to serve a
request for creating a record and an average of 3.71 seconds to serve a request
for reading a record. Future work is required to extend the functionalities of
the system and identify potential endorsers and managers for this type of
controlled blockchain network.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 15:03:17 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Ghosh",
"Satyajit",
""
],
[
"Dutta",
"Mousumi",
""
]
] |
new_dataset
| 0.999226 |
2307.02429
|
Spyridon Mastorakis
|
Md Washik Al Azad and Hasniuj Zahan and Sifat Ut Taki and Spyridon
Mastorakis
|
DarkHorse: A UDP-based Framework to Improve the Latency of Tor Onion
Services
|
This paper has been accepted for publication by the 48th IEEE
Conference on Local Computer Networks (LCN)
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tor is the most popular anonymous communication overlay network which hides
clients' identities from servers by passing packets through multiple relays. To
provide anonymity to both clients and servers, Tor onion services were
introduced by increasing the number of relays between a client and a server.
Because of the limited bandwidth of Tor relays, large numbers of users, and
multiple layers of encryption at relays, onion services suffer from high
end-to-end latency and low data transfer rates, which degrade user experiences,
making onion services unsuitable for latency-sensitive applications. In this
paper, we present a UDP-based framework, called DarkHorse, that improves the
end-to-end latency and the data transfer overhead of Tor onion services by
exploiting the connectionless nature of UDP. Our evaluation results demonstrate
that DarkHorse is up to 3.62x faster than regular TCP-based Tor onion services
and reduces the Tor network overhead by up to 47%.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 16:51:54 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Azad",
"Md Washik Al",
""
],
[
"Zahan",
"Hasniuj",
""
],
[
"Taki",
"Sifat Ut",
""
],
[
"Mastorakis",
"Spyridon",
""
]
] |
new_dataset
| 0.999493 |
2307.02446
|
Muhammad Anis Al Hilmi
|
Alifia Puspaningrum, Muhammad Anis Al Hilmi, Darsih, Muhamad
Mustamiin, Maulana Ilham Ginanjar
|
Vulnerable Source Code Detection using SonarCloud Code Analysis
|
Paper entitled "#1570844450 ('Vulnerable Source Code Detection using
SonarCloud Code Analysis')" is ACCEPTED as an oral or video presentation in
the 5th International Conference on Applied Science Technology (ICAST-2022)
https://icast.isas.or.id/2022/
| null | null | null |
cs.CY cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In the Software Development Life Cycle (SDLC), security vulnerabilities are among the defects introduced during the construction stage. Failing to detect such defects before the product is released to the market causes higher repair costs for the company, damages its reputation, violates user privacy, and can leave the application with unrepairable issues. Vulnerability detection helps reduce the number of false alerts so that the limited testing effort can be focused on potentially vulnerable files. UMKM Masa Kini (UMI) is a Point of Sales application for selling products of Micro, Small, and Medium Enterprises (UMKM). Therefore, in the current work, we analyze the suitability of code-analysis metrics for creating Machine Learning based software vulnerability detectors for the UMI application. The code is analyzed using a commercial tool, SonarCloud. Experimental results show that 3,285 vulnerable rules are detected.
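For readers who wish to retrieve a similar vulnerability count programmatically, a rough sketch follows. It assumes SonarCloud's standard `api/issues/search` endpoint; the project key and token are placeholders, and parameter names should be verified against the current SonarCloud Web API documentation.

```python
import requests

# Hypothetical project key and token; replace with your own.
PROJECT_KEY = "my-org_umi-pos"
TOKEN = "YOUR_SONARCLOUD_TOKEN"

resp = requests.get(
    "https://sonarcloud.io/api/issues/search",
    params={"componentKeys": PROJECT_KEY, "types": "VULNERABILITY", "ps": 1},
    auth=(TOKEN, ""),  # token as username, empty password
)
resp.raise_for_status()
print("vulnerabilities reported:", resp.json()["total"])
```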
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 17:15:15 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Puspaningrum",
"Alifia",
""
],
[
"Hilmi",
"Muhammad Anis Al",
""
],
[
"Darsih",
"",
""
],
[
"Mustamiin",
"Muhamad",
""
],
[
"Ginanjar",
"Maulana Ilham",
""
]
] |
new_dataset
| 0.998194 |
2307.02465
|
Marc Ru{\ss}wurm
|
Marc Ru{\ss}wurm, Sushen Jilla Venkatesa, Devis Tuia
|
Large-scale Detection of Marine Debris in Coastal Areas with Sentinel-2
|
in review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting and quantifying marine pollution and macro-plastics is an increasingly pressing ecological issue that directly impacts ecosystems and human health. Efforts to quantify marine pollution are often conducted with sparse
and expensive beach surveys, which are difficult to conduct on a large scale.
Here, remote sensing can provide reliable estimates of plastic pollution by
regularly monitoring and detecting marine debris in coastal areas.
Medium-resolution satellite data of coastal areas is readily available and can
be leveraged to detect aggregations of marine debris containing plastic litter.
In this work, we present a detector for marine debris built on a deep
segmentation model that outputs a probability for marine debris at the pixel
level. We train this detector with a combination of annotated datasets of
marine debris and evaluate it on specifically selected test sites where it is
highly probable that plastic pollution is present in the detected marine
debris. We demonstrate quantitatively and qualitatively that a deep learning model trained on this dataset, assembled from multiple sources, outperforms existing detection models trained on previous datasets by a large margin. Our
experiments show, consistent with the principles of data-centric AI, that this
performance is due to our particular dataset design with extensive sampling of
negative examples and label refinements rather than depending on the particular
deep learning model. We hope to accelerate advances in the large-scale
automated detection of marine debris, which is a step towards quantifying and
monitoring marine litter with remote sensing at global scales, and we release the model weights and training source code at https://github.com/marccoru/marinedebrisdetector
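Because the released detector outputs a per-pixel marine-debris probability, turning a prediction into discrete debris candidates is essentially thresholding plus connected-component grouping. A minimal sketch of that post-processing step, using a random array as a stand-in for a real model prediction:

```python
import numpy as np
from scipy import ndimage

def debris_objects(prob_map: np.ndarray, threshold: float = 0.5):
    """Threshold a per-pixel probability map and group pixels into objects."""
    mask = prob_map > threshold
    labels, n_objects = ndimage.label(mask)  # 4-connectivity by default
    sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    return labels, sizes

prob = np.random.rand(256, 256)  # stand-in for a model prediction
labels, sizes = debris_objects(prob, threshold=0.9)
print(f"{len(sizes)} candidate debris objects, largest {int(sizes.max())} px")
```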
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 17:38:48 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Rußwurm",
"Marc",
""
],
[
"Venkatesa",
"Sushen Jilla",
""
],
[
"Tuia",
"Devis",
""
]
] |
new_dataset
| 0.999215 |
2307.02480
|
Hari Gupta
|
Hari Prabhat Gupta and Rahul Mishra
|
A Dataset of Inertial Measurement Units for Handwritten English
Alphabets
|
10 pages, 12 figures
| null |
10.21227/av6q-jj17
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents an end-to-end methodology for collecting datasets to
recognize handwritten English alphabets by utilizing Inertial Measurement Units
(IMUs) and leveraging the diversity present in the Indian writing style. The
IMUs are utilized to capture the dynamic movement patterns associated with
handwriting, enabling more accurate recognition of alphabets. The Indian
context introduces various challenges due to the heterogeneity in writing
styles across different regions and languages. By leveraging this diversity,
the collected dataset and the collection system aim to achieve higher
recognition accuracy. Some preliminary experimental results demonstrate the
effectiveness of the dataset in accurately recognizing handwritten English
alphabets in the Indian context. This research contributes to the field of pattern recognition, can be extended further, and offers valuable insights for developing improved handwriting recognition systems, particularly in diverse linguistic and cultural contexts.
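A typical first step when training classifiers on such recordings is to segment the six-axis IMU stream into fixed-length windows. The sketch below is generic; the window length, hop size, and channel layout are illustrative choices, not parameters taken from the dataset.

```python
import numpy as np

def sliding_windows(signal: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Slice a (T, channels) IMU stream into overlapping (win, channels) windows."""
    starts = range(0, signal.shape[0] - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

stream = np.random.randn(1000, 6)   # 6 channels: 3-axis accel + 3-axis gyro
windows = sliding_windows(stream, win=200, hop=100)
print(windows.shape)                # (9, 200, 6) feature windows for a classifier
```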
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 17:54:36 GMT"
}
] | 2023-07-06T00:00:00 |
[
[
"Gupta",
"Hari Prabhat",
""
],
[
"Mishra",
"Rahul",
""
]
] |
new_dataset
| 0.998813 |
2111.11843
|
Liheng Bian
|
Lintao Peng, Chunli Zhu, Liheng Bian
|
U-shape Transformer for Underwater Image Enhancement
|
under review
| null |
10.1109/TIP.2023.3276332
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The light absorption and scattering of underwater impurities lead to poor
underwater imaging quality. The existing data-driven based underwater image
enhancement (UIE) techniques suffer from the lack of a large-scale dataset
containing various underwater scenes and high-fidelity reference images.
Besides, the inconsistent attenuation in different color channels and space
areas is not fully considered for boosted enhancement. In this work, we
constructed a large-scale underwater image (LSUI) dataset including 5004 image
pairs, and reported a U-shape Transformer network where the transformer model
is for the first time introduced to the UIE task. The U-shape Transformer is
integrated with a channel-wise multi-scale feature fusion transformer (CMSFFT)
module and a spatial-wise global feature modeling transformer (SGFMT) module,
which reinforce the network's attention to the color channels and space areas
with more serious attenuation. Meanwhile, in order to further improve the
contrast and saturation, a novel loss function combining RGB, LAB and LCH color
spaces is designed following the human vision principle. The extensive
experiments on available datasets validate the state-of-the-art performance of
the reported technique, with a margin of more than 2 dB.
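As an illustration of a loss computed in multiple color spaces, here is a rough PyTorch sketch of an RGB-plus-LAB term, using the kornia library for the differentiable color conversion. The LCH component and the paper's weighting scheme are omitted, and the weights shown are placeholders rather than the authors' values.

```python
import torch
import kornia.color as kc

def multi_space_loss(pred_rgb: torch.Tensor, gt_rgb: torch.Tensor,
                     w_rgb: float = 1.0, w_lab: float = 0.5) -> torch.Tensor:
    """L1 distance in RGB plus L1 distance in LAB (inputs in [0, 1], NCHW)."""
    loss_rgb = torch.nn.functional.l1_loss(pred_rgb, gt_rgb)
    loss_lab = torch.nn.functional.l1_loss(kc.rgb_to_lab(pred_rgb),
                                           kc.rgb_to_lab(gt_rgb))
    return w_rgb * loss_rgb + w_lab * loss_lab

pred = torch.rand(2, 3, 64, 64, requires_grad=True)
gt = torch.rand(2, 3, 64, 64)
multi_space_loss(pred, gt).backward()
```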
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 13:15:56 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Nov 2021 04:49:34 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Dec 2021 05:44:54 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Mar 2022 11:07:28 GMT"
},
{
"version": "v5",
"created": "Wed, 4 May 2022 14:19:46 GMT"
},
{
"version": "v6",
"created": "Sun, 12 Jun 2022 11:45:40 GMT"
}
] | 2023-07-05T00:00:00 |
[
[
"Peng",
"Lintao",
""
],
[
"Zhu",
"Chunli",
""
],
[
"Bian",
"Liheng",
""
]
] |
new_dataset
| 0.973786 |
2206.06427
|
Zhenyu Wu
|
Priya Narayanan, Xin Hu, Zhenyu Wu, Matthew D Thielke, John G Rogers,
Andre V Harrison, John A D'Agostino, James D Brown, Long P Quang, James R
Uplinger, Heesung Kwon, Zhangyang Wang
|
A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels
and Ground Truth
|
This paper has been ACCEPTED for publication as a REGULAR paper in
the IEEE Transactions on Image Processing (TIP)
| null |
10.1109/TIP.2023.3245994
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Imagery collected from outdoor visual environments is often degraded due to
the presence of dense smoke or haze. A key challenge for research in scene
understanding in these degraded visual environments (DVE) is the lack of
representative benchmark datasets. These datasets are required to evaluate
state-of-the-art vision algorithms (e.g., detection and tracking) in degraded
settings. In this paper, we address some of these limitations by introducing
the first realistic hazy image benchmark, from both aerial and ground view,
with paired haze-free images, and in-situ haze density measurements. This
dataset was produced in a controlled environment with professional smoke
generating machines that covered the entire scene, and consists of images
captured from the perspective of both an unmanned aerial vehicle (UAV) and an
unmanned ground vehicle (UGV). We also evaluate a set of representative
state-of-the-art dehazing approaches as well as object detectors on the
dataset. The full dataset presented in this paper, including the ground truth
object classification bounding boxes and haze density measurements, is provided
for the community to evaluate their algorithms at:
https://a2i2-archangel.vision. A subset of this dataset has been used for the
``Object Detection in Haze'' Track of CVPR UG2 2022 challenge at
http://cvpr2022.ug2challenge.org/track1.html.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 19:14:06 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2022 01:09:44 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Feb 2023 20:36:36 GMT"
}
] | 2023-07-05T00:00:00 |
[
[
"Narayanan",
"Priya",
""
],
[
"Hu",
"Xin",
""
],
[
"Wu",
"Zhenyu",
""
],
[
"Thielke",
"Matthew D",
""
],
[
"Rogers",
"John G",
""
],
[
"Harrison",
"Andre V",
""
],
[
"D'Agostino",
"John A",
""
],
[
"Brown",
"James D",
""
],
[
"Quang",
"Long P",
""
],
[
"Uplinger",
"James R",
""
],
[
"Kwon",
"Heesung",
""
],
[
"Wang",
"Zhangyang",
""
]
] |
new_dataset
| 0.998713 |
1608.01712
|
Victor Yodaiken
|
Victor Yodaiken
|
State machines for large scale computer software and systems
|
another minor typo fix. Hopefully, a stable version
| null | null | null |
cs.FL cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The behavior and architecture of large scale discrete state systems found in
computer software and hardware can be specified and analyzed using a particular class of primitive recursive functions, referred to here as sequence maps. This paper begins with an illustration of the utility of the method via a number of small examples and then via a longer specification and verification of the Paxos distributed consensus algorithm.
The sequence maps are then shown to provide an alternative representation of
deterministic state machines and algebraic products of state machines.
Distributed and composite systems, parallel and concurrent computation, and
real-time behavior can all be specified naturally with these methods - which
require neither extensions to the classical state machine model nor any
axiomatic methods or other techniques from formal methods. Compared to state
diagrams or tables or the standard set-tuple-transition-maps, sequence maps are
more concise and better suited to describing the behavior and compositional
architecture of computer systems. Staying strictly within the boundaries of
classical deterministic state machines anchors the methods to the algebraic
structures of automata and makes the specifications faithful to engineering
practice.
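The sequence-map view can be made concrete in a few lines: a deterministic (Moore) machine becomes a function from finite input sequences to outputs, and the algebraic product of machines composes these functions pointwise. The encoding below is our own illustration, not the paper's notation.

```python
from functools import reduce

def sequence_map(delta, initial, output):
    """Turn a Moore machine (transition delta, initial state, output map)
    into a function from finite input sequences to outputs."""
    return lambda xs: output(reduce(delta, xs, initial))

# Two tiny example machines over binary inputs.
count = sequence_map(lambda s, x: min(3, s + x), 0, lambda s: s)   # saturating counter
parity = sequence_map(lambda s, x: (s + x) % 2, 0, lambda s: s)    # parity bit

def product(f, g):
    """Algebraic product of two machines, expressed on sequence maps."""
    return lambda xs: (f(xs), g(xs))

both = product(count, parity)
print(both([1, 1, 0, 1]))  # (3, 1): saturated count and odd parity
```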
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2016 22:16:50 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2020 17:29:22 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Sep 2020 05:37:25 GMT"
},
{
"version": "v4",
"created": "Wed, 4 Nov 2020 01:18:12 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Apr 2022 21:03:15 GMT"
},
{
"version": "v6",
"created": "Mon, 10 Apr 2023 01:11:39 GMT"
},
{
"version": "v7",
"created": "Tue, 16 May 2023 20:31:35 GMT"
},
{
"version": "v8",
"created": "Wed, 7 Jun 2023 16:32:56 GMT"
},
{
"version": "v9",
"created": "Sun, 2 Jul 2023 10:31:16 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Yodaiken",
"Victor",
""
]
] |
new_dataset
| 0.992116 |
2105.01427
|
Yihan Zhang
|
Nikita Polyanskii, Yihan Zhang
|
Codes for the Z-channel
| null | null | null | null |
cs.IT cs.CC math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper is a collection of results on combinatorial properties of codes
for the Z-channel. A Z-channel with error fraction $\tau$ takes as input a
length-$n$ binary codeword and injects in an adversarial manner up to $n\tau$
asymmetric errors, i.e., errors that only zero out bits but do not flip $0$'s
to $1$'s. It is known that the largest $(L-1)$-list-decodable code for the
Z-channel with error fraction $\tau$ has exponential size (in $n$) if $\tau$ is
less than a critical value that we call the $(L-1)$-list-decoding Plotkin point
and has constant size if $\tau$ is larger than the threshold. The
$(L-1)$-list-decoding Plotkin point is known to be $ L^{-\frac{1}{L-1}} -
L^{-\frac{L}{L-1}} $, which equals $1/4$ for unique-decoding with $ L-1=1 $. In
this paper, we derive various results for the size of the largest codes above
and below the list-decoding Plotkin point. In particular, we show that the
largest $(L-1)$-list-decodable code $\epsilon$-above the Plotkin point, for any given sufficiently small positive constant $\epsilon>0$, has size
$\Theta_L(\epsilon^{-3/2})$ for any $L-1\ge1$. We also devise upper and lower
bounds on the exponential size of codes below the list-decoding Plotkin point.
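The confusability structure underlying these bounds is easy to check computationally: under a Z-channel, an output z is reachable from a codeword x exactly when z <= x bitwise, so two codewords share a reachable output iff zeroing each down to their bitwise AND costs at most t bits. A brute-force unique-decodability check along these lines:

```python
from itertools import combinations

def confusable(x: int, y: int, t: int) -> bool:
    """Under a Z-channel with at most t 1->0 errors per word, x and y share a
    reachable output iff zeroing down to x & y costs each at most t bits."""
    common = x & y
    return (bin(x).count("1") - bin(common).count("1") <= t and
            bin(y).count("1") - bin(common).count("1") <= t)

def uniquely_decodable(code: list[int], t: int) -> bool:
    return not any(confusable(x, y, t) for x, y in combinations(code, 2))

# Length-4 example with constant-weight codewords.
code = [0b1100, 0b0011, 0b1010]
print(uniquely_decodable(code, t=1))  # False: 1100 and 1010 meet at 1000
```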
|
[
{
"version": "v1",
"created": "Tue, 4 May 2021 11:31:47 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2022 09:51:43 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Jul 2023 14:08:21 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Polyanskii",
"Nikita",
""
],
[
"Zhang",
"Yihan",
""
]
] |
new_dataset
| 0.988135 |
2107.09889
|
Wenxuan Liu
|
Wenxuan Liu, Tianyao He, Chen Gong, Ning Zhang, Hua Yang, Junchi Yan
|
Fine-Grained Music Plagiarism Detection: Revealing Plagiarists through
Bipartite Graph Matching and a Comprehensive Large-Scale Dataset
| null | null | null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Music plagiarism detection is gaining more and more attention due to the
popularity of music production and society's emphasis on intellectual property.
We aim to find fine-grained plagiarism in music pairs since conventional
methods are coarse-grained and cannot match real-life scenarios. Considering
that there is no sizeable dataset designed for the music plagiarism task, we
establish a large-scale simulated dataset, named Music Plagiarism Detection
Dataset (MPD-Set) under the guidance and expertise of renowned researchers from
national-level professional institutions in the field of music. MPD-Set
considers diverse music plagiarism cases found in real life from the melodic,
rhythmic, and tonal levels respectively. Further, we establish a Real-life
Dataset for evaluation, where all plagiarism pairs are real cases. To detect
the fine-grained plagiarism pairs effectively, we propose a graph-based method
called Bipartite Melody Matching Detector (BMM-Det), which formulates the problem as a maximum matching problem in a bipartite graph. Experimental results
on both the simulated and Real-life Datasets demonstrate that BMM-Det
outperforms the existing plagiarism detection methods, and is robust to common
plagiarism cases like transpositions, pitch shifts, duration variance, and
melody change. Datasets and source code are open-sourced at
https://github.com/xuan301/BMMDet_MPDSet.
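The bipartite-matching core of such a detector can be prototyped directly with the Hungarian algorithm. The sketch below matches two note sequences under a simple pitch-closeness score; the similarity function is a placeholder, far simpler than the melody representation used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def melody_similarity(a: list[int], b: list[int]) -> float:
    """Max-weight bipartite matching between note sequences a and b
    (MIDI pitches), scored by closeness in pitch."""
    sim = 1.0 / (1.0 + np.abs(np.subtract.outer(a, b)))  # |a| x |b| weights
    rows, cols = linear_sum_assignment(sim, maximize=True)
    return sim[rows, cols].sum() / max(len(a), len(b))

original = [60, 62, 64, 65, 67]
suspect  = [62, 64, 66, 67, 69]   # transposed up a whole tone
print(f"similarity: {melody_similarity(original, suspect):.3f}")
```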
|
[
{
"version": "v1",
"created": "Wed, 21 Jul 2021 06:04:47 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jul 2023 08:28:07 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Liu",
"Wenxuan",
""
],
[
"He",
"Tianyao",
""
],
[
"Gong",
"Chen",
""
],
[
"Zhang",
"Ning",
""
],
[
"Yang",
"Hua",
""
],
[
"Yan",
"Junchi",
""
]
] |
new_dataset
| 0.999714 |
2107.11298
|
Giuseppe Vecchio
|
Giuseppe Vecchio, Simone Palazzo, Concetto Spampinato
|
SurfaceNet: Adversarial SVBRDF Estimation from a Single Image
| null | null |
10.1109/ICCV48922.2021
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present SurfaceNet, an approach for estimating
spatially-varying bidirectional reflectance distribution function (SVBRDF)
material properties from a single image. We pose the problem as an image
translation task and propose a novel patch-based generative adversarial network
(GAN) that is able to produce high-quality, high-resolution surface reflectance
maps. The employment of the GAN paradigm has a twofold objective: 1) allowing
the model to recover finer details than standard translation models; 2)
reducing the domain shift between synthetic and real data distributions in an
unsupervised way. An extensive evaluation, carried out on a public benchmark of
synthetic and real images under different illumination conditions, shows that
SurfaceNet largely outperforms existing SVBRDF reconstruction methods, both
quantitatively and qualitatively. Furthermore, SurfaceNet exhibits a remarkable
ability in generating high-quality maps from real samples without any
supervision at training time.
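The patch-based adversary scores overlapping image patches rather than whole images, which is what lets it push for fine local detail. Below is a minimal PatchGAN-style discriminator sketch in PyTorch; the layer widths and depth are illustrative, not SurfaceNet's actual configuration.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Outputs a grid of real/fake scores, one per receptive-field patch."""
    def __init__(self, in_ch: int = 3, width: int = 64):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(nn.Conv2d(cin, cout, 4, stride, 1),
                                 nn.LeakyReLU(0.2, inplace=True))
        self.net = nn.Sequential(
            block(in_ch, width, 2),
            block(width, width * 2, 2),
            block(width * 2, width * 4, 2),
            nn.Conv2d(width * 4, 1, 4, 1, 1),  # 1-channel patch score map
        )

    def forward(self, x):
        return self.net(x)

scores = PatchDiscriminator()(torch.randn(1, 3, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 31, 31]) patch scores
```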
|
[
{
"version": "v1",
"created": "Fri, 23 Jul 2021 15:18:54 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Vecchio",
"Giuseppe",
""
],
[
"Palazzo",
"Simone",
""
],
[
"Spampinato",
"Concetto",
""
]
] |
new_dataset
| 0.98538 |
2108.09388
|
Bayan Al-Nahhas
|
Bayan Al-Nahhas, Qurrat-Ul-Ain Nadeem, and Anas Chaaban
|
Distributed Reconfigurable Intelligent Surfaces Assisted Wireless
Communication: Asymptotic Analysis under Imperfect CSI
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This work studies the net sum-rate performance of a distributed
reconfigurable intelligent surfaces (RISs)-assisted multi-user
multiple-input-single-output (MISO) downlink communication system under
imperfect instantaneous-channel state information (I-CSI) to implement
precoding at the base station (BS) and statistical-CSI (S-CSI) to design the
RISs phase-shifts. Two channel estimation (CE) protocols are considered for
I-CSI acquisition: (i) a full CE protocol that estimates all direct and
RISs-assisted channels over multiple training sub-phases, and (ii) a
low-overhead direct estimation (DE) protocol that estimates the end-to-end
channel in a single sub-phase. We derive the deterministic equivalents of
signal-to-interference-plus-noise ratio (SINR) and ergodic net sum-rate under
Rayleigh and Rician fading and both CE protocols, for given RISs phase-shifts,
which are then optimized based on S-CSI. Simulation results reveal that the
low-complexity DE protocol yields better net sum-rate than the full CE protocol
when used to obtain CSI for precoding. A benchmark full I-CSI based RISs design
is also outlined and shown to yield higher SINR but lower net sum-rate than the
S-CSI based RISs design due to the large overhead associated with full I-CSI
acquisition. Therefore, the proposed DE-S-CSI based design for precoding and
reflect beamforming achieves high net sum-rate with low complexity, overhead
and power consumption.
|
[
{
"version": "v1",
"created": "Fri, 20 Aug 2021 22:19:56 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 21:16:33 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Al-Nahhas",
"Bayan",
""
],
[
"Nadeem",
"Qurrat-Ul-Ain",
""
],
[
"Chaaban",
"Anas",
""
]
] |
new_dataset
| 0.994989 |
2205.00395
|
Shabnam Behzad
|
Shabnam Behzad, Keisuke Sakaguchi, Nathan Schneider, Amir Zeldes
|
ELQA: A Corpus of Metalinguistic Questions and Answers about English
|
Accepted to ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present ELQA, a corpus of questions and answers in and about the English
language. Collected from two online forums, the >70k questions (from English
learners and others) cover wide-ranging topics including grammar, meaning,
fluency, and etymology. The answers include descriptions of general properties
of English vocabulary and grammar as well as explanations about specific
(correct and incorrect) usage examples. Unlike most NLP datasets, this corpus
is metalinguistic -- it consists of language about language. As such, it can
facilitate investigations of the metalinguistic capabilities of NLU models, as
well as educational applications in the language learning domain. To study
this, we define a free-form question answering task on our dataset and conduct
evaluations on multiple LLMs (Large Language Models) to analyze their capacity
to generate metalinguistic answers.
|
[
{
"version": "v1",
"created": "Sun, 1 May 2022 04:29:50 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 17:42:36 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Behzad",
"Shabnam",
""
],
[
"Sakaguchi",
"Keisuke",
""
],
[
"Schneider",
"Nathan",
""
],
[
"Zeldes",
"Amir",
""
]
] |
new_dataset
| 0.999802 |
2206.14451
|
Yining Shi
|
Yining Shi, Jingyan Shen, Yifan Sun, Yunlong Wang, Jiaxin Li, Shiqi
Sun, Kun Jiang, Diange Yang
|
SRCN3D: Sparse R-CNN 3D for Compact Convolutional Multi-View 3D Object
Detection and Tracking
|
Accepted to Vision-centric Autonomous Driving(VCAD) Workshop at
CVPR2023, For more details refer to http://vcad.site/#/papers
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection and tracking of moving objects is an essential component in
environmental perception for autonomous driving. In the flourishing field of
multi-view 3D camera-based detectors, different transformer-based pipelines are
designed to learn queries in 3D space from 2D feature maps of perspective
views, but the dominant dense BEV query mechanism is computationally
inefficient. This paper proposes Sparse R-CNN 3D (SRCN3D), a novel two-stage
fully-sparse detector that incorporates sparse queries, sparse attention with
box-wise sampling, and sparse prediction. SRCN3D adopts a cascade structure
with the twin-track update of both a fixed number of query boxes and latent
query features. Our novel sparse feature sampling module only utilizes local 2D
region of interest (RoI) features calculated by the projection of 3D query
boxes for further box refinement, leading to a fully-convolutional and
deployment-friendly pipeline. For multi-object tracking, motion features, query
features and RoI features are comprehensively utilized in multi-hypotheses data
association. Extensive experiments on nuScenes dataset demonstrate that SRCN3D
achieves competitive performance in both 3D object detection and multi-object
tracking tasks, while also exhibiting superior efficiency compared to
transformer-based methods. Code and models are available at
https://github.com/synsin0/SRCN3D.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 07:58:39 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Oct 2022 04:05:32 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Jul 2023 01:11:12 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Shi",
"Yining",
""
],
[
"Shen",
"Jingyan",
""
],
[
"Sun",
"Yifan",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Li",
"Jiaxin",
""
],
[
"Sun",
"Shiqi",
""
],
[
"Jiang",
"Kun",
""
],
[
"Yang",
"Diange",
""
]
] |
new_dataset
| 0.990046 |
2207.01078
|
Kenneth Ooi
|
Kenneth Ooi, Zhen-Ting Ong, Karn N. Watcharasupat, Bhan Lam, Joo Young
Hong, Woon-Seng Gan
|
ARAUS: A Large-Scale Dataset and Baseline Models of Affective Responses
to Augmented Urban Soundscapes
|
[v1, v2] 25 pages, 11 figures. [v3] 33 pages, 18 figures. v3 updated
with changes made after peer review. in IEEE Transactions on Affective
Computing, 2023
|
IEEE Trans. Affect. Comput., pp. 1-17, 2023
|
10.1109/TAFFC.2023.3247914
| null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Choosing optimal maskers for existing soundscapes to effect a desired
perceptual change via soundscape augmentation is non-trivial due to extensive
varieties of maskers and a dearth of benchmark datasets with which to compare
and develop soundscape augmentation models. To address this problem, we make
publicly available the ARAUS (Affective Responses to Augmented Urban
Soundscapes) dataset, which comprises a five-fold cross-validation set and
independent test set totaling 25,440 unique subjective perceptual responses to
augmented soundscapes presented as audio-visual stimuli. Each augmented
soundscape is made by digitally adding "maskers" (bird, water, wind, traffic,
construction, or silence) to urban soundscape recordings at fixed
soundscape-to-masker ratios. Responses were then collected by asking
participants to rate how pleasant, annoying, eventful, uneventful, vibrant,
monotonous, chaotic, calm, and appropriate each augmented soundscape was, in
accordance with ISO 12913-2:2018. Participants also provided relevant
demographic information and completed standard psychological questionnaires. We
perform exploratory and statistical analysis of the responses obtained to
verify internal consistency and agreement with known results in the literature.
Finally, we demonstrate the benchmarking capability of the dataset by training
and comparing four baseline models for urban soundscape pleasantness: a
low-parameter regression model, a high-parameter convolutional neural network,
and two attention-based networks in the literature.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 17:09:09 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 08:18:28 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2023 03:24:53 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Ooi",
"Kenneth",
""
],
[
"Ong",
"Zhen-Ting",
""
],
[
"Watcharasupat",
"Karn N.",
""
],
[
"Lam",
"Bhan",
""
],
[
"Hong",
"Joo Young",
""
],
[
"Gan",
"Woon-Seng",
""
]
] |
new_dataset
| 0.999851 |
2207.10553
|
Jennifer J. Sun
|
Jennifer J. Sun, Markus Marks, Andrew Ulmer, Dipam Chakraborty, Brian
Geuther, Edward Hayes, Heng Jia, Vivek Kumar, Sebastian Oleszko, Zachary
Partridge, Milan Peelman, Alice Robie, Catherine E. Schretter, Keith
Sheppard, Chao Sun, Param Uttarwar, Julian M. Wagner, Eric Werner, Joseph
Parker, Pietro Perona, Yisong Yue, Kristin Branson, Ann Kennedy
|
MABe22: A Multi-Species Multi-Task Benchmark for Learned Representations
of Behavior
|
To appear in ICML 2023, Project website:
https://sites.google.com/view/computational-behavior/our-datasets/mabe2022-dataset
| null | null | null |
cs.LG cs.AI cs.CV cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MABe22, a large-scale, multi-agent video and trajectory
benchmark to assess the quality of learned behavior representations. This
dataset is collected from a variety of biology experiments, and includes
triplets of interacting mice (4.7 million frames video+pose tracking data, 10
million frames pose only), symbiotic beetle-ant interactions (10 million frames
video data), and groups of interacting flies (4.4 million frames of pose
tracking data). Accompanying these data, we introduce a panel of real-life
downstream analysis tasks to assess the quality of learned representations by
evaluating how well they preserve information about the experimental conditions
(e.g. strain, time of day, optogenetic stimulation) and animal behavior. We
test multiple state-of-the-art self-supervised video and trajectory
representation learning methods to demonstrate the use of our benchmark,
revealing that methods developed using human action datasets do not fully
translate to animal datasets. We hope that our benchmark and dataset encourage
a broader exploration of behavior representation learning methods across
species and settings.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 15:51:30 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 22:45:47 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Sun",
"Jennifer J.",
""
],
[
"Marks",
"Markus",
""
],
[
"Ulmer",
"Andrew",
""
],
[
"Chakraborty",
"Dipam",
""
],
[
"Geuther",
"Brian",
""
],
[
"Hayes",
"Edward",
""
],
[
"Jia",
"Heng",
""
],
[
"Kumar",
"Vivek",
""
],
[
"Oleszko",
"Sebastian",
""
],
[
"Partridge",
"Zachary",
""
],
[
"Peelman",
"Milan",
""
],
[
"Robie",
"Alice",
""
],
[
"Schretter",
"Catherine E.",
""
],
[
"Sheppard",
"Keith",
""
],
[
"Sun",
"Chao",
""
],
[
"Uttarwar",
"Param",
""
],
[
"Wagner",
"Julian M.",
""
],
[
"Werner",
"Eric",
""
],
[
"Parker",
"Joseph",
""
],
[
"Perona",
"Pietro",
""
],
[
"Yue",
"Yisong",
""
],
[
"Branson",
"Kristin",
""
],
[
"Kennedy",
"Ann",
""
]
] |
new_dataset
| 0.999844 |
2208.00329
|
Sayontan Ghosh
|
Sayontan Ghosh, Mahnaz Koupaee, Isabella Chen, Francis Ferraro,
Nathanael Chambers, Niranjan Balasubramanian
|
PASTA: A Dataset for Modeling Participant States in Narratives
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The events in a narrative are understood as a coherent whole via the
underlying states of their participants. Often, these participant states are
not explicitly mentioned, instead left to be inferred by the reader. A model
that understands narratives should likewise infer these implicit states, and
even reason about the impact of changes to these states on the narrative. To
facilitate this goal, we introduce a new crowdsourced English-language,
Participant States dataset, PASTA. This dataset contains inferable participant
states; a counterfactual perturbation to each state; and the changes to the
story that would be necessary if the counterfactual were true. We introduce
three state-based reasoning tasks that test for the ability to infer when a
state is entailed by a story, to revise a story conditioned on a counterfactual
state, and to explain the most likely state change given a revised story.
Experiments show that today's LLMs can reason about states to some degree, but
there is substantial room for improvement, especially on problems requiring access to, and the ability to reason with, diverse types of knowledge (e.g., physical, numerical, factual).
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 01:21:48 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jul 2023 22:34:52 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Ghosh",
"Sayontan",
""
],
[
"Koupaee",
"Mahnaz",
""
],
[
"Chen",
"Isabella",
""
],
[
"Ferraro",
"Francis",
""
],
[
"Chambers",
"Nathanael",
""
],
[
"Balasubramanian",
"Niranjan",
""
]
] |
new_dataset
| 0.999661 |
2208.12976
|
Kees Middelburg
|
C. A. Middelburg
|
Paraconsistent logic and query answering in inconsistent databases
|
21 pages; revision of v4, some inaccuracies removed and material
streamlined at several places. arXiv admin note: substantial text overlap
with arXiv:2303.05264
| null | null | null |
cs.DB cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper concerns the paraconsistent logic LPQ$^{\supset,\mathsf{F}}$ and
an application of it in the area of relational database theory. The notions of
a relational database, a query applicable to a relational database, and a
consistent answer to a query with respect to a possibly inconsistent relational
database are considered from the perspective of this logic. This perspective
enables among other things the definition of a consistent answer to a query
with respect to a possibly inconsistent database without resort to database
repairs. In a previous paper, LPQ$^{\supset,\mathsf{F}}$ is presented with a
sequent-style natural deduction proof system. In this paper, a sequent calculus
proof system is presented because it is common to use a sequent calculus proof
system as the basis of proof search procedures and such procedures may form the
core of algorithms for computing consistent answers to queries.
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 09:48:32 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2022 14:25:46 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Oct 2022 12:31:57 GMT"
},
{
"version": "v4",
"created": "Thu, 12 Jan 2023 15:39:56 GMT"
},
{
"version": "v5",
"created": "Sat, 11 Mar 2023 13:31:31 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Middelburg",
"C. A.",
""
]
] |
new_dataset
| 0.991162 |
2208.13068
|
Qian Li
|
Peter Kraft, Qian Li, Kostis Kaffes, Athinagoras Skiadopoulos,
Deeptaanshu Kumar, Danny Cho, Jason Li, Robert Redmond, Nathan Weckwerth,
Brian Xia, Peter Bailis, Michael Cafarella, Goetz Graefe, Jeremy Kepner,
Christos Kozyrakis, Michael Stonebraker, Lalith Suresh, Xiangyao Yu, Matei
Zaharia
|
Apiary: A DBMS-Integrated Transactional Function-as-a-Service Framework
|
14 pages, 13 figures, 3 tables. Preprint
| null | null | null |
cs.DB cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Developers increasingly use function-as-a-service (FaaS) platforms for
data-centric applications that perform low-latency and transactional operations
on data, such as for microservices or web serving. Unfortunately, existing FaaS
platforms support these applications poorly because they physically and
logically separate application logic, executed in cloud functions, from data
management, done in interactive transactions accessing remote storage. Physical separation harms performance, while logical separation makes it hard to efficiently provide transactional guarantees and fault tolerance.
This paper introduces Apiary, a novel DBMS-integrated FaaS platform for
deploying and composing fault-tolerant transactional functions. Apiary
physically co-locates and logically integrates function execution and data
management by wrapping a distributed DBMS engine and using it as a unified
runtime for function execution, data management, and operational logging, thus
providing similar or stronger transactional guarantees as comparable systems
while greatly improving performance and observability. To allow developers to
write complex stateful programs, we leverage this integration to enable
efficient and fault-tolerant function composition, building a frontend for
orchestrating workflows of functions with the guarantees that each workflow
runs to completion and each function in a workflow executes exactly once. We
evaluate Apiary against research and production FaaS platforms and show it
outperforms them by 2--68x on microservice workloads by reducing communication
overhead.
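The exactly-once guarantee can be illustrated with the classic trick of recording a function's completion in the same transaction as its side effects, so a redelivered invocation is deduplicated. The toy sketch below uses SQLite as a stand-in for the distributed DBMS and is our illustration of the idea, not Apiary's implementation.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
db.execute("CREATE TABLE func_log (exec_id TEXT PRIMARY KEY)")
db.execute("INSERT INTO accounts VALUES ('alice', 100)")

def run_once(exec_id: str, fn):
    """Execute fn's writes and the completion record in one transaction,
    so a re-delivered invocation with the same exec_id becomes a no-op."""
    try:
        with db:  # one atomic transaction
            db.execute("INSERT INTO func_log VALUES (?)", (exec_id,))
            fn(db)
    except sqlite3.IntegrityError:
        pass  # already executed: duplicate exec_id

deposit = lambda conn: conn.execute(
    "UPDATE accounts SET balance = balance + 10 WHERE id = 'alice'")

run_once("wf1-step1", deposit)
run_once("wf1-step1", deposit)  # retry is deduplicated
print(db.execute("SELECT balance FROM accounts").fetchone())  # (110,)
```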
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 18:17:53 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 20:10:44 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Kraft",
"Peter",
""
],
[
"Li",
"Qian",
""
],
[
"Kaffes",
"Kostis",
""
],
[
"Skiadopoulos",
"Athinagoras",
""
],
[
"Kumar",
"Deeptaanshu",
""
],
[
"Cho",
"Danny",
""
],
[
"Li",
"Jason",
""
],
[
"Redmond",
"Robert",
""
],
[
"Weckwerth",
"Nathan",
""
],
[
"Xia",
"Brian",
""
],
[
"Bailis",
"Peter",
""
],
[
"Cafarella",
"Michael",
""
],
[
"Graefe",
"Goetz",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Kozyrakis",
"Christos",
""
],
[
"Stonebraker",
"Michael",
""
],
[
"Suresh",
"Lalith",
""
],
[
"Yu",
"Xiangyao",
""
],
[
"Zaharia",
"Matei",
""
]
] |
new_dataset
| 0.998103 |
2210.17283
|
Mathieu Chevalley
|
Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec,
Patrick Schwab
|
CausalBench: A Large-scale Benchmark for Network Inference from
Single-cell Perturbation Data
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Causal inference is a vital aspect of multiple scientific disciplines and is
routinely applied to high-impact applications such as medicine. However,
evaluating the performance of causal inference methods in real-world
environments is challenging due to the need for observations under both
interventional and control conditions. Traditional evaluations conducted on
synthetic datasets do not reflect the performance in real-world systems. To
address this, we introduce CausalBench, a benchmark suite for evaluating
network inference methods on real-world interventional data from large-scale
single-cell perturbation experiments. CausalBench incorporates
biologically-motivated performance metrics, including new distribution-based
interventional metrics. A systematic evaluation of state-of-the-art causal
inference methods using our CausalBench suite highlights how poor scalability
of current methods limits performance. Moreover, methods that use
interventional information do not outperform those that only use observational
data, contrary to what is observed on synthetic benchmarks. Thus, CausalBench
opens new avenues in causal network inference research and provides a
principled and reliable way to track progress in leveraging real-world
interventional data.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 13:04:07 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 09:12:49 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Chevalley",
"Mathieu",
""
],
[
"Roohani",
"Yusuf",
""
],
[
"Mehrjou",
"Arash",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Schwab",
"Patrick",
""
]
] |
new_dataset
| 0.998485 |
2211.07208
|
Nadim Ghaddar
|
Nadim Ghaddar and Shouvik Ganguly and Lele Wang and Young-Han Kim
|
A Lego-Brick Approach to Coding for Network Communication
| null | null | null | null |
cs.IT cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Coding schemes for several problems in network information theory are
constructed starting from point-to-point channel codes that are designed for
symmetric channels. Given that the point-to-point codes satisfy certain
properties pertaining to the rate, the error probability, and the distribution
of decoded sequences, bounds on the performance of the coding schemes are
derived and shown to hold irrespective of other properties of the codes. In
particular, we consider the problems of lossless and lossy source coding,
Slepian--Wolf coding, Wyner--Ziv coding, Berger--Tung coding, multiple
description coding, asymmetric channel coding, Gelfand--Pinsker coding, coding
for multiple access channels, Marton coding for broadcast channels, and coding
for cloud radio access networks (C-RAN's). We show that the coding schemes can
achieve the best known inner bounds for these problems, provided that the
constituent point-to-point channel codes are rate-optimal. This would allow one
to leverage commercial off-the-shelf codes for point-to-point symmetric
channels in the practical implementation of codes over networks. Simulation
results demonstrate the gain of the proposed coding schemes compared to
existing practical solutions to these problems.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 08:53:12 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 23:01:25 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Jul 2023 20:27:18 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Ghaddar",
"Nadim",
""
],
[
"Ganguly",
"Shouvik",
""
],
[
"Wang",
"Lele",
""
],
[
"Kim",
"Young-Han",
""
]
] |
new_dataset
| 0.990987 |
2212.10180
|
Ananya B Sai
|
Ananya B. Sai, Vignesh Nagarajan, Tanay Dixit, Raj Dabre, Anoop
Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
|
IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation metrics for
Indian Languages
|
ACL 2023 long paper
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid growth of machine translation (MT) systems has necessitated
comprehensive studies to meta-evaluate evaluation metrics being used, which
enables a better selection of metrics that best reflect MT quality.
Unfortunately, most of the research focuses on high-resource languages, mainly
English, the observations for which may not always apply to other languages.
Indian languages, having over a billion speakers, are linguistically different
from English, and to date, there has not been a systematic study of evaluating
MT systems from English into Indian languages. In this paper, we fill this gap
by creating an MQM dataset consisting of 7000 fine-grained annotations,
spanning 5 Indian languages and 7 MT systems, and use it to establish
correlations between annotator scores and scores obtained using existing
automatic metrics. Our results show that pre-trained metrics, such as COMET,
have the highest correlations with annotator scores. Additionally, we find that
the metrics do not adequately capture fluency-based errors in Indian languages,
and there is a need to develop metrics focused on Indian languages. We hope
that our dataset and analysis will help promote further research in this area.
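The meta-evaluation itself reduces to correlating automatic metric scores with the MQM annotator scores per segment. A minimal sketch of that computation, with made-up numbers standing in for the released annotations:

```python
from scipy.stats import kendalltau, pearsonr

# Stand-in scores for a handful of segments (not from the dataset).
annotator_scores = [0.2, 0.5, 0.7, 0.9, 0.4, 0.8]
metric_scores    = [0.3, 0.4, 0.8, 0.85, 0.5, 0.7]

tau, tau_p = kendalltau(annotator_scores, metric_scores)
r, r_p = pearsonr(annotator_scores, metric_scores)
print(f"Kendall tau = {tau:.3f} (p={tau_p:.3f}), Pearson r = {r:.3f} (p={r_p:.3f})")
```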
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 11:37:22 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 14:26:38 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Sai",
"Ananya B.",
""
],
[
"Nagarajan",
"Vignesh",
""
],
[
"Dixit",
"Tanay",
""
],
[
"Dabre",
"Raj",
""
],
[
"Kunchukuttan",
"Anoop",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Khapra",
"Mitesh M.",
""
]
] |
new_dataset
| 0.999854 |
2303.03170
|
Rasmus M{\o}gelberg
|
Patrick Bahr and Rasmus Ejlers M{\o}gelberg
|
Asynchronous Modal FRP
|
35 pages
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past decade, a number of languages for functional reactive
programming (FRP) have been suggested, which use modal types to ensure
properties like causality, productivity and lack of space leaks. So far, almost
all of these languages have included a modal operator for delay on a global
clock. For some applications, however, the notion of global clock is unnatural
and leads to leaky abstractions as well as inefficient implementations. While
modal languages without a global clock have been proposed, no operational properties have yet been proved about them.
This paper proposes Async RaTT, a new modal language for asynchronous FRP,
equipped with an operational semantics mapping complete programs to machines
that take asynchronous input signals and produce output signals. The main
novelty of Async RaTT is a new modality for asynchronous delay, allowing each
output channel to be associated at runtime with the set of input channels it
depends on, thus causing the machine to only compute new output when necessary.
We prove a series of operational properties including causality, productivity
and lack of space leaks. We also show that, although the set of input channels
associated with an output channel can change during execution, upper bounds on
these can be determined statically by the type system.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 14:34:06 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 14:11:21 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Bahr",
"Patrick",
""
],
[
"Møgelberg",
"Rasmus Ejlers",
""
]
] |
new_dataset
| 0.998342 |
2303.03888
|
Heitor Ferreira Gonzaga
|
Heitor Ferreira Gonzaga
|
A Juridicidade e a Regulamenta\c{c}\~ao dos Dark Patterns
|
in Portuguese language. arXiv admin note: text overlap with
arXiv:2101.04843 by other authors
| null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The evolution of audiovisual computer interfaces was an important milestone for the popularization of the internet, without which it is impossible to conceive the use of this technology in modern society. However, the progress of these interfaces has not taken exclusively beneficial paths for humanity. From the beginning of the 21st century onwards, an increase in interface design patterns was observed that, instead of facilitating navigation, harmed users or restricted their decision-making capabilities, earning them the name of Dark Patterns. In view of this, the present work aims to address whether Dark Patterns are legal or illegal in the face of Brazilian data protection and consumer law, verifying, in the absence of specific norms on Dark Patterns, the best way to regulate them. The research method employed is qualitative, analyzing research, court cases, norms, and national and foreign documents on Dark Patterns. After addressing their effects and legal development, and establishing a definition compatible with Brazilian law, it was concluded that, although some implementations are capable of producing damage and violating rights in some cases, the mere declaration of the illegality of these techniques is an insufficient solution, requiring further investigation of the hypotheses in which their negative impacts are less apparent or in which they are used for beneficial purposes, among other unsolved problems. Therefore, it is suggested that the regulation of Dark Patterns should occur through a system composed of formal laws and regulations of public administration bodies, through a multidisciplinary approach that is adaptable to new findings and technologies.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 12:13:13 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jul 2023 19:56:42 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Gonzaga",
"Heitor Ferreira",
""
]
] |
new_dataset
| 0.994665 |
2303.07399
|
Yining Li
|
Tao Jiang, Peng Lu, Li Zhang, Ningsheng Ma, Rui Han, Chengqi Lyu,
Yining Li, Kai Chen
|
RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies on 2D pose estimation have achieved excellent performance on
public benchmarks, yet its application in the industrial community still
suffers from heavy model parameters and high latency. In order to bridge this
gap, we empirically explore key factors in pose estimation including paradigm,
model architecture, training strategy, and deployment, and present a
high-performance real-time multi-person pose estimation framework, RTMPose,
based on MMPose. Our RTMPose-m achieves 75.8% AP on COCO with 90+ FPS on an
Intel i7-11700 CPU and 430+ FPS on an NVIDIA GTX 1660 Ti GPU, and RTMPose-l
achieves 67.0% AP on COCO-WholeBody with 130+ FPS. To further evaluate
RTMPose's capability in critical real-time applications, we also report the
performance after deploying on the mobile device. Our RTMPose-s achieves 72.2%
AP on COCO with 70+ FPS on a Snapdragon 865 chip, outperforming existing
open-source libraries. Code and models are released at
https://github.com/open-mmlab/mmpose/tree/1.x/projects/rtmpose.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 18:26:11 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 03:06:26 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Jiang",
"Tao",
""
],
[
"Lu",
"Peng",
""
],
[
"Zhang",
"Li",
""
],
[
"Ma",
"Ningsheng",
""
],
[
"Han",
"Rui",
""
],
[
"Lyu",
"Chengqi",
""
],
[
"Li",
"Yining",
""
],
[
"Chen",
"Kai",
""
]
] |
new_dataset
| 0.998546 |
2304.00962
|
Jihan Yang
|
Jihan Yang, Runyu Ding, Zhe Wang, Xiaojuan Qi
|
RegionPLC: Regional Point-Language Contrastive Learning for Open-World
3D Scene Understanding
|
project page: https://jihanyang.github.io/projects/RegionPLC
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing 3D scene understanding tasks have achieved high performance on
close-set benchmarks but fail to handle novel categories in real-world
applications. To this end, we propose a Regional Point-Language Contrastive
learning framework, namely RegionPLC, for open-world 3D scene understanding,
which equips models trained on closed-set datasets with open-vocabulary
recognition capabilities. We propose dense visual prompts to elicit
region-level visual-language knowledge from 2D foundation models via
captioning, which further allows us to build dense regional point-language
associations. Then, we design a point-discriminative contrastive learning
objective to enable point-independent learning from captions for dense scene
understanding. We conduct extensive experiments on ScanNet, ScanNet200, and
nuScenes datasets. Our RegionPLC significantly outperforms previous
base-annotated 3D open-world scene understanding approaches by an average of
11.6\% and 6.6\% for semantic and instance segmentation, respectively. It also
shows promising open-world results in the absence of any human annotation, with low
training and inference costs. Code will be released.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 13:30:04 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 04:52:17 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Yang",
"Jihan",
""
],
[
"Ding",
"Runyu",
""
],
[
"Wang",
"Zhe",
""
],
[
"Qi",
"Xiaojuan",
""
]
] |
new_dataset
| 0.999495 |
2304.03289
|
Ayal Taitler
|
Harel Yadid, Almog Algranti, Mark Levin, Ayal Taitler
|
A2D: Anywhere Anytime Drumming
| null | null | null | null |
cs.HC cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The drum kit, which has only been around for around 100 years, is a popular
instrument in many music genres such as pop, rock, and jazz. However, owning a kit is costly, both financially and in terms of space. Drums are also more difficult to move around than other instruments, as they do not fit into a single bag. We propose a no-drums approach that uses only two sticks and
a smartphone or a webcam to provide an air-drumming experience. The detection
algorithm combines deep learning tools with tracking methods for an enhanced
user experience. Based on both quantitative and qualitative testing with
humans-in-the-loop, we show that our system has zero misses for beginner level
play and negligible misses for advanced level play. Additionally, our limited
human trials suggest potential directions for future research.
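Hit detection in an air-drumming pipeline ultimately amounts to finding sharp peaks in the tracked stick-tip motion. The simplified sketch below runs peak detection on downward tip velocity; the trajectory is synthetic and the thresholds are illustrative stand-ins for the paper's combined deep-learning-and-tracking detector.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0
t = np.arange(0, 3, 1 / fps)
# Synthetic stick-tip height: smooth motion with two sharp downward strikes.
y = (0.5 + 0.05 * np.sin(2 * np.pi * t)
     - 0.3 * (np.exp(-((t - 1.0) ** 2) / 0.001)
              + np.exp(-((t - 2.2) ** 2) / 0.001)))
downward_speed = np.maximum(0, -np.gradient(y) * fps)  # keep downward motion only
hits, _ = find_peaks(downward_speed, height=2.0, distance=int(0.15 * fps))
print("hit times (s):", np.round(t[hits], 2))
```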
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 21:36:37 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 20:18:52 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Yadid",
"Harel",
""
],
[
"Algranti",
"Almog",
""
],
[
"Levin",
"Mark",
""
],
[
"Taitler",
"Ayal",
""
]
] |
new_dataset
| 0.999795 |
2304.06968
|
Sireesha Chamarthi
|
Katharina Fogelberg, Sireesha Chamarthi, Roman C. Maron, Julia
Niebling, Titus J. Brinker
|
Domain shifts in dermoscopic skin cancer datasets: Evaluation of
essential limitations for clinical translation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The limited ability of Convolutional Neural Networks to generalize to images
from previously unseen domains is a major limitation, in particular, for
safety-critical clinical tasks such as dermoscopic skin cancer classification.
In order to translate CNN-based applications into the clinic, it is essential
that they are able to adapt to domain shifts. Such new conditions can arise
through the use of different image acquisition systems or varying lighting
conditions. In dermoscopy, shifts can also occur as a change in patient age or the occurrence of rare lesion localizations (e.g., palms). These are not prominently
represented in most training datasets and can therefore lead to a decrease in
performance. In order to verify the generalizability of classification models
in real world clinical settings it is crucial to have access to data which
mimics such domain shifts. To our knowledge, no dermoscopic image dataset exists
where such domain shifts are properly described and quantified. We therefore
grouped publicly available images from ISIC archive based on their metadata
(e.g. acquisition location, lesion localization, patient age) to generate
meaningful domains. To verify that these domains are in fact distinct, we used
multiple quantification measures to estimate the presence and intensity of
domain shifts. Additionally, we analyzed the performance on these domains with
and without an unsupervised domain adaptation technique. We observed that in
most of our grouped domains, domain shifts in fact exist. Based on our results,
we believe these datasets to be helpful for testing the generalization
capabilities of dermoscopic skin cancer classifiers.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 07:38:09 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 08:20:40 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 08:40:03 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Fogelberg",
"Katharina",
""
],
[
"Chamarthi",
"Sireesha",
""
],
[
"Maron",
"Roman C.",
""
],
[
"Niebling",
"Julia",
""
],
[
"Brinker",
"Titus J.",
""
]
] |
new_dataset
| 0.987336 |
2305.10029
|
Boying Li
|
Boying Li, Danping Zou, Yuan Huang, Xinghan Niu, Ling Pei, Wenxian Yu
|
TextSLAM: Visual SLAM with Semantic Planar Text Features
|
19 pages, 23 figures. Whole project page:
https://leeby68.github.io/TextSLAM/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel visual SLAM method that integrates text objects tightly by
treating them as semantic features via fully exploring their geometric and
semantic prior. The text object is modeled as a texture-rich planar patch whose
semantic meaning is extracted and updated on the fly for better data
association. With the full exploration of locally planar characteristics and
semantic meaning of text objects, the SLAM system becomes more accurate and
robust even under challenging conditions such as image blurring, large
viewpoint changes, and significant illumination variations (day and night). We
tested our method in various scenes with the ground truth data. The results
show that integrating text features leads to a superior SLAM system
that can match images across day and night. The reconstructed semantic 3D text
map could be useful for navigation and scene understanding in robotic and mixed
reality applications. Our project page: https://github.com/SJTU-ViSYS/TextSLAM .
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 08:16:26 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 12:06:12 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Li",
"Boying",
""
],
[
"Zou",
"Danping",
""
],
[
"Huang",
"Yuan",
""
],
[
"Niu",
"Xinghan",
""
],
[
"Pei",
"Ling",
""
],
[
"Yu",
"Wenxian",
""
]
] |
new_dataset
| 0.992985 |
2305.13681
|
Weiye Zhao
|
Weiye Zhao, Rui Chen, Yifan Sun, Ruixuan Liu, Tianhao Wei, Changliu
Liu
|
GUARD: A Safe Reinforcement Learning Benchmark
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the trial-and-error nature, it is typically challenging to apply RL
algorithms to safety-critical real-world applications, such as autonomous
driving, human-robot interaction, robot manipulation, etc., where such errors
are not tolerable. Recently, safe RL (i.e. constrained RL) has emerged rapidly
in the literature, in which the agents explore the environment while satisfying
constraints. Due to the diversity of algorithms and tasks, it remains difficult
to compare existing safe RL algorithms. To fill that gap, we introduce GUARD, a
Generalized Unified SAfe Reinforcement Learning Development Benchmark. GUARD
has several advantages compared to existing benchmarks. First, GUARD is a
generalized benchmark with a wide variety of RL agents, tasks, and safety
constraint specifications. Second, GUARD comprehensively covers
state-of-the-art safe RL algorithms with self-contained implementations. Third,
GUARD is highly customizable in tasks and algorithms. We present a comparison
of state-of-the-art safe RL algorithms in various task settings using GUARD and
establish baselines that future work can build on.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 04:40:29 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 21:23:06 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jun 2023 20:28:11 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zhao",
"Weiye",
""
],
[
"Chen",
"Rui",
""
],
[
"Sun",
"Yifan",
""
],
[
"Liu",
"Ruixuan",
""
],
[
"Wei",
"Tianhao",
""
],
[
"Liu",
"Changliu",
""
]
] |
new_dataset
| 0.987874 |
2305.15690
|
Adithya Kulkarni
|
Adithya Kulkarni, Mohna Chakraborty, Yonas Sium, Sai Charishma
Valluri, Wei Le, Qi Li
|
Beryllium: Neural Search for Algorithm Implementations
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we explore the feasibility of finding algorithm
implementations from code. Successfully matching code and algorithms can help
understand unknown code, provide reference implementations, and automatically
collect data for learning-based program synthesis. To achieve the goal, we
designed a new language named p-language to specify the algorithms and a static
analyzer for the p-language to automatically extract control flow, math, and
natural language information from the algorithm descriptions. We embedded the
output of p-language (p-code) and source code in a common vector space using
self-supervised machine learning methods to match algorithm with code without
any manual annotation. We developed a tool named Beryllium, which takes pseudocode as a query and returns a list of ranked code snippets that likely match the algorithm query. Our evaluation on the Stony Brook Algorithm Repository and popular GitHub projects shows that Beryllium significantly outperformed the
state-of-the-art code search tools in both C and Java. Specifically, for 98.5%,
93.8%, and 66.2% queries, we found the algorithm implementations in the top 25,
10, and 1 ranked list, respectively. Given 87 algorithm queries, we found
implementations for 74 algorithms in the GitHub projects where we did not know
the algorithms before.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 03:49:36 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jul 2023 22:33:04 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Kulkarni",
"Adithya",
""
],
[
"Chakraborty",
"Mohna",
""
],
[
"Sium",
"Yonas",
""
],
[
"Valluri",
"Sai Charishma",
""
],
[
"Le",
"Wei",
""
],
[
"Li",
"Qi",
""
]
] |
new_dataset
| 0.996443 |
2305.16555
|
Ali Zia
|
Ali Zia, Renuka Sharma, Reza Arablouei, Greg Bishop-Hurley, Jody
McNally, Neil Bagnall, Vivien Rolland, Brano Kusy, Lars Petersson, Aaron
Ingham
|
CVB: A Video Dataset of Cattle Visual Behaviors
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing image/video datasets for cattle behavior recognition are mostly
small, lack well-defined labels, or are collected in unrealistic controlled
environments. This limits the utility of machine learning (ML) models learned
from them. Therefore, we introduce a new dataset, called Cattle Visual
Behaviors (CVB), that consists of 502 video clips, each fifteen seconds long,
captured in natural lighting conditions, and annotated with eleven visually
perceptible behaviors of grazing cattle. We use the Computer Vision Annotation
Tool (CVAT) to collect our annotations. To make the procedure more efficient,
we perform an initial detection and tracking of cattle in the videos using
appropriate pre-trained models. The results are corrected by domain experts
along with cattle behavior labeling in CVAT. The pre-hoc detection and tracking
step significantly reduces the manual annotation time and effort. Moreover, we
convert CVB to the atomic visual action (AVA) format and train and evaluate the
popular SlowFast action recognition model on it. The associated preliminary
results confirm that we can localize the cattle and recognize their frequently
occurring behaviors with confidence. By creating and sharing CVB, our aim is to
develop improved models capable of recognizing all important behaviors
accurately and to assist other researchers and practitioners in developing and
evaluating new ML models for cattle behavior classification using video data.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 00:44:11 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 07:11:17 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zia",
"Ali",
""
],
[
"Sharma",
"Renuka",
""
],
[
"Arablouei",
"Reza",
""
],
[
"Bishop-Hurley",
"Greg",
""
],
[
"McNally",
"Jody",
""
],
[
"Bagnall",
"Neil",
""
],
[
"Rolland",
"Vivien",
""
],
[
"Kusy",
"Brano",
""
],
[
"Petersson",
"Lars",
""
],
[
"Ingham",
"Aaron",
""
]
] |
new_dataset
| 0.999827 |
2305.18326
|
Liyan Kang
|
Liyan Kang, Luyang Huang, Ningxin Peng, Peihao Zhu, Zewei Sun, Shanbo
Cheng, Mingxuan Wang, Degen Huang and Jinsong Su
|
BigVideo: A Large-scale Video Subtitle Translation Dataset for
Multimodal Machine Translation
|
Accepted to ACL 2023 Findings
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a large-scale video subtitle translation dataset, BigVideo, to
facilitate the study of multi-modality machine translation. Compared with the
widely used How2 and VaTeX datasets, BigVideo is more than 10 times larger,
consisting of 4.5 million sentence pairs and 9,981 hours of videos. We also
introduce two deliberately designed test sets to verify the necessity of visual
information: Ambiguous with the presence of ambiguous words, and Unambiguous in
which the text context is self-contained for translation. To better model the
common semantics shared across texts and videos, we introduce a contrastive
learning method in the cross-modal encoder. Extensive experiments on BigVideo
show that: a) Visual information consistently improves the NMT model
in terms of BLEU, BLEURT, and COMET on both Ambiguous and Unambiguous test
sets. b) Visual information helps disambiguation, compared to the strong text
baseline on terminology-targeted scores and human evaluation. Dataset and our
implementations are available at https://github.com/DeepLearnXMU/BigVideo-VMT.
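  As a sketch of the contrastive learning idea in the cross-modal encoder (an
illustrative symmetric InfoNCE-style loss; the paper's exact encoder
architecture and loss configuration are not reproduced here), matched
text/video pairs share the same batch index:

    import torch
    import torch.nn.functional as F

    def cross_modal_contrastive_loss(text_emb, video_emb, temperature=0.07):
        # Matched text/video pairs sit at the same row index of the batch.
        t = F.normalize(text_emb, dim=-1)
        v = F.normalize(video_emb, dim=-1)
        logits = t @ v.T / temperature                   # (B, B) similarities
        labels = torch.arange(t.size(0), device=t.device)
        return (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.T, labels)) / 2

    loss = cross_modal_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))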
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 08:53:36 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 07:03:06 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jul 2023 08:10:10 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Kang",
"Liyan",
""
],
[
"Huang",
"Luyang",
""
],
[
"Peng",
"Ningxin",
""
],
[
"Zhu",
"Peihao",
""
],
[
"Sun",
"Zewei",
""
],
[
"Cheng",
"Shanbo",
""
],
[
"Wang",
"Mingxuan",
""
],
[
"Huang",
"Degen",
""
],
[
"Su",
"Jinsong",
""
]
] |
new_dataset
| 0.999778 |
2306.00942
|
Gaoyue Zhou
|
Gaoyue Zhou, Victoria Dean, Mohan Kumar Srirama, Aravind Rajeswaran,
Jyothish Pari, Kyle Hatch, Aryan Jain, Tianhe Yu, Pieter Abbeel, Lerrel
Pinto, Chelsea Finn, Abhinav Gupta
|
Train Offline, Test Online: A Real Robot Learning Benchmark
|
Accepted to ICRA 2023
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Three challenges limit the progress of robot learning research: robots are
expensive (few labs can participate), everyone uses different robots (findings
do not generalize across labs), and we lack internet-scale robotics data. We
take on these challenges via a new benchmark: Train Offline, Test Online
(TOTO). TOTO provides remote users with access to shared robotic hardware for
evaluating methods on common tasks and an open-source dataset of these tasks
for offline training. Its manipulation task suite requires challenging
generalization to unseen objects, positions, and lighting. We present initial
results on TOTO comparing five pretrained visual representations and four
offline policy learning baselines, remotely contributed by five institutions.
The real promise of TOTO, however, lies in the future: we release the benchmark
for additional submissions from any user, enabling easy, direct comparison to
several methods without the need to obtain hardware or collect data.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:42:08 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jun 2023 19:24:32 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zhou",
"Gaoyue",
""
],
[
"Dean",
"Victoria",
""
],
[
"Srirama",
"Mohan Kumar",
""
],
[
"Rajeswaran",
"Aravind",
""
],
[
"Pari",
"Jyothish",
""
],
[
"Hatch",
"Kyle",
""
],
[
"Jain",
"Aryan",
""
],
[
"Yu",
"Tianhe",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Pinto",
"Lerrel",
""
],
[
"Finn",
"Chelsea",
""
],
[
"Gupta",
"Abhinav",
""
]
] |
new_dataset
| 0.999597 |
2306.07743
|
Wolfgang Stammer
|
Lukas Helff, Wolfgang Stammer, Hikaru Shindo, Devendra Singh Dhami,
Kristian Kersting
|
V-LoL: A Diagnostic Dataset for Visual Logical Learning
| null | null | null | null |
cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the successes of recent developments in visual AI, different
shortcomings still exist, ranging from missing exact logical reasoning and
limited abstract generalization abilities to difficulty understanding complex
and noisy scenes. Unfortunately, existing benchmarks were not designed to
capture more than a few of these aspects. Whereas deep learning datasets focus
on visually complex data but simple visual reasoning tasks, inductive logic
datasets involve complex logical learning tasks but lack the visual component.
To address this, we propose the visual logical learning dataset V-LoL, which
seamlessly combines visual and logical challenges. Notably, we introduce the
first instantiation of V-LoL, V-LoL-Trains -- a visual rendition of a classic
benchmark in symbolic AI, the Michalski train problem. By incorporating
intricate visual scenes and flexible logical reasoning tasks within a versatile
framework, V-LoL-Trains provides a platform for investigating a wide range of
visual logical learning challenges. We evaluate a variety of AI systems
including traditional symbolic AI, neural AI, as well as neuro-symbolic AI. Our
evaluations demonstrate that even state-of-the-art AI faces difficulties in
dealing with visual logical learning challenges, highlighting unique advantages
and limitations specific to each methodology. Overall, V-LoL opens up new
avenues for understanding and enhancing current abilities in visual logical
learning for AI systems.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 13:00:10 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 10:24:33 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Helff",
"Lukas",
""
],
[
"Stammer",
"Wolfgang",
""
],
[
"Shindo",
"Hikaru",
""
],
[
"Dhami",
"Devendra Singh",
""
],
[
"Kersting",
"Kristian",
""
]
] |
new_dataset
| 0.999866 |
2306.08422
|
Omer Hofman
|
Omer Hofman, Amit Giloni, Yarin Hayun, Ikuya Morikawa, Toshiya
Shimizu, Yuval Elovici and Asaf Shabtai
|
X-Detect: Explainable Adversarial Patch Detection for Object Detectors
in Retail
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection models, which are widely used in various domains (such as
retail), have been shown to be vulnerable to adversarial attacks. Existing
methods for detecting adversarial attacks on object detectors have had
difficulty detecting new real-life attacks. We present X-Detect, a novel
adversarial patch detector that can: i) detect adversarial samples in real
time, allowing the defender to take preventive action; ii) provide explanations
for the alerts raised to support the defender's decision-making process, and
iii) handle unfamiliar threats in the form of new attacks. Given a new scene,
X-Detect uses an ensemble of explainable-by-design detectors that utilize
object extraction, scene manipulation, and feature transformation techniques to
determine whether an alert needs to be raised. X-Detect was evaluated in both
the physical and digital space using five different attack scenarios (including
adaptive attacks) and the COCO dataset and our new Superstore dataset. The
physical evaluation was performed using a smart shopping cart setup in
real-world settings and included 17 adversarial patch attacks recorded in 1,700
adversarial videos. The results showed that X-Detect outperforms the
state-of-the-art methods in distinguishing between benign and adversarial
scenes for all attack scenarios while maintaining a 0% FPR (no false alarms)
and providing actionable explanations for the alerts raised. A demo is
available.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 10:35:21 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jul 2023 06:39:59 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Hofman",
"Omer",
""
],
[
"Giloni",
"Amit",
""
],
[
"Hayun",
"Yarin",
""
],
[
"Morikawa",
"Ikuya",
""
],
[
"Shimizu",
"Toshiya",
""
],
[
"Elovici",
"Yuval",
""
],
[
"Shabtai",
"Asaf",
""
]
] |
new_dataset
| 0.999533 |
2306.12729
|
Hongyi Zhou
|
Fabian Otto, Hongyi Zhou, Onur Celik, Ge Li, Rudolf Lioutikov, Gerhard
Neumann
|
MP3: Movement Primitive-Based (Re-)Planning Policy
|
The video demonstration can be accessed at
https://intuitive-robots.github.io/mp3_website/. arXiv admin note: text
overlap with arXiv:2210.09622
| null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a novel deep reinforcement learning (RL) approach called
Movement Primitive-based Planning Policy (MP3). By integrating movement
primitives (MPs) into the deep RL framework, MP3 enables the generation of
smooth trajectories throughout the whole learning process while effectively
learning from sparse and non-Markovian rewards. Additionally, MP3 maintains the
capability to adapt to changes in the environment during execution. Although
many early successes in robot RL have been achieved by combining RL with MPs,
these approaches are often limited to learning single stroke-based motions,
lacking the ability to adapt to task variations or adjust motions during
execution. Building upon our previous work, which introduced an episode-based
RL method for the non-linear adaptation of MP parameters to different task
variations, this paper extends the approach to incorporating replanning
strategies. This allows adaptation of the MP parameters throughout motion
execution, addressing the lack of online motion adaptation in stochastic
domains requiring feedback. We compared our approach against state-of-the-art
deep RL and RL with MPs methods. The results demonstrated improved performance
in sophisticated, sparse reward settings and in domains requiring replanning.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 08:11:32 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jul 2023 20:00:50 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Otto",
"Fabian",
""
],
[
"Zhou",
"Hongyi",
""
],
[
"Celik",
"Onur",
""
],
[
"Li",
"Ge",
""
],
[
"Lioutikov",
"Rudolf",
""
],
[
"Neumann",
"Gerhard",
""
]
] |
new_dataset
| 0.988797 |
2306.13394
|
Chaoyou Fu
|
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu
Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Rongrong
Ji
|
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language
Models
|
Project page:
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal Large Language Models (MLLMs) rely on a powerful LLM to perform
multimodal tasks, and recent studies report amazing emergent abilities, such
as writing poems based on an image. However, such case studies can hardly
reflect the full performance of an MLLM, as a comprehensive evaluation has
been lacking. In this paper, we fill this gap by presenting the first MLLM
evaluation benchmark, MME. It measures both perception and cognition abilities
on a total of 14 subtasks. In order to avoid data leakage that may arise from
direct use of public datasets for evaluation, the annotations of
instruction-answer pairs are all manually designed. The concise instruction
design allows us to fairly compare MLLMs, instead of struggling in prompt
engineering. Besides, with such an instruction, we can also easily carry out
quantitative statistics. A total of 12 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have
considerable room for improvement, but also reveals potential directions for
subsequent model optimization.
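  A minimal sketch of how instruction-answer pairs of this kind can be scored,
assuming each image carries yes/no questions (illustrative only; the
benchmark's exact metric definitions are in the paper and repository):

    def score_subtask(records):
        # records: list of (image_id, model_answer, ground_truth), answers 'yes'/'no'.
        per_image, correct = {}, 0
        for image_id, pred, gold in records:
            ok = pred.strip().lower() == gold.strip().lower()
            correct += ok
            per_image.setdefault(image_id, []).append(ok)
        acc = correct / len(records)                    # per-question accuracy
        acc_plus = sum(all(v) for v in per_image.values()) / len(per_image)
        return acc, acc_plus    # stricter score: all questions per image correct

    print(score_subtask([("img1", "yes", "yes"), ("img1", "no", "yes")]))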
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 09:22:36 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jul 2023 02:56:04 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Fu",
"Chaoyou",
""
],
[
"Chen",
"Peixian",
""
],
[
"Shen",
"Yunhang",
""
],
[
"Qin",
"Yulei",
""
],
[
"Zhang",
"Mengdan",
""
],
[
"Lin",
"Xu",
""
],
[
"Qiu",
"Zhenyu",
""
],
[
"Lin",
"Wei",
""
],
[
"Yang",
"Jinrui",
""
],
[
"Zheng",
"Xiawu",
""
],
[
"Li",
"Ke",
""
],
[
"Sun",
"Xing",
""
],
[
"Ji",
"Rongrong",
""
]
] |
new_dataset
| 0.984018 |
2306.15195
|
Zhao Zhang
|
Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao
|
Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and an LLM. It is designed
to be straightforward and simple, without the need for extra vocabularies,
position encoders, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
such as providing the coordinates of mentioned objects in chains of thought and
comparing the similarity of user-pointed regions. Our code, model, and dataset
are available at https://github.com/shikras/shikra.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 04:31:52 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 16:08:00 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Chen",
"Keqin",
""
],
[
"Zhang",
"Zhao",
""
],
[
"Zeng",
"Weili",
""
],
[
"Zhang",
"Richong",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
]
] |
new_dataset
| 0.999359 |
2306.16671
|
Yiming Zeng
|
Yiming Zeng, Jiarui Zhang, Ji Liu, Zhenhua Liu, Yuanyuan Yang
|
Entanglement Routing over Quantum Networks Using
Greenberger-Horne-Zeilinger Measurements
| null |
Proc. IEEE 43th Int. Conf. Distrib. Comput. Syst. (ICDCS),2023
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating a long-distance quantum entanglement is one of the most essential
functions of a quantum network to support quantum communication and computing
applications. The successful entanglement rate during a probabilistic
entanglement process decreases dramatically with distance, and swapping is a
widely applied quantum technique to address this issue. Most existing
entanglement routing protocols use a classic entanglement-swapping method based
on Bell state measurements that can only fuse two successful entanglement
links. This paper adopts a more general and efficient swapping method, namely
n-fusion based on Greenberger-Horne-Zeilinger measurements, which can fuse n
successful entanglement links, to maximize the entanglement rate for multiple
quantum-user pairs over a quantum network. We propose efficient entanglement
routing algorithms that utilize the properties of n-fusion for quantum networks
with general topologies. Evaluation results highlight that our proposed
algorithm under n-fusion can greatly improve the network performance compared
with existing ones.
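  To see why fusing n links at once can help, consider a toy
success-probability model (our own simplification, not the paper's analysis):
each elementary link succeeds with probability p and every fusion operation
succeeds with probability q. Chaining Bell-measurement swaps pays q once per
intermediate node, whereas a single GHZ-basis n-fusion pays q once.

    def end_to_end_success(n_links, p=0.8, q=0.9, use_n_fusion=False):
        links = p ** n_links                  # all elementary links must succeed
        if use_n_fusion:
            return links * q                  # one GHZ-basis n-fusion
        return links * q ** (n_links - 1)     # chain of Bell-measurement swaps

    for n in (2, 4, 8):
        print(n, end_to_end_success(n), end_to_end_success(n, use_n_fusion=True))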
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 04:08:03 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 13:34:58 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Zeng",
"Yiming",
""
],
[
"Zhang",
"Jiarui",
""
],
[
"Liu",
"Ji",
""
],
[
"Liu",
"Zhenhua",
""
],
[
"Yang",
"Yuanyuan",
""
]
] |
new_dataset
| 0.971737 |
2306.17624
|
Gengchen Mai
|
Gengchen Mai, Yao Xuan, Wenyun Zuo, Yutong He, Jiaming Song, Stefano
Ermon, Krzysztof Janowicz, and Ni Lao
|
Sphere2Vec: A General-Purpose Location Representation Learning over a
Spherical Surface for Large-Scale Geospatial Predictions
|
30 Pages, 16 figures. Accepted to ISPRS Journal of Photogrammetry and
Remote Sensing
|
ISPRS Journal of Photogrammetry and Remote Sensing, 2023
| null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Generating learning-friendly representations for points in space is a
fundamental and long-standing problem in ML. Recently, multi-scale encoding
schemes (such as Space2Vec and NeRF) were proposed to directly encode any point
in 2D/3D Euclidean space as a high-dimensional vector, and have been
successfully applied to various geospatial prediction and generative tasks.
However, all current 2D and 3D location encoders are designed to model point
distances in Euclidean space. So when applied to large-scale real-world GPS
coordinate datasets, which require distance metric learning on the spherical
surface, both types of models can fail due to the map projection distortion
problem (2D) and the spherical-to-Euclidean distance approximation error (3D).
To solve these problems, we propose a multi-scale location encoder called
Sphere2Vec which can preserve spherical distances when encoding point
coordinates on a spherical surface. We developed a unified view of
distance-preserving encoding on spheres based on the Double Fourier Sphere
(DFS). We also provide a theoretical proof that Sphere2Vec preserves the
spherical surface distance between any two points, while existing encoding
schemes do not. Experiments on
20 synthetic datasets show that Sphere2Vec can outperform all baseline models
on all these datasets with up to 30.8% error rate reduction. We then apply
Sphere2Vec to three geo-aware image classification tasks - fine-grained species
recognition, Flickr image recognition, and remote sensing image classification.
Results on 7 real-world datasets show the superiority of Sphere2Vec over
multiple location encoders on all three tasks. Further analysis shows that
Sphere2Vec outperforms other location encoder models especially in the polar
regions and data-sparse areas, owing to its preservation of spherical surface
distances. Code and data are available at
https://gengchenmai.github.io/sphere2vec-website/.
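  As a rough illustration of a multi-scale spherical location encoder (a
simplified sketch in the spirit of the paper, not the exact Sphere2Vec
formulation), latitude/longitude can be expanded into sinusoidal features
whose interactions respect spherical rather than planar geometry:

    import numpy as np

    def spherical_encode(lat_deg, lon_deg, num_scales=4,
                         lambda_min=np.pi / 180, lambda_max=np.pi):
        # Geometrically spaced wavelengths between ~1 degree and the full sphere.
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        feats = []
        for s in range(num_scales):
            lam = lambda_min * (lambda_max / lambda_min) ** (
                s / max(num_scales - 1, 1))
            feats += [np.sin(lat / lam), np.cos(lat / lam),
                      np.cos(lat / lam) * np.sin(lon / lam),
                      np.cos(lat / lam) * np.cos(lon / lam)]
        return np.array(feats)

    vec = spherical_encode(48.85, 2.35)   # a 16-dimensional location feature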
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 12:55:02 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 01:26:30 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Mai",
"Gengchen",
""
],
[
"Xuan",
"Yao",
""
],
[
"Zuo",
"Wenyun",
""
],
[
"He",
"Yutong",
""
],
[
"Song",
"Jiaming",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Janowicz",
"Krzysztof",
""
],
[
"Lao",
"Ni",
""
]
] |
new_dataset
| 0.999311 |
2307.00021
|
Vishvajit Bakarola
|
Vishvajitsinh Bakrola and Jitendra Nasariwala
|
SAHAAYAK 2023 -- the Multi Domain Bilingual Parallel Corpus of Sanskrit
to Hindi for Machine Translation
|
3 Pages, 1 Figure, and 1 Table
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This data article presents a large bilingual parallel corpus for the
low-resourced language pair Sanskrit-Hindi, named SAHAAYAK 2023. The corpus
contains a total of 1.5M sentence pairs between Sanskrit and Hindi. To make the
corpus universally usable and balanced, data from multiple domains have been
incorporated, including news, daily conversations, politics, history, sports,
and ancient Indian literature. A multifaceted approach has been adopted to
build a sizable multi-domain corpus for low-resourced languages like Sanskrit.
Our development approach spans from creating a small hand-crafted dataset to
applying a wide range of mining, cleaning, and verification steps. We used a
three-fold mining process: mining from machine-readable sources, mining from
non-machine-readable sources, and collation from existing corpora. After
mining, a dedicated pipeline for normalization, alignment, and corpus cleaning
was developed and applied to make the corpus ready for use with machine
translation algorithms.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 11:06:44 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Bakrola",
"Vishvajitsinh",
""
],
[
"Nasariwala",
"Jitendra",
""
]
] |
new_dataset
| 0.999678 |
2307.00037
|
Peder Bergebakken Sundt
|
Peder Bergebakken Sundt, Theoharis Theoharis
|
MARF: The Medial Atom Ray Field Object Representation
|
To be published in 3DOR 2023 and C&G Volume 114
| null |
10.1016/j.cag.2023.06.032
| null |
cs.GR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Medial Atom Ray Fields (MARFs), a novel neural object
representation that enables accurate differentiable surface rendering with a
single network evaluation per camera ray. Existing neural ray fields struggle
with multi-view consistency and representing surface discontinuities. MARFs
address both using a medial shape representation, a dual representation of
solid geometry that yields cheap geometrically grounded surface normals, in
turn enabling computing analytical curvature despite the network having no
second derivative. MARFs map a camera ray to multiple medial intersection
candidates, subject to ray-sphere intersection testing. We illustrate how the
learned medial shape quantities apply to sub-surface scattering and part
segmentation, and how they aid in representing a space of articulated shapes.
Able to learn
a space of shape priors, MARFs may prove useful for tasks like shape retrieval
and shape completion, among others. Code and data can be found at
https://github.com/pbsds/MARF.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 08:51:22 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Sundt",
"Peder Bergebakken",
""
],
[
"Theoharis",
"Theoharis",
""
]
] |
new_dataset
| 0.999652 |
2307.00040
|
Tan Wang
|
Tan Wang, Linjie Li, Kevin Lin, Chung-Ching Lin, Zhengyuan Yang,
Hanwang Zhang, Zicheng Liu, Lijuan Wang
|
DisCo: Disentangled Control for Referring Human Dance Generation in Real
World
|
Project Page: https://disco-dance.github.io/; Github Page:
https://github.com/Wangt-CN/DisCo
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Generative AI has made significant strides in computer vision, particularly
in image/video synthesis conditioned on text descriptions. Despite these
advancements, generation remains challenging, especially for human-centric
content such as dance synthesis. Existing dance synthesis methods
struggle with the gap between synthesized content and real-world dance
scenarios. In this paper, we define a new problem setting: Referring Human
Dance Generation, which focuses on real-world dance scenarios with three
important properties: (i) Faithfulness: the synthesis should retain the
appearance of both human subject foreground and background from the reference
image, and precisely follow the target pose; (ii) Generalizability: the model
should generalize to unseen human subjects, backgrounds, and poses; (iii)
Compositionality: it should allow for composition of seen/unseen subjects,
backgrounds, and poses from different sources. To address these challenges, we
introduce a novel approach, DISCO, which includes a novel model architecture
with disentangled control to improve the faithfulness and compositionality of
dance synthesis, and an effective human attribute pre-training for better
generalizability to unseen humans. Extensive qualitative and quantitative
results demonstrate that DISCO can generate high-quality human dance images and
videos with diverse appearances and flexible motions. Code, demo, video and
visualization are available at: https://disco-dance.github.io/.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 17:37:48 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Wang",
"Tan",
""
],
[
"Li",
"Linjie",
""
],
[
"Lin",
"Kevin",
""
],
[
"Lin",
"Chung-Ching",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Zhang",
"Hanwang",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Wang",
"Lijuan",
""
]
] |
new_dataset
| 0.998243 |
2307.00108
|
Zhexiong Liu
|
Zhexiong Liu, Cris Benge, Siduo Jiang
|
Ticket-BERT: Labeling Incident Management Tickets with Language Models
|
In the Microsoft Journal of Applied Research (MSJAR), Volume 18,
January 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
An essential aspect of prioritizing incident tickets for resolution is
efficiently labeling tickets with fine-grained categories. However, ticket data
is often complex and poses several unique challenges for modern machine
learning methods: (1) tickets are created and updated either by machines with
pre-defined algorithms or by engineers with domain expertise that share
different protocols, (2) tickets receive frequent revisions that update ticket
status by modifying all or parts of ticket descriptions, and (3) ticket
labeling is time-sensitive and requires knowledge updates and new labels per
the rapid software and hardware improvement lifecycle. To handle these issues,
we introduce Ticket-BERT, which trains a simple yet robust language model for
labeling tickets using our proposed ticket datasets. Experiments demonstrate
the superiority of Ticket-BERT over baselines and state-of-the-art text
classifiers on Azure Cognitive Services. We further encapsulate Ticket-BERT
with an active learning cycle and deploy it on the Microsoft IcM system, which
enables the model to be quickly fine-tuned on newly collected tickets with a
few annotations.
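  A minimal sketch of the fine-tuning step with the Hugging Face transformers
API (the base checkpoint, label count, and example tickets below are
placeholders; Ticket-BERT's actual training data and configuration are
described in the paper):

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=5)        # hypothetical label count

    texts = ["Disk latency alert on cluster A",   # made-up ticket descriptions
             "Login failures spiking after deployment"]
    labels = torch.tensor([0, 1])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loss = model(**batch, labels=labels).loss     # cross-entropy over ticket labels
    loss.backward()
    optimizer.step()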
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 19:48:25 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Liu",
"Zhexiong",
""
],
[
"Benge",
"Cris",
""
],
[
"Jiang",
"Siduo",
""
]
] |
new_dataset
| 0.993542 |
2307.00133
|
James Akl
|
James Akl, Yash Patil, Chinmay Todankar, Berk Calli
|
Vision-based Oxy-fuel Torch Control for Robotic Metal Cutting
|
Accepted in: 2023 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automation of key processes in metal cutting would substantially benefit
many industries such as manufacturing and metal recycling. We present a
vision-based control scheme for automated metal cutting with oxy-fuel torches,
an established cutting medium in industry. The system consists of a robot
equipped with a cutting torch and an eye-in-hand camera observing the scene
behind a tinted visor. We develop a vision-based control algorithm to servo the
torch's motion by visually observing its effects on the metal surface. As such,
the vision system processes the metal surface's heat pool and computes its
associated features, specifically pool convexity and intensity, which are then
used for control. The operating conditions of the control problem are defined
within which the stability is proven. In addition, metal cutting experiments
are performed using a physical 1-DOF robot and oxy-fuel cutting equipment. Our
results demonstrate the successful cutting of metal plates across three
different plate thicknesses, relying purely on visual information without a
priori knowledge of the thicknesses.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 20:55:47 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Akl",
"James",
""
],
[
"Patil",
"Yash",
""
],
[
"Todankar",
"Chinmay",
""
],
[
"Calli",
"Berk",
""
]
] |
new_dataset
| 0.99971 |
2307.00142
|
Patrick Emami
|
Patrick Emami, Abhijeet Sahu, Peter Graf
|
BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark
for Short-Term Load Forecasting
|
32 pages. Code available at https://github.com/NREL/BuildingsBench/
and data available at https://data.openei.org/submissions/5859
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Short-term forecasting of residential and commercial building energy
consumption is widely used in power systems and continues to grow in
importance. Data-driven short-term load forecasting (STLF), although promising,
has suffered from a lack of open, large-scale datasets with high building
diversity. This has hindered exploring the pretrain-then-finetune paradigm for
STLF. To help address this, we present BuildingsBench, which consists of 1)
Buildings-900K, a large-scale dataset of 900K simulated buildings representing
the U.S. building stock, and 2) an evaluation platform with over 1,900 real
residential and commercial buildings from 7 open datasets. BuildingsBench
benchmarks two under-explored tasks: zero-shot STLF, where a pretrained model
is evaluated on unseen buildings without fine-tuning, and transfer learning,
where a pretrained model is fine-tuned on a target building. The main finding
of our benchmark analysis is that synthetically pretrained models generalize
surprisingly well to real commercial buildings. An exploration of the effect of
increasing dataset size and diversity on zero-shot commercial building
performance reveals a power-law with diminishing returns. We also show that
fine-tuning pretrained models on real commercial and residential buildings
improves performance for a majority of target buildings. We hope that
BuildingsBench encourages and facilitates future research on generalizable
STLF. All datasets and code can be accessed from
\url{https://github.com/NREL/BuildingsBench}.
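  A rough sketch of the zero-shot STLF evaluation idea: a pretrained
forecaster is applied to held-out real buildings without any gradient updates.
The loader, model interface, and error metric below are hypothetical
placeholders, not the BuildingsBench API.

    import numpy as np

    def zero_shot_eval(model, buildings, horizon=24):
        errors = []
        for history, future in buildings:           # hourly load series per building
            pred = model.predict(history, horizon)  # no fine-tuning: zero-shot
            nrmse = (np.sqrt(np.mean((pred - future) ** 2))
                     / (np.mean(np.abs(future)) + 1e-8))
            errors.append(nrmse)
        return float(np.mean(errors))

    class PersistenceBaseline:
        # Naive reference model: repeat the most recent day's profile.
        def predict(self, history, horizon):
            return np.asarray(history[-horizon:])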
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 21:26:24 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Emami",
"Patrick",
""
],
[
"Sahu",
"Abhijeet",
""
],
[
"Graf",
"Peter",
""
]
] |
new_dataset
| 0.999889 |
2307.00143
|
Hari Venugopalan
|
Hari Venugopalan, Kaustav Goswami, Zainul Abi Din, Jason Lowe-Power,
Samuel T. King, Zubair Shafiq
|
Centauri: Practical Rowhammer Fingerprinting
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Fingerprinters leverage the heterogeneity in hardware and software
configurations to extract a device fingerprint. Fingerprinting countermeasures
attempt to normalize these attributes such that they present a uniform
fingerprint across different devices or present different fingerprints for the
same device each time. We present Centauri, a Rowhammer fingerprinting approach
that can build unique and stable fingerprints even across devices with
homogeneous or normalized/obfuscated hardware and software configurations. To
this end, Centauri leverages the process variation in the underlying
manufacturing process that gives rise to unique distributions of
Rowhammer-induced bit flips across different DRAM modules. Centauri's design
and implementation are able to overcome memory allocation constraints without
requiring root privileges. Our evaluation on a test bed of about one hundred
DRAM modules shows that the system achieves 99.91% fingerprinting accuracy.
Centauri's fingerprints are also stable, with daily experiments over a period
of 10 days revealing no loss in fingerprinting accuracy. We show that Centauri
is
efficient, taking as little as 9.92 seconds to extract a fingerprint. Centauri
is the first practical Rowhammer fingerprinting approach that is able to
extract unique and stable fingerprints efficiently and at-scale.
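  As a toy illustration of fingerprinting from bit-flip distributions
(Centauri's actual extraction and matching procedure is more involved), assume
each module's fingerprint is the set of Rowhammer-induced flip locations it
exhibits:

    def fingerprint_similarity(flips_a, flips_b):
        # Jaccard similarity between two sets of (row, column, bit) flip locations.
        a, b = set(flips_a), set(flips_b)
        return len(a & b) / len(a | b) if (a or b) else 1.0

    def identify(query_flips, enrolled, threshold=0.5):
        # Match a fresh measurement against enrolled device fingerprints.
        best_id, best_sim = None, 0.0
        for device_id, flips in enrolled.items():
            sim = fingerprint_similarity(query_flips, flips)
            if sim > best_sim:
                best_id, best_sim = device_id, sim
        return best_id if best_sim >= threshold else None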
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 21:27:54 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Venugopalan",
"Hari",
""
],
[
"Goswami",
"Kaustav",
""
],
[
"Din",
"Zainul Abi",
""
],
[
"Lowe-Power",
"Jason",
""
],
[
"King",
"Samuel T.",
""
],
[
"Shafiq",
"Zubair",
""
]
] |
new_dataset
| 0.999169 |
2307.00152
|
Christopher Getschmann
|
Christopher Getschmann, Florian Echtler
|
LensLeech: On-Lens Interaction for Arbitrary Camera Devices
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Cameras provide a vast amount of information at high rates and are part of
many specialized or general-purpose devices. This versatility makes them
suitable for many interaction scenarios, yet they are constrained by geometry
and require objects to keep a minimum distance for focusing. We present the
LensLeech, a soft silicone cylinder that can be placed directly on or above
lenses. The clear body itself acts as a lens to focus a marker pattern from its
surface into the camera it sits on. This allows us to detect rotation,
translation, and deformation-based gestures such as pressing or squeezing the
soft silicone. We discuss design requirements, describe fabrication processes,
and report on the limitations of such on-lens widgets. To demonstrate the
versatility of LensLeeches, we built prototypes to show application examples
for wearable cameras, smartphones, and interchangeable-lens cameras, extending
existing devices by providing both optical input and output for new
functionality.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 22:04:06 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Getschmann",
"Christopher",
""
],
[
"Echtler",
"Florian",
""
]
] |
new_dataset
| 0.999219 |
2307.00154
|
Bohan Zhuang
|
Zizheng Pan, Jing Liu, Haoyu He, Jianfei Cai, Bohan Zhuang
|
Stitched ViTs are Flexible Vision Backbones
|
Tech report
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large pretrained plain vision Transformers (ViTs) have been the workhorse for
many downstream tasks. However, existing works utilizing off-the-shelf ViTs are
inefficient in terms of training and deployment, because adopting ViTs with
individual sizes requires separate training and is restricted by fixed
performance-efficiency trade-offs. In this paper, we draw inspiration from
stitchable neural networks, a new framework that cheaply produces a single
model covering rich subnetworks by stitching pretrained model families,
supporting diverse performance-efficiency trade-offs at runtime. Building upon
this
foundation, we introduce SN-Netv2, a systematically improved model stitching
framework to facilitate downstream task adaptation. Specifically, we first
propose a Two-way stitching scheme to enlarge the stitching space. We then
design a resource-constrained sampling strategy that takes into account the
underlying FLOPs distributions in the space for improved sampling. Finally, we
observe that learning stitching layers is a low-rank update, which plays an
essential role in downstream tasks by stabilizing training and ensuring a good
Pareto frontier. With extensive experiments on ImageNet-1K, ADE20K,
COCO-Stuff-10K, NYUv2 and COCO-2017, SN-Netv2 demonstrates strong ability to
serve as a flexible vision backbone, achieving great advantages in both
training efficiency and adaptation. Code will be released at
https://github.com/ziplab/SN-Netv2.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 22:05:34 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Pan",
"Zizheng",
""
],
[
"Liu",
"Jing",
""
],
[
"He",
"Haoyu",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Zhuang",
"Bohan",
""
]
] |
new_dataset
| 0.998805 |
2307.00178
|
Jingcheng Li
|
Jingcheng Li, Loukas Lazos, Ming Li
|
SecBeam: Securing mmWave Beam Alignment against Beam-Stealing Attacks
| null | null | null | null |
cs.CR eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Millimeter wave (mmWave) communications employ narrow-beam directional
communications to compensate for the high path loss at mmWave frequencies.
Compared to their omnidirectional counterparts, an additional step of aligning
the transmitter's and receiver's antennas is required. In current standards
such as 802.11ad, this beam alignment process is implemented via an exhaustive
search through the horizontal plane known as beam sweeping. However, the beam
sweeping process is unauthenticated. As a result, an adversary, Mallory, can
launch an active beam-stealing attack by injecting forged beacons of high
power, forcing the legitimate devices to beamform towards her direction.
Mallory is now in control of the communication link between the two devices,
thus breaking the false sense of security given by the directionality of mmWave
transmissions.
Prior works have added integrity protection to beam alignment messages to
prevent forgeries. In this paper, we demonstrate a new beam-stealing attack
that does not require message forging. We show that Mallory can amplify and
relay a beam sweeping frame from her direction without altering its contents.
Intuitively, cryptographic primitives cannot verify physical properties such as
the SNR used in beam selection. We propose a new beam sweeping protocol called
SecBeam that utilizes power/sector randomization and coarse angle-of-arrival
information to detect amplify-and-relay attacks. We demonstrate the security
and performance of SecBeam using an experimental mmWave platform and via
ray-tracing simulations.
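  To convey one ingredient of the defense, here is a toy consistency check
based on coarse angle-of-arrival information (our own schematic, not SecBeam's
full protocol): if beam sweeping selects a direction far from the coarse AoA
estimate, an amplify-and-relay adversary may be steering the link.

    def aoa_consistent(best_beam_deg, coarse_aoa_deg, beamwidth_deg=15.0):
        # Smallest angular difference on a circle, compared to the beamwidth.
        diff = abs((best_beam_deg - coarse_aoa_deg + 180.0) % 360.0 - 180.0)
        return diff <= beamwidth_deg

    # Sweeping picked 120 deg but the coarse AoA estimate is ~30 deg: suspicious.
    print(aoa_consistent(120.0, 30.0))   # False -> raise an alarm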
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 00:08:27 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Li",
"Jingcheng",
""
],
[
"Lazos",
"Loukas",
""
],
[
"Li",
"Ming",
""
]
] |
new_dataset
| 0.983594 |
2307.00200
|
Renwang Li
|
Renwang Li, Xiaodan Shao, Shu Sun, Meixia Tao, and Rui Zhang
|
Beam Scanning for Integrated Sensing and Communication in IRS-aided
mmWave Systems
|
Accepted by IEEE SPAWC
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates an intelligent reflecting surface (IRS) aided
millimeter-wave integrated sensing and communication (ISAC) system.
Specifically, based on the passive beam scanning in the downlink, the IRS finds
the optimal beam for reflecting the signals from the base station to a
communication user. Meanwhile, the IRS estimates the angle of a nearby target
based on its echo signal received by the sensing elements mounted on the IRS
(i.e., semi-passive IRS). We propose an ISAC protocol for achieving the above
objective via simultaneous (beam) training and sensing (STAS). Then, we derive
the achievable rate of the communication user and the Cramer-Rao bound (CRB) of
the angle estimation for the sensing target in closed form. The achievable rate
and the CRB exhibit different trends with respect to the duration of beam
scanning. Specifically, the average achievable rate initially rises and
subsequently declines, while the CRB monotonically decreases. Consequently, the
duration of beam scanning should be carefully selected to balance communication
and sensing
performance. Simulation results have verified our analytical findings and shown
that, thanks to the efficient use of downlink beam scanning signal for
simultaneous communication and target sensing, the STAS protocol outperforms
the benchmark protocol with orthogonal beam training and sensing.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 02:29:46 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Li",
"Renwang",
""
],
[
"Shao",
"Xiaodan",
""
],
[
"Sun",
"Shu",
""
],
[
"Tao",
"Meixia",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.981939 |
2307.00212
|
Dongshen Han
|
Dongshen Han and Seungkyu Lee
|
Internal-External Boundary Attention Fusion for Glass Surface
Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Glass surfaces of transparent objects and mirrors cannot be uniquely and
explicitly characterized by their visual appearances, because they also contain
the visual appearance of other reflected or transmitted surfaces. Detecting
glass regions from a single color image is therefore a challenging task. Recent
deep-learning approaches have paid attention to the description of the glass
surface boundary, where the transition of visual appearances between glass and
non-glass surfaces is observed. In this work, we analytically investigate how the
glass surface boundary helps to characterize glass objects. Inspired by prior
semantic segmentation approaches with challenging image types such as X-ray or
CT scans, we propose separated internal-external boundary attention modules
that individually learn and selectively integrate visual characteristics of the
regions inside and outside the glass surface from a single color image. Our
proposed method is evaluated on six public benchmarks and shows promising
results in comparison with state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 03:30:55 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Han",
"Dongshen",
""
],
[
"Lee",
"Seungkyu",
""
]
] |
new_dataset
| 0.998582 |
2307.00234
|
Ziye Jia
|
Ziye Jia, Chao Dong, Kun Guo, and Qihui Wu
|
The Potential of LEO Satellites in 6G Space-Air-Ground Enabled Access
Networks
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Space-air-ground integrated networks (SAGINs) help enhance service
performance in the sixth-generation communication system. A SAGIN is composed
of satellites, aerial vehicles, and ground facilities, as well as multiple
terrestrial users. Therein, low earth orbit (LEO) satellites have become
popular in recent years due to their low development and launch costs, global
coverage, and delay-enabled services. Moreover, LEO satellites can support
various applications, e.g., direct access, relay, caching, and computation. In
this work, we first provide the preliminaries and framework of SAGIN, in which
the characteristics of LEO satellites, high-altitude platforms, and unmanned
aerial vehicles are analyzed. Then, the roles and potential of LEO satellites
in SAGIN are analyzed for access services. Advanced techniques such as
multi-access edge computing (MEC) and network function virtualization are
introduced to enhance LEO-based access service abilities, e.g., hierarchical
MEC and network slicing in SAGIN. In addition, corresponding use cases are
provided to verify the propositions. Finally, we discuss open issues and
promising directions in LEO-enabled SAGIN access services for future
research.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 05:53:34 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Jia",
"Ziye",
""
],
[
"Dong",
"Chao",
""
],
[
"Guo",
"Kun",
""
],
[
"Wu",
"Qihui",
""
]
] |
new_dataset
| 0.99905 |
2307.00250
|
Weihang Su
|
Weihang Su, Xiangsheng Li, Yiqun Liu, Min Zhang, Shaoping Ma
|
THUIR2 at NTCIR-16 Session Search (SS) Task
|
The technical report of our team at the NTCIR 16 competition. We
achieved second place
| null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our team (THUIR2) participated in both the FOSS and POSS subtasks of the
NTCIR-16 Session Search (SS) Task. This paper describes our approaches and
results. In the FOSS subtask, we submitted five runs using learning-to-rank and
fine-tuned pre-trained language models. We fine-tuned the pre-trained language
models with ad-hoc data and session information and assembled them with a
learning-to-rank method. The assembled model achieved the best performance
among all participants in the preliminary evaluation. In the POSS subtask, we
used an assembled model that also achieved the best performance in the
preliminary evaluation.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 06:55:06 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Su",
"Weihang",
""
],
[
"Li",
"Xiangsheng",
""
],
[
"Liu",
"Yiqun",
""
],
[
"Zhang",
"Min",
""
],
[
"Ma",
"Shaoping",
""
]
] |
new_dataset
| 0.977078 |
2307.00270
|
Yongshang Li
|
Yongshang Li, Ronggui Ma, Han Liu and Gaoli Cheng
|
HrSegNet : Real-time High-Resolution Neural Network with Semantic
Guidance for Crack Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Thanks to extensive research on deep learning in recent years and its
application in construction, crack detection has evolved rapidly from rough
detection at the image level and patch level to fine-grained detection at the
pixel level, which better suits the nature of this field. Despite numerous
existing studies utilizing off-the-shelf deep learning models or enhancing
them, these models are not always effective or efficient in real-world
applications. In order to bridge this gap, we propose a High-resolution model
with Semantic guidance, specifically designed for real-time crack segmentation,
referred to as HrSegNet. Our model maintains high resolution throughout the
entire process, as opposed to recovering from low-resolution features to
high-resolution ones, thereby maximizing the preservation of crack details.
Moreover, to enhance the context information, we use low-resolution semantic
features to guide the reconstruction of high-resolution features. To ensure the
efficiency of the algorithm, we design a simple yet effective method to control
the computation cost of the entire model by controlling the capacity of
high-resolution channels, while providing the model with extremely strong
scalability. Extensive quantitative and qualitative evaluations demonstrate
that our proposed HrSegNet has exceptional crack segmentation capabilities, and
that maintaining high resolution and semantic guidance are crucial to the final
prediction. Compared to state-of-the-art segmentation models, HrSegNet achieves
the best trade-off between efficiency and effectiveness. Specifically, on the
crack dataset CrackSeg9k, our fastest model HrSegNet-B16 achieves a speed of
182 FPS with 78.43% mIoU, while our most accurate model HrSegNet-B48 achieves
80.32% mIoU with an inference speed of 140.3 FPS.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 08:38:18 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Li",
"Yongshang",
""
],
[
"Ma",
"Ronggui",
""
],
[
"Liu",
"Han",
""
],
[
"Cheng",
"Gaoli",
""
]
] |
new_dataset
| 0.986976 |
2307.00285
|
Lennart Purucker
|
Lennart Purucker, Joeran Beel
|
Assembled-OpenML: Creating Efficient Benchmarks for Ensembles in AutoML
with OpenML
|
5 pages main paper, 13 pages references and appendix, 2 figures, 1
table, poster presented at: International Conference on Automated Machine
Learning 2022, Workshop Track
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automated Machine Learning (AutoML) frameworks regularly use ensembles.
Developers need to compare different ensemble techniques to select appropriate
techniques for an AutoML framework from the many potential techniques. So far,
the comparison of ensemble techniques is often computationally expensive,
because many base models must be trained and evaluated one or multiple times.
Therefore, we present Assembled-OpenML. Assembled-OpenML is a Python tool,
which builds meta-datasets for ensembles using OpenML. A meta-dataset, called
Metatask, consists of the data of an OpenML task, the task's dataset, and
prediction data from model evaluations for the task. We can make the comparison
of ensemble techniques computationally cheaper by using the predictions stored
in a metatask instead of training and evaluating base models. To introduce
Assembled-OpenML, we describe the first version of our tool. Moreover, we
present an example of using Assembled-OpenML to compare a set of ensemble
techniques. For this example comparison, we built a benchmark using
Assembled-OpenML and implemented ensemble techniques expecting predictions
instead of base models as input. In our example comparison, we gathered the
prediction data of $1523$ base models for $31$ datasets. Obtaining the
prediction data for all base models using Assembled-OpenML took ${\sim} 1$ hour
in total. In comparison, obtaining the prediction data by training and
evaluating just one base model on the most computationally expensive dataset
took ${\sim} 37$ minutes.
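  A sketch of the intended workflow (the class and function names here are
illustrative placeholders, not the tool's real interface): a metatask stores
the base models' validation predictions, so an ensemble technique can be
scored without retraining anything.

    import numpy as np

    class Metatask:
        # Holds per-base-model predictions and labels for one OpenML task.
        def __init__(self, base_model_predictions, ground_truth):
            self.preds = np.asarray(base_model_predictions)  # (n_models, n_samples)
            self.y = np.asarray(ground_truth)

    def evaluate_majority_vote(metatask):
        # Score an ensemble technique using only the stored predictions.
        votes = np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, metatask.preds)
        return float((votes == metatask.y).mean())

    mt = Metatask([[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]], [0, 1, 1, 0])
    print(evaluate_majority_vote(mt))   # 1.0 on this toy example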
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 09:46:59 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Purucker",
"Lennart",
""
],
[
"Beel",
"Joeran",
""
]
] |
new_dataset
| 0.990177 |
2307.00313
|
Peidong Jia
|
Peidong Jia, Jiaming Liu, Senqiao Yang, Jiarui Wu, Xiaodong Xie,
Shanghang Zhang
|
PM-DETR: Domain Adaptive Prompt Memory for Object Detection with
Transformers
|
cs.cv
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Transformer-based detectors (i.e., DETR) have demonstrated impressive
performance on end-to-end object detection. However, transferring DETR to
different data distributions may lead to a significant performance degradation.
Existing adaptation techniques focus on model-based approaches, which aim to
leverage feature alignment to narrow the distribution shift between different
domains. In this study, we propose a hierarchical Prompt Domain Memory (PDM)
for adapting detection transformers to different distributions. PDM
comprehensively leverages the prompt memory to extract domain-specific
knowledge and explicitly constructs a long-term memory space for the data
distribution, which represents better domain diversity compared to existing
methods. Specifically, each prompt and its corresponding distribution value are
paired in the memory space, and we inject the top-M distribution-similar
prompts into the input and multi-level embeddings of DETR. Additionally, we
introduce
the Prompt Memory Alignment (PMA) to reduce the discrepancy between the source
and target domains by fully leveraging the domain-specific knowledge extracted
from the prompt domain memory. Extensive experiments demonstrate that our
method outperforms state-of-the-art domain adaptive object detection methods on
three benchmarks, including scene, synthetic to real, and weather adaptation.
Codes will be released.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 12:02:24 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Jia",
"Peidong",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Yang",
"Senqiao",
""
],
[
"Wu",
"Jiarui",
""
],
[
"Xie",
"Xiaodong",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
new_dataset
| 0.986726 |
2307.00323
|
Benzar Glen Grepon
|
Benzar Glen S. Grepon, JC P. Margallo, Jonathan B. Maserin, Rio Al-Di
A. Dompol
|
RUI: A Web-based Road Updates Information System using Google Maps API
|
18 pages
|
International Journal of Computing Sciences Research, [S.l.], v.
7, p. 2253-2271, july 2023. ISSN 2546-115X. Available at:
<//stepacademic.net/ijcsr/article/view/441>. Date accessed: 01 july 2023
|
10.25147/ijcsr.2017.001.1.158
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The current situation on every road in an area is still difficult to
anticipate. Commuters, riders, and drivers still depend on road reports from a
local news agency to be well informed and updated on possible road events such
as vehicular accidents, government road and bridge projects/construction, and
other related road obstructions. To provide a solution for road updates, a
web-based road updates information system has been developed that uses the
Google Maps API, allowing people to view and be notified of real-time updates
on the road situation in a specific area. This paper discusses the main system
functionalities, including sub-systems and modules of the system, the research
approach and methodology, which is the Agile Model, and its impact on
disseminating road information and its status. The project has been evaluated
using ISO 25010. Based on the evaluation result, the project was rated 4.21,
signifying excellent performance according to a qualitative description
through a Likert-scale descriptive interpretation. The system is hosted on the
World Wide Web and is expected to expand its coverage area from its origin
country to the rest of the world. Based on the initial findings of the study,
the respondents agreed that the developed web system was functional and a
massive help to commuters, riders, and people who travel a lot. The system's
overall effectiveness and performance were excellent based on the criteria set
by ISO/IEC 25010. For future development, it is recommended to expand the
coverage of the road updates, if possible to the entire Philippine
archipelago, so that long-drive commuters and drivers are better informed, and
to add mobile applications for more user-friendly design and interactions.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 12:29:58 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Grepon",
"Benzar Glen S.",
""
],
[
"Margallo",
"JC P.",
""
],
[
"Maserin",
"Jonathan B.",
""
],
[
"Dompol",
"Rio Al-Di A.",
""
]
] |
new_dataset
| 0.999829 |
2307.00348
|
Maya Torii
|
Maya Grace Torii, Takahito Murakami and Yoichi Ochiai
|
Lottery and Sprint: Generate a Board Game with Design Sprint Method on
Auto-GPT
|
9 pages, 5 figures, pre-print
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a novel approach using the Auto-GPT system
alongside the Design Sprint methodology to facilitate board game creation for
inexperienced users. We introduce the implementation of Auto-GPT for generating
diverse board games and the subsequent optimization process through a
customized Design Sprint. A user study is conducted to investigate the
playability and enjoyment of the generated games, revealing both successes and
challenges in employing systems like Auto-GPT for board game design. Insights
and future research directions are proposed to overcome identified limitations
and enhance computationally driven game creation.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 14:09:55 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Torii",
"Maya Grace",
""
],
[
"Murakami",
"Takahito",
""
],
[
"Ochiai",
"Yoichi",
""
]
] |
new_dataset
| 0.995039 |
2307.00395
|
Mustafa Munir
|
Mustafa Munir, William Avery, Radu Marculescu
|
MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications
|
Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditionally, convolutional neural networks (CNN) and vision transformers
(ViT) have dominated computer vision. However, recently proposed vision graph
neural networks (ViG) provide a new avenue for exploration. Unfortunately, for
mobile applications, ViGs are computationally expensive due to the overhead of
representing images as graph structures. In this work, we propose a new
graph-based sparse attention mechanism, Sparse Vision Graph Attention (SVGA),
that is designed for ViGs running on mobile devices. Additionally, we propose
the first hybrid CNN-GNN architecture for vision tasks on mobile devices,
MobileViG, which uses SVGA. Extensive experiments show that MobileViG beats
existing ViG models and existing mobile CNN and ViT architectures in terms of
accuracy and/or speed on image classification, object detection, and instance
segmentation tasks. Our fastest model, MobileViG-Ti, achieves 75.7% top-1
accuracy on ImageNet-1K with 0.78 ms inference latency on iPhone 13 Mini NPU
(compiled with CoreML), which is faster than MobileNetV2x1.4 (1.02 ms, 74.7%
top-1) and MobileNetV2x1.0 (0.81 ms, 71.8% top-1). Our largest model,
MobileViG-B, obtains 82.6% top-1 accuracy with only 2.30 ms latency, which is
faster and more accurate than the similarly sized EfficientFormer-L3 model
(2.77 ms, 82.4%). Our work proves that well-designed hybrid CNN-GNN
architectures can be a new avenue of exploration for designing models that are
extremely fast and accurate on mobile devices. Our code is publicly available
at https://github.com/SLDGroup/MobileViG.
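  A rough sketch of the flavor of static, graph-based sparse aggregation (a
simplified stand-in for SVGA, under the assumption that each token exchanges
messages with fixed-stride neighbors along its row and column instead of the
dynamically computed nearest neighbors used by standard ViGs):

    import torch

    def static_sparse_aggregate(x, stride=4):
        # x: (B, C, H, W). Each location takes the max relative difference over
        # fixed-stride row/column neighbors, avoiding KNN graph construction.
        B, C, H, W = x.shape
        best = torch.zeros_like(x)
        for s in range(stride, max(H, W), stride):
            for shift in ((0, s), (0, -s), (s, 0), (-s, 0)):
                neighbor = torch.roll(x, shifts=shift, dims=(2, 3))
                best = torch.maximum(best, neighbor - x)   # max-relative message
        return torch.cat([x, best], dim=1)                 # (B, 2C, H, W)

    out = static_sparse_aggregate(torch.randn(1, 32, 16, 16))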
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 17:49:12 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Munir",
"Mustafa",
""
],
[
"Avery",
"William",
""
],
[
"Marculescu",
"Radu",
""
]
] |
new_dataset
| 0.994793 |
2307.00421
|
Mingzhen Shao
|
Mingzhen Shao
|
Brightness-Restricted Adversarial Attack Patch
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial attack patches have gained increasing attention due to their
practical applicability in physical-world scenarios. However, the bright colors
used in attack patches represent a significant drawback, as they can be easily
identified by human observers. Moreover, even though these attacks have been
highly successful in deceiving target networks, it remains unknown which
specific features of the attack patch contribute to this success. Our paper introduces
a brightness-restricted patch (BrPatch) that uses optical characteristics to
effectively reduce conspicuousness while preserving image independence. We also
conducted an analysis of the impact of various image features (such as color,
texture, noise, and size) on the effectiveness of an attack patch in
physical-world deployment. Our experiments show that attack patches exhibit
strong redundancy to brightness and are resistant to color transfer and noise.
Based on our findings, we propose some additional methods to further reduce the
conspicuousness of BrPatch. Our findings also explain the robustness of attack
patches observed in physical-world scenarios.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 20:08:55 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Shao",
"Mingzhen",
""
]
] |
new_dataset
| 0.999322 |
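One concrete way to realize a brightness restriction like the one described in
the BrPatch abstract above is to rescale the patch wherever its luma exceeds a
cap; the Rec. 601 luma weights and the 0.6 cap below are assumptions of this
sketch, not the paper's optical method.

```python
import torch

def restrict_brightness(patch: torch.Tensor, max_luma: float = 0.6) -> torch.Tensor:
    """Rescale a (3, H, W) patch so its per-pixel luma never exceeds max_luma."""
    w = torch.tensor([0.299, 0.587, 0.114]).view(3, 1, 1)  # Rec. 601 weights
    luma = (patch * w).sum(dim=0, keepdim=True)            # (1, H, W)
    scale = torch.clamp(max_luma / luma.clamp(min=1e-6), max=1.0)
    return patch * scale

patch = torch.rand(3, 64, 64)
out = restrict_brightness(patch)
w = torch.tensor([0.299, 0.587, 0.114]).view(3, 1, 1)
print((out * w).sum(dim=0).max())  # never above 0.6
```

A step like this would typically run inside the patch-optimization loop, after
each gradient update, so the attack objective is always evaluated on a
brightness-compliant patch.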
2307.00488
|
Jingxing Qian
|
Jingxing Qian, Veronica Chatrath, James Servos, Aaron Mavrinac,
Wolfram Burgard, Steven L. Waslander, Angela P. Schoellig
|
POV-SLAM: Probabilistic Object-Aware Variational SLAM in Semi-Static
Environments
|
Published in Robotics: Science and Systems (RSS) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous localization and mapping (SLAM) in slowly varying scenes is
important for long-term robot task completion. Failing to detect scene changes
may lead to inaccurate maps and, ultimately, lost robots. Classical SLAM
algorithms assume static scenes, and recent works take dynamics into account,
but require scene changes to be observed in consecutive frames. Semi-static
scenes, wherein objects appear, disappear, or move slowly over time, are often
overlooked, yet are critical for long-term operation. We propose an
object-aware, factor-graph SLAM framework that tracks and reconstructs
semi-static object-level changes. Our novel variational
expectation-maximization strategy is used to optimize factor graphs involving a
Gaussian-Uniform bimodal measurement likelihood for potentially-changing
objects. We evaluate our approach alongside state-of-the-art SLAM solutions
in simulation and on our novel real-world SLAM dataset captured in a warehouse
over four months. Our method improves the robustness of localization in the
presence of semi-static changes, providing object-level reasoning about the
scene.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 06:26:36 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Qian",
"Jingxing",
""
],
[
"Chatrath",
"Veronica",
""
],
[
"Servos",
"James",
""
],
[
"Mavrinac",
"Aaron",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Waslander",
"Steven L.",
""
],
[
"Schoellig",
"Angela P.",
""
]
] |
new_dataset
| 0.973861 |
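The Gaussian-Uniform bimodal measurement likelihood mentioned in the POV-SLAM
abstract above can be illustrated with a toy E-step that assigns each residual
a probability of coming from the static (Gaussian) mode rather than the
changed-object (uniform) mode; all constants and function names here are
assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def responsibilities(residuals, sigma, uniform_density, prior_static=0.9):
    """E-step: probability each residual comes from the static (Gaussian)
    mode rather than the changed-object (uniform) mode."""
    gauss = prior_static * np.exp(-0.5 * (residuals / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))
    unif = (1.0 - prior_static) * uniform_density
    return gauss / (gauss + unif)

r = np.array([0.01, 0.05, 1.5])   # metres of point-to-model error
w = responsibilities(r, sigma=0.05, uniform_density=0.1)
print(w.round(3))  # large residuals get low static probability
```

In a variational EM scheme these responsibilities down-weight measurements of
potentially changed objects during factor-graph optimization instead of
discarding them outright.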
2307.00500
|
Ramviyas Parasuraman
|
Ehsan Latif and Ramviyas Parasuraman
|
CQLite: Communication-Efficient Multi-Robot Exploration Using
Coverage-biased Distributed Q-Learning
| null | null | null | null |
cs.RO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Frontier exploration and reinforcement learning have historically been used
to solve the problem of enabling many mobile robots to autonomously and
cooperatively explore complex surroundings. These methods need to keep an
internal global map for navigation, but they do not take into consideration the
high costs of communication and information sharing between robots. This study
offers CQLite, a novel distributed Q-learning technique designed to minimize
data communication overhead between robots while achieving rapid convergence
and thorough coverage in multi-robot exploration. The proposed CQLite method
uses ad hoc map merging and selectively shares updated Q-values at recently
identified frontiers to significantly reduce communication costs. The
theoretical analysis of CQLite's convergence and efficiency, together with
extensive numerical verification on simulated indoor maps utilizing several
robots, demonstrates the method's effectiveness. With over 2x reductions in
computation and communication alongside improved mapping performance, CQLite
outperformed cutting-edge multi-robot exploration techniques like Rapidly
Exploring Random Trees and Deep Reinforcement Learning.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 07:20:29 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Latif",
"Ehsan",
""
],
[
"Parasuraman",
"Ramviyas",
""
]
] |
new_dataset
| 0.991908 |
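A minimal sketch of the selective-sharing idea described in the CQLite
abstract above: each robot updates Q-values locally and broadcasts only the
entries touched since its last message. The max-based merge rule and the
dirty-set bookkeeping are assumptions of this sketch, not CQLite's exact
protocol.

```python
from collections import defaultdict

class SelectiveQAgent:
    def __init__(self, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.alpha, self.gamma = alpha, gamma
        self.dirty = set()  # (state, action) pairs updated since last broadcast

    def update(self, s, a, reward, s_next, actions_next):
        best_next = max((self.q[(s_next, a2)] for a2 in actions_next), default=0.0)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])
        self.dirty.add((s, a))

    def outgoing_message(self):
        """Only recently updated entries leave the robot, not the whole table."""
        msg = {k: self.q[k] for k in self.dirty}
        self.dirty.clear()
        return msg

    def merge(self, msg):
        for k, v in msg.items():
            self.q[k] = max(self.q[k], v)  # simple, assumed merge rule

agent = SelectiveQAgent()
agent.update(s=(2, 3), a="north", reward=1.0, s_next=(2, 4),
             actions_next=["north", "east"])
print(agent.outgoing_message())
```

Broadcasting only the dirty entries is what bounds the per-step communication
cost, regardless of how large the explored map grows.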
2307.00509
|
Tzuf Paz-Argaman
|
Tzuf Paz-Argaman, Tal Bauman, Itai Mondshine, Itzhak Omer, Sagi
Dalyot, Reut Tsarfaty
|
HeGeL: A Novel Dataset for Geo-Location from Hebrew Text
|
Accepted for ACL findings 2023
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of textual geolocation - retrieving the coordinates of a place based
on a free-form language description - calls for not only grounding but also
natural language understanding and geospatial reasoning. Even though there are
quite a few datasets in English used for geolocation, they are currently based
on open-source data (Wikipedia and Twitter), where the location of the
described place is mostly implicit, such that the location retrieval resolution
is limited. Furthermore, there are no datasets available for addressing the
problem of textual geolocation in morphologically rich and resource-poor
languages, such as Hebrew. In this paper, we present the Hebrew Geo-Location
(HeGeL) corpus, designed to collect literal place descriptions and analyze
lingual geospatial reasoning. We crowdsourced 5,649 literal Hebrew place
descriptions of various place types in three cities in Israel. Qualitative and
empirical analyses show that the data exhibits abundant use of geospatial
reasoning and requires a novel environmental representation.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 08:09:10 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Paz-Argaman",
"Tzuf",
""
],
[
"Bauman",
"Tal",
""
],
[
"Mondshine",
"Itai",
""
],
[
"Omer",
"Itzhak",
""
],
[
"Dalyot",
"Sagi",
""
],
[
"Tsarfaty",
"Reut",
""
]
] |
new_dataset
| 0.999814 |
2307.00518
|
Ling Chen
|
Binqing Wu, Ling Chen
|
DSTCGCN: Learning Dynamic Spatial-Temporal Cross Dependencies for
Traffic Forecasting
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic forecasting is essential to intelligent transportation systems, yet it
is challenging due to the complicated spatial and temporal dependencies within
a road network. Existing works usually learn spatial and temporal dependencies
separately, ignoring the dependencies crossing spatial and temporal dimensions.
In this paper, we propose DSTCGCN, a dynamic spatial-temporal cross graph
convolution network to learn dynamic spatial and temporal dependencies jointly
via graphs for traffic forecasting. Specifically, we introduce a fast Fourier
transform (FFT) based attentive selector to choose relevant time steps for each
time step based on time-varying traffic data. Given the selected time steps, we
introduce a dynamic cross graph construction module, consisting of the spatial
graph construction, temporal connection graph construction, and fusion modules,
to learn dynamic spatial-temporal cross dependencies without pre-defined
priors. Extensive experiments on six real-world datasets demonstrate that
DSTCGCN achieves state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 08:53:10 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Wu",
"Binqing",
""
],
[
"Chen",
"Ling",
""
]
] |
new_dataset
| 0.988371 |
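A toy version of the FFT-based attentive selector described in the DSTCGCN
abstract above: score each past time step by how similar its magnitude
spectrum is to the latest step's, and keep the top-k. Treating the spectrum
across the sensor dimension as a per-step signature is an assumption of this
sketch, not the paper's exact formulation.

```python
import torch

def select_relevant_steps(series: torch.Tensor, k: int = 3) -> torch.Tensor:
    """series: (T, N) traffic readings over T steps and N sensors.
    Returns indices of the k past steps most similar to the latest one."""
    spec = torch.fft.rfft(series, dim=1).abs()   # (T, F) magnitude spectra
    query = spec[-1]                             # signature of the latest step
    scores = torch.nn.functional.cosine_similarity(
        spec[:-1], query.unsqueeze(0), dim=1)    # (T-1,)
    return torch.topk(scores, k).indices

x = torch.randn(12, 207)  # 12 steps, 207 sensors (sizes assumed for the demo)
print(select_relevant_steps(x))
```

The selected indices would then determine which past steps get cross
spatial-temporal graph edges, keeping the dynamic graph sparse.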
2307.00549
|
Ningyu He
|
Pengxiang Ma, Ningyu He, Yuhua Huang, Haoyu Wang, Xiapu Luo
|
Abusing the Ethereum Smart Contract Verification Services for Fun and
Profit
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts play a vital role in the Ethereum ecosystem. Due to the
prevalence of various security issues in smart contracts, smart contract
verification is urgently needed; it is the process of matching a smart
contract's source code to its on-chain bytecode for gaining mutual trust
between smart contract developers and users. Although smart contract
verification services are embedded in both popular Ethereum browsers (e.g.,
Etherscan and Blockscout) and official platforms (i.e., Sourcify), and have
gained great popularity in the ecosystem, their security and trustworthiness remain
unclear. To fill the void, we present the first comprehensive security analysis
of smart contract verification services in the wild. By diving into the
detailed workflow of existing verifiers, we have summarized the key security
properties that should be met, and observed eight types of vulnerabilities that
can break the verification. Further, we propose a series of detection and
exploitation methods to reveal the presence of vulnerabilities in the most
popular services, and uncover 19 exploitable vulnerabilities in total. All the
studied smart contract verification services can be abused to help spread
malicious smart contracts, and we have already observed attackers using this
kind of trick for scamming. It is hence urgent for our community to take
action to detect and mitigate security issues related to
smart contract verification, a key component of the Ethereum smart contract
ecosystem.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 12:05:43 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Ma",
"Pengxiang",
""
],
[
"He",
"Ningyu",
""
],
[
"Huang",
"Yuhua",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Luo",
"Xiapu",
""
]
] |
new_dataset
| 0.993642 |
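The verification process referred to in the abstract above boils down to
compiling the submitted source and comparing the result against the bytecode
actually deployed on chain. The sketch below assumes py-solc-x and web3.py are
installed; the RPC URL and contract address are placeholders, and production
verifiers must handle optimizer settings, constructor arguments, and the
metadata trailer far more carefully, which is exactly where the abuses studied
in the paper arise.

```python
import solcx
from web3 import Web3

SOURCE = ("pragma solidity ^0.8.0;\n"
          "contract C { function f() external pure returns (uint256) { return 1; } }")

solcx.install_solc("0.8.19")
compiled = solcx.compile_source(SOURCE, output_values=["bin-runtime"],
                                solc_version="0.8.19")
expected = bytes.fromhex(next(iter(compiled.values()))["bin-runtime"])

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder endpoint
deployed = bytes(w3.eth.get_code("0x0000000000000000000000000000000000000000"))

# Solidity appends a CBOR-encoded metadata trailer to the runtime bytecode;
# verifiers that compare (or skip) it carelessly are one abuse vector.
TRAILER = 53  # typical trailer length for solc 0.8.x, an assumption here
match = deployed[:-TRAILER] == expected[:-TRAILER]
print("verified" if match else "mismatch")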
2307.00561
|
Fu Song
|
Huiyu Tan and Pengfei Gao and Taolue Chen and Fu Song and Zhilin Wu
|
SAT-based Formal Fault-Resistance Verification of Cryptographic Circuits
| null | null | null | null |
cs.CR cs.AR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fault injection attacks represent a type of active, physical attack against
cryptographic circuits. Various countermeasures have been proposed to thwart
such attacks, the design and implementation of which are, however, intricate,
error-prone, and laborious. The current formal fault-resistance verification
approaches are limited in efficiency and scalability. In this paper, we
formalize the fault-resistance verification problem, which is shown to be
NP-complete. We then devise a novel approach for encoding the fault-resistance
verification problem as the Boolean satisfiability (SAT) problem so that
off-the-shelf SAT solvers can be utilized. The approach is implemented in an
open-source tool FIRMER which is evaluated extensively on realistic
cryptographic circuit benchmarks. The experimental results show that FIRMER is
able to verify fault-resistance of almost all (46/48) benchmarks in 3 minutes
(the other two are verified in 35 minutes). In contrast, the prior approach
fails on 23 fault-resistance verification tasks even after 24 hours (per task).
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 13:01:32 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Tan",
"Huiyu",
""
],
[
"Gao",
"Pengfei",
""
],
[
"Chen",
"Taolue",
""
],
[
"Song",
"Fu",
""
],
[
"Wu",
"Zhilin",
""
]
] |
new_dataset
| 0.99943 |
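To make the SAT-encoding idea in the abstract above concrete, here is a toy
fault-observability query on a single AND gate: the circuit, the fault model,
and the use of PySAT's Glucose3 are assumptions of this sketch, far simpler
than FIRMER's actual encoding.

```python
from pysat.solvers import Glucose3

# Variables: a=1, b=2, g=3 (g = a AND b), gf=4 (g with an injected inversion
# fault), d=5 (d = g XOR gf, i.e. "the fault changes the output").
clauses = [
    [-3, 1], [-3, 2], [3, -1, -2],   # g  <-> a AND b   (Tseitin encoding)
    [4, 3], [-4, -3],                # gf <-> NOT g     (fault model)
    [-5, 3, 4], [-5, -3, -4],        # d  <-> g XOR gf
    [5, -3, 4], [5, 3, -4],
    [5],                             # ask: can the fault be observed?
]
with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        print("fault observable under input:", solver.get_model()[:2])
    else:
        print("circuit is resistant to this fault")
```

A real fault-resistance encoding additionally constrains the number and type
of simultaneous faults and conjoins the countermeasure's detection logic, so
UNSAT certifies resistance under the given fault model.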
2307.00578
|
Tarek Hamdani M.
|
Islem Jarraya, Tarek M. Hamdani, Habib Chabchoub, Adel M. Alimi
|
TinySiamese Network for Biometric Analysis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Biometric recognition is the process of verifying or classifying human
characteristics in images or videos. It is a complex task that requires machine
learning algorithms, including convolutional neural networks (CNNs) and Siamese
networks. Moreover, there are several limitations to consider when using these
algorithms for image verification and classification tasks. In fact, training
may be computationally intensive, requiring specialized hardware and
significant computational resources to train and deploy. Moreover, it
necessitates a large amount of labeled data, which can be time-consuming and
costly to obtain. The main advantage of the proposed TinySiamese compared to
the standard Siamese is that it does not require the whole CNN for training. In
fact, using a pre-trained CNN as a feature extractor and the TinySiamese to
learn the extracted features gave almost the same performance and efficiency as
the standard Siamese for biometric verification. In this way, the TinySiamese
solves the problems of memory and computational time with a small number of
layers, which does not exceed 7. It can run on low-power machines that have an
ordinary GPU and cannot allocate a large amount of RAM. Using TinySiamese
with only 8 GB of memory, the matching time decreased by 76.78% on the B2F
(Biometric images of Fingerprints and Faces), FVC2000, FVC2002 and FVC2004
while the training time for 10 epochs went down by approximately 93.14% on the
B2F, FVC2002, THDD-part1 and CASIA-B datasets. The accuracy of the fingerprint,
gait (NM-angle 180 degree) and face verification tasks was better than the
accuracy of a standard Siamese by 0.87%, 20.24% and 3.85% respectively.
TinySiamese achieved comparable accuracy with related works for the fingerprint
and gait classification tasks.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 14:15:52 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Jarraya",
"Islem",
""
],
[
"Hamdani",
"Tarek M.",
""
],
[
"Chabchoub",
"Habib",
""
],
[
"Alimi",
"Adel M.",
""
]
] |
new_dataset
| 0.967481 |
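A minimal sketch of the pattern the abstract above describes, a frozen
pre-trained CNN as feature extractor with a tiny trainable Siamese head:
ResNet-18, the layer widths, and the distance output below are assumptions of
this sketch, not the paper's exact TinySiamese.

```python
import torch
import torch.nn as nn
from torchvision import models

class TinySiameseSketch(nn.Module):
    """Frozen backbone + a tiny trainable head (only 3 layers here)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # pre-trained in practice
        self.backbone.fc = nn.Identity()               # expose 512-d features
        for p in self.backbone.parameters():
            p.requires_grad = False                    # train the head only
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                  nn.Linear(128, 64))

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        e1 = self.head(self.backbone(x1))
        e2 = self.head(self.backbone(x2))
        return nn.functional.pairwise_distance(e1, e2)  # small = same identity

model = TinySiameseSketch()
d = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(d.shape)  # torch.Size([2])
```

Because gradients never flow through the backbone, training touches only a
few thousand parameters, which is what makes this setup feasible on low-power
machines.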
2307.00580
|
Hemanth Karnati
|
Hemanth Karnati
|
IoT-Based Air Quality Monitoring System with Machine Learning for
Accurate and Real-time Data Analysis
|
18 pages, 10 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Air pollution in urban areas has severe consequences for both human health
and the environment, predominantly caused by exhaust emissions from vehicles.
To address the issue of air pollution awareness, Air Pollution Monitoring
systems are used to measure the concentration of gases like CO2, smoke,
alcohol, benzene, and NH3 present in the air. However, current mobile
applications are unable to provide users with real-time data specific to their
location. In this paper, we propose the development of a portable air quality
detection device that can be used anywhere. The data collected will be stored
and visualized using the cloud-based web app ThingSpeak.
The device utilizes two sensors, MQ135 and MQ3, to detect harmful gases and
measure air quality in parts per million (PPM). Additionally, machine learning
analysis will be employed on the collected data.
|
[
{
"version": "v1",
"created": "Sun, 2 Jul 2023 14:18:04 GMT"
}
] | 2023-07-04T00:00:00 |
[
[
"Karnati",
"Hemanth",
""
]
] |
new_dataset
| 0.992837 |
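A sketch of the device-to-cloud loop described in the abstract above, using
ThingSpeak's public REST update endpoint; the write key and the sensor-read
stub are placeholders, not values from the paper.

```python
import time
import requests

WRITE_KEY = "YOUR_WRITE_API_KEY"  # placeholder, not a real key

def read_mq135_ppm() -> float:
    """Stand-in for the real ADC read of the MQ135 sensor."""
    return 412.0

for _ in range(3):  # the real device would loop forever
    ppm = read_mq135_ppm()
    requests.get("https://api.thingspeak.com/update",
                 params={"api_key": WRITE_KEY, "field1": ppm},
                 timeout=10)
    time.sleep(20)  # ThingSpeak's free tier rate-limits updates
```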