id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string) | probability (float64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.13890 | Tianyi Liu | Tianyi Liu and Kejun Wu and Yi Wang and Wenyang Liu and Kim-Hui Yap
and Lap-Pui Chau | Bitstream-Corrupted Video Recovery: A Novel Benchmark Dataset and Method | Accepted by NeurIPS Dataset and Benchmark Track 2023 | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | The past decade has witnessed great strides in video recovery by specialist
technologies, like video inpainting, completion, and error concealment.
However, they typically simulate the missing content with manually designed error
masks, thus failing to fill in the realistic video loss in video communication
(e.g., telepresence, live streaming, and internet video) and multimedia
forensics. To address this, we introduce the bitstream-corrupted video (BSCV)
benchmark, the first benchmark dataset with more than 28,000 video clips, which
can be used for bitstream-corrupted video recovery in the real world. The BSCV
is a collection of 1) a proposed three-parameter corruption model for video
bitstream, 2) a large-scale dataset containing rich error patterns, multiple
corruption levels, and flexible dataset branches, and 3) a plug-and-play module
in video recovery framework that serves as a benchmark. We evaluate
state-of-the-art video inpainting methods on the BSCV dataset, demonstrating
existing approaches' limitations and our framework's advantages in solving the
bitstream-corrupted video recovery problem. The benchmark and dataset are
released at https://github.com/LIUTIGHE/BSCV-Dataset.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 06:06:26 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 05:55:08 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Liu",
"Tianyi",
""
],
[
"Wu",
"Kejun",
""
],
[
"Wang",
"Yi",
""
],
[
"Liu",
"Wenyang",
""
],
[
"Yap",
"Kim-Hui",
""
],
[
"Chau",
"Lap-Pui",
""
]
]
| new_dataset | 0.999841 |
2309.14048 | Shaun Azzopardi | Karam Kharraz, Shaun Azzopardi, Gerardo Schneider, Martin Leucker | Synchronous Agents, Verification, and Blame -- A Deontic View | To appear in ICTAC 2023 | null | null | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | A question we can ask of multi-agent systems is whether the agents'
collective interaction satisfies particular goals or specifications, which can
be either individual or collective. When a collaborative goal is not reached,
or a specification is violated, a pertinent question is whether any agent is to
blame. This paper considers a two-agent synchronous setting and a formal
language to specify when agents' collaboration is required. We take a deontic
approach and use obligations, permissions, and prohibitions to capture notions
of non-interference between agents. We also handle reparations, allowing
violations to be corrected or compensated. We give trace semantics to our
logic, and use it to define blame assignment for violations. We give an
automaton construction for the logic, which we use as the base for model
checking and blame analysis. We further provide a quantitative semantics
that can compare different interactions in terms of the required
reparations.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 11:23:59 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 08:01:57 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Kharraz",
"Karam",
""
],
[
"Azzopardi",
"Shaun",
""
],
[
"Schneider",
"Gerardo",
""
],
[
"Leucker",
"Martin",
""
]
]
| new_dataset | 0.991032 |
2309.14183 | Wei He | Wei He, Kai Han, Ying Nie, Chengcheng Wang, Yunhe Wang | Species196: A One-Million Semi-supervised Dataset for Fine-grained
Species Recognition | Accepted by NeurIPS 2023 Track Datasets and Benchmarks | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of foundation vision models has pushed the general visual
recognition to a high level, but cannot adequately address fine-grained
recognition in specialized domains such as invasive species classification.
Identifying and managing invasive species has strong social and ecological
value. Currently, most invasive species datasets are limited in scale and cover
a narrow range of species, which restricts the development of deep-learning
based invasion biometrics systems. To fill this gap, we introduce
Species196, a large-scale semi-supervised dataset of 196-category invasive
species. It collects over 19K images with expert-level accurate annotations
(Species196-L) and 1.2M unlabeled images of invasive species (Species196-U). The
dataset provides four experimental settings for benchmarking the existing
models and algorithms, namely, supervised learning, semi-supervised learning,
self-supervised pretraining and zero-shot inference ability of large
multi-modal models. To facilitate future research on these four learning
paradigms, we conduct an empirical study of the representative methods on the
introduced dataset. The dataset is publicly available at
https://species-dataset.github.io/.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 14:46:01 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 09:50:24 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"He",
"Wei",
""
],
[
"Han",
"Kai",
""
],
[
"Nie",
"Ying",
""
],
[
"Wang",
"Chengcheng",
""
],
[
"Wang",
"Yunhe",
""
]
]
| new_dataset | 0.999878 |
2309.14266 | Digby Chappell | Digby Chappell, Fernando Bello, Petar Kormushev, and Nicolas Rojas | The Hydra Hand: A Mode-Switching Underactuated Gripper with Precision
and Power Grasping Modes | This paper has been accepted for publication in IEEE Robotics and
Automation Letters. For the purpose of open access, the author(s) has applied
a Creative Commons Attribution (CC BY) license to any Accepted Manuscript
version arising. 8 pages, 11 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Human hands are able to grasp a wide range of object sizes, shapes, and
weights, achieved via reshaping and altering their apparent grasping stiffness
between compliant power and rigid precision. Achieving similar versatility in
robotic hands remains a challenge, which has often been addressed by adding
extra controllable degrees of freedom, tactile sensors, or specialised extra
grasping hardware, at the cost of control complexity and robustness. We
introduce a novel reconfigurable four-fingered two-actuator underactuated
gripper -- the Hydra Hand -- that switches between compliant power and rigid
precision grasps using a single motor, while generating grasps via a single
hydraulic actuator -- exhibiting adaptive grasping between finger pairs,
enabling the power grasping of two objects simultaneously. The mode switching
mechanism and the hand's kinematics are presented and analysed, and performance
is tested on two grasping benchmarks: one focused on rigid objects, and the
other on items of clothing. The Hydra Hand is shown to excel at grasping large
and irregular objects, and small objects with its respective compliant power
and rigid precision configurations. The hand's versatility is then showcased by
executing the challenging manipulation task of safely grasping and placing a
bunch of grapes, and then plucking a single grape from the bunch.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 16:27:51 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 10:11:42 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Chappell",
"Digby",
""
],
[
"Bello",
"Fernando",
""
],
[
"Kormushev",
"Petar",
""
],
[
"Rojas",
"Nicolas",
""
]
]
| new_dataset | 0.99904 |
2309.14355 | Lukas Erhard | L. Erhard, S. Hanke, U. Remer, A. Falenska and R. Heiberger | PopBERT. Detecting populism and its host ideologies in the German
Bundestag | null | null | null | null | cs.CL cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of populism concerns many political scientists and practitioners,
yet the detection of its underlying language remains fragmentary. This paper
aims to provide a reliable, valid, and scalable approach to measure populist
stances. For that purpose, we created an annotated dataset based on
parliamentary speeches of the German Bundestag (2013 to 2021). Following the
ideational definition of populism, we label moralizing references to the
virtuous people or the corrupt elite as core dimensions of populist language.
To identify, in addition, how the thin ideology of populism is thickened, we
annotate how populist statements are attached to left-wing or right-wing host
ideologies. We then train a transformer-based model (PopBERT) as a multilabel
classifier to detect and quantify each dimension. A battery of validation
checks reveals that the model has a strong predictive accuracy, provides high
qualitative face validity, matches party rankings of expert surveys, and
detects out-of-sample text snippets correctly. PopBERT enables dynamic analyses
of how German-speaking politicians and parties use populist language as a
strategic device. Furthermore, the annotator-level data may also be applied in
cross-domain applications or to develop related classifiers.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 14:48:02 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Erhard",
"L.",
""
],
[
"Hanke",
"S.",
""
],
[
"Remer",
"U.",
""
],
[
"Falenska",
"A.",
""
],
[
"Heiberger",
"R.",
""
]
]
| new_dataset | 0.999059 |
2309.14364 | Hiroki Sato | Hiroki Sato, Tanner Lund, Takahide Yoshida, Atsushi Masumori | Automata Quest: NCAs as a Video Game Life Mechanic | This article was submitted to and presented at Alife for and from
Video Games Workshop at ALIFE2023, Sapporo (Japan) | Alife for and from Video Games Workshop at ALIFE2023 | null | null | cs.HC cs.GR cs.MA cs.NE cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study life over the course of video game history as represented by their
mechanics. While there have been some variations depending on genre or
"character type", we find that most games converge to a similar representation.
We also examine the development of Conway's Game of Life (one of the first zero
player games) and related automata that have developed over the years. With
this history in mind, we investigate the viability of one popular form of
automata, namely Neural Cellular Automata, as a way to more fully express life
within video game settings and innovate new game mechanics or gameplay loops.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2023 11:14:09 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Sato",
"Hiroki",
""
],
[
"Lund",
"Tanner",
""
],
[
"Yoshida",
"Takahide",
""
],
[
"Masumori",
"Atsushi",
""
]
]
| new_dataset | 0.994746 |
2309.14393 | Lei Jiang | Ahmad Faiz, Sotaro Kaneda, Ruhan Wang, Rita Osi, Parteek Sharma, Fan
Chen, Lei Jiang | LLMCarbon: Modeling the end-to-end Carbon Footprint of Large Language
Models | null | null | null | null | cs.CL cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | The carbon footprint associated with large language models (LLMs) is a
significant concern, encompassing emissions from their training, inference,
experimentation, and storage processes, including operational and embodied
carbon emissions. An essential aspect is accurately estimating the carbon
impact of emerging LLMs even before their training, which heavily relies on GPU
usage. Existing studies have reported the carbon footprint of LLM training, but
only one tool, mlco2, can predict the carbon footprint of new neural networks
prior to physical training. However, mlco2 has several serious limitations. It
cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs,
disregards critical architectural parameters, focuses solely on GPUs, and
cannot model embodied carbon footprints. Addressing these gaps, we introduce
\textit{LLMCarbon}, an end-to-end carbon footprint projection model designed
for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly
enhances the accuracy of carbon footprint estimations for various LLMs.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 14:50:04 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Faiz",
"Ahmad",
""
],
[
"Kaneda",
"Sotaro",
""
],
[
"Wang",
"Ruhan",
""
],
[
"Osi",
"Rita",
""
],
[
"Sharma",
"Parteek",
""
],
[
"Chen",
"Fan",
""
],
[
"Jiang",
"Lei",
""
]
]
| new_dataset | 0.997741 |
2309.14463 | Bao Thach | Bao Thach, Tanner Watts, Shing-Hei Ho, Tucker Hermans, Alan Kuntz | DefGoalNet: Contextual Goal Learning from Demonstrations For Deformable
Object Manipulation | Submitted to IEEE Conference on Robotics and Automation (ICRA) 2024.
8 pages, 11 figures | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shape servoing, a robotic task dedicated to controlling objects to desired
goal shapes, is a promising approach to deformable object manipulation. An
issue arises, however, with the reliance on the specification of a goal shape.
This goal has been obtained either by a laborious domain knowledge engineering
process or by manually manipulating the object into the desired shape and
capturing the goal shape at that specific moment, both of which are impractical
in various robotic applications. In this paper, we solve this problem by
developing a novel neural network DefGoalNet, which learns deformable object
goal shapes directly from a small number of human demonstrations. We
demonstrate our method's effectiveness on various robotic tasks, both in
simulation and on a physical robot. Notably, in the surgical retraction task,
even when trained with as few as 10 demonstrations, our method achieves a
median success percentage of nearly 90%. These results mark a substantial
advancement in enabling shape servoing methods to bring deformable object
manipulation closer to practical, real-world applications.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 18:54:32 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Thach",
"Bao",
""
],
[
"Watts",
"Tanner",
""
],
[
"Ho",
"Shing-Hei",
""
],
[
"Hermans",
"Tucker",
""
],
[
"Kuntz",
"Alan",
""
]
]
| new_dataset | 0.981852 |
2309.14465 | Gordon Fraser | Adina Deiner and Gordon Fraser | NuzzleBug: Debugging Block-Based Programs in Scratch | To appear at the 2024 IEEE/ACM 46th International Conference on
Software Engineering (ICSE '24), April 14--20, 2024, Lisbon, Portugal | null | 10.1145/3597503.3623331 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While professional integrated programming environments support developers
with advanced debugging functionality, block-based programming environments for
young learners often provide no support for debugging at all, thus inhibiting
debugging and preventing debugging education. In this paper we introduce
NuzzleBug, an extension of the popular block-based programming environment
Scratch that provides the missing debugging support. NuzzleBug allows
controlling the executions of Scratch programs with classical debugging
functionality such as stepping and breakpoints, and it is an omniscient
debugger that also allows reverse stepping. To support learners in deriving
hypotheses that guide debugging, NuzzleBug is an interrogative debugger that
enables learners to ask questions about executions and provides answers explaining the
behavior in question. In order to evaluate NuzzleBug, we survey the opinions of
teachers, and study the effects on learners in terms of debugging effectiveness
and efficiency. We find that teachers consider NuzzleBug to be useful, and
children can use it to debug faulty programs effectively. However, systematic
debugging requires dedicated training, and even when NuzzleBug can provide
correct answers learners may require further help to comprehend faults and
necessary fixes, thus calling for further research on improving debugging
techniques and the information they provide.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 18:56:26 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Deiner",
"Adina",
""
],
[
"Fraser",
"Gordon",
""
]
]
| new_dataset | 0.973009 |
2309.14468 | Tolga Buz | Lucas Liebe, Franz Sauerwald, Sylwester Sawicki, Matthias Schneider,
Leo Schuhmann, Tolga Buz, Paul Boes, Ahmad Ahmadov, Gerard de Melo | FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed
Estimation Using Traffic Cameras | null | null | null | null | cs.CV cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating the speed of vehicles using traffic cameras is a crucial task for
traffic surveillance and management, enabling smoother traffic flow,
improved road safety, and lower environmental impact. Transportation-dependent
systems, such as for navigation and logistics, have great potential to benefit
from reliable speed estimation. While there is prior research in this area
reporting competitive accuracy levels, their solutions lack reproducibility and
robustness across different datasets. To address this, we provide a novel
framework for automatic real-time vehicle speed calculation, which copes with
more diverse data from publicly available traffic cameras to achieve greater
robustness. Our model employs novel techniques to estimate the length of road
segments via depth map prediction. Additionally, our framework is capable of
handling realistic conditions such as camera movements and different video
stream inputs automatically. We compare our model to three well-known models in
the field using their benchmark datasets. While our model does not set a new
state of the art regarding prediction performance, the results are competitive
on realistic CCTV videos. At the same time, our end-to-end pipeline offers more
consistent results, an easier implementation, and better compatibility. Its
modular structure facilitates reproducibility and future improvements.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 19:02:40 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Liebe",
"Lucas",
""
],
[
"Sauerwald",
"Franz",
""
],
[
"Sawicki",
"Sylwester",
""
],
[
"Schneider",
"Matthias",
""
],
[
"Schuhmann",
"Leo",
""
],
[
"Buz",
"Tolga",
""
],
[
"Boes",
"Paul",
""
],
[
"Ahmadov",
"Ahmad",
""
],
[
"de Melo",
"Gerard",
""
]
]
| new_dataset | 0.978959 |
2309.14477 | Noman Bashir | John Thiede, Noman Bashir, David Irwin, Prashant Shenoy | Carbon Containers: A System-level Facility for Managing
Application-level Carbon Emissions | ACM Symposium on Cloud Computing (SoCC) | null | 10.1145/3620678.3624644 | null | cs.DC cs.ET cs.OS cs.PF cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | To reduce their environmental impact, cloud datacenters' are increasingly
focused on optimizing applications' carbon-efficiency, or work done per mass of
carbon emitted. To facilitate such optimizations, we present Carbon Containers,
a simple system-level facility, which extends prior work on power containers,
that automatically regulates applications' carbon emissions in response to
variations in both their workload's intensity and their energy's
carbon-intensity. Specifically, Carbon Containers enable applications to
specify a maximum carbon emissions rate (in g$\cdot$CO$_2$e/hr), and then
transparently enforce this rate via a combination of vertical scaling,
container migration, and suspend/resume while maximizing either
energy-efficiency or performance.
Carbon Containers are especially useful for applications that i) must
continue running even during high-carbon periods, and ii) execute in regions
with few variations in carbon-intensity. These low-variability regions also
tend to have high average carbon-intensity, which increases the importance of
regulating carbon emissions. We implement a Carbon Containers prototype by
extending Linux Containers to incorporate the mechanisms above and evaluate it
using real workload traces and carbon-intensity data from multiple regions. We
compare Carbon Containers with prior work that regulates carbon emissions by
suspending/resuming applications during high/low carbon periods. We show that
Carbon Containers are more carbon-efficient and improve performance while
maintaining similar carbon emissions.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 19:22:25 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Thiede",
"John",
""
],
[
"Bashir",
"Noman",
""
],
[
"Irwin",
"David",
""
],
[
"Shenoy",
"Prashant",
""
]
]
| new_dataset | 0.999105 |
2309.14491 | Jingwei Ji | Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott
Ettinger, Dragomir Anguelov | Unsupervised 3D Perception with 2D Vision-Language Distillation for
Autonomous Driving | ICCV 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Closed-set 3D perception models trained on only a pre-defined set of object
categories can be inadequate for safety critical applications such as
autonomous driving where new object types can be encountered after deployment.
In this paper, we present a multi-modal auto labeling pipeline capable of
generating amodal 3D bounding boxes and tracklets for training models on
open-set categories without 3D human labels. Our pipeline exploits motion cues
inherent in point cloud sequences in combination with the freely available 2D
image-text pairs to identify and track all traffic participants. Compared to
the recent studies in this domain, which can only provide class-agnostic auto
labels limited to moving objects, our method can handle both static and moving
objects in an unsupervised manner and is able to output open-vocabulary
semantic labels thanks to the proposed vision-language knowledge distillation.
Experiments on the Waymo Open Dataset show that our approach outperforms the
prior work by significant margins on various unsupervised 3D perception tasks.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 19:33:52 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Najibi",
"Mahyar",
""
],
[
"Ji",
"Jingwei",
""
],
[
"Zhou",
"Yin",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Yan",
"Xinchen",
""
],
[
"Ettinger",
"Scott",
""
],
[
"Anguelov",
"Dragomir",
""
]
]
| new_dataset | 0.998955 |
2309.14508 | Anav Chaudhary | Anav Chaudhary, Kshitij Tiwari and Aniket Bera | HEROES: Unreal Engine-based Human and Emergency Robot Operation
Education System | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Training and preparing first responders and humanitarian robots for Mass
Casualty Incidents (MCIs) often poses a challenge owing to the lack of
realistic and easily accessible test facilities. While such facilities can
offer realistic scenarios post an MCI that can serve training and educational
purposes for first responders and humanitarian robots, they are often hard to
access owing to logistical constraints. To overcome this challenge, we present
HEROES, a versatile Unreal Engine simulator for designing novel training
simulations for humans and emergency robots for such urban search and rescue
operations. The proposed HEROES simulator is capable of generating synthetic
datasets for machine learning pipelines that are used for training robot
navigation. This work addresses the necessity for a comprehensive training
platform in the robotics community, ensuring pragmatic and efficient
preparation for real-world emergency scenarios. The strengths of our simulator
lie in its adaptability, scalability, and ability to facilitate collaboration
between robot developers and first responders, fostering synergy in developing
effective strategies for search and rescue operations in MCIs. We conducted a
preliminary user study with an 81% positive response supporting the ability of
HEROES to generate sufficiently varied environments, and a 78% positive
response affirming the usefulness of the simulation environment of HEROES.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 20:14:38 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Chaudhary",
"Anav",
""
],
[
"Tiwari",
"Kshitij",
""
],
[
"Bera",
"Aniket",
""
]
]
| new_dataset | 0.992658 |
2309.14516 | Shiming Wang | Shiming Wang, Holger Caesar, Liangliang Nan, Julian F. P. Kooij | UniBEV: Multi-modal 3D Object Detection with Uniform BEV Encoders for
Robustness against Missing Sensor Modalities | 6 pages, 5 figures | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-sensor object detection is an active research topic in automated
driving, but the robustness of such detection models against missing sensor
input (modality missing), e.g., due to a sudden sensor failure, is a critical
problem which remains under-studied. In this work, we propose UniBEV, an
end-to-end multi-modal 3D object detection framework designed for robustness
against missing modalities: UniBEV can operate on LiDAR plus camera input, but
also on LiDAR-only or camera-only input without retraining. To facilitate its
detector head to handle different input combinations, UniBEV aims to create
well-aligned Bird's Eye View (BEV) feature maps from each available modality.
Unlike prior BEV-based multi-modal detection methods, all sensor modalities
follow a uniform approach to resample features from the native sensor
coordinate systems to the BEV features. We furthermore investigate the
robustness of various fusion strategies w.r.t. missing modalities: the commonly
used feature concatenation, but also channel-wise averaging, and a
generalization to weighted averaging termed Channel Normalized Weights. To
validate its effectiveness, we compare UniBEV to state-of-the-art BEVFusion and
MetaBEV on nuScenes over all sensor input combinations. In this setting, UniBEV
achieves $52.5 \%$ mAP on average over all input combinations, significantly
improving over the baselines ($43.5 \%$ mAP on average for BEVFusion, $48.7 \%$
mAP on average for MetaBEV). An ablation study shows the robustness benefits of
fusing by weighted averaging over regular concatenation, and of sharing queries
between the BEV encoders of each modality. Our code will be released upon paper
acceptance.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 20:22:47 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Wang",
"Shiming",
""
],
[
"Caesar",
"Holger",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Kooij",
"Julian F. P.",
""
]
]
| new_dataset | 0.999004 |
2309.14517 | Deepak Kumar | Deepak Kumar, Yousef AbuHashem, Zakir Durumeric | Watch Your Language: Large Language Models and Content Moderation | null | null | null | null | cs.HC cs.AI cs.CL cs.CR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have exploded in popularity due to their ability
to perform a wide array of natural language tasks. Text-based content
moderation is one LLM use case that has received recent enthusiasm; however,
there is little research investigating how LLMs perform in content moderation
settings. In this work, we evaluate a suite of modern, commercial LLMs (GPT-3,
GPT-3.5, GPT-4) on two common content moderation tasks: rule-based community
moderation and toxic content detection. For rule-based community moderation, we
construct 95 LLM moderation-engines prompted with rules from 95 Reddit
subcommunities and find that LLMs can be effective at rule-based moderation for
many communities, achieving a median accuracy of 64% and a median precision of
83%. For toxicity detection, we find that LLMs significantly outperform
existing commercially available toxicity classifiers. However, we also find
that recent increases in model size add only marginal benefit to toxicity
detection, suggesting a potential performance plateau for LLMs on toxicity
detection tasks. We conclude by outlining avenues for future work in studying
LLMs and content moderation.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 20:23:51 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Kumar",
"Deepak",
""
],
[
"AbuHashem",
"Yousef",
""
],
[
"Durumeric",
"Zakir",
""
]
]
| new_dataset | 0.994225 |
2309.14534 | Hyoungwook Jin | Hyoungwook Jin, Seonghee Lee, Hyungyu Shin, Juho Kim | "Teach AI How to Code": Using Large Language Models as Teachable Agents
for Programming Education | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This work investigates large language models (LLMs) as teachable agents for
learning by teaching (LBT). LBT with teachable agents helps learners identify
their knowledge gaps and discover new knowledge. However, teachable agents
require expensive programming of subject-specific knowledge. While LLMs as
teachable agents can reduce the cost, LLMs' over-competence as tutees
discourages learners from teaching. We propose a prompting pipeline that
restrains LLMs' competence and makes them initiate "why" and "how" questions
for effective knowledge-building. We combined these techniques into TeachYou,
an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutee
chatbot that can simulate misconceptions and unawareness prescribed in its
knowledge state. Our technical evaluation confirmed that our prompting pipeline
can effectively configure AlgoBo's problem-solving performance. Through a
between-subject study with 40 algorithm novices, we also observed that AlgoBo's
questions led to knowledge-dense conversations (effect size=0.73). Lastly, we
discuss design implications, cost-efficiency, and personalization of LLM-based
teachable agents.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 21:20:04 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Jin",
"Hyoungwook",
""
],
[
"Lee",
"Seonghee",
""
],
[
"Shin",
"Hyungyu",
""
],
[
"Kim",
"Juho",
""
]
]
| new_dataset | 0.994498 |
2309.14551 | Daniel Aronoff Dr. | Daniel Aronoff, Isaac Ardis | ADESS: A Proof-of-Work Protocol to Deter Double-Spend Attacks | 33 pages. Accepted at Future of Information and Communications
Conference 2024 | null | null | null | cs.CR cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A principal vulnerability of a proof-of-work ("PoW") blockchain is that an
attacker can re-write the history of transactions by forking a previously
published block and build a new chain segment containing a different sequence
of transactions. If the attacker's chain has the most cumulative mining puzzle
difficulty, nodes will recognize it as canonical. We propose a modification to
PoW protocols, called ADESS, that contains two novel features. The first
modification enables a node to identify the attacker chain by comparing the
temporal sequence of blocks on competing chains. The second modification
penalizes the attacker by requiring it to apply exponentially increasing
hashrate in order to make its chain canonical. We demonstrate two things: (i)
the expected cost of carrying out a double-spend attack is weakly higher under
ADESS compared to the current PoW protocols and (ii) for any value of
transaction, there is a penalty setting in ADESS that renders the expected
profit of a double-spend attack negative.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 21:50:23 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Aronoff",
"Daniel",
""
],
[
"Ardis",
"Isaac",
""
]
]
| new_dataset | 0.996205 |
2309.14568 | Avi Shmidman | Shaltiel Shmidman, Avi Shmidman, Amir David Nissan Cohen, Moshe Koppel | Introducing DictaLM -- A Large Generative Language Model for Modern
Hebrew | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present DictaLM, a large-scale language model tailored for Modern Hebrew.
Boasting 7B parameters, this model is predominantly trained on Hebrew-centric
data. As a commitment to promoting research and development in the Hebrew
language, we release both the foundation model and the instruct-tuned model
under a Creative Commons license. Concurrently, we introduce DictaLM-Rab,
another foundation model geared towards Rabbinic/Historical Hebrew. These
foundation models serve as ideal starting points for fine-tuning various
Hebrew-specific tasks, such as instruction, Q&A, sentiment analysis, and more.
This release represents a preliminary step, offering an initial Hebrew LLM
model for the Hebrew NLP community to experiment with.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 22:42:09 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Shmidman",
"Shaltiel",
""
],
[
"Shmidman",
"Avi",
""
],
[
"Cohen",
"Amir David Nissan",
""
],
[
"Koppel",
"Moshe",
""
]
]
| new_dataset | 0.99442 |
2309.14590 | Minwoo Jung | Minwoo Jung, Wooseong Yang, Dongjae Lee, Hyeonjae Gil, Giseop Kim,
Ayoung Kim | HeLiPR: Heterogeneous LiDAR Dataset for inter-LiDAR Place Recognition
under Spatial and Temporal Variations | 9 pages, 9 figures, 5 tables | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Place recognition is crucial for robotic localization and loop closure in
simultaneous localization and mapping (SLAM). Recently, LiDARs have gained
popularity due to their robust sensing capability and measurement consistency,
even in the illumination-variant environment, offering an advantage over
traditional imaging sensors. Spinning LiDARs are widely accepted among many
types, while non-repetitive scanning patterns have recently been utilized in
robotic applications. Beyond the range measurements, some LiDARs offer
additional measurements, such as reflectivity, Near Infrared (NIR), and
velocity (e.g., FMCW LiDARs). Despite these advancements, a noticeable dearth
of datasets comprehensively reflects the broad spectrum of LiDAR configurations
optimized for place recognition. To tackle this issue, our paper proposes the
HeLiPR dataset, curated especially for place recognition with heterogeneous
LiDAR systems, embodying spatial-temporal variations. To the best of our
knowledge, the HeLiPR dataset is the first heterogeneous LiDAR dataset designed
to support inter-LiDAR place recognition with both non-repetitive and spinning
LiDARs, accommodating different field of view (FOV) and varying numbers of
rays. Encompassing the distinct LiDAR configurations, it captures varied
environments ranging from urban cityscapes to high-dynamic freeways over a
month, designed to enhance the adaptability and robustness of place recognition
across diverse scenarios. Notably, the HeLiPR dataset also includes
trajectories that parallel sequences from MulRan, underscoring its utility for
research in heterogeneous LiDAR place recognition and long-term studies. The
dataset is accessible at https://sites.google.com/view/heliprdataset.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 00:45:04 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Jung",
"Minwoo",
""
],
[
"Yang",
"Wooseong",
""
],
[
"Lee",
"Dongjae",
""
],
[
"Gil",
"Hyeonjae",
""
],
[
"Kim",
"Giseop",
""
],
[
"Kim",
"Ayoung",
""
]
]
| new_dataset | 0.999857 |
2309.14594 | Helei Duan | Helei Duan, Bikram Pandit, Mohitvishnu S. Gadde, Bart Jaap van Marum,
Jeremy Dao, Chanho Kim, Alan Fern | Learning Vision-Based Bipedal Locomotion for Challenging Terrain | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) for bipedal locomotion has recently demonstrated
robust gaits over moderate terrains using only proprioceptive sensing. However,
such blind controllers will fail in environments where robots must anticipate
and adapt to local terrain, which requires visual perception. In this paper, we
propose a fully-learned system that allows bipedal robots to react to local
terrain while maintaining commanded travel speed and direction. Our approach
first trains a controller in simulation using a heightmap expressed in the
robot's local frame. Next, data is collected in simulation to train a heightmap
predictor, whose input is the history of depth images and robot states. We
demonstrate that with appropriate domain randomization, this approach allows
for successful sim-to-real transfer with no explicit pose estimation and no
fine-tuning using real-world data. To the best of our knowledge, this is the
first example of sim-to-real learning for vision-based bipedal locomotion over
challenging terrains.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 00:59:59 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Duan",
"Helei",
""
],
[
"Pandit",
"Bikram",
""
],
[
"Gadde",
"Mohitvishnu S.",
""
],
[
"van Marum",
"Bart Jaap",
""
],
[
"Dao",
"Jeremy",
""
],
[
"Kim",
"Chanho",
""
],
[
"Fern",
"Alan",
""
]
]
| new_dataset | 0.986797 |
2309.14600 | Han Yi | Han Yi, Zhedong Zheng, Xiangyu Xu and Tat-seng Chua | Progressive Text-to-3D Generation for Automatic 3D Prototyping | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-3D generation is to craft a 3D object according to a natural language
description. This can significantly reduce the workload for manually designing
3D models and provide a more natural way of interaction for users. However,
this problem remains challenging in recovering the fine-grained details
effectively and optimizing a large-size 3D output efficiently. Inspired by the
success of progressive learning, we propose a Multi-Scale Triplane Network
(MTN) and a new progressive learning strategy. As the name implies, the
Multi-Scale Triplane Network consists of four triplanes transitioning from low
to high resolution. The low-resolution triplane could serve as an initial shape
for the high-resolution ones, easing the optimization difficulty. To further
enable the fine-grained details, we also introduce the progressive learning
strategy, which explicitly demands the network to shift its focus of attention
from simple coarse-grained patterns to difficult fine-grained patterns. Our
experiment verifies that the proposed method performs favorably against
existing methods. For even the most challenging descriptions, where most
existing methods struggle to produce a viable shape, our proposed method
consistently delivers. We aspire for our work to pave the way for automatic 3D
prototyping via natural language descriptions.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 01:08:35 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Yi",
"Han",
""
],
[
"Zheng",
"Zhedong",
""
],
[
"Xu",
"Xiangyu",
""
],
[
"Chua",
"Tat-seng",
""
]
]
| new_dataset | 0.958168 |
2309.14611 | Xiao Wang | Xiao Wang, Shiao Wang, Chuanming Tang, Lin Zhu, Bo Jiang, Yonghong
Tian, Jin Tang | Event Stream-based Visual Object Tracking: A High-Resolution Benchmark
Dataset and A Novel Baseline | null | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tracking using bio-inspired event cameras has drawn more and more attention
in recent years. Existing works either utilize aligned RGB and event data for
accurate tracking or directly learn an event-based tracker. The first category
needs more cost for inference and the second one may be easily influenced by
noisy events or sparse spatial resolution. In this paper, we propose a novel
hierarchical knowledge distillation framework that can fully utilize
multi-modal / multi-view information during training to facilitate knowledge
transfer, enabling us to achieve high-speed and low-latency visual tracking
during testing by using only event signals. Specifically, a teacher
Transformer-based multi-modal tracking framework is first trained by feeding
the RGB frame and event stream simultaneously. Then, we design a new
hierarchical knowledge distillation strategy which includes pairwise
similarity, feature representation, and response maps-based knowledge
distillation to guide the learning of the student Transformer network.
Moreover, since existing event-based tracking datasets are all low-resolution
($346 \times 260$), we propose the first large-scale high-resolution ($1280
\times 720$) dataset named EventVOT. It contains 1141 videos and covers a wide
range of categories such as pedestrians, vehicles, UAVs, ping pongs, etc.
Extensive experiments on both low-resolution (FE240hz, VisEvent, COESOT), and
our newly proposed high-resolution EventVOT dataset fully validated the
effectiveness of our proposed method. The dataset, evaluation toolkit, and
source code are available on
\url{https://github.com/Event-AHU/EventVOT_Benchmark}
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 01:42:26 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Wang",
"Xiao",
""
],
[
"Wang",
"Shiao",
""
],
[
"Tang",
"Chuanming",
""
],
[
"Zhu",
"Lin",
""
],
[
"Jiang",
"Bo",
""
],
[
"Tian",
"Yonghong",
""
],
[
"Tang",
"Jin",
""
]
]
| new_dataset | 0.988426 |
2309.14653 | Francis Lau C.M. | Jia Zhan and Francis C.M. Lau | Joint Design of Source-Channel Codes with Linear Source Encoding
Complexity and Good Channel Thresholds Based on Double-Protograph LDPC Codes | 7 pages, 5 figures, 3 tables, to appear in IEEE Communications
Letters | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose the use of a lower or upper triangular sub-base matrix to replace
the identity matrix in the source-check-channel-variable linking protomatrix of
a double-protograph low-density parity-check joint-source-channel code (DP-LDPC
JSCC). The elements along the diagonal of the proposed lower or upper
triangular sub-base matrix are assigned as "1" and the other non-zero elements
can take any non-negative integral values. Compared with the traditional
DP-LDPC JSCC designs, the new designs show a theoretical channel threshold
improvement of up to 0.41 dB and a simulated source symbol error rate
improvement of up to 0.5 dB at an error rate of 1e-6.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 04:13:00 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Zhan",
"Jia",
""
],
[
"Lau",
"Francis C. M.",
""
]
]
| new_dataset | 0.984359 |
2309.14659 | Ayush Kumar | Ayush Kumar and Vrizlynn L.L. Thing | A Public Key Infrastructure for 5G Service-Based Architecture | Accepted for publication in ITCCN Symposium, TrustCom 2023 | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The 3GPP 5G Service-based Architecture (SBA) security specifications leave
several details on how to setup an appropriate Public Key Infrastructure (PKI)
for 5G SBA, unspecified. In this work, we propose 5G-SBA-PKI, a public key
infrastructure for secure inter-NF communication in 5G SBA core networks, where
NF refers to Network Functions. 5G-SBA-PKI is designed to include multiple
certificate authorities (with different scopes of operation and capabilities)
at different PLMN levels for certification operations and key exchange between
communicating NFs, where PLMN refers to a Public Land Mobile Network. We
conduct a formal analysis of 5G-SBA-PKI with respect to the desired security
properties using TAMARIN prover. Finally, we evaluate 5G-SBA-PKI's performance
with "pre-quantum" as well as quantum-safe cryptographic algorithms.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 04:32:23 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Kumar",
"Ayush",
""
],
[
"Thing",
"Vrizlynn L. L.",
""
]
]
| new_dataset | 0.99625 |
2309.14685 | Shuo Sun | Shuo Sun, Zekai Gu, Tianchen Sun, Jiawei Sun, Chengran Yuan, Yuhang
Han, Dongen Li, Marcelo H. Ang Jr | DriveSceneGen: Generating Diverse and Realistic Driving Scenarios from
Scratch | 7 pages, 5 figures, 2 tables | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Realistic and diverse traffic scenarios in large quantities are crucial for
the development and validation of autonomous driving systems. However, owing to
numerous difficulties in the data collection process and the reliance on
intensive annotations, real-world datasets lack sufficient quantity and
diversity to support the increasing demand for data. This work introduces
DriveSceneGen, a data-driven driving scenario generation method that learns
from the real-world driving dataset and generates entire dynamic driving
scenarios from scratch. DriveSceneGen is able to generate novel driving
scenarios that align with real-world data distributions with high fidelity and
diversity. Experimental results on 5k generated scenarios highlight the
generation quality, diversity, and scalability compared to real-world datasets.
To the best of our knowledge, DriveSceneGen is the first method that generates
novel driving scenarios involving both static map elements and dynamic traffic
participants from scratch.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 05:40:43 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Sun",
"Shuo",
""
],
[
"Gu",
"Zekai",
""
],
[
"Sun",
"Tianchen",
""
],
[
"Sun",
"Jiawei",
""
],
[
"Yuan",
"Chengran",
""
],
[
"Han",
"Yuhang",
""
],
[
"Li",
"Dongen",
""
],
[
"Ang",
"Marcelo H.",
"Jr"
]
]
| new_dataset | 0.999758 |
2309.14742 | Boyu Chang | Qinying Wang, Boyu Chang, Shouling Ji, Yuan Tian, Xuhong Zhang, Binbin
Zhao, Gaoning Pan, Chenyang Lyu, Mathias Payer, Wenhai Wang, Raheem Beyah | SyzTrust: State-aware Fuzzing on Trusted OS Designed for IoT Devices | To appear in the IEEE Symposium on Security and Privacy (IEEE S&P)
2024, San Francisco, CA, USA | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trusted Execution Environments (TEEs) embedded in IoT devices provide a
deployable solution to secure IoT applications at the hardware level. By
design, in TEEs, the Trusted Operating System (Trusted OS) is the primary
component. It enables the TEE to use security-based design techniques, such as
data encryption and identity authentication. Once a Trusted OS has been
exploited, the TEE can no longer ensure security. However, Trusted OSes for IoT
devices have received little security analysis, which is challenging from
several perspectives: (1) Trusted OSes are closed-source and have an
unfavorable environment for sending test cases and collecting feedback. (2)
Trusted OSes have complex data structures and require a stateful workflow,
which limits existing vulnerability detection tools. To address the challenges,
we present SyzTrust, the first state-aware fuzzing framework for vetting the
security of resource-limited Trusted OSes. SyzTrust adopts a hardware-assisted
framework to enable fuzzing Trusted OSes directly on IoT devices as well as
tracking state and code coverage non-invasively. SyzTrust utilizes composite
feedback to guide the fuzzer to effectively explore more states as well as to
increase the code coverage. We evaluate SyzTrust on Trusted OSes from three
major vendors: Samsung, Tsinglink Cloud, and Ali Cloud. These systems run on
Cortex M23/33 MCUs, which provide the necessary abstraction for embedded TEEs.
We discovered 70 previously unknown vulnerabilities in their Trusted OSes,
receiving 10 new CVEs so far. Furthermore, compared to the baseline, SyzTrust
has demonstrated significant improvements, including 66% higher code coverage,
651% higher state coverage, and 31% improved vulnerability-finding capability.
We report all discovered new vulnerabilities to vendors and open source
SyzTrust.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 08:11:38 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Wang",
"Qinying",
""
],
[
"Chang",
"Boyu",
""
],
[
"Ji",
"Shouling",
""
],
[
"Tian",
"Yuan",
""
],
[
"Zhang",
"Xuhong",
""
],
[
"Zhao",
"Binbin",
""
],
[
"Pan",
"Gaoning",
""
],
[
"Lyu",
"Chenyang",
""
],
[
"Payer",
"Mathias",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Beyah",
"Raheem",
""
]
]
| new_dataset | 0.995638 |
2309.14781 | Hichem Sahbi | Hichem Sahbi and Sebastien Deschamps | Frugal Satellite Image Change Detection with Deep-Net Inversion | arXiv admin note: text overlap with arXiv:2212.13974 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Change detection in satellite imagery seeks to find occurrences of targeted
changes in a given scene taken at different instants. This task has several
applications ranging from land-cover mapping to anthropogenic activity
monitoring, as well as climate change and natural hazard damage assessment.
However, change detection is highly challenging due to the acquisition
conditions and also to the subjectivity of changes. In this paper, we devise a
novel algorithm for change detection based on active learning. The proposed
method is based on a question and answer model that probes an oracle (user)
about the relevance of changes only on a small set of critical images (referred
to as virtual exemplars), and according to oracle's responses updates deep
neural network (DNN) classifiers. The main contribution resides in a novel
adversarial model that allows learning the most representative, diverse and
uncertain virtual exemplars (as inverted preimages of the trained DNNs) that
challenge (the most) the trained DNNs, and this leads to a better re-estimate
of these networks in the subsequent iterations of active learning. Experiments
show that our proposed deep-net inversion outperforms the related work.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 09:25:53 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Sahbi",
"Hichem",
""
],
[
"Deschamps",
"Sebastien",
""
]
]
| new_dataset | 0.998514 |
2309.14806 | Luuk Spreeuwers | Luuk Spreeuwers, Rasmus van der Grift, Pesigrihastamadya
Normakristagaluh | 3D printed realistic finger vein phantoms | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Finger vein pattern recognition is an emerging biometric with a good
resistance to presentation attacks and low error rates. One problem is that it
is hard to obtain ground truth finger vein patterns from live fingers. In this
paper we propose an advanced method to create finger vein phantoms using 3D
printing where we mimic the optical properties of the various tissues inside
the fingers, like bone, veins and soft tissues using different printing
materials and parameters. We demonstrate that we are able to create finger
phantoms that result in realistic finger vein images and precisely known vein
patterns. These phantoms can be used to develop and evaluate finger vein
extraction and recognition methods. In addition, we show that the finger vein
phantoms can be used to spoof a finger vein recognition system. This paper is
based on the Master's thesis of Rasmus van der Grift.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 10:03:57 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Spreeuwers",
"Luuk",
""
],
[
"van der Grift",
"Rasmus",
""
],
[
"Normakristagaluh",
"Pesigrihastamadya",
""
]
]
| new_dataset | 0.99946 |
2309.14865 | Peter Hardy | Peter Hardy and Hansung Kim | Unsupervised Reconstruction of 3D Human Pose Interactions From 2D Poses
Alone | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Current unsupervised 2D-3D human pose estimation (HPE) methods do not work in
multi-person scenarios due to perspective ambiguity in monocular images.
Therefore, we present one of the first studies investigating the feasibility of
unsupervised multi-person 2D-3D HPE from just 2D poses alone, focusing on
reconstructing human interactions. To address the issue of perspective
ambiguity, we expand upon prior work by predicting the cameras' elevation angle
relative to the subjects' pelvis. This allows us to rotate the predicted poses
to be level with the ground plane, while obtaining an estimate for the vertical
offset in 3D between individuals. Our method involves independently lifting
each subject's 2D pose to 3D, before combining them in a shared 3D coordinate
system. The poses are then rotated and offset by the predicted elevation angle
before being scaled. This by itself enables us to retrieve an accurate 3D
reconstruction of their poses. We present our results on the CHI3D dataset,
introducing its use for unsupervised 2D-3D pose estimation with three new
quantitative metrics, and establishing a benchmark for future research.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 11:42:56 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Hardy",
"Peter",
""
],
[
"Kim",
"Hansung",
""
]
]
| new_dataset | 0.98743 |
2309.14876 | Debarati Bhaumik | Diptish Dey and Debarati Bhaumik | APPRAISE: a framework for managing AI compliance | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As AI systems increasingly impact society, the EU AI Act (AIA) is the first
serious attempt to contain its less desired effects. Among other measures, the act
proposes audits as a mechanism and compliance products as tools for
organizations to demonstrate compliance. In this paper, a framework for
managing AI compliance, APPRAISE, is proposed. The framework is built upon the
rationale that driving a balance between generating shareholder value through
innovation in AI systems and managing compliance through organizational
processes will eventually result in value that is responsible. By adhering to
AIA compliance products, the framework operationalizes and hence safeguards
compliance. Furthermore, a two-phase experiment with a limited scope is
presented. The experiment aims to measure the extent to which companies
coordinate technical elements of AI systems to ultimately comply with the AIA.
In the first phase a survey is conducted and in the second phase the survey
results are validated with a couple of respondents to generate additional
in-depth insights and root causes.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 12:20:07 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Dey",
"Diptish",
""
],
[
"Bhaumik",
"Debarati",
""
]
]
| new_dataset | 0.991286 |
2309.14917 | Massimo Battaglioni Dr. | Massimo Battaglioni and Marco Baldi and Franco Chiaraluce and Giovanni
Cancellieri | Rate-compatible LDPC Codes based on Primitive Polynomials and Golomb
Rulers | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and study a family of rate-compatible Low-Density Parity-Check
(LDPC) codes characterized by very simple encoders. The design of these codes
starts from simplex codes, which are defined by parity-check matrices having a
straightforward form stemming from the coefficients of a primitive polynomial.
For this reason, we call the new codes Primitive Rate-Compatible LDPC
(PRC-LDPC) codes. By applying puncturing to these codes, we obtain a bit-level
granularity of their code rates. We show that, in order to achieve good LDPC
codes, the underlying polynomials, besides being primitive, must meet some more
stringent conditions with respect to those of classical punctured simplex
codes. We leverage non-modular Golomb rulers to take the new requirements into
account. We characterize the minimum distance properties of PRC-LDPC codes, and
study and discuss their encoding and decoding complexity. Finally, we assess
their error rate performance under iterative decoding.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 13:22:45 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Battaglioni",
"Massimo",
""
],
[
"Baldi",
"Marco",
""
],
[
"Chiaraluce",
"Franco",
""
],
[
"Cancellieri",
"Giovanni",
""
]
]
| new_dataset | 0.999616 |
2309.14971 | Matteo Pagin | Manishika Rawat, Matteo Pagin, Marco Giordani, Louis-Adrien Dufrene,
Quentin Lampin, Michele Zorzi | Minimizing Energy Consumption for 5G NR Beam Management for RedCap
Devices | null | null | null | null | cs.NI eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In 5G New Radio (NR), beam management entails periodic and continuous
transmission and reception of control signals in the form of synchronization
signal blocks (SSBs), used to perform initial access and/or channel estimation.
However, this procedure demands continuous energy consumption, which is
particularly challenging to handle for low-cost, low-complexity, and
battery-constrained devices, such as RedCap devices to support mid-market
Internet of Things (IoT) use cases. In this context, this work aims at reducing
the energy consumption during beam management for RedCap devices, while
ensuring that the desired Quality of Service (QoS) requirements are met. To do
so, we formalize an optimization problem in an Indoor Factory (InF) scenario to
select the best beam management parameters, including the beam update
periodicity and the beamwidth, to minimize energy consumption based on users'
distribution and their speed. The analysis yields the regions of feasibility,
i.e., the upper limit(s) on the beam management parameters for RedCap devices,
that we use to provide design guidelines accordingly.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 14:44:08 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Rawat",
"Manishika",
""
],
[
"Pagin",
"Matteo",
""
],
[
"Giordani",
"Marco",
""
],
[
"Dufrene",
"Louis-Adrien",
""
],
[
"Lampin",
"Quentin",
""
],
[
"Zorzi",
"Michele",
""
]
]
| new_dataset | 0.997609 |
2309.14996 | Gene Cooperman | Yao Xu, Leonid Belyaev, Twinkle Jain, Derek Schafer, Anthony Skjellum,
Gene Cooperman | Implementation-Oblivious Transparent Checkpoint-Restart for MPI | 17 pages, 4 figures | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | This work presents experience with traditional use cases of checkpointing on
a novel platform. A single codebase (MANA) transparently checkpoints production
workloads for major available MPI implementations: "develop once, run
everywhere". The new platform enables application developers to compile their
application against any of the available standards-compliant MPI
implementations, and test each MPI implementation according to performance or
other features.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 15:11:33 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Xu",
"Yao",
""
],
[
"Belyaev",
"Leonid",
""
],
[
"Jain",
"Twinkle",
""
],
[
"Schafer",
"Derek",
""
],
[
"Skjellum",
"Anthony",
""
],
[
"Cooperman",
"Gene",
""
]
]
| new_dataset | 0.969319 |
2309.14997 | Qiao Yang | Qiao Yang, Yu Zhang, Jian Zhang, Zijing Zhao, Shunli Zhang, Jinqiao
Wang, Junzhe Chen | IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network | Submitted to IEEE | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Infrared and visible image fusion (IVIF) is used to generate fusion images
with comprehensive features of both images, which is beneficial for downstream
vision tasks. However, current methods rarely consider the illumination
condition in low-light environments, and the targets in the fused images are
often not prominent. To address the above issues, we propose an
Illumination-Aware Infrared and Visible Image Fusion Network, named as IAIFNet.
In our framework, an illumination enhancement network first estimates the
incident illumination maps of input images. Afterwards, with the help of
proposed adaptive differential fusion module (ADFM) and salient target aware
module (STAM), an image fusion network effectively integrates the salient
features of the illumination-enhanced infrared and visible images into a fusion
image of high visual quality. Extensive experimental results verify that our
method outperforms five state-of-the-art methods of fusing infrared and visible
images.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 15:12:29 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Yang",
"Qiao",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Jian",
""
],
[
"Zhao",
"Zijing",
""
],
[
"Zhang",
"Shunli",
""
],
[
"Wang",
"Jinqiao",
""
],
[
"Chen",
"Junzhe",
""
]
]
| new_dataset | 0.995833 |
2309.14999 | Hila Levi | Hila Levi, Guy Heller, Dan Levi, Ethan Fetaya | Object-Centric Open-Vocabulary Image-Retrieval with Aggregated Features | BMVC 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The task of open-vocabulary object-centric image retrieval involves the
retrieval of images containing a specified object of interest, delineated by an
open-set text query. As working on large image datasets becomes standard,
solving this task efficiently has gained significant practical importance.
Applications include targeted performance analysis of retrieved images using
ad-hoc queries and hard example mining during training. Recent advancements in
contrastive-based open vocabulary systems have yielded remarkable
breakthroughs, facilitating large-scale open vocabulary image retrieval.
However, these approaches use a single global embedding per image, thereby
constraining the system's ability to retrieve images containing relatively
small object instances. Alternatively, incorporating local embeddings from
detection pipelines faces scalability challenges, making it unsuitable for
retrieval from large databases.
In this work, we present a simple yet effective approach to object-centric
open-vocabulary image retrieval. Our approach aggregates dense embeddings
extracted from CLIP into a compact representation, essentially combining the
scalability of image retrieval pipelines with the object identification
capabilities of dense detection methods. We show the effectiveness of our
scheme to the task by achieving significantly better results than global
feature approaches on three datasets, increasing accuracy by up to 15 mAP
points. We further integrate our scheme into a large scale retrieval framework
and demonstrate our method's advantages in terms of scalability and
interpretability.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 15:13:09 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Levi",
"Hila",
""
],
[
"Heller",
"Guy",
""
],
[
"Levi",
"Dan",
""
],
[
"Fetaya",
"Ethan",
""
]
]
| new_dataset | 0.956378 |
2309.15013 | Jennifer Drexler Fox | Jennifer Drexler Fox, Desh Raj, Natalie Delworth, Quinn McNamara,
Corey Miller, Mig\"uel Jett\'e | Updated Corpora and Benchmarks for Long-Form Speech Recognition | Submitted to ICASSP 2024 | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | The vast majority of ASR research uses corpora in which both the training and
test data have been pre-segmented into utterances. In most real-world ASR
use-cases, however, test audio is not segmented, leading to a mismatch between
inference-time conditions and models trained on segmented utterances. In this
paper, we re-release three standard ASR corpora - TED-LIUM 3, GigaSpeech, and
VoxPopuli-en - with updated transcription and alignments to enable their use
for long-form ASR research. We use these reconstituted corpora to study the
train-test mismatch problem for transducers and attention-based
encoder-decoders (AEDs), confirming that AEDs are more susceptible to this
issue. Finally, we benchmark a simple long-form training for these models,
showing its efficacy for model robustness under this domain shift.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 15:32:09 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Fox",
"Jennifer Drexler",
""
],
[
"Raj",
"Desh",
""
],
[
"Delworth",
"Natalie",
""
],
[
"McNamara",
"Quinn",
""
],
[
"Miller",
"Corey",
""
],
[
"Jetté",
"Migüel",
""
]
]
| new_dataset | 0.99788 |
2309.15024 | Chia-Hsin Lin | Chia-Hsin Lin, Charles Jones, Bj\"orn W. Schuller, Harry Coppock | Synthia's Melody: A Benchmark Framework for Unsupervised Domain
Adaptation in Audio | null | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Despite significant advancements in deep learning for vision and natural
language, unsupervised domain adaptation in audio remains relatively
unexplored. We, in part, attribute this to the lack of an appropriate benchmark
dataset. To address this gap, we present Synthia's melody, a novel audio data
generation framework capable of simulating an infinite variety of 4-second
melodies with user-specified confounding structures characterised by musical
keys, timbre, and loudness. Unlike existing datasets collected under
observational settings, Synthia's melody is free of unobserved biases, ensuring
the reproducibility and comparability of experiments. To showcase its utility,
we generate two types of distribution shifts, domain shift and sample selection
bias, and evaluate the performance of acoustic deep learning models under these
shifts. Our evaluations reveal that Synthia's melody provides a robust testbed
for examining the susceptibility of these models to varying levels of
distribution shift.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 15:46:06 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Lin",
"Chia-Hsin",
""
],
[
"Jones",
"Charles",
""
],
[
"Schuller",
"Björn W.",
""
],
[
"Coppock",
"Harry",
""
]
]
| new_dataset | 0.999322 |
2309.15040 | Papis Ndiaye Dr. | Papis Ndiaye, Dinh-Thuy Phan-Huy, Ayman Hassan, Jingyi Liao, Xiyu
Wang, Kalle Ruttik, Riku Jantti | Zero-Energy-Device for 6G: First Real-Time Backscatter Communication
thanks to the Detection of Pilots from an Ambient Commercial Cellular Network | 3 pages, 7 figures , 6Get2023 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ambient backscatter communication technology (AmBC) and a novel device
category called zero-energy devices (ZED) have recently emerged as potential
components for the forthcoming 6th generation (6G) networks. A ZED communicates
with a smartphone without emitting additional radio waves, by backscattering
ambient waves from base stations. Thanks to its very low consumption, a ZED
powers itself by harvesting ambient light energy. However, the time variations
of data traffic in cellular networks prevent AmBC from working properly. Recent
works have demonstrated experimentally that a backscatter device could be
detected by listening only to ambient pilot signals (which are steady) instead of
the whole ambient signal (which is bursty) of 4G. However, these experiments
were run with a 4G base station emulator and a bulky, energy-greedy backscatter
device. In this paper, for the first time, we demonstrate real-time AmBC on the
field, with Orange commercial 4G network as ambient source and Orange
Zero-Energy Device.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 16:16:05 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Ndiaye",
"Papis",
""
],
[
"Phan-Huy",
"Dinh-Thuy",
""
],
[
"Hassan",
"Ayman",
""
],
[
"Liao",
"Jingyi",
""
],
[
"Wang",
"Xiyu",
""
],
[
"Ruttik",
"Kalle",
""
],
[
"Jantti",
"Riku",
""
]
]
| new_dataset | 0.999383 |
2309.15054 | Mollik Nayyar | Mollik Nayyar, Alan Wagner | Near Real-Time Position Tracking for Robot-Guided Evacuation | The 2nd Workshop on Social Robot Navigation: Advances and Evaluation.
In conjunction with: IEEE International Conference on Intelligent Robots and
Systems (IROS 2023) | null | null | null | cs.RO cs.CY | http://creativecommons.org/licenses/by/4.0/ | During the evacuation of a building, the rapid and accurate tracking of human
evacuees can be used by a guide robot to increase the effectiveness of the
evacuation [1],[2]. This paper introduces a near real-time human position
tracking solution tailored for evacuation robots. Using a pose detector, our
system first identifies human joints in the camera frame in near real-time and
then translates the position of these pixels into real-world coordinates via a
simple calibration process. We run multiple trials of the system in action in
an indoor lab environment and show that the system can achieve an accuracy of
0.55 meters when compared to ground truth. The system can also achieve an
average of 3 frames per second (FPS) which was sufficient for our study on
robot-guided human evacuation. The potential of our approach extends beyond
mere tracking, paving the way for evacuee motion prediction, allowing the robot
to proactively respond to human movements during an evacuation.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 16:34:18 GMT"
}
]
| 2023-09-27T00:00:00 | [
[
"Nayyar",
"Mollik",
""
],
[
"Wagner",
"Alan",
""
]
]
| new_dataset | 0.994464 |
1809.06044 | Yazan Boshmaf | Yazan Boshmaf, Husam Al Jawaheri, Mashael Al Sabah | BlockTag: Design and applications of a tagging system for blockchain
analysis | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Annotating blockchains with auxiliary data is useful for many applications.
For example, e-crime investigations of illegal Tor hidden services, such as
Silk Road, often involve linking Bitcoin addresses, from which money is sent or
received, to user accounts and related online activities. We present BlockTag,
an open-source tagging system for blockchains that facilitates such tasks. We
describe BlockTag's design and present three analyses that illustrate its
capabilities in the context of privacy research and law enforcement.
| [
{
"version": "v1",
"created": "Mon, 17 Sep 2018 06:53:19 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Oct 2018 10:29:29 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Feb 2019 08:52:28 GMT"
},
{
"version": "v4",
"created": "Wed, 10 Jul 2019 05:44:15 GMT"
},
{
"version": "v5",
"created": "Sun, 24 Sep 2023 08:09:16 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Boshmaf",
"Yazan",
""
],
[
"Jawaheri",
"Husam Al",
""
],
[
"Sabah",
"Mashael Al",
""
]
]
| new_dataset | 0.998698 |
2102.03643 | Irene Rivas-Blanco | Irene Rivas-Blanco, Carlos J. P\'erez-del-Pulgar, Andrea Mariani,
Giuseppe Tortora, and Antonio J. Reina | A surgical dataset from the da Vinci Research Kit for task automation
and recognition | Submitted to The International Conference on Electrical, Computer,
Communications and Mechatronics Engineering (ICECCME). 6 Pages. 3 Figures | null | 10.1109/ICECCME57830.2023.10253032 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of datasets is gaining relevance in surgical robotics, since they
can be used to recognise and automate tasks. Common datasets also make it
possible to compare different algorithms and methods. The objective of this
work is to provide a complete dataset of three common training surgical tasks
that surgeons perform to improve their skills. For this purpose, 12 subjects
teleoperated the da Vinci Research Kit to perform these tasks. The obtained
dataset includes all the kinematics and dynamics information provided by the da
Vinci robot (both master and slave side) together with the associated video
from the camera. All the information has been carefully timestamped and
provided in a readable csv format. A MATLAB interface integrated with ROS for
using and replicating the data is also provided.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2021 18:54:36 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 12:11:01 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Rivas-Blanco",
"Irene",
""
],
[
"Pérez-del-Pulgar",
"Carlos J.",
""
],
[
"Mariani",
"Andrea",
""
],
[
"Tortora",
"Giuseppe",
""
],
[
"Reina",
"Antonio J.",
""
]
]
| new_dataset | 0.999296 |
2105.01331 | Hasan Kemik | Hasan Kemik, Nusret \"Ozate\c{s}, Meysam Asgari-Chenaghlu, Erik
Cambria | BLM-17m: A Large-Scale Dataset for Black Lives Matter Topic Detection on
Twitter | null | null | null | null | cs.CL cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Protection of human rights is one of the most important problems of our
world. In this paper, our aim is to provide a dataset which covers one of the
most significant human rights contradictions of recent months, one that affected
the whole world: the George Floyd incident. We propose a labeled dataset for topic detection
that contains 17 million tweets. These Tweets are collected from 25 May 2020 to
21 August 2020, covering 89 days from the start of this incident. We labeled the
dataset by monitoring most trending news topics from global and local
newspapers. Apart from that, we present two baselines, TF-IDF and LDA. We
evaluated the results of these two methods with three different k values for
metrics of precision, recall and f1-score. The collected dataset is available
at https://github.com/MeysamAsgariC/BLMT.
| [
{
"version": "v1",
"created": "Tue, 4 May 2021 07:27:42 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 19:40:16 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Kemik",
"Hasan",
""
],
[
"Özateş",
"Nusret",
""
],
[
"Asgari-Chenaghlu",
"Meysam",
""
],
[
"Cambria",
"Erik",
""
]
]
| new_dataset | 0.99979 |
2108.00309 | Chao Liu | Chao Liu and Sencheng Yu and Mark Yim | Motion Planning for Variable Topology Trusses: Reconfiguration and
Locomotion | 20 pages, 36 figures | IEEE Transactions on Robotics, vol. 39, no. 3, pp. 2020-2039, June
2023 | 10.1109/TRO.2022.3228400 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Truss robots are highly redundant parallel robotic systems that can be
applied in a variety of scenarios. The variable topology truss (VTT) is a class
of modular truss robots. As self-reconfigurable modular robots, a VTT is
composed of many edge modules that can be rearranged into various structures
depending on the task. These robots change their shape by not only controlling
joint positions as with fixed morphology robots, but also reconfiguring the
connectivity between truss members in order to change their topology. The
motion planning problem for VTT robots is difficult due to their varying
morphology, high dimensionality, the high likelihood for self-collision, and
complex motion constraints. In this paper, a new motion planning framework to
dramatically alter the structure of a VTT is presented. It can also be used to
solve locomotion tasks much more efficiently than in previous
work. Several test scenarios are used to show its effectiveness. Supplementary
materials are available at https://www.modlabupenn.org/vtt-motion-planning/.
| [
{
"version": "v1",
"created": "Sat, 31 Jul 2021 19:15:19 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 01:44:55 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Liu",
"Chao",
""
],
[
"Yu",
"Sencheng",
""
],
[
"Yim",
"Mark",
""
]
]
| new_dataset | 0.965593 |
2112.03051 | Aniruddha Mahapatra | Aniruddha Mahapatra and Kuldeep Kulkarni | Controllable Animation of Fluid Elements in Still Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose a method to interactively control the animation of fluid elements
in still images to generate cinemagraphs. Specifically, we focus on the
animation of fluid elements like water, smoke, fire, which have the properties
of repeating textures and continuous fluid motion. Taking inspiration from
prior works, we represent the motion of such fluid elements in the image in the
form of a constant 2D optical flow map. To this end, we allow the user to
provide any number of arrow directions and their associated speeds along with a
mask of the regions the user wants to animate. The user-provided input arrow
directions, their corresponding speed values, and the mask are then converted
into a dense flow map representing a constant optical flow map (FD). We observe
that FD, obtained using simple exponential operations, can closely approximate
the plausible motion of elements in the image. We further refine computed dense
optical flow map FD using a generative-adversarial network (GAN) to obtain a
more realistic flow map. We devise a novel UNet based architecture to
autoregressively generate future frames using the refined optical flow map by
forward-warping the input image features at different resolutions. We conduct
extensive experiments on a publicly available dataset and show that our method
is superior to the baselines in terms of qualitative and quantitative metrics.
In addition, we show the qualitative animations of the objects in directions
that did not exist in the training set and provide a way to synthesize videos
that otherwise would not exist in the real world.
| [
{
"version": "v1",
"created": "Mon, 6 Dec 2021 13:53:08 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 16:37:30 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 05:52:17 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Mahapatra",
"Aniruddha",
""
],
[
"Kulkarni",
"Kuldeep",
""
]
]
| new_dataset | 0.997238 |
2203.10759 | Sai Zhang | Sai Zhang, Yuwei Hu, Yuchuan Wu, Jiaman Wu, Yongbin Li, Jian Sun,
Caixia Yuan and Xiaojie Wang | A Slot Is Not Built in One Utterance: Spoken Language Dialogs with
Sub-Slots | Accepted by ACL 2022 Findings | null | 10.18653/v1/2022.findings-acl.27 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A slot value might be provided segment by segment over multiple-turn
interactions in a dialog, especially for some important information such as
phone numbers and names. It is a common phenomenon in daily life, but little
attention has been paid to it in previous work. To fill the gap, this paper
defines a new task named Sub-Slot based Task-Oriented Dialog (SSTOD) and builds
a Chinese dialog dataset SSD for boosting research on SSTOD. The dataset
includes a total of 40K dialogs and 500K utterances from four different
domains: Chinese names, phone numbers, ID numbers and license plate numbers.
The data is well annotated with sub-slot values, slot values, dialog states and
actions. We find some new linguistic phenomena and interactive manners in SSTOD
which raise critical challenges of building dialog agents for the task. We test
three state-of-the-art dialog models on SSTOD and find they cannot handle the
task well on any of the four domains. We also investigate an improved model by
involving slot knowledge in a plug-in manner. More work should be done to meet
the new challenges raised from SSTOD which widely exists in real-life
applications. The dataset and code are publicly available via
https://github.com/shunjiu/SSTOD.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2022 07:10:19 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Zhang",
"Sai",
""
],
[
"Hu",
"Yuwei",
""
],
[
"Wu",
"Yuchuan",
""
],
[
"Wu",
"Jiaman",
""
],
[
"Li",
"Yongbin",
""
],
[
"Sun",
"Jian",
""
],
[
"Yuan",
"Caixia",
""
],
[
"Wang",
"Xiaojie",
""
]
]
| new_dataset | 0.999829 |
2205.02282 | Martin Hirzel | Martin Hirzel | Low-Code Programming Models | null | Communications of the ACM (CACM), 66(10), pages 76-85, October
2023 | 10.1145/3587691 | null | cs.PL | http://creativecommons.org/licenses/by/4.0/ | Traditionally, computer programming has been the prerogative of professional
developers using textual programming languages such as C, Java, or Python.
Low-code programming promises an alternative: letting citizen developers create
programs using visual abstractions, demonstrations, or natural language. While
low-code programming is currently getting a lot of attention in industry, the
relevant research literature is scattered, and in fact, rarely uses the term
"low-code". This article brings together low-code literature from various
research fields, explaining how techniques work while providing a unified point
of view. Low-code has the potential to empower more people to automate tasks by
creating computer programs, making them more productive and less dependent on
scarce professional software developers.
| [
{
"version": "v1",
"created": "Wed, 4 May 2022 18:38:48 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Hirzel",
"Martin",
""
]
]
| new_dataset | 0.999442 |
2206.02014 | Andreas Troxler | Andreas Troxler (AT Analytics) and J\"urg Schelldorfer (Swiss Re) | Actuarial Applications of Natural Language Processing Using
Transformers: Case Studies for Using Text Features in an Actuarial Context | 47 pages, 33 figures. v3: Added new Section 10 on the use of ChatGPT
for unsupervised information extraction | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This tutorial demonstrates workflows to incorporate text data into actuarial
classification and regression tasks. The main focus is on methods employing
transformer-based models. A dataset of car accident descriptions with an
average length of 400 words, available in English and German, and a dataset
with short property insurance claims descriptions are used to demonstrate these
techniques. The case studies tackle challenges related to a multi-lingual
setting and long input sequences. They also show ways to interpret model
output, to assess and improve model performance, by fine-tuning the models to
the domain of application or to a specific prediction task. Finally, the
tutorial provides practical approaches to handle classification tasks in
situations with no or only few labeled data, including but not limited to
ChatGPT. The results achieved by using the language-understanding skills of
off-the-shelf natural language processing (NLP) models with only minimal
pre-processing and fine-tuning clearly demonstrate the power of transfer
learning for practical applications.
| [
{
"version": "v1",
"created": "Sat, 4 Jun 2022 15:39:30 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 15:01:19 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 09:17:04 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Troxler",
"Andreas",
"",
"AT Analytics"
],
[
"Schelldorfer",
"Jürg",
"",
"Swiss Re"
]
]
| new_dataset | 0.999788 |
2206.13778 | Fan Xu | Fan Xu and Yunxiang Zhang and Xiaojun Wan | CC-Riddle: A Question Answering Dataset of Chinese Character Riddles | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Chinese character riddle is a unique form of cultural entertainment
specific to the Chinese language. It typically comprises two parts: the riddle
description and the solution. The solution to the riddle is a single character,
while the riddle description primarily describes the glyph of the solution,
occasionally supplemented with its explanation and pronunciation. Solving
Chinese character riddles is a challenging task that demands understanding of
character glyph, general knowledge, and a grasp of figurative language. In this
paper, we construct a \textbf{C}hinese \textbf{C}haracter riddle dataset named
CC-Riddle, which covers the majority of common simplified Chinese characters.
The construction process is a combination of web crawling, language model
generation and manual filtering. In generation stage, we input the Chinese
phonetic alphabet, glyph and meaning of the solution character into the
generation model, which then produces multiple riddle descriptions. The
generated riddles are then manually filtered and the final CC-Riddle dataset is
composed of both human-written riddles and these filtered, generated riddles.
In order to assess the performance of language models on the task of solving
character riddles, we use retrieval-based, generative and multiple-choice QA
strategies to test three language models: BERT, ChatGPT and ChatGLM. The test
results reveal that current language models still struggle to solve Chinese
character riddles. CC-Riddle is publicly available at
\url{https://github.com/pku0xff/CC-Riddle}.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2022 06:23:13 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Sep 2023 05:15:51 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Xu",
"Fan",
""
],
[
"Zhang",
"Yunxiang",
""
],
[
"Wan",
"Xiaojun",
""
]
]
| new_dataset | 0.999857 |
2208.01819 | Shuchang Tao | Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Liang Hou, Fei Sun,
Xueqi Cheng | Adversarial Camouflage for Node Injection Attack on Graphs | Published in Information Sciences. Code:
https://github.com/TaoShuchang/CANA | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Node injection attacks on Graph Neural Networks (GNNs) have received
increasing attention recently, due to their ability to degrade GNN performance
with high attack success rates. However, our study indicates that these attacks
often fail in practical scenarios, since defense/detection methods can easily
identify and remove the injected nodes. To address this, we focus on
camouflaging node injection attacks, making injected nodes appear normal and
imperceptible to defense/detection methods. Unfortunately, the non-Euclidean
structure of graph data and the lack of intuitive prior present great
challenges to the formalization, implementation, and evaluation of camouflage.
In this paper, we first propose and define camouflage as distribution
similarity between ego networks of injected nodes and normal nodes. Then for
implementation, we propose an adversarial CAmouflage framework for Node
injection Attack, namely CANA, to improve attack performance under
defense/detection methods in practical scenarios. A novel camouflage metric is
further designed under the guide of distribution similarity. Extensive
experiments demonstrate that CANA can significantly improve the attack
performance under defense/detection methods with higher camouflage or
imperceptibility. This work urges us to raise awareness of the security
vulnerabilities of GNNs in practical applications.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2022 02:48:23 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2022 08:43:52 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Jun 2023 03:22:39 GMT"
},
{
"version": "v4",
"created": "Sat, 23 Sep 2023 07:57:47 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Tao",
"Shuchang",
""
],
[
"Cao",
"Qi",
""
],
[
"Shen",
"Huawei",
""
],
[
"Wu",
"Yunfan",
""
],
[
"Hou",
"Liang",
""
],
[
"Sun",
"Fei",
""
],
[
"Cheng",
"Xueqi",
""
]
]
| new_dataset | 0.990216 |
2208.11553 | Estelle Aflalo Guez | Avinash Madasu, Estelle Aflalo, Gabriela Ben Melech Stan, Shachar
Rosenman, Shao-Yen Tseng, Gedas Bertasius, Vasudev Lal | MuMUR : Multilingual Multimodal Universal Retrieval | This is an extension of the previous MKTVR paper (for which you can
find a reference here :
https://dl.acm.org/doi/abs/10.1007/978-3-031-28244-7_42 or in a previous
version on arxiv). This version was published to the Information Retrieval
Journal | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-modal retrieval has seen tremendous progress with the development of
vision-language models. However, further improving these models requires
additional labelled data, which demands huge manual effort. In this paper, we
propose a framework MuMUR, that utilizes knowledge transfer from a multilingual
model to boost the performance of multi-modal (image and video) retrieval. We
first use state-of-the-art machine translation models to construct pseudo
ground-truth multilingual visual-text pairs. We then use this data to learn a
joint vision-text representation where English and non-English text queries are
represented in a common embedding space based on pretrained multilingual
models. We evaluate our proposed approach on a diverse set of retrieval
datasets: five video retrieval datasets such as MSRVTT, MSVD, DiDeMo, Charades
and MSRVTT multilingual, two image retrieval datasets such as Flickr30k and
Multi30k. Experimental results demonstrate that our approach achieves
state-of-the-art results on all video retrieval datasets, outperforming previous
models. Additionally, our framework MuMUR significantly outperforms other
methods on the multilingual video retrieval dataset. We also observe that MuMUR exhibits
strong performance on image retrieval. This demonstrates the universal ability
of MuMUR to perform retrieval across all visual inputs (image and video) and
text inputs (monolingual and multilingual).
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2022 13:55:15 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2022 05:20:29 GMT"
},
{
"version": "v3",
"created": "Sun, 28 Aug 2022 04:58:51 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Dec 2022 09:38:50 GMT"
},
{
"version": "v5",
"created": "Tue, 3 Jan 2023 09:05:59 GMT"
},
{
"version": "v6",
"created": "Mon, 18 Sep 2023 15:33:41 GMT"
},
{
"version": "v7",
"created": "Tue, 19 Sep 2023 10:58:41 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Madasu",
"Avinash",
""
],
[
"Aflalo",
"Estelle",
""
],
[
"Stan",
"Gabriela Ben Melech",
""
],
[
"Rosenman",
"Shachar",
""
],
[
"Tseng",
"Shao-Yen",
""
],
[
"Bertasius",
"Gedas",
""
],
[
"Lal",
"Vasudev",
""
]
]
| new_dataset | 0.999242 |
2208.12587 | Mostafa Jahanifar | Mostafa Jahanifar, Adam Shephard, Neda Zamanitajeddin, Simon Graham,
Shan E Ahmed Raza, Fayyaz Minhas, Nasir Rajpoot | Mitosis Detection, Fast and Slow: Robust and Efficient Detection of
Mitotic Figures | Extended version of the work done for MIDOG challenge submission | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Counting of mitotic figures is a fundamental step in grading and
prognostication of several cancers. However, manual mitosis counting is tedious
and time-consuming. In addition, variation in the appearance of mitotic figures
causes a high degree of discordance among pathologists. With advances in deep
learning models, several automatic mitosis detection algorithms have been
proposed but they are sensitive to {\em domain shift} often seen in histology
images. We propose a robust and efficient two-stage mitosis detection
framework, which comprises mitosis candidate segmentation ({\em Detecting
Fast}) and candidate refinement ({\em Detecting Slow}) stages. The proposed
candidate segmentation model, termed \textit{EUNet}, is fast and accurate due
to its architectural design. EUNet can precisely segment candidates at a lower
resolution to considerably speed up candidate detection. Candidates are then
refined using a deeper classifier network, EfficientNet-B7, in the second
stage. We make sure both stages are robust against domain shift by
incorporating domain generalization methods. We demonstrate state-of-the-art
performance and generalizability of the proposed model on the three largest
publicly available mitosis datasets, winning the two mitosis domain
generalization challenge contests (MIDOG21 and MIDOG22). Finally, we showcase
the utility of the proposed algorithm by processing the TCGA breast cancer
cohort (1,125 whole-slide images) to generate and release a repository of more
than 620K mitotic figures.
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2022 11:14:59 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 11:38:03 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Jahanifar",
"Mostafa",
""
],
[
"Shephard",
"Adam",
""
],
[
"Zamanitajeddin",
"Neda",
""
],
[
"Graham",
"Simon",
""
],
[
"Raza",
"Shan E Ahmed",
""
],
[
"Minhas",
"Fayyaz",
""
],
[
"Rajpoot",
"Nasir",
""
]
]
| new_dataset | 0.978924 |
2208.14417 | Prince Grover | Prince Grover, Julia Xu, Justin Tittelfitz, Anqi Cheng, Zheng Li,
Jakub Zablocki, Jianbo Liu, Hao Zhou | Fraud Dataset Benchmark and Applications | null | null | null | null | cs.LG cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standardized datasets and benchmarks have spurred innovations in computer
vision, natural language processing, multi-modal and tabular settings. We note
that, as compared to other well researched fields, fraud detection has unique
challenges: high-class imbalance, diverse feature types, frequently changing
fraud patterns, and the adversarial nature of the problem. Due to these challenges, the
modeling approaches evaluated on datasets from other research fields may not
work well for fraud detection. In this paper, we introduce the Fraud Dataset
Benchmark (FDB), a compilation of publicly available datasets catered to fraud
detection. FDB comprises a variety of fraud-related tasks, ranging from
identifying fraudulent card-not-present transactions, detecting bot attacks,
classifying malicious URLs, estimating risk of loan default to content
moderation. The Python-based library for FDB provides a consistent API for data
loading with standardized training and testing splits. We demonstrate several
applications of FDB that are of broad interest for fraud detection, including
feature engineering, comparison of supervised learning algorithms, label noise
removal, class-imbalance treatment and semi-supervised learning. We hope that
FDB provides a common playground for researchers and practitioners in the fraud
detection domain to develop robust and customized machine learning techniques
targeting various fraud use cases.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2022 17:35:39 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 22:20:42 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 14:50:22 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Grover",
"Prince",
""
],
[
"Xu",
"Julia",
""
],
[
"Tittelfitz",
"Justin",
""
],
[
"Cheng",
"Anqi",
""
],
[
"Li",
"Zheng",
""
],
[
"Zablocki",
"Jakub",
""
],
[
"Liu",
"Jianbo",
""
],
[
"Zhou",
"Hao",
""
]
]
| new_dataset | 0.977189 |
2210.08298 | Uli Fahrenberg | Uli Fahrenberg and Krzysztof Ziemia\'nski | A Myhill-Nerode Theorem for Higher-Dimensional Automata | null | null | null | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We establish a Myhill-Nerode type theorem for higher-dimensional automata
(HDAs), stating that a language is regular precisely if it has finite prefix
quotient. HDAs extend standard automata with additional structure, making it
possible to distinguish between interleavings and concurrency. We also
introduce deterministic HDAs and show that not all HDAs are determinizable,
that is, there exist regular languages that cannot be recognised by a
deterministic HDA. Using our theorem, we develop an internal characterisation
of deterministic languages.
| [
{
"version": "v1",
"created": "Sat, 15 Oct 2022 13:50:59 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Sep 2023 06:33:33 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Fahrenberg",
"Uli",
""
],
[
"Ziemiański",
"Krzysztof",
""
]
]
| new_dataset | 0.996895 |
2212.05250 | Pengmiao Zhang | Pengmiao Zhang, Rajgopal Kannan, Viktor K. Prasanna | Phases, Modalities, Temporal and Spatial Locality: Domain Specific ML
Prefetcher for Accelerating Graph Analytics | null | null | null | null | cs.LG cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Memory performance is a bottleneck in graph analytics acceleration. Existing
Machine Learning (ML) prefetchers struggle with phase transitions and irregular
memory accesses in graph processing. We propose MPGraph, an ML-based Prefetcher
for Graph analytics using domain specific models. MPGraph introduces three
novel optimizations: soft detection for phase transitions, phase-specific
multi-modality models for access delta and page predictions, and chain
spatio-temporal prefetching (CSTP) for prefetch control. Our transition
detector achieves 34.17-82.15% higher precision compared with
Kolmogorov-Smirnov Windowing and decision tree. Our predictors achieve
6.80-16.02% higher F1-score for delta and 11.68-15.41% higher accuracy-at-10
for page prediction compared with LSTM and vanilla attention models. Using
CSTP, MPGraph achieves 12.52-21.23% IPC improvement, outperforming
state-of-the-art non-ML prefetcher BO by 7.58-12.03% and ML-based prefetchers
Voyager and TransFetch by 3.27-4.58%. For practical implementation, we
demonstrate MPGraph using compressed models with reduced latency shows
significantly superior accuracy and coverage compared with BO, leading to 3.58%
higher IPC improvement.
| [
{
"version": "v1",
"created": "Sat, 10 Dec 2022 09:14:44 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 00:30:09 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Zhang",
"Pengmiao",
""
],
[
"Kannan",
"Rajgopal",
""
],
[
"Prasanna",
"Viktor K.",
""
]
]
| new_dataset | 0.964552 |
2301.06052 | Jianrong Zhang | Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Yong
Zhang, Hongwei Zhao, Hongtao Lu and Xi Shen | T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete
Representations | Accepted to CVPR 2023. Project page:
https://mael-zys.github.io/T2M-GPT/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we investigate a simple and must-known conditional generative
framework based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and
Generative Pre-trained Transformer (GPT) for human motion generation from
textual descriptions. We show that a simple CNN-based VQ-VAE with commonly
used training recipes (EMA and Code Reset) allows us to obtain high-quality
discrete representations. For GPT, we incorporate a simple corruption strategy
during the training to alleviate training-testing discrepancy. Despite its
simplicity, our T2M-GPT shows better performance than competitive approaches,
including recent diffusion-based approaches. For example, on HumanML3D, which
is currently the largest dataset, we achieve comparable performance on the
consistency between text and generated motion (R-Precision), but with FID 0.116
largely outperforming MotionDiffuse of 0.630. Additionally, we conduct analyses
on HumanML3D and observe that the dataset size is a limitation of our approach.
Our work suggests that VQ-VAE still remains a competitive approach for human
motion generation.
| [
{
"version": "v1",
"created": "Sun, 15 Jan 2023 09:34:42 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Jan 2023 11:56:01 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Feb 2023 05:23:51 GMT"
},
{
"version": "v4",
"created": "Sun, 24 Sep 2023 17:00:32 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Zhang",
"Jianrong",
""
],
[
"Zhang",
"Yangsong",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Huang",
"Shaoli",
""
],
[
"Zhang",
"Yong",
""
],
[
"Zhao",
"Hongwei",
""
],
[
"Lu",
"Hongtao",
""
],
[
"Shen",
"Xi",
""
]
]
| new_dataset | 0.999467 |
2302.02012 | James Holland | James K Holland, Jason Carpenter, Se Eun Oh, Nicholas Hopper | DeTorrent: An Adversarial Padding-only Traffic Analysis Defense | Accepted to the 24th Privacy Enhancing Technologies Symposium (PETS
2024) | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While anonymity networks like Tor aim to protect the privacy of their users,
they are vulnerable to traffic analysis attacks such as Website Fingerprinting
(WF) and Flow Correlation (FC). Recent implementations of WF and FC attacks,
such as Tik-Tok and DeepCoFFEA, have shown that the attacks can be effectively
carried out, threatening user privacy. Consequently, there is a need for
effective traffic analysis defense.
There are a variety of existing defenses, but most are either ineffective,
incur high latency and bandwidth overhead, or require additional
infrastructure. As a result, we aim to design a traffic analysis defense that
is efficient and highly resistant to both WF and FC attacks. We propose
DeTorrent, which uses competing neural networks to generate and evaluate
traffic analysis defenses that insert 'dummy' traffic into real traffic flows.
DeTorrent operates with moderate overhead and without delaying traffic. In a
closed-world WF setting, it reduces an attacker's accuracy by 61.5%, a
reduction 10.5% better than the next-best padding-only defense. Against the
state-of-the-art FC attacker, DeTorrent reduces the true positive rate for a
$10^{-5}$ false positive rate to about .12, which is less than half that of the
next-best defense. We also demonstrate DeTorrent's practicality by deploying it
alongside the Tor network and find that it maintains its performance when
applied to live traffic.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 21:40:56 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 01:33:26 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 22:12:27 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Holland",
"James K",
""
],
[
"Carpenter",
"Jason",
""
],
[
"Oh",
"Se Eun",
""
],
[
"Hopper",
"Nicholas",
""
]
]
| new_dataset | 0.995602 |
2302.12189 | Michele Cafagna | Michele Cafagna, Kees van Deemter, Albert Gatt | HL Dataset: Visually-grounded Description of Scenes, Actions and
Rationales | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Current captioning datasets focus on object-centric captions, describing the
visible objects in the image, e.g. "people eating food in a park". Although
these datasets are useful to evaluate the ability of Vision & Language models
to recognize and describe visual content, they do not support controlled
experiments involving model testing or fine-tuning, with more high-level
captions, which humans find easy and natural to produce. For example, people
often describe images based on the type of scene they depict ('people at a
holiday resort') and the actions they perform ('people having a picnic'). Such
descriptions draw on personal experience and commonsense assumptions. We
present the High-Level Dataset, a dataset extending 14,997 images from the COCO
dataset, aligned with a new set of 134,973 human-annotated (high-level)
captions collected along three axes: scenes, actions, and rationales. We
further extend this dataset with confidence scores collected from an
independent set of readers, as well as a set of narrative captions generated
synthetically, by combining each of the three axes. We describe this dataset
and analyse it extensively. We also present baseline results for the High-Level
Captioning task.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 17:30:18 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 09:53:21 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 07:37:20 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Cafagna",
"Michele",
""
],
[
"van Deemter",
"Kees",
""
],
[
"Gatt",
"Albert",
""
]
]
| new_dataset | 0.999869 |
2303.06950 | Chengzhi Ma | Chengzhi Ma, Xi Yang, Jintao Wang, Guanghua Yang, Wei Zhang, Shaodan
Ma | Reconfigurable Distributed Antennas and Reflecting Surface: A New
Architecture for Wireless Communications | 13 pages, 9 figures | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed Antenna Systems (DASs) employ multiple antenna arrays in remote
radio units to achieve highly directional transmission and provide great
coverage performance for future-generation networks. However, the utilization
of active antenna arrays results in a significant increase in hardware costs
and power consumption for DAS. To address these issues, integrating DAS with
Reconfigurable Intelligent Surfaces (RIS) offers a viable approach to ensure
transmission performance while maintaining low hardware costs and power
consumption. To incorporate the merits of RIS into the DAS from practical
consideration, a novel architecture of ``Reconfigurable Distributed Antennas
and Reflecting Surfaces (RDARS)'' is proposed in this paper. Specifically,
based on the design of the additional direct-through state together with the
existing high-quality fronthaul link, any element of the RDARS can be
dynamically programmed to connect with the base station (BS) via fibers and
perform the connected mode as remote distributed antennas of the BS to receive
or transmit signals. Additionally, RDARS also inherits the low-cost and
low-energy-consumption benefits of fully passive RISs by default configuring
the elements as passive to perform the reflection mode. As a result, RDARS
offers flexible control over the trade-off between distribution gain and
reflection gain to enhance performance. The ergodic achievable rate under the
RDARS architecture is analyzed and a closed-form expression with meaningful
insights is derived. The theoretical analysis and simulation results prove that
the RDARS achieves a higher achievable rate than both DAS and RIS. A RDARS
prototype with 256 elements is built for real experiments which shows that the
RDARS-aided system can achieve an additional 21% and 170% throughput
improvement over DAS and RIS-aided systems, respectively.
| [
{
"version": "v1",
"created": "Mon, 13 Mar 2023 09:35:19 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 04:33:57 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 09:14:55 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Ma",
"Chengzhi",
""
],
[
"Yang",
"Xi",
""
],
[
"Wang",
"Jintao",
""
],
[
"Yang",
"Guanghua",
""
],
[
"Zhang",
"Wei",
""
],
[
"Ma",
"Shaodan",
""
]
]
| new_dataset | 0.999187 |
2304.07666 | Yikang Liu | Yikang Liu, Ziyin Zhang, Wanyang Zhang, Shisen Yue, Xiaojing Zhao,
Xinyuan Cheng, Yiwen Zhang, Hai Hu | ArguGPT: evaluating, understanding and identifying argumentative essays
generated by GPT models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI generated content (AIGC) presents a considerable challenge to educators
around the world. Instructors need to be able to detect such text generated by
large language models, either with the naked eye or with the help of some
tools. There is also a growing need to understand the lexical, syntactic and
stylistic features of AIGC. To address these challenges in English language
teaching, we first present ArguGPT, a balanced corpus of 4,038 argumentative
essays generated by 7 GPT models in response to essay prompts from three
sources: (1) in-class or homework exercises, (2) TOEFL and (3) GRE writing
tasks. Machine-generated texts are paired with a roughly equal number of
human-written essays with three score levels matched in essay prompts. We then
hire English instructors to distinguish machine essays from human ones. Results
show that when first exposed to machine-generated essays, the instructors only
have an accuracy of 61% in detecting them. But the number rises to 67% after
one round of minimal self-training. Next, we perform linguistic analyses of
these essays, which show that machines produce sentences with more complex
syntactic structures while human essays tend to be lexically more complex.
Finally, we test existing AIGC detectors and build our own detectors using SVMs
and RoBERTa. Results suggest that a RoBERTa fine-tuned with the training set of
ArguGPT achieves above 90% accuracy in both essay- and sentence-level
classification. To the best of our knowledge, this is the first comprehensive
analysis of argumentative essays produced by generative large language models.
Machine-authored essays in ArguGPT and our models will be made publicly
available at https://github.com/huhailinguist/ArguGPT
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2023 01:50:26 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Sep 2023 14:05:58 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Liu",
"Yikang",
""
],
[
"Zhang",
"Ziyin",
""
],
[
"Zhang",
"Wanyang",
""
],
[
"Yue",
"Shisen",
""
],
[
"Zhao",
"Xiaojing",
""
],
[
"Cheng",
"Xinyuan",
""
],
[
"Zhang",
"Yiwen",
""
],
[
"Hu",
"Hai",
""
]
]
| new_dataset | 0.986292 |
2304.12046 | Kohei Honda | Kohei Honda, Ryo Yonetani, Mai Nishimura and Tadashi Kozuno | When to Replan? An Adaptive Replanning Strategy for Autonomous
Navigation using Deep Reinforcement Learning | 7 pages, 3 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hierarchy of global and local planners is one of the most commonly
utilized system designs in autonomous robot navigation. While the global
planner generates a reference path from the current to goal locations based on
the pre-built static map, the local planner produces a kinodynamic trajectory
to follow the reference path while avoiding perceived obstacles. To account for
unforeseen or dynamic obstacles not present on the pre-built map, ``when to
replan'' the reference path is critical for the success of safe and efficient
navigation. However, determining the ideal timing to execute replanning in such
partially unknown environments still remains an open question. In this work, we
first conduct an extensive simulation experiment to compare several common
replanning strategies and confirm that effective strategies are highly
dependent on the environment as well as the global and local planners. Based on
this insight, we derive a new adaptive replanning strategy based on deep
reinforcement learning, which can learn from experience to decide appropriate
replanning timings in the given environment and planning setups. Our
experimental results demonstrate that the proposed replanner can perform on par
or even better than the current best-performing strategies in multiple
situations regarding navigation robustness and efficiency.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2023 12:39:36 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Sep 2023 21:55:00 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Honda",
"Kohei",
""
],
[
"Yonetani",
"Ryo",
""
],
[
"Nishimura",
"Mai",
""
],
[
"Kozuno",
"Tadashi",
""
]
]
| new_dataset | 0.991019 |
2304.13023 | Lu Zeyu | Zeyu Lu, Di Huang, Lei Bai, Jingjing Qu, Chengyue Wu, Xihui Liu, Wanli
Ouyang | Seeing is not always believing: Benchmarking Human and Model Perception
of AI-Generated Images | null | null | null | null | cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Photos serve as a way for humans to record what they experience in their
daily lives, and they are often regarded as trustworthy sources of information.
However, there is a growing concern that the advancement of artificial
intelligence (AI) technology may produce fake photos, which can create
confusion and diminish trust in photographs. This study aims to comprehensively
evaluate agents for distinguishing state-of-the-art AI-generated visual
content. Our study benchmarks both human capability and cutting-edge fake image
detection AI algorithms, using a newly collected large-scale fake image dataset
Fake2M. In our human perception evaluation, titled HPBench, we discovered that
humans struggle significantly to distinguish real photos from AI-generated
ones, with a misclassification rate of 38.7%. Along with this, we conduct
MPBench, an evaluation of model capability for AI-generated image detection, and the
top-performing model from MPBench achieves a 13% failure rate under the same
setting used in the human evaluation. We hope that our study can raise
awareness of the potential risks of AI-generated images and facilitate further
research to prevent the spread of false information. More information can be
found at https://github.com/Inf-imagine/Sentry.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2023 17:51:59 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 15:14:57 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 18:16:28 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Lu",
"Zeyu",
""
],
[
"Huang",
"Di",
""
],
[
"Bai",
"Lei",
""
],
[
"Qu",
"Jingjing",
""
],
[
"Wu",
"Chengyue",
""
],
[
"Liu",
"Xihui",
""
],
[
"Ouyang",
"Wanli",
""
]
]
| new_dataset | 0.968443 |
2305.02034 | Di Wang | Di Wang, Jing Zhang, Bo Du, Minqiang Xu, Lin Liu, Dacheng Tao and
Liangpei Zhang | SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment
Anything Model | Accepted by NeurIPS 2023 Datasets and Benchmarks Track. The code and
dataset will be available at https://github.com/ViTAE-Transformer/SAMRS | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of the Segment Anything Model (SAM) demonstrates the significance
of data-centric machine learning. However, due to the difficulties and high
costs associated with annotating Remote Sensing (RS) images, a large amount of
valuable RS data remains unlabeled, particularly at the pixel level. In this
study, we leverage SAM and existing RS object detection datasets to develop an
efficient pipeline for generating a large-scale RS segmentation dataset, dubbed
SAMRS. SAMRS totally possesses 105,090 images and 1,668,241 instances,
surpassing existing high-resolution RS segmentation datasets in size by several
orders of magnitude. It provides object category, location, and instance
information that can be used for semantic segmentation, instance segmentation,
and object detection, either individually or in combination. We also provide a
comprehensive analysis of SAMRS from various aspects. Moreover, preliminary
experiments highlight the importance of conducting segmentation pre-training
with SAMRS to address task discrepancies and alleviate the limitations posed by
limited training data during fine-tuning. The code and dataset will be
available at https://github.com/ViTAE-Transformer/SAMRS.
| [
{
"version": "v1",
"created": "Wed, 3 May 2023 10:58:07 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 18:28:02 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Wang",
"Di",
""
],
[
"Zhang",
"Jing",
""
],
[
"Du",
"Bo",
""
],
[
"Xu",
"Minqiang",
""
],
[
"Liu",
"Lin",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Zhang",
"Liangpei",
""
]
]
| new_dataset | 0.999827 |
2305.02290 | Henrique De Carvalho Videira | Henrique de Carvalho Videira | The offline digital currency puzzle solved by a local blockchain | 20 pages, 2 tables and 2 figures | IET Blockchain 1-16 (2023) | 10.1049/blc2.12049 | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | A major drawback in deploying central bank digital currencies (CBDC) is the
offline puzzle, which requires that a CBDC must keep the provision given by
cash, and, simultaneously, avoid double-spending, counterfeiting, and other
issues. The puzzle is solved by minting the coins in serials, which are stored
on a local blockchain (e.g. smartphone). The local blockchain is secured by
keys embedded in the hardware and can be continuously mined by the wallet to
enhance security. The coins can be either minted as hot coins, which can be
retrieved in case of loss, or minted as cold coins, like physical cash.
| [
{
"version": "v1",
"created": "Wed, 3 May 2023 17:31:57 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Videira",
"Henrique de Carvalho",
""
]
]
| new_dataset | 0.999222 |
2305.04107 | Aditya Joglekar | Aditya Joglekar, Hongrui Chen, Levent Burak Kara | DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks | null | null | null | null | cs.CE cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a direct mesh-free method for performing topology optimization by
integrating a density field approximation neural network with a displacement
field approximation neural network. We show that this direct integration
approach can give comparable results to conventional topology optimization
techniques, with an added advantage of enabling seamless integration with
post-processing software, and a potential of topology optimization with
objectives where meshing and Finite Element Analysis (FEA) may be expensive or
not suitable. Our approach (DMF-TONN) takes in as inputs the boundary
conditions and domain coordinates and finds the optimum density field for
minimizing the loss function of compliance and volume fraction constraint
violation. The mesh-free nature is enabled by a physics-informed displacement
field approximation neural network to solve the linear elasticity partial
differential equation and replace the FEA conventionally used for calculating
the compliance. We show that using a suitable Fourier Features neural network
architecture and hyperparameters, the density field approximation neural
network can learn the weights to represent the optimal density field for the
given domain and boundary conditions, by directly backpropagating the loss
gradient through the displacement field approximation neural network, and
unlike prior work there is no requirement of a sensitivity filter, optimality
criterion method, or a separate training of density network in each topology
optimization iteration.
| [
{
"version": "v1",
"created": "Sat, 6 May 2023 18:04:51 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 18:59:58 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Joglekar",
"Aditya",
""
],
[
"Chen",
"Hongrui",
""
],
[
"Kara",
"Levent Burak",
""
]
]
| new_dataset | 0.989112 |
2305.14097 | Guangke Chen | Guangke Chen, Yedi Zhang, Zhe Zhao, Fu Song | QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition
Systems | Accepted by the 32nd USENIX Security Symposium (2023 USENIX
Security); Full Version | null | null | null | cs.CR cs.LG cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current adversarial attacks against speaker recognition systems (SRSs)
require either white-box access or heavy black-box queries to the target SRS,
thus still falling behind practical attacks against proprietary commercial APIs
and voice-controlled devices. To fill this gap, we propose QFA2SR, an effective
and imperceptible query-free black-box attack, by leveraging the
transferability of adversarial voices. To improve transferability, we present
three novel methods, tailored loss functions, SRS ensemble, and time-freq
corrosion. The first one tailors loss functions to different attack scenarios.
The latter two augment surrogate SRSs in two different ways. SRS ensemble
combines diverse surrogate SRSs with new strategies, amenable to the unique
scoring characteristics of SRSs. Time-freq corrosion augments surrogate SRSs by
incorporating well-designed time-/frequency-domain modification functions,
which simulate and approximate the decision boundary of the target SRS and
distortions introduced during over-the-air attacks. QFA2SR boosts the targeted
transferability by 20.9%-70.7% on four popular commercial APIs (Microsoft
Azure, iFlytek, Jingdong, and TalentedSoft), significantly outperforming
existing attacks in the query-free setting, with negligible effect on the
imperceptibility. QFA2SR is also highly effective when launched over the air
against three wide-spread voice assistants (Google Assistant, Apple Siri, and
TMall Genie) with 60%, 46%, and 70% targeted transferability, respectively.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 14:20:13 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Sep 2023 15:19:46 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Chen",
"Guangke",
""
],
[
"Zhang",
"Yedi",
""
],
[
"Zhao",
"Zhe",
""
],
[
"Song",
"Fu",
""
]
]
| new_dataset | 0.973141 |
2305.18668 | Ma\"elic Neau | Neau Ma\"elic, Paulo E. Santos, Anne-Gwenn Bosser and C\'edric Buche | Fine-Grained is Too Coarse: A Novel Data-Centric Approach for Efficient
Scene Graph Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Learning to compose visual relationships from raw images in the form of scene
graphs is a highly challenging task due to contextual dependencies, but it is
essential in computer vision applications that depend on scene understanding.
However, no current approaches in Scene Graph Generation (SGG) aim at providing
useful graphs for downstream tasks. Instead, the main focus has primarily been
on the task of unbiasing the data distribution for predicting more fine-grained
relations. That being said, all fine-grained relations are not equally relevant
and at least a part of them are of no use for real-world applications. In this
work, we introduce the task of Efficient SGG that prioritizes the generation of
relevant relations, facilitating the use of Scene Graphs in downstream tasks
such as Image Generation. To support further approaches, we present a new
dataset, VG150-curated, based on the annotations of the popular Visual Genome
dataset. We show through a set of experiments that this dataset contains more
high-quality and diverse annotations than those usually used in SGG. Finally,
we show the efficiency of this dataset in the task of Image Generation from
Scene Graphs.
| [
{
"version": "v1",
"created": "Tue, 30 May 2023 00:55:49 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 12:35:00 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Maëlic",
"Neau",
""
],
[
"Santos",
"Paulo E.",
""
],
[
"Bosser",
"Anne-Gwenn",
""
],
[
"Buche",
"Cédric",
""
]
]
| new_dataset | 0.983177 |
2306.01913 | Xin Dai | Xin Dai, Yujie Fan, Zhongfang Zhuang, Shubham Jain, Chin-Chia Michael
Yeh, Junpeng Wang, Liang Wang, Yan Zheng, Prince Osei Aboagye, Wei Zhang | PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Pre-training on large models is prevalent and emerging with the ever-growing
user-generated content in many machine learning application categories. It has
been recognized that learning contextual knowledge from the datasets depicting
user-content interaction plays a vital role in downstream tasks. Despite
several studies attempting to learn contextual knowledge via pre-training
methods, finding an optimal training objective and strategy for this type of
task remains a challenging problem. In this work, we contend that there are two
distinct aspects of contextual knowledge, namely the user-side and the
content-side, for datasets where user-content interaction can be represented as
a bipartite graph. To learn contextual knowledge, we propose a pre-training
method that learns a bi-directional mapping between the spaces of the user-side
and the content-side. We formulate the training goal as a contrastive learning
task and propose a dual-Transformer architecture to encode the contextual
knowledge. We evaluate the proposed method for the recommendation task. The
empirical studies have demonstrated that the proposed method outperformed all
the baselines with significant gains.
| [
{
"version": "v1",
"created": "Fri, 2 Jun 2023 20:38:43 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 06:20:42 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 17:31:16 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Dai",
"Xin",
""
],
[
"Fan",
"Yujie",
""
],
[
"Zhuang",
"Zhongfang",
""
],
[
"Jain",
"Shubham",
""
],
[
"Yeh",
"Chin-Chia Michael",
""
],
[
"Wang",
"Junpeng",
""
],
[
"Wang",
"Liang",
""
],
[
"Zheng",
"Yan",
""
],
[
"Aboagye",
"Prince Osei",
""
],
[
"Zhang",
"Wei",
""
]
]
| new_dataset | 0.996963 |
2306.09341 | Xiaoshi Wu | Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao,
Hongsheng Li | Human Preference Score v2: A Solid Benchmark for Evaluating Human
Preferences of Text-to-Image Synthesis | Revision | null | null | null | cs.CV cs.AI cs.DB | http://creativecommons.org/licenses/by/4.0/ | Recent text-to-image generative models can generate high-fidelity images from
text inputs, but the quality of these generated images cannot be accurately
evaluated by existing evaluation metrics. To address this issue, we introduce
Human Preference Dataset v2 (HPD v2), a large-scale dataset that captures human
preferences on images from a wide range of sources. HPD v2 comprises 798,090
human preference choices on 433,760 pairs of images, making it the largest
dataset of its kind. The text prompts and images are deliberately collected to
eliminate potential bias, which is a common issue in previous datasets. By
fine-tuning CLIP on HPD v2, we obtain Human Preference Score v2 (HPS v2), a
scoring model that can more accurately predict human preferences on generated
images. Our experiments demonstrate that HPS v2 generalizes better than
previous metrics across various image distributions and is responsive to
algorithmic improvements of text-to-image generative models, making it a
preferable evaluation metric for these models. We also investigate the design
of the evaluation prompts for text-to-image generative models, to make the
evaluation stable, fair and easy-to-use. Finally, we establish a benchmark for
text-to-image generative models using HPS v2, which includes a set of recent
text-to-image models from the academic, community and industry. The code and
dataset are available at https://github.com/tgxs002/HPSv2 .
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:59:31 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 08:19:23 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Wu",
"Xiaoshi",
""
],
[
"Hao",
"Yiming",
""
],
[
"Sun",
"Keqiang",
""
],
[
"Chen",
"Yixiong",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Li",
"Hongsheng",
""
]
]
| new_dataset | 0.999435 |
2306.15354 | Siqi Zheng | Siqi Zheng, Luyao Cheng, Yafeng Chen, Hui Wang, Qian Chen | 3D-Speaker: A Large-Scale Multi-Device, Multi-Distance, and
Multi-Dialect Corpus for Speech Representation Disentanglement | null | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Disentangling uncorrelated information in speech utterances is a crucial
research topic within the speech community. Different speech-related tasks focus on
extracting distinct speech representations while minimizing the effects of
other uncorrelated information. We present a large-scale speech corpus to
facilitate the research of speech representation disentanglement. 3D-Speaker
contains over 10,000 speakers, each of whom is simultaneously recorded by
multiple Devices, located at different Distances, and some speakers are
speaking multiple Dialects. The controlled combinations of multi-dimensional
audio data yield a matrix of a diverse blend of speech representation
entanglement, thereby motivating intriguing methods to untangle them. The
multi-domain nature of 3D-Speaker also makes it a suitable resource to evaluate
large universal speech models and experiment methods of out-of-domain learning
and self-supervised learning. https://3dspeaker.github.io/
| [
{
"version": "v1",
"created": "Tue, 27 Jun 2023 10:09:43 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 02:44:35 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 02:36:41 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Zheng",
"Siqi",
""
],
[
"Cheng",
"Luyao",
""
],
[
"Chen",
"Yafeng",
""
],
[
"Wang",
"Hui",
""
],
[
"Chen",
"Qian",
""
]
]
| new_dataset | 0.999425 |
2306.15988 | Guoyu Yang | Guoyu Yang, Jie Lei, Zhikuan Zhu, Siyu Cheng, Zunlei Feng, Ronghua
Liang | AFPN: Asymptotic Feature Pyramid Network for Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-scale features are of great importance in encoding objects with scale
variance in object detection tasks. A common strategy for multi-scale feature
extraction is adopting the classic top-down and bottom-up feature pyramid
networks. However, these approaches suffer from the loss or degradation of
feature information, impairing the fusion effect of non-adjacent levels. This
paper proposes an asymptotic feature pyramid network (AFPN) to support direct
interaction at non-adjacent levels. AFPN is initiated by fusing two adjacent
low-level features and asymptotically incorporates higher-level features into
the fusion process. In this way, the larger semantic gap between non-adjacent
levels can be avoided. Given the potential for multi-object information
conflicts to arise during feature fusion at each spatial location, adaptive
spatial fusion operation is further utilized to mitigate these inconsistencies.
We incorporate the proposed AFPN into both two-stage and one-stage object
detection frameworks and evaluate with the MS-COCO 2017 validation and test
datasets. Experimental evaluation shows that our method achieves more
competitive results than other state-of-the-art feature pyramid networks. The
code is available at
\href{https://github.com/gyyang23/AFPN}{https://github.com/gyyang23/AFPN}.
| [
{
"version": "v1",
"created": "Wed, 28 Jun 2023 07:58:49 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Sep 2023 12:45:32 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Yang",
"Guoyu",
""
],
[
"Lei",
"Jie",
""
],
[
"Zhu",
"Zhikuan",
""
],
[
"Cheng",
"Siyu",
""
],
[
"Feng",
"Zunlei",
""
],
[
"Liang",
"Ronghua",
""
]
]
| new_dataset | 0.994407 |
2307.12032 | Junzi Sun | Junzi Sun, Esther Roosenbrand | Flight Contrail Segmentation via Augmented Transfer Learning with Novel
SR Loss Function in Hough Space | Source code available at: https://github.com/junzis/contrail-net | null | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | Air transport poses significant environmental challenges, particularly
regarding the role of flight contrails in climate change due to their potential
global warming impact. Traditional computer vision techniques struggle under
varying remote sensing image conditions, and conventional machine learning
approaches using convolutional neural networks are limited by the scarcity of
hand-labeled contrail datasets. To address these issues, we employ few-shot
transfer learning to introduce an innovative approach for accurate contrail
segmentation with minimal labeled data. Our methodology leverages backbone
segmentation models pre-trained on extensive image datasets and fine-tuned
using an augmented contrail-specific dataset. We also introduce a novel loss
function, termed SR Loss, which enhances contrail line detection by
transforming the image space into Hough space. This transformation results in a
significant performance improvement over generic image segmentation loss
functions. Our approach offers a robust solution to the challenges posed by
limited labeled data and significantly advances the state of contrail detection
models.
| [
{
"version": "v1",
"created": "Sat, 22 Jul 2023 09:44:45 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 14:28:44 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Sun",
"Junzi",
""
],
[
"Roosenbrand",
"Esther",
""
]
]
| new_dataset | 0.985277 |
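As an illustration of the Hough-space transformation that the SR Loss in the abstract above builds on, here is a minimal sketch using scikit-image; the synthetic mask and the peak-picking step are illustrative only, and the actual loss formulation follows the paper.

```python
# Minimal sketch: map a binary contrail-like mask into Hough space with
# scikit-image. The SR Loss described above operates on such a transform;
# the loss itself is defined in the paper and is not reproduced here.
import numpy as np
from skimage.transform import hough_line

mask = np.zeros((64, 64), dtype=bool)
rr = np.arange(64)
mask[rr, rr] = True                      # a synthetic diagonal "contrail"

accumulator, angles, dists = hough_line(mask)
peak = np.unravel_index(accumulator.argmax(), accumulator.shape)
print("dominant line at angle", np.rad2deg(angles[peak[1]]), "deg")
```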
2307.12626 | Jingxuan Wei | Jingxuan Wei, Cheng Tan, Zhangyang Gao, Linzhuang Sun, Siyuan Li,
Bihui Yu, Ruifeng Guo, Stan Z. Li | Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset
and Comprehensive Framework | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal reasoning is a critical component in the pursuit of artificial
intelligence systems that exhibit human-like intelligence, especially when
tackling complex tasks. While the chain-of-thought (CoT) technique has gained
considerable attention, the existing ScienceQA dataset, which focuses on
multimodal scientific questions and explanations from elementary and high
school textbooks, lacks a comprehensive evaluation of diverse approaches. To
address this gap, we present the COCO Multi-Modal Reasoning (COCO-MMR) dataset, a
novel dataset that encompasses an extensive collection of open-ended questions,
rationales, and answers derived from the large object dataset COCO. Unlike
previous datasets that rely on multiple-choice questions, our dataset pioneers
the use of open-ended questions in the context of multimodal CoT, introducing a
more challenging problem that effectively assesses the reasoning capability of
CoT models. Through comprehensive evaluations and detailed analyses, we provide
valuable insights and propose innovative techniques, including multi-hop
cross-modal attention and sentence-level contrastive learning, to enhance the
image and text encoders. Extensive experiments demonstrate the efficacy of the
proposed dataset and techniques, offering novel perspectives for advancing
multimodal reasoning. The data and code are available at
\href{https://github.com/weijingxuan/COCO-MMR}{https://github.com/weijingxuan/COCO-MMR}.
| [
{
"version": "v1",
"created": "Mon, 24 Jul 2023 08:58:25 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 15:57:35 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Wei",
"Jingxuan",
""
],
[
"Tan",
"Cheng",
""
],
[
"Gao",
"Zhangyang",
""
],
[
"Sun",
"Linzhuang",
""
],
[
"Li",
"Siyuan",
""
],
[
"Yu",
"Bihui",
""
],
[
"Guo",
"Ruifeng",
""
],
[
"Li",
"Stan Z.",
""
]
]
| new_dataset | 0.994971 |
2308.02234 | Kasun Wickramasinghe | Kasun Wickramasinghe, Nisansa de Silva | Sinhala-English Parallel Word Dictionary Dataset | null | null | 10.1109/ICIIS58898.2023.10253560 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parallel datasets are vital for performing and evaluating any kind of
multilingual task. However, in the cases where one of the considered language
pairs is a low-resource language, the existing top-down parallel data such as
corpora are lacking in both quantity and quality due to the dearth of human
annotation. Therefore, for low-resource languages, it is more feasible to move
in the bottom-up direction where finer granular pairs such as dictionary
datasets are developed first. They may then be used for mid-level tasks such as
supervised multilingual word embedding alignment. These in turn can later guide
higher-level tasks in the order of aligning sentence or paragraph text corpora
used for Machine Translation (MT). Even though more approachable than
generating and aligning a massive corpus for a low-resource language, for the
same reason of apathy from larger research entities, even these finer granular
data sets are lacking for some low-resource languages. We have observed that
there is no free and open dictionary data set for the low-resource language,
Sinhala. Thus, in this work, we introduce three parallel English-Sinhala word
dictionaries (En-Si-dict-large, En-Si-dict-filtered, En-Si-dict-FastText) which
help in multilingual Natural Language Processing (NLP) tasks related to English
and Sinhala languages. In this paper, we explain the dataset creation pipeline
as well as the experimental results of the tests we have carried out to verify
the quality of the data sets. The data sets and the related scripts are
available at https://github.com/kasunw22/sinhala-para-dict.
| [
{
"version": "v1",
"created": "Fri, 4 Aug 2023 10:21:35 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Wickramasinghe",
"Kasun",
""
],
[
"de Silva",
"Nisansa",
""
]
]
| new_dataset | 0.99981 |
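As a sketch of the mid-level task the abstract above mentions (supervised multilingual word embedding alignment driven by a word dictionary), the following assumes pre-loaded English and Sinhala embedding lookups and a tab-separated pair file; the file path and helper names are placeholders, not part of the released resources.

```python
# Minimal sketch: supervised cross-lingual embedding alignment from a word
# dictionary via orthogonal Procrustes. File names and the embedding dicts
# (en_vecs, si_vecs: word -> numpy vector) are illustrative placeholders.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def load_pairs(path):
    """Read tab-separated 'english<TAB>sinhala' word pairs."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n").split("\t")[:2] for line in f if "\t" in line]

def align(en_vecs, si_vecs, pairs):
    """Learn an orthogonal map W so that en_vecs[w_en] @ W ~ si_vecs[w_si]."""
    X = np.stack([en_vecs[e] for e, s in pairs if e in en_vecs and s in si_vecs])
    Y = np.stack([si_vecs[s] for e, s in pairs if e in en_vecs and s in si_vecs])
    W, _ = orthogonal_procrustes(X, Y)   # minimizes ||X W - Y||_F over orthogonal W
    return W
```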
2308.02681 | Hongzhao Guan | Pascal Van Hentenryck, Connor Riley, Anthony Trasatti, Hongzhao Guan,
Tejas Santanam, Jorge A. Huertas, Kevin Dalmeijer, Kari Watkins, Juwon Drake,
Samson Baskin | MARTA Reach: Piloting an On-Demand Multimodal Transit System in Atlanta | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports on the results of the six-month pilot MARTA Reach, which
aimed to demonstrate the potential value of On-Demand Multimodal Transit
Systems (ODMTS) in the city of Atlanta, Georgia. ODMTS take a transit-centric
view by integrating on-demand services and traditional fixed routes in order to
address the first/last mile problem. ODMTS combine fixed routes and on-demand
shuttle services by design (not as an after-thought) into a transit system that
offers a door-to-door multimodal service with fully integrated operations and
fare structure. The paper fills a knowledge gap, i.e., the understanding of the
impact, benefits, and challenges of deploying ODMTS in a city as complex as
Atlanta, Georgia. The pilot was deployed in four different zones with limited
transit options, and used on-demand shuttles integrated with the overall
transit system to address the first/last mile problem. The paper describes the
design and operations of the pilot, and presents the results in terms of
ridership, quality of service, trip purposes, alternative modes of
transportation, multimodal nature of trips, challenges encountered, and cost
estimates. The main findings of the pilot are that Reach offered a highly
valued service that performed a large number of trips that would have otherwise
been served by ride-hailing companies, taxis, or personal cars. Moreover, the
wide majority of Reach trips were multimodal, with connections to rail being
most prominent.
| [
{
"version": "v1",
"created": "Fri, 4 Aug 2023 22:08:56 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Sep 2023 18:41:49 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Van Hentenryck",
"Pascal",
""
],
[
"Riley",
"Connor",
""
],
[
"Trasatti",
"Anthony",
""
],
[
"Guan",
"Hongzhao",
""
],
[
"Santanam",
"Tejas",
""
],
[
"Huertas",
"Jorge A.",
""
],
[
"Dalmeijer",
"Kevin",
""
],
[
"Watkins",
"Kari",
""
],
[
"Drake",
"Juwon",
""
],
[
"Baskin",
"Samson",
""
]
]
| new_dataset | 0.998234 |
2308.11551 | Gengyuan Zhang | Gengyuan Zhang, Jisen Ren, Jindong Gu, Volker Tresp | Multi-event Video-Text Retrieval | accepted to ICCV2023 Poster; some figures are not supported viewed
online, please download the file and view locally | null | null | null | cs.CV cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Video-Text Retrieval (VTR) is a crucial multi-modal task in an era of massive
video-text data on the Internet. A plethora of work characterized by using a
two-stream Vision-Language model architecture that learns a joint
representation of video-text pairs has become a prominent approach for the VTR
task. However, these models operate under the assumption of bijective
video-text correspondences and neglect a more practical scenario where video
content usually encompasses multiple events, while texts like user queries or
webpage metadata tend to be specific and correspond to single events. This
establishes a gap between the previous training objective and real-world
applications, leading to the potential performance degradation of earlier
models during inference. In this study, we introduce the Multi-event Video-Text
Retrieval (MeVTR) task, addressing scenarios in which each video contains
multiple different events, as a niche scenario of the conventional Video-Text
Retrieval Task. We present a simple model, Me-Retriever, which incorporates key
event video representation and a new MeVTR loss for the MeVTR task.
Comprehensive experiments show that this straightforward framework outperforms
other models in the Video-to-Text and Text-to-Video tasks, effectively
establishing a robust baseline for the MeVTR task. We believe this work serves
as a strong foundation for future studies. Code is available at
https://github.com/gengyuanmax/MeVTR.
| [
{
"version": "v1",
"created": "Tue, 22 Aug 2023 16:32:46 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 13:04:22 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Zhang",
"Gengyuan",
""
],
[
"Ren",
"Jisen",
""
],
[
"Gu",
"Jindong",
""
],
[
"Tresp",
"Volker",
""
]
]
| new_dataset | 0.996395 |
2309.02706 | Guijin Son | Guijin Son, Hanwool Lee, Suwan Kim, Huiseo Kim, Jaecheol Lee, Je Won
Yeom, Jihyu Jung, Jung Woo Kim, Songseong Kim | HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models | Revised Errors | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) trained on massive corpora demonstrate
impressive capabilities in a wide range of tasks. While there are ongoing
efforts to adapt these models to languages beyond English, the attention given
to their evaluation methodologies remains limited. Current multilingual
benchmarks often rely on back translations or re-implementations of English
tests, limiting their capacity to capture unique cultural and linguistic
nuances. To bridge this gap for the Korean language, we introduce HAE-RAE
Bench, a dataset curated to challenge models lacking Korean cultural and
contextual depth. The dataset encompasses six downstream tasks across four
domains: vocabulary, history, general knowledge, and reading comprehension.
Contrary to traditional evaluation suites focused on token or sequence
classification and specific mathematical or logical reasoning, HAE-RAE Bench
emphasizes a model's aptitude for recalling Korean-specific knowledge and
cultural contexts. Comparative analysis with prior Korean benchmarks indicates
that the HAE-RAE Bench presents a greater challenge to non-native models by
disrupting the transfer of abilities and knowledge learned from English.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 04:38:16 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 01:01:24 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 06:02:53 GMT"
},
{
"version": "v4",
"created": "Sat, 23 Sep 2023 07:44:06 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Son",
"Guijin",
""
],
[
"Lee",
"Hanwool",
""
],
[
"Kim",
"Suwan",
""
],
[
"Kim",
"Huiseo",
""
],
[
"Lee",
"Jaecheol",
""
],
[
"Yeom",
"Je Won",
""
],
[
"Jung",
"Jihyu",
""
],
[
"Kim",
"Jung Woo",
""
],
[
"Kim",
"Songseong",
""
]
]
| new_dataset | 0.999788 |
2309.04077 | Abhinav Rajvanshi | Abhinav Rajvanshi, Karan Sikka, Xiao Lin, Bhoram Lee, Han-Pang Chiu
and Alvaro Velasquez | SayNav: Grounding Large Language Models for Dynamic Planning to
Navigation in New Environments | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic reasoning and dynamic planning capabilities are crucial for an
autonomous agent to perform complex navigation tasks in unknown environments.
It requires a large amount of common-sense knowledge, that humans possess, to
succeed in these tasks. We present SayNav, a new approach that leverages human
knowledge from Large Language Models (LLMs) for efficient generalization to
complex navigation tasks in unknown large-scale environments. SayNav uses a
novel grounding mechanism, that incrementally builds a 3D scene graph of the
explored environment as inputs to LLMs, for generating feasible and
contextually appropriate high-level plans for navigation. The LLM-generated
plan is then executed by a pre-trained low-level planner, that treats each
planned step as a short-distance point-goal navigation sub-task. SayNav
dynamically generates step-by-step instructions during navigation and
continuously refines future steps based on newly perceived information. We
evaluate SayNav on a new multi-object navigation task, that requires the agent
to utilize a massive amount of human knowledge to efficiently search multiple
different objects in an unknown environment. SayNav outperforms an oracle based
Point-nav baseline, achieving a success rate of 95.35% (vs 56.06% for the
baseline), under the ideal settings on this task, highlighting its ability to
generate dynamic plans for successfully locating objects in large-scale new
environments. In addition, SayNav also enables efficient generalization of
learning to navigate from simulation to real novel environments.
| [
{
"version": "v1",
"created": "Fri, 8 Sep 2023 02:24:37 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 02:37:40 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 20:35:17 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Rajvanshi",
"Abhinav",
""
],
[
"Sikka",
"Karan",
""
],
[
"Lin",
"Xiao",
""
],
[
"Lee",
"Bhoram",
""
],
[
"Chiu",
"Han-Pang",
""
],
[
"Velasquez",
"Alvaro",
""
]
]
| new_dataset | 0.994118 |
2309.07416 | Anton Ratnarajah Mr | Anton Ratnarajah, Shi-Xiong Zhang, Dong Yu | M3-AUDIODEC: Multi-channel multi-speaker multi-spatial audio codec | More results and source code are available at
https://anton-jeran.github.io/MAD/ | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | We introduce M3-AUDIODEC, an innovative neural spatial audio codec designed
for efficient compression of multi-channel (binaural) speech in both single and
multi-speaker scenarios, while retaining the spatial location information of
each speaker. This model boasts versatility, allowing configuration and
training tailored to a predetermined set of multi-channel, multi-speaker, and
multi-spatial overlapping speech conditions. Key contributions are as follows:
1) Previous neural codecs are extended from single-channel to multi-channel audio. 2)
The ability of our proposed model to compress and decode for overlapping
speech. 3) A groundbreaking architecture that compresses speech content and
spatial cues separately, ensuring the preservation of each speaker's spatial
context after decoding. 4) M3-AUDIODEC's proficiency in reducing the bandwidth
for compressing two-channel speech by 48% when compared to individual binaural
channel compression. Impressively, at a 12.6 kbps operation, it outperforms
Opus at 24 kbps and AUDIODEC at 24 kbps by 37% and 52%, respectively. In our
assessment, we employed speech enhancement and room acoustic metrics to
ascertain the accuracy of clean speech and spatial cue estimates from
M3-AUDIODEC. Audio demonstrations and source code are available online at
https://github.com/anton-jeran/MULTI-AUDIODEC .
| [
{
"version": "v1",
"created": "Thu, 14 Sep 2023 04:04:50 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 03:02:06 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Sep 2023 03:24:12 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Ratnarajah",
"Anton",
""
],
[
"Zhang",
"Shi-Xiong",
""
],
[
"Yu",
"Dong",
""
]
]
| new_dataset | 0.998295 |
2309.10173 | Maloy Kumar Devnath | Maloy Kumar Devnath | GCNIDS: Graph Convolutional Network-Based Intrusion Detection System for
CAN Bus | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | The Controller Area Network (CAN) bus serves as a standard protocol for
facilitating communication among various electronic control units (ECUs) within
contemporary vehicles. However, it has been demonstrated that the CAN bus is
susceptible to remote attacks, which pose risks to the vehicle's safety and
functionality. To tackle this concern, researchers have introduced intrusion
detection systems (IDSs) to identify and thwart such attacks. In this paper, we
present an innovative approach to intruder detection within the CAN bus,
leveraging Graph Convolutional Network (GCN) techniques as introduced by Zhang,
Tong, Xu, and Maciejewski in 2019. By harnessing the capabilities of deep
learning, we aim to enhance attack detection accuracy while minimizing the
requirement for manual feature engineering. Our experimental findings
substantiate that the proposed GCN-based method surpasses existing IDSs in
terms of accuracy, precision, and recall. Additionally, our approach
demonstrates efficacy in detecting mixed attacks, which are more challenging to
identify than single attacks. Furthermore, it reduces the necessity for
extensive feature engineering and is particularly well-suited for real-time
detection systems. To the best of our knowledge, this represents the pioneering
application of GCN to CAN data for intrusion detection. Our proposed approach
holds significant potential in fortifying the security and safety of modern
vehicles, safeguarding against attacks and preventing them from undermining
vehicle functionality.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2023 21:42:09 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Sep 2023 15:32:09 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Devnath",
"Maloy Kumar",
""
]
]
| new_dataset | 0.997948 |
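For reference, a minimal sketch of the GCN propagation rule that the detector in the abstract above builds on; how the graph is constructed from CAN bus traffic follows the paper and is not reproduced here.

```python
# Minimal sketch of one graph-convolution layer (Kipf & Welling style):
# H' = ReLU( D^-1/2 (A + I) D^-1/2 H W ). The toy graph below is illustrative.
import numpy as np

def gcn_layer(A, H, W):
    """Apply one GCN layer to node features H given adjacency A and weights W."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# Toy example: 4 nodes (e.g. CAN IDs), 3 input features, 2 hidden units.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```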
2309.10475 | Tianhao Xu | Zizhang Wu, Yuanzhu Gan, Tianhao Xu, Rui Tang and Jian Pu | LineMarkNet: Line Landmark Detection for Valet Parking | 29 pages, 12 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We aim for accurate and efficient line landmark detection for valet parking,
which is a long-standing yet unsolved problem in autonomous driving. To this
end, we present a deep line landmark detection system where we carefully design
the modules to be lightweight. Specifically, we first empirically design four
general line landmarks including three physical lines and one novel mental
line. The four line landmarks are effective for valet parking. We then develop
a deep network (LineMarkNet) to detect line landmarks from surround-view
cameras. Via the pre-calibrated homography, we fuse context from four
separate cameras into the unified bird's-eye-view (BEV) space; specifically, we
fuse the surround-view features and BEV features, then employ the multi-task
decoder to detect multiple line landmarks, where we apply the center-based
strategy for the object detection task, and design our graph transformer to enhance
the vision transformer with hierarchical level graph reasoning for semantic
segmentation task. At last, we further parameterize the detected line landmarks
(e.g., intercept-slope form) whereby a novel filtering backend incorporates
temporal and multi-view consistency to achieve smooth and stable detection.
Moreover, we annotate a large-scale dataset to validate our method.
Experimental results show that our framework achieves enhanced performance
compared with several line detection methods and validate the efficiency of
the multi-task network for real-time line landmark detection on the
Qualcomm 820A platform while maintaining superior accuracy.
| [
{
"version": "v1",
"created": "Tue, 19 Sep 2023 09:43:29 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 03:39:34 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Wu",
"Zizhang",
""
],
[
"Gan",
"Yuanzhu",
""
],
[
"Xu",
"Tianhao",
""
],
[
"Tang",
"Rui",
""
],
[
"Pu",
"Jian",
""
]
]
| new_dataset | 0.999068 |
2309.10592 | Shuwei Shao | Shuwei Shao, Zhongcai Pei, Weihai Chen, Xingming Wu and Zhengguo Li | NDDepth: Normal-Distance Assisted Monocular Depth Estimation | Accepted by ICCV 2023 (Oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular depth estimation has drawn widespread attention from the vision
community due to its broad applications. In this paper, we propose a novel
physics (geometry)-driven deep learning framework for monocular depth
estimation by assuming that 3D scenes are constituted by piece-wise planes.
Particularly, we introduce a new normal-distance head that outputs pixel-level
surface normal and plane-to-origin distance for deriving depth at each
position. Meanwhile, the normal and distance are regularized by a developed
plane-aware consistency constraint. We further integrate an additional depth
head to improve the robustness of the proposed framework. To fully exploit the
strengths of these two heads, we develop an effective contrastive iterative
refinement module that refines depth in a complementary manner according to the
depth uncertainty. Extensive experiments indicate that the proposed method
exceeds previous state-of-the-art competitors on the NYU-Depth-v2, KITTI and
SUN RGB-D datasets. Notably, it ranks 1st among all submissions on the KITTI
depth prediction online benchmark at the submission time.
| [
{
"version": "v1",
"created": "Tue, 19 Sep 2023 13:05:57 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Sep 2023 14:30:04 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Shao",
"Shuwei",
""
],
[
"Pei",
"Zhongcai",
""
],
[
"Chen",
"Weihai",
""
],
[
"Wu",
"Xingming",
""
],
[
"Li",
"Zhengguo",
""
]
]
| new_dataset | 0.999071 |
2309.11002 | Tianhao Xu | Zizhang Wu, Xinyuan Chen, Fan Song, Yuanzhu Gan, Tianhao Xu, Jian Pu,
Rui Tang | PPD: A New Valet Parking Pedestrian Fisheye Dataset for Autonomous
Driving | 9 pages, 6 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pedestrian detection under valet parking scenarios is fundamental for
autonomous driving. However, the presence of pedestrians can be manifested in a
variety of ways and postures under imperfect ambient conditions, which can
adversely affect detection performance. Furthermore, models trained on
public datasets that include pedestrians generally provide suboptimal outcomes
for these valet parking scenarios. In this paper, we present the Parking
Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research
dealing with real-world pedestrians, especially with occlusions and diverse
postures. PPD consists of several distinctive types of pedestrians captured
with fisheye cameras. Additionally, we present a pedestrian detection baseline
on the PPD dataset, and introduce two data augmentation techniques to improve the
baseline by enhancing the diversity of the original dataset. Extensive
experiments validate the effectiveness of our novel data augmentation
approaches over baselines and the dataset's exceptional generalizability.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2023 01:55:19 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 03:36:47 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Wu",
"Zizhang",
""
],
[
"Chen",
"Xinyuan",
""
],
[
"Song",
"Fan",
""
],
[
"Gan",
"Yuanzhu",
""
],
[
"Xu",
"Tianhao",
""
],
[
"Pu",
"Jian",
""
],
[
"Tang",
"Rui",
""
]
]
| new_dataset | 0.999757 |
2309.11268 | Bo Zhang | Renqiu Xia, Bo Zhang, Haoyang Peng, Ning Liao, Peng Ye, Botian Shi,
Junchi Yan, Yu Qiao | StructChart: Perception, Structuring, Reasoning for Visual Chart
Understanding | SimChart9K is available for downloading at:
https://github.com/UniModal4Reasoning/SimChart9K. 21 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Charts are common in literature across different scientific fields, conveying
rich information easily accessible to readers. Current chart-related tasks
focus on either chart perception which refers to extracting information from
the visual charts, or performing reasoning given the extracted data, e.g. in a
tabular form. In this paper, we aim to establish a unified and label-efficient
learning paradigm for joint perception and reasoning tasks, which can be
generally applicable to different downstream tasks, beyond the
question-answering task as specifically studied in peer works. Specifically,
StructChart first reformulates the chart information from the popular tabular
form (specifically linearized CSV) to the proposed Structured Triplet
Representations (STR), which is better suited to reducing the task gap between
chart perception and reasoning due to the employed structured information
extraction for charts. We then propose a Structuring Chart-oriented
Representation Metric (SCRM) to quantitatively evaluate the performance for the
chart perception task. To enrich the dataset for training, we further explore
the possibility of leveraging the Large Language Model (LLM), enhancing the
chart diversity in terms of both chart visual style and its statistical
information. Extensive experiments are conducted on various chart-related
tasks, demonstrating the effectiveness and promising potential for a unified
chart perception-reasoning paradigm to push the frontier of chart
understanding.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2023 12:51:13 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 06:09:36 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Xia",
"Renqiu",
""
],
[
"Zhang",
"Bo",
""
],
[
"Peng",
"Haoyang",
""
],
[
"Liao",
"Ning",
""
],
[
"Ye",
"Peng",
""
],
[
"Shi",
"Botian",
""
],
[
"Yan",
"Junchi",
""
],
[
"Qiao",
"Yu",
""
]
]
| new_dataset | 0.995508 |
2309.11325 | Shengbin Yue | Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen,
Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, Xuanjing Huang, Zhongyu Wei | DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal
Services | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose DISC-LawLLM, an intelligent legal system utilizing large language
models (LLMs) to provide a wide range of legal services. We adopt legal
syllogism prompting strategies to construct supervised fine-tuning datasets in
the Chinese Judicial domain and fine-tune LLMs with legal reasoning capability.
We augment LLMs with a retrieval module to enhance models' ability to access
and utilize external legal knowledge. A comprehensive legal benchmark,
DISC-Law-Eval, is presented to evaluate intelligent legal systems from both
objective and subjective dimensions. Quantitative and qualitative results on
DISC-Law-Eval demonstrate the effectiveness of our system in serving various
users across diverse legal scenarios. The detailed resources are available at
https://github.com/FudanDISC/DISC-LawLLM.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2023 13:50:26 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Sep 2023 18:36:21 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Yue",
"Shengbin",
""
],
[
"Chen",
"Wei",
""
],
[
"Wang",
"Siyuan",
""
],
[
"Li",
"Bingxuan",
""
],
[
"Shen",
"Chenchen",
""
],
[
"Liu",
"Shujun",
""
],
[
"Zhou",
"Yuxuan",
""
],
[
"Xiao",
"Yao",
""
],
[
"Yun",
"Song",
""
],
[
"Huang",
"Xuanjing",
""
],
[
"Wei",
"Zhongyu",
""
]
]
| new_dataset | 0.986839 |
2309.11625 | Victor Morel | Victor Morel, Cristiana Santos, Viktor Fredholm, Adam Thunberg | Legitimate Interest is the New Consent -- Large-Scale Measurement and
Legal Compliance of IAB Europe TCF Paywalls | Accepted for publication at WPES2023 | null | 10.1145/3603216.3624966 | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Cookie paywalls allow visitors of a website to access its content only after
they make a choice between paying a fee or accepting tracking. European Data
Protection Authorities (DPAs) recently issued guidelines and decisions on
paywalls' lawfulness, but it is not yet known whether websites comply with them.
We study in this paper the prevalence of cookie paywalls on the top one million
websites using an automatic crawler. We identify 431 cookie paywalls, all using
the Transparency and Consent Framework (TCF). We then analyse the data these
paywalls communicate through the TCF, and in particular, the legal grounds and
the purposes used to collect personal data. We observe that cookie paywalls
rely extensively on the legitimate interest legal basis, systematically conflated
with consent. We also observe a lack of correlation between the presence of
paywalls and legal decisions or guidelines by DPAs.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2023 20:24:52 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 11:12:28 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Morel",
"Victor",
""
],
[
"Santos",
"Cristiana",
""
],
[
"Fredholm",
"Viktor",
""
],
[
"Thunberg",
"Adam",
""
]
]
| new_dataset | 0.994633 |
2309.11851 | Haodong Ouyang | Haodong Ouyang | DEYOv3: DETR with YOLO for Real-time Object Detection | Work in progress | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, end-to-end object detectors have gained significant attention from
the research community due to their outstanding performance. However, DETR
typically relies on supervised pretraining of the backbone on ImageNet, which
limits the practical application of DETR and the design of the backbone,
affecting the model's potential generalization ability. In this paper, we
propose a new training method called step-by-step training. Specifically, in
the first stage, the one-to-many pre-trained YOLO detector is used to
initialize the end-to-end detector. In the second stage, the backbone and
encoder are consistent with the DETR-like model, but only the detector needs to
be trained from scratch. Due to this training method, the object detector does
not need the additional dataset (ImageNet) to train the backbone, which makes
the design of the backbone more flexible and dramatically reduces the training
cost of the detector, which is helpful for the practical application of the
object detector. At the same time, compared with the DETR-like model, the
step-by-step training method can achieve higher accuracy than the traditional
training method of the DETR-like model. With the aid of this novel training
method, we propose a brand-new end-to-end real-time object detection model
called DEYOv3. DEYOv3-N achieves 41.1% on COCO val2017 and 270 FPS on T4 GPU,
while DEYOv3-L achieves 51.3% AP and 102 FPS. Without the use of additional
training data, DEYOv3 surpasses all existing real-time object detectors in
terms of both speed and accuracy. It is worth noting that for models of N, S,
and M scales, the training on the COCO dataset can be completed using a single
24GB RTX3090 GPU. Code will be released at
https://github.com/ouyanghaodong/DEYOv3.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 07:49:07 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 15:25:30 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Ouyang",
"Haodong",
""
]
]
| new_dataset | 0.987822 |
2309.12269 | Alexander Terenin | Andreas \"Ostling and Holli Sargeant and Huiyuan Xie and Ludwig Bull
and Alexander Terenin and Leif Jonsson and M{\aa}ns Magnusson and Felix
Steffek | The Cambridge Law Corpus: A Corpus for Legal AI Research | null | Advances in Neural Information Processing Systems, Datasets and
Benchmarks Track, 2023 | null | null | cs.CL cs.CY stat.AP | http://creativecommons.org/licenses/by/4.0/ | We introduce the Cambridge Law Corpus (CLC), a corpus for legal AI research.
It consists of over 250 000 court cases from the UK. Most cases are from the
21st century, but the corpus includes cases as old as the 16th century. This
paper presents the first release of the corpus, containing the raw text and
meta-data. Together with the corpus, we provide annotations on case outcomes
for 638 cases, done by legal experts. Using our annotated data, we have trained
and evaluated case outcome extraction with GPT-3, GPT-4 and RoBERTa models to
provide benchmarks. We include an extensive legal and ethical discussion to
address the potentially sensitive nature of this material. As a consequence,
the corpus will only be released for research purposes under certain
restrictions.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 17:24:40 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 19:35:21 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Östling",
"Andreas",
""
],
[
"Sargeant",
"Holli",
""
],
[
"Xie",
"Huiyuan",
""
],
[
"Bull",
"Ludwig",
""
],
[
"Terenin",
"Alexander",
""
],
[
"Jonsson",
"Leif",
""
],
[
"Magnusson",
"Måns",
""
],
[
"Steffek",
"Felix",
""
]
]
| new_dataset | 0.999084 |
2309.12585 | Ming Kang | Ming Kang, Chee-Ming Ting, Fung Fung Ting, Rapha\"el C.-W. Phan | BGF-YOLO: Enhanced YOLOv8 with Multiscale Attentional Feature Fusion for
Brain Tumor Detection | null | null | null | null | cs.CV eess.SP stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | You Only Look Once (YOLO)-based object detectors have shown remarkable
accuracy for automated brain tumor detection. In this paper, we develop a novel
BGF-YOLO architecture by incorporating Bi-level Routing Attention (BRA),
Generalized Feature Pyramid Networks (GFPN), and a fourth detection head into
YOLOv8. BGF-YOLO contains an attention mechanism to focus more on important
features, and feature pyramid networks to enrich feature representation by
merging high-level semantic features with spatial details. Furthermore, we
investigate the effect of different attention mechanisms and feature fusions,
detection head architectures on brain tumor detection accuracy. Experimental
results show that BGF-YOLO gives a 4.7% absolute increase of mAP$_{50}$
compared to YOLOv8x, and achieves state-of-the-art on the brain tumor detection
dataset Br35H. The code is available at https://github.com/mkang315/BGF-YOLO.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 02:24:58 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 14:44:29 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Kang",
"Ming",
""
],
[
"Ting",
"Chee-Ming",
""
],
[
"Ting",
"Fung Fung",
""
],
[
"Phan",
"Raphaël C. -W.",
""
]
]
| new_dataset | 0.997312 |
2309.13051 | Zahra Hemmat | Zahra Hemmat, Mohammad Mehraeen, Rahmatolloah Fattahi | A Contextual Topic Modeling and Content Analysis of Iranian laws and
Regulations | null | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A constitution is the highest legal document of a country and serves as a
guide for the establishment of other laws. The constitution defines the
political principles, structure, hierarchy, position, and limits of the
political power of a country's government. It determines and guarantees the
rights of citizens. This study aimed at topic modeling of Iranian laws. As part
of this research, 11760 laws were collected from the Dotic website. Then, topic
modeling was conducted on the title and content of the regulations using
LDA. Data analysis with topic modeling led to the identification of 10 topics
including Economic, Customs, Housing and Urban Development, Agriculture,
Insurance, Legal and judicial, Cultural, Information Technology, Political, and
Government. The largest topic, Economic, accounts for 29% of regulations, while
the smallest are Political and Government, accounting for 2%. This research
utilizes a topic modeling method in exploring law texts and identifying trends
in regulations from 2016 to 2023. In this study, it was found that
regulations constitute a significant percentage of the law, most of which are
related to economics and customs. Cultural regulations increased in
2023. It can be concluded that the laws enacted each year reflect society's
conditions and legislators' top concerns.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 18:00:51 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Hemmat",
"Zahra",
""
],
[
"Mehraeen",
"Mohammad",
""
],
[
"Fattahi",
"Rahmatolloah",
""
]
]
| new_dataset | 0.957098 |
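A minimal sketch of the LDA topic-modeling step described in the abstract above, using scikit-learn; the toy corpus stands in for the Dotic collection, and the preprocessing and topic-count choices are illustrative.

```python
# Minimal sketch: LDA topic modeling over law titles/contents with scikit-learn.
# The toy documents below are placeholders; the study fits 10 topics on the
# full corpus of 11,760 laws.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["customs tariff import duty",
        "housing urban development permit",
        "insurance premium coverage",
        "information technology network law"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # study uses 10
doc_topics = lda.fit_transform(X)            # per-document topic mixture

terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):   # per-topic word weights
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```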
2309.13054 | Ramanathan Guha | Ramanathan V. Guha, Prashanth Radhakrishnan, Bo Xu, Wei Sun, Carolyn
Au, Ajai Tirumali, Muhammad J. Amjad, Samantha Piekos, Natalie Diaz, Jennifer
Chen, Julia Wu, Prem Ramaswami, James Manyika | Data Commons | null | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | Publicly available data from open sources (e.g., United States Census Bureau
(Census), World Health Organization (WHO), Intergovernmental Panel on Climate
Change (IPCC)) are vital resources for policy makers, students and researchers
across different disciplines. Combining data from different sources requires
the user to reconcile the differences in schemas, formats, assumptions, and
more. This data wrangling is time consuming, tedious and needs to be repeated
by every user of the data. Our goal with Data Commons (DC) is to help make
public data accessible and useful to those who want to understand this data and
use it to solve societal challenges and opportunities. We do the data
processing and make the processed data widely available via standard schemas
and Cloud APIs. Data Commons is a distributed network of sites that publish
data in a common schema and interoperate using the Data Commons APIs. Data from
different Data Commons can be joined easily. The aggregate of these Data
Commons can be viewed as a single Knowledge Graph. This Knowledge Graph can
then be searched over using Natural Language questions utilizing advances in
Large Language Models. This paper describes the architecture of Data Commons,
some of the major deployments and highlights directions for future work.
| [
{
"version": "v1",
"created": "Fri, 8 Sep 2023 00:14:09 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Guha",
"Ramanathan V.",
""
],
[
"Radhakrishnan",
"Prashanth",
""
],
[
"Xu",
"Bo",
""
],
[
"Sun",
"Wei",
""
],
[
"Au",
"Carolyn",
""
],
[
"Tirumali",
"Ajai",
""
],
[
"Amjad",
"Muhammad J.",
""
],
[
"Piekos",
"Samantha",
""
],
[
"Diaz",
"Natalie",
""
],
[
"Chen",
"Jennifer",
""
],
[
"Wu",
"Julia",
""
],
[
"Ramaswami",
"Prem",
""
],
[
"Manyika",
"James",
""
]
]
| new_dataset | 0.986483 |
2309.13068 | Nour Karessli | Manuel Dibak, Vladimir Vlasov, Nour Karessli, Darya Dedik, Egor
Malykh, Jacek Wasilewski, Ton Torres, Ana Peleteiro Ramallo | UNICON: A unified framework for behavior-based consumer segmentation in
e-commerce | null | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven personalization is a key practice in fashion e-commerce,
improving the way businesses serve their consumers' needs with more relevant
content. While hyper-personalization offers highly targeted experiences to each
consumer, it requires a significant amount of private data to create an
individualized journey. To alleviate this, group-based personalization provides
a moderate level of personalization built on broader common preferences of a
consumer segment, while still being able to personalize the results. We
introduce UNICON, a unified deep learning consumer segmentation framework that
leverages rich consumer behavior data to learn long-term latent representations
and utilizes them to extract two pivotal types of segmentation catering various
personalization use-cases: lookalike, expanding a predefined target seed
segment with consumers of similar behavior, and data-driven, revealing
non-obvious consumer segments with similar affinities. Through extensive
experimentation, we demonstrate our framework's effectiveness in fashion for
identifying lookalike Designer audiences and data-driven style segments. Furthermore, we
present experiments that showcase how segment information can be incorporated
in a hybrid recommender system combining hyper and group-based personalization
to exploit the advantages of both alternatives and provide improvements on
consumer experience.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2023 14:58:13 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Dibak",
"Manuel",
""
],
[
"Vlasov",
"Vladimir",
""
],
[
"Karessli",
"Nour",
""
],
[
"Dedik",
"Darya",
""
],
[
"Malykh",
"Egor",
""
],
[
"Wasilewski",
"Jacek",
""
],
[
"Torres",
"Ton",
""
],
[
"Ramallo",
"Ana Peleteiro",
""
]
]
| new_dataset | 0.997429 |
2309.13078 | Ryutaro Yamauchi | Ryutaro Yamauchi, Sho Sonoda, Akiyoshi Sannai, Wataru Kumagai | LPML: LLM-Prompting Markup Language for Mathematical Reasoning | null | null | null | null | cs.AI cs.LG cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In utilizing large language models (LLMs) for mathematical reasoning,
addressing the errors in the reasoning and calculation present in the generated
text by LLMs is a crucial challenge. In this paper, we propose a novel
framework that integrates the Chain-of-Thought (CoT) method with an external
tool (Python REPL). We discovered that by prompting LLMs to generate structured
text in XML-like markup language, we could seamlessly integrate CoT and the
external tool and control the undesired behaviors of LLMs. With our approach,
LLMs can utilize Python computation to rectify errors within CoT. We applied
our method to ChatGPT (GPT-3.5) to solve challenging mathematical problems and
demonstrated that combining CoT and Python REPL through the markup language
enhances the reasoning capability of LLMs. Our approach enables LLMs to write
the markup language and perform advanced mathematical reasoning using only
zero-shot prompting.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 02:46:20 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Yamauchi",
"Ryutaro",
""
],
[
"Sonoda",
"Sho",
""
],
[
"Sannai",
"Akiyoshi",
""
],
[
"Kumagai",
"Wataru",
""
]
]
| new_dataset | 0.995063 |
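A minimal sketch of the markup-plus-REPL idea described in the abstract above: the LLM is prompted to emit XML-like tags, any Python blocks are executed, and their output is spliced back for the next reasoning step. The tag names and the surrounding prompting loop are illustrative assumptions, since the exact LPML tag set is defined in the paper.

```python
# Minimal sketch: extract <PYTHON>...</PYTHON> blocks from an LLM reply,
# execute them, and append their stdout as <OUTPUT> tags. Tag names are
# illustrative, not the paper's exact markup vocabulary.
import io, re, contextlib

def run_python_blocks(markup: str) -> str:
    """Execute each <PYTHON>...</PYTHON> block and append its captured stdout."""
    out = markup
    for code in re.findall(r"<PYTHON>(.*?)</PYTHON>", markup, flags=re.S):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})                      # fresh namespace per block
        out += f"\n<OUTPUT>{buf.getvalue().strip()}</OUTPUT>"
    return out

reply = "<THINK>Compute 12*34.</THINK><PYTHON>print(12*34)</PYTHON>"
print(run_python_blocks(reply))   # ...<OUTPUT>408</OUTPUT>
```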
2309.13080 | Manuel V. Loureiro | Elena Shushkevich, Long Mai, Manuel V. Loureiro, Steven Derby, Tri
Kurniawan Wijaya | SPICED: News Similarity Detection Dataset with Multiple Topics and
Complexity Levels | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Nowadays, the use of intelligent systems to detect redundant information in
news articles has become especially prevalent with the proliferation of news
media outlets in order to enhance user experience. However, the heterogeneous
nature of news can lead to spurious findings in these systems: Simple
heuristics such as whether a pair of news are both about politics can provide
strong but deceptive downstream performance. Segmenting news similarity
datasets into topics improves the training of these models by forcing them to
learn how to distinguish salient characteristics under more narrow domains.
However, this requires the existence of topic-specific datasets, which are
currently lacking. In this article, we propose a new dataset of similar news,
SPICED, which includes seven topics: Crime & Law, Culture & Entertainment,
Disasters & Accidents, Economy & Business, Politics & Conflicts, Science &
Technology, and Sports. Furthermore, we present four distinct approaches for
generating news pairs, which are used in the creation of datasets specifically
designed for news similarity detection task. We benchmarked the created
datasets using MinHash, BERT, SBERT, and SimCSE models.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 10:55:26 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Shushkevich",
"Elena",
""
],
[
"Mai",
"Long",
""
],
[
"Loureiro",
"Manuel V.",
""
],
[
"Derby",
"Steven",
""
],
[
"Wijaya",
"Tri Kurniawan",
""
]
]
| new_dataset | 0.997675 |
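A minimal sketch of the MinHash baseline mentioned in the abstract above, applied to a single news pair with the datasketch library; the tokenization and decision threshold are illustrative choices, not the benchmark's exact configuration.

```python
# Minimal sketch: estimate Jaccard similarity of two news texts with MinHash.
from datasketch import MinHash

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):   # naive whitespace tokenization
        m.update(token.encode("utf8"))
    return m

a = "The central bank raised interest rates by 25 basis points on Tuesday."
b = "Interest rates were raised by the central bank on Tuesday, up 25 basis points."
sim = minhash_of(a).jaccard(minhash_of(b))    # estimated Jaccard similarity
print(f"similar pair: {sim > 0.5} (score {sim:.2f})")  # 0.5 is an arbitrary cut-off
```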
2309.13085 | Jieyi Huang | Jieyi Huang, Chunhao Zhang, Yufei Wang, Mengyue Wu, Kenny Zhu | Does My Dog ''Speak'' Like Me? The Acoustic Correlation between Pet Dogs
and Their Human Owners | null | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How hosts' language influences their pets' vocalizations is an interesting yet
underexplored problem. This paper presents a preliminary investigation into the
possible correlation between domestic dog vocal expressions and their human
host's language environment. We first present a new dataset of Shiba Inu dog
vocals from YouTube, which provides 7500 clean sound clips, including the
contextual information of these vocals and their owners' speech clips, prepared with a
carefully-designed data processing pipeline. The contextual information
includes the scene category in which the vocal was recorded, the dog's location
and activity. With a classification task and prominent factor analysis, we
discover significant acoustic differences in the dog vocals from the two
language environments. We further identify some acoustic features from dog
vocalizations that are potentially correlated with their hosts' language patterns.
| [
{
"version": "v1",
"created": "Thu, 21 Sep 2023 23:49:21 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Huang",
"Jieyi",
""
],
[
"Zhang",
"Chunhao",
""
],
[
"Wang",
"Yufei",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Zhu",
"Kenny",
""
]
]
| new_dataset | 0.998924 |
2309.13165 | Qianglong Chen | Chenin Li, Qianglong Chen, Yin Zhang, Yifei Zhang, Hongxiang Yao | Large Language Models Are Also Good Prototypical Commonsense Reasoners | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Commonsense reasoning is a pivotal skill for large language models, yet it
presents persistent challenges in specific tasks requiring this competence.
Traditional fine-tuning approaches can be resource-intensive and potentially
compromise a model's generalization capacity. Furthermore, state-of-the-art
language models like GPT-3.5 and Claude are primarily accessible through API
calls, which makes fine-tuning models challenging. To address these challenges,
we draw inspiration from the outputs of large models for tailored tasks and
semi-automatically develop a set of novel prompts from several perspectives,
including task-relevance, supportive evidence generation (e.g. chain-of-thought
and knowledge), and diverse path decoding to aid the model. Experimental results on
the ProtoQA dataset demonstrate that with better-designed prompts we can achieve
a new state of the art (SOTA) on the ProtoQA leaderboard, improving the Max
Answer@1 score by 8% and the Max Incorrect@1 score by 4% (surpassing 50% for the
first time) compared to the previous SOTA model, and achieve improvements on
StrategyQA and CommonsenseQA2.0 (3% and 1%, respectively). Furthermore, with
the generated Chain-of-Thought and knowledge, we can improve the
interpretability of the model while also surpassing the previous SOTA models.
We hope that our work can provide insight for the NLP community to develop
better prompts and explore the potential of large language models for more
complex reasoning tasks.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 20:07:24 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Li",
"Chenin",
""
],
[
"Chen",
"Qianglong",
""
],
[
"Zhang",
"Yin",
""
],
[
"Zhang",
"Yifei",
""
],
[
"Yao",
"Hongxiang",
""
]
]
| new_dataset | 0.968084 |
2309.13168 | Geza Szabo | G\'eza Szab\'o, Bal\'azs T\'arnok, Levente Vajda, J\'ozsef Pet\H{o},
Attila Vid\'acs | FATHER: FActory on THE Road | In Proc., 35th European Simulation and Modelling Conference, Oct
27-29, 2021 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | In most factories today, robotic cells are deployed on firmly fixed bases
to avoid any external impact on the accuracy of production. In contrast to
that, we evaluate a futuristic concept where the whole robotic cell could work
on a moving platform. Imagine a trailer of a truck moving along the motorway
while exposed to heavy physical impacts due to maneuvering. The key question
here is how the robotic cell behaves and how productivity is affected. We
propose a system architecture (FATHER) and show some solutions including
network-related information and artificial intelligence to make the proposed
futuristic concept feasible to implement.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 20:16:11 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Szabó",
"Géza",
""
],
[
"Tárnok",
"Balázs",
""
],
[
"Vajda",
"Levente",
""
],
[
"Pető",
"József",
""
],
[
"Vidács",
"Attila",
""
]
]
| new_dataset | 0.990259 |
2309.13174 | Tianyu Wang | Bangyuan Liu, Tianyu Wang, Velin Kojouharov, Frank L. Hammond III,
Daniel I. Goldman | Robust self-propulsion in sand using simply controlled vibrating cubes | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Much of the Earth and many surfaces of extraterrestrial bodies are composed
of non-cohesive particulate matter. Locomoting on granular terrain is challenging
for common robotic devices, either wheeled or legged. In this work, we discover
a robust alternative locomotion mechanism on granular media -- generating
movement via self-vibration. To demonstrate the effectiveness of this
locomotion mechanism, we develop a cube-shaped robot with an embedded vibratory
motor and conduct systematic experiments on diverse granular terrains of
various particle properties. We investigate how locomotion changes as a
function of vibration frequency/intensity on granular terrains. Compared to
hard surfaces, we find such a vibratory locomotion mechanism enables the robot
to move faster and more stably on granular surfaces, facilitated by the
interaction between the body and surrounding granules. The simplicity in
structural design and controls of this robotic system indicates that vibratory
locomotion can be a valuable alternative way to produce robust locomotion on
granular terrains. We further demonstrate that such cube-shaped robots can be
used as modular units for morphologically structured vibratory robots with
capabilities of maneuverable forward and turning motions, showing potential
practical scenarios for robotic systems.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 20:31:07 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Liu",
"Bangyuan",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Kojouharov",
"Velin",
""
],
[
"Hammond",
"Frank L.",
"III"
],
[
"Goldman",
"Daniel I.",
""
]
]
| new_dataset | 0.994416 |
2309.13175 | Somalee Datta | Deepa Balraj, Ayin Vala, Shiying Hao, Melanie Philofsky, Anna
Tsvetkova, Elena Trach, Shravani Priya Narra, Oleg Zhuk, Mary Shamkhorskaya,
Jim Singer, Joseph Mesterhazy, Somalee Datta, Isabella Chu, David Rehkopf | American Family Cohort, a data resource description | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This manuscript is a research resource description and presents a large and
novel Electronic Health Records (EHR) data resource, the American Family Cohort
(AFC). The AFC data is derived from the Centers for Medicare and Medicaid Services
(CMS)-certified American Board of Family Medicine (ABFM) PRIME registry. The
PRIME registry is the largest national Qualified Clinical Data Registry (QCDR)
for Primary Care. The data is converted to a popular common data model, the
Observational Health Data Sciences and Informatics (OHDSI) Observational
Medical Outcomes Partnership (OMOP) Common Data Model (CDM).
The resource presents approximately 90 million encounters for 7.5 million
patients. All patients (100%) have age, gender, and address
information, and 73% report race. Nearly 93% of patients have lab data in LOINC,
86% have medication data in RxNorm, 93% have diagnoses in SNOMED and ICD, 81%
have procedures in HCPCS or CPT, and 61% have insurance information. The
richness, breadth, and diversity of this research-accessible and research-ready
data are expected to accelerate observational studies in many diverse areas. We
expect this resource to facilitate research for many years to come.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 20:36:41 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Balraj",
"Deepa",
""
],
[
"Vala",
"Ayin",
""
],
[
"Hao",
"Shiying",
""
],
[
"Philofsky",
"Melanie",
""
],
[
"Tsvetkova",
"Anna",
""
],
[
"Trach",
"Elena",
""
],
[
"Narra",
"Shravani Priya",
""
],
[
"Zhuk",
"Oleg",
""
],
[
"Shamkhorskaya",
"Mary",
""
],
[
"Singer",
"Jim",
""
],
[
"Mesterhazy",
"Joseph",
""
],
[
"Datta",
"Somalee",
""
],
[
"Chu",
"Isabella",
""
],
[
"Rehkopf",
"David",
""
]
]
| new_dataset | 0.972305 |
2309.13191 | Ehud Shapiro | Andrew Lewis-Pye, Oded Naor and Ehud Shapiro | Grassroots Flash: A Payment System for Grassroots Cryptocurrencies | null | null | null | null | cs.MA cs.CE cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The goal of grassroots cryptocurrencies is to provide a foundation with which
local digital economies can emerge independently of each other and of global
digital platforms and global cryptocurrencies; can form and grow without
initial capital or external credit; can trade with each other; and can
gradually merge into a global digital economy. Grassroots cryptocurrencies turn
mutual trust into liquidity and thus could be a powerful means for 'banking the
unbanked'. Grassroots cryptocurrencies have not been provided yet with a
payment system, which is the goal of this paper. Here, we present Grassroots
Flash, a payment system for grassroots cryptocurrencies that employs the
blocklace -- a DAG-like counterpart of the blockchain data structure. We
analyze its security (safety, liveness, and privacy) and efficiency, and prove that
it is indeed grassroots.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 21:39:09 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Lewis-Pye",
"Andrew",
""
],
[
"Naor",
"Oded",
""
],
[
"Shapiro",
"Ehud",
""
]
]
| new_dataset | 0.999831 |
2309.13193 | Jiangtao Gong | Ye Jin, Xiaoxi Shen, Huiling Peng, Xiaoan Liu, Jingli Qin, Jiayang Li,
Jintao Xie, Peizhong Gao, Guyue Zhou, Jiangtao Gong | SurrealDriver: Designing Generative Driver Agent Simulation Framework in
Urban Contexts based on Large Language Model | 12 pages, 8 figures | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Simulation plays a critical role in the research and development of
autonomous driving and intelligent transportation systems. However, the current
simulation platforms exhibit limitations in the realism and diversity of agent
behaviors, which impede the transfer of simulation outcomes to the real world.
In this paper, we propose a generative driver agent simulation framework based
on large language models (LLMs), capable of perceiving complex traffic
scenarios and providing realistic driving maneuvers. Notably, we conducted
interviews with 24 drivers and used their detailed descriptions of driving
behavior as chain-of-thought prompts to develop a `coach agent' module, which
can evaluate and assist driver agents in accumulating driving experience and
developing human-like driving styles. Through practical simulation experiments
and user experiments, we validate the feasibility of this framework in
generating reliable driver agents and analyze the roles of each module. The
results show that the framework with the full architecture decreased the collision
rate by 81.04% and increased human-likeness by 50%. Our research proposes
the first urban-context driver agent simulation framework based on LLMs and
provides valuable insights into the future of agent simulation for complex
tasks.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 21:56:00 GMT"
}
]
| 2023-09-26T00:00:00 | [
[
"Jin",
"Ye",
""
],
[
"Shen",
"Xiaoxi",
""
],
[
"Peng",
"Huiling",
""
],
[
"Liu",
"Xiaoan",
""
],
[
"Qin",
"Jingli",
""
],
[
"Li",
"Jiayang",
""
],
[
"Xie",
"Jintao",
""
],
[
"Gao",
"Peizhong",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Gong",
"Jiangtao",
""
]
]
| new_dataset | 0.992993 |