id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
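The rows below are arXiv metadata records paired with a classifier output (the `prediction` and `probability` columns). As a minimal, hypothetical sketch, assuming the same records are exported as a JSON Lines file with these field names (the file name `arxiv_new_dataset.jsonl` and the 0.99 threshold are illustrative assumptions, not part of the dataset), they could be loaded and filtered like this:

```python
# Illustrative sketch only: load records following the schema above from a
# JSON Lines export and keep high-confidence "new_dataset" predictions.
# File name and threshold are assumptions for illustration.
import json

kept = []
with open("arxiv_new_dataset.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        # Each record pairs arXiv metadata with a classifier label and probability.
        if rec.get("prediction") == "new_dataset" and rec.get("probability", 0.0) >= 0.99:
            kept.append(rec)

for rec in kept:
    # authors_parsed holds [last name, first name, suffix] triples.
    last, first = rec["authors_parsed"][0][:2]
    print(f'{rec["id"]}  {rec["title"]} ({first} {last} et al.)')
```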
2309.07917
|
Andrea Amaduzzi
|
Andrea Amaduzzi, Giuseppe Lisanti, Samuele Salti, Luigi Di Stefano
|
Looking at words and points with attention: a benchmark for
text-to-shape coherence
|
ICCV 2023 Workshop "AI for 3D Content Creation", Project page:
https://cvlab-unibo.github.io/CrossCoherence-Web/, 26 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While text-conditional 3D object generation and manipulation have seen rapid
progress, the evaluation of coherence between generated 3D shapes and input
textual descriptions lacks a clear benchmark. The reason is twofold: a) the low
quality of the textual descriptions in the only publicly available dataset of
text-shape pairs; b) the limited effectiveness of the metrics used to
quantitatively assess such coherence. In this paper, we propose a comprehensive
solution that addresses both weaknesses. Firstly, we employ large language
models to automatically refine textual descriptions associated with shapes.
Secondly, we propose a quantitative metric to assess text-to-shape coherence,
through cross-attention mechanisms. To validate our approach, we conduct a user
study and compare quantitatively our metric with existing ones. The refined
dataset, the new metric and a set of text-shape pairs validated by the user
study comprise a novel, fine-grained benchmark that we publicly release to
foster research on text-to-shape coherence of text-conditioned 3D generative
models. Benchmark available at
https://cvlab-unibo.github.io/CrossCoherence-Web/.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 17:59:48 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Amaduzzi",
"Andrea",
""
],
[
"Lisanti",
"Giuseppe",
""
],
[
"Salti",
"Samuele",
""
],
[
"Di Stefano",
"Luigi",
""
]
] |
new_dataset
| 0.999624 |
2309.07921
|
Linghao Chen
|
Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li,
Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, Hao Su
|
OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering
Evaluation on Real Objects
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce OpenIllumination, a real-world dataset containing over 108K
images of 64 objects with diverse materials, captured under 72 camera views and
a large number of different illuminations. For each image in the dataset, we
provide accurate camera parameters, illumination ground truth, and foreground
segmentation masks. Our dataset enables the quantitative evaluation of most
inverse rendering and material decomposition methods for real objects. We
examine several state-of-the-art inverse rendering methods on our dataset and
compare their performances. The dataset and code can be found on the project
page: https://oppo-us-research.github.io/OpenIllumination.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 17:59:53 GMT"
}
] | 2023-09-15T00:00:00 |
[
[
"Liu",
"Isabella",
""
],
[
"Chen",
"Linghao",
""
],
[
"Fu",
"Ziyang",
""
],
[
"Wu",
"Liwen",
""
],
[
"Jin",
"Haian",
""
],
[
"Li",
"Zhong",
""
],
[
"Wong",
"Chin Ming Ryan",
""
],
[
"Xu",
"Yi",
""
],
[
"Ramamoorthi",
"Ravi",
""
],
[
"Xu",
"Zexiang",
""
],
[
"Su",
"Hao",
""
]
] |
new_dataset
| 0.999851 |
2008.06448
|
Shilin He
|
Jieming Zhu, Shilin He, Pinjia He, Jinyang Liu, and Michael R. Lyu
|
Loghub: A Large Collection of System Log Datasets for AI-driven Log
Analytics
|
Accepted by ISSRE 2023, Loghub datasets available at
https://github.com/logpai/loghub
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logs have been widely adopted in software system development and maintenance
because of the rich runtime information they record. In recent years, the
increase of software size and complexity leads to the rapid growth of the
volume of logs. To handle these large volumes of logs efficiently and
effectively, a line of research focuses on developing intelligent and automated
log analysis techniques. However, only a few of these techniques have reached
successful deployments in industry due to the lack of public log datasets and
open benchmarking upon them. To fill this significant gap and facilitate more
research on AI-driven log analytics, we have collected and released loghub, a
large collection of system log datasets. In particular, loghub provides 19
real-world log datasets collected from a wide range of software systems,
including distributed systems, supercomputers, operating systems, mobile
systems, server applications, and standalone software. In this paper, we
summarize the statistics of these datasets, introduce some practical usage
scenarios of the loghub datasets, and present our benchmarking results on
loghub to benefit the researchers and practitioners in this field. At the time
of writing, the loghub datasets have been downloaded roughly 90,000 times in
total by hundreds of organizations from both industry
and academia. The loghub datasets are available at
https://github.com/logpai/loghub.
|
[
{
"version": "v1",
"created": "Fri, 14 Aug 2020 16:17:54 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 10:49:33 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Sep 2023 01:23:14 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Zhu",
"Jieming",
""
],
[
"He",
"Shilin",
""
],
[
"He",
"Pinjia",
""
],
[
"Liu",
"Jinyang",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
new_dataset
| 0.997463 |
2203.14092
|
Syed Afaq Ali Shah
|
Zeyad Khalifa, Syed Afaq Ali Shah
|
A large scale multi-view RGBD visual affordance learning dataset
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The physical and textural attributes of objects have been widely studied for
recognition, detection and segmentation tasks in computer vision. A number of
datasets, such as large scale ImageNet, have been proposed for feature learning
using data hungry deep neural networks and for hand-crafted feature extraction.
To intelligently interact with objects, robots and intelligent machines need
the ability to infer beyond the traditional physical/textural attributes, and
understand/learn visual cues, called visual affordances, for affordance
recognition, detection and segmentation. To date there is no publicly available
large dataset for visual affordance understanding and learning. In this paper,
we introduce a large scale multi-view RGBD visual affordance learning dataset,
a benchmark of 47210 RGBD images from 37 object categories, annotated with 15
visual affordance categories. To the best of our knowledge, this is the first
ever and the largest multi-view RGBD visual affordance learning dataset. We
benchmark the proposed dataset for affordance segmentation and recognition
tasks using popular Vision Transformer and Convolutional Neural Networks.
Several state-of-the-art deep learning networks are evaluated each for
affordance recognition and segmentation tasks. Our experimental results
showcase the challenging nature of the dataset and present definite prospects
for new and robust affordance learning algorithms. The dataset is publicly
available at https://sites.google.com/view/afaqshah/dataset.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 14:31:35 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jul 2023 13:48:43 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Sep 2023 01:18:40 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Khalifa",
"Zeyad",
""
],
[
"Shah",
"Syed Afaq Ali",
""
]
] |
new_dataset
| 0.99978 |
2205.07098
|
Sandipan Das
|
Sandipan Das, Navid Mahabadi, Addi Djikic, Cesar Nassir, Saikat
Chatterjee, Maurice Fallon
|
Extrinsic Calibration and Verification of Multiple Non-overlapping Field
of View Lidar Sensors
| null |
ICRA, Philadelphia, PA, USA, 2022, pp. 919-925
|
10.1109/ICRA46639.2022.9811704
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We demonstrate a multi-lidar calibration framework for large mobile platforms
that jointly calibrates the extrinsic parameters of non-overlapping
Field-of-View (FoV) lidar sensors, without the need for any external
calibration aid. The method starts by estimating the pose of each lidar in its
corresponding sensor frame in between subsequent timestamps. Since the pose
estimates from the lidars are not necessarily synchronous, we first align the
poses using a Dual Quaternion (DQ) based Screw Linear Interpolation. Afterward,
a Hand-Eye based calibration problem is solved using the DQ-based formulation
to recover the extrinsics. Furthermore, we verify the extrinsics by matching
chosen lidar semantic features, obtained by projecting the lidar data into the
camera perspective after time alignment using vehicle kinematics. Experimental
results on the data collected from a Scania vehicle [$\sim$ 1 Km sequence]
demonstrate the ability of our approach to obtain better calibration parameters
than the provided vehicle CAD model calibration parameters. This setup can also
be scaled to any combination of multiple lidars.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 17:12:25 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Das",
"Sandipan",
""
],
[
"Mahabadi",
"Navid",
""
],
[
"Djikic",
"Addi",
""
],
[
"Nassir",
"Cesar",
""
],
[
"Chatterjee",
"Saikat",
""
],
[
"Fallon",
"Maurice",
""
]
] |
new_dataset
| 0.986275 |
2207.03428
|
Andrzej Bia{\l}ecki
|
Andrzej Bia{\l}ecki, Natalia Jakubowska, Pawe{\l} Dobrowolski, Piotr
Bia{\l}ecki, Leszek Krupi\'nski, Andrzej Szczap, Robert Bia{\l}ecki, Jan
Gajewski
|
SC2EGSet: StarCraft II Esport Replay and Game-state Dataset
| null | null |
10.1038/s41597-023-02510-7
| null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
As a relatively new form of sport, esports offers unparalleled data
availability. Despite the vast amounts of data that are generated by game
engines, it can be challenging to extract them and verify their integrity for
the purposes of practical and scientific use.
Our work aims to open esports to a broader scientific community by supplying
raw and pre-processed files from StarCraft II esports tournaments. These files
can be used in statistical and machine learning modeling tasks and related to
various laboratory-based measurements (e.g., behavioral tests, brain imaging).
We have gathered publicly available game-engine generated "replays" of
tournament matches and performed data extraction and cleanup using a low-level
application programming interface (API) parser library.
Additionally, we open-sourced and published all the custom tools that were
developed in the process of creating our dataset. These tools include PyTorch
and PyTorch Lightning API abstractions to load and model the data.
Our dataset contains replays from major and premiere StarCraft II tournaments
since 2016. To prepare the dataset, we processed 55 tournament "replaypacks"
that contained 17930 files with game-state information. Based on initial
investigation of available StarCraft II datasets, we observed that our dataset
is the largest publicly available source of StarCraft II esports data upon its
publication.
Analysis of the extracted data holds promise for further Artificial
Intelligence (AI), Machine Learning (ML), psychological, Human-Computer
Interaction (HCI), and sports-related studies in a variety of supervised and
self-supervised tasks.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 16:52:53 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 21:58:45 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Białecki",
"Andrzej",
""
],
[
"Jakubowska",
"Natalia",
""
],
[
"Dobrowolski",
"Paweł",
""
],
[
"Białecki",
"Piotr",
""
],
[
"Krupiński",
"Leszek",
""
],
[
"Szczap",
"Andrzej",
""
],
[
"Białecki",
"Robert",
""
],
[
"Gajewski",
"Jan",
""
]
] |
new_dataset
| 0.999888 |
2207.04320
|
Shihao Zou
|
Shihao Zou, Yuanlu Xu, Chao Li, Lingni Ma, Li Cheng, Minh Vo
|
Snipper: A Spatiotemporal Transformer for Simultaneous Multi-Person 3D
Pose Estimation Tracking and Forecasting on a Video Snippet
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-person pose understanding from RGB videos involves three complex tasks:
pose estimation, tracking and motion forecasting. Intuitively, accurate
multi-person pose estimation facilitates robust tracking, and robust tracking
builds crucial history for correct motion forecasting. Most existing works
either focus on a single task or employ multi-stage approaches to solving
multiple tasks separately, which tends to make sub-optimal decisions at each
stage and also fails to exploit correlations among the three tasks. In this
paper, we propose Snipper, a unified framework to perform multi-person 3D pose
estimation, tracking, and motion forecasting simultaneously in a single stage.
We propose an efficient yet powerful deformable attention mechanism to
aggregate spatiotemporal information from the video snippet. Building upon this
deformable attention, a video transformer is learned to encode the
spatiotemporal features from the multi-frame snippet and to decode informative
pose features for multi-person pose queries. Finally, these pose queries are
regressed to predict multi-person pose trajectories and future motions in a
single shot. In the experiments, we show the effectiveness of Snipper on three
challenging public datasets where our generic model rivals specialized
state-of-the-art baselines for pose estimation, tracking, and forecasting.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2022 18:42:14 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2022 07:55:51 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 21:21:35 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Zou",
"Shihao",
""
],
[
"Xu",
"Yuanlu",
""
],
[
"Li",
"Chao",
""
],
[
"Ma",
"Lingni",
""
],
[
"Cheng",
"Li",
""
],
[
"Vo",
"Minh",
""
]
] |
new_dataset
| 0.964853 |
2210.01154
|
Sandipan Das
|
Sandipan Das, Navid Mahabadi, Maurice Fallon, Saikat Chatterjee
|
M-LIO: Multi-lidar, multi-IMU odometry with sensor dropout tolerance
|
For associated video check https://youtu.be/-xSbfaroEPs
|
2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA,
2023
|
10.1109/IV55152.2023.10186548
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a robust system for state estimation that fuses measurements from
multiple lidars and inertial sensors with GNSS data. To initiate the method, we
use the prior GNSS pose information. We then perform incremental motion in
real-time, which produces robust motion estimates in a global frame by fusing
lidar and IMU signals with GNSS translation components using a factor graph
framework. We also propose methods to account for signal loss with a novel
synchronization and fusion mechanism. To validate our approach, extensive tests
were carried out on data collected using Scania test vehicles (5 sequences for
a total of ~ 7 km). From our evaluations, we show an average improvement of 61%
in relative translation and 42% in rotational error compared to a state-of-the-art
estimator fusing a single lidar/inertial sensor pair.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 18:05:57 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 05:02:33 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Das",
"Sandipan",
""
],
[
"Mahabadi",
"Navid",
""
],
[
"Fallon",
"Maurice",
""
],
[
"Chatterjee",
"Saikat",
""
]
] |
new_dataset
| 0.999191 |
2210.15043
|
Matthew Edwards
|
Wentao Chen, Fuzhou Wang, Matthew Edwards
|
Active Countermeasures for Email Fraud
| null |
2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P)
|
10.1109/EuroSP57164.2023.00012
| null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
As a major component of online crime, email-based fraud is a threat that
causes substantial economic losses every year. To counteract these scammers,
volunteers called scam-baiters play the roles of victims, reply to scammers,
and try to waste their time and attention with long and unproductive
conversations. To curb email fraud and magnify the effectiveness of
scam-baiting, we developed and deployed an expandable scam-baiting mailserver
that can conduct scam-baiting activities automatically. We implemented three
reply strategies using three different models and conducted a one-month-long
experiment during which we elicited 150 messages from 130 different scammers.
We compare the performance of each strategy at attracting and holding the
attention of scammers, finding tradeoffs between human-written and
automatically-generated response strategies. We also demonstrate that scammers
can be engaged concurrently by multiple servers deploying these strategies in a
second experiment, which used two server instances to contact 92 different
scammers over 12 days. We release both our platform and a dataset containing
conversations between our automatic scam-baiters and real human scammers, to
support future work in preventing online fraud.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 21:20:13 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 19:39:30 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Chen",
"Wentao",
""
],
[
"Wang",
"Fuzhou",
""
],
[
"Edwards",
"Matthew",
""
]
] |
new_dataset
| 0.995519 |
2211.16799
|
Nan Xue
|
Bin Tan, Nan Xue, Tianfu Wu, Gui-Song Xia
|
NOPE-SAC: Neural One-Plane RANSAC for Sparse-View Planar 3D
Reconstruction
|
Accepted to IEEE TPAMI; Code is available at
https://github.com/IceTTTb/NopeSAC
| null |
10.1109/TPAMI.2023.3314745
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies the challenging two-view 3D reconstruction in a rigorous
sparse-view configuration, which suffers from insufficient correspondences
in the input image pairs for camera pose estimation. We present a novel Neural
One-PlanE RANSAC framework (termed NOPE-SAC in short) that exerts excellent
capability to learn one-plane pose hypotheses from 3D plane correspondences.
Building on top of a siamese plane detection network, our NOPE-SAC first
generates putative plane correspondences with a coarse initial pose. It then
feeds the learned 3D plane parameters of correspondences into shared MLPs to
estimate the one-plane camera pose hypotheses, which are subsequently reweighed
in a RANSAC manner to obtain the final camera pose. Because the neural
one-plane pose minimizes the number of plane correspondences for adaptive pose
hypotheses generation, it enables stable pose voting and reliable pose
refinement in a few plane correspondences for the sparse-view inputs. In the
experiments, we demonstrate that our NOPE-SAC significantly improves the camera
pose estimation for the two-view inputs with severe viewpoint changes, setting
several new state-of-the-art performances on two challenging benchmarks, i.e.,
MatterPort3D and ScanNet, for sparse-view 3D reconstruction. The source code is
released at https://github.com/IceTTTb/NopeSAC for reproducible research.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 07:33:14 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 02:48:16 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Tan",
"Bin",
""
],
[
"Xue",
"Nan",
""
],
[
"Wu",
"Tianfu",
""
],
[
"Xia",
"Gui-Song",
""
]
] |
new_dataset
| 0.999626 |
2301.10672
|
Pascal Mei{\ss}ner
|
Pascal Mei{\ss}ner, R\"udiger Dillmann
|
Implicit Shape Model Trees: Recognition of 3-D Indoor Scenes and
Prediction of Object Poses for Mobile Robots
|
22 pages, 24 figures; For associated video clips, see
https://www.youtube.com/playlist?list=PL3RZ_UQY_uOIfuIJNqdS8wDMjTjOAeOmu
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a mobile robot, we present an approach to recognize scenes in
arrangements of objects distributed over cluttered environments. Recognition is
made possible by letting the robot alternately search for objects and assign
found objects to scenes. Our scene model "Implicit Shape Model (ISM) trees"
allows us to solve these two tasks together. For the ISM trees, this article
presents novel algorithms for recognizing scenes and predicting the poses of
searched objects. We define scenes as sets of objects, where some objects are
connected by 3-D spatial relations. In previous work, we recognized scenes
using single ISMs. However, these ISMs were prone to false positives. To
address this problem, we introduced ISM trees, a hierarchical model that
includes multiple ISMs. Through the recognition algorithm it contributes, this
article ultimately enables the use of ISM trees in scene recognition. We intend
to enable users to generate ISM trees from object arrangements demonstrated by
humans. The lack of a suitable algorithm is overcome by the introduction of an
ISM tree generation algorithm. In scene recognition, it is usually assumed that
image data is already available. However, this is not always the case for
robots. For this reason, we combined scene recognition and object search in
previous work. However, we did not provide an efficient algorithm to link the
two tasks. This article introduces such an algorithm that predicts the poses of
searched objects with relations. Experiments show that our overall approach
enables robots to find and recognize object arrangements that cannot be
perceived from a single viewpoint.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 16:20:56 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 17:40:38 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Meißner",
"Pascal",
""
],
[
"Dillmann",
"Rüdiger",
""
]
] |
new_dataset
| 0.999092 |
2303.18013
|
Zijun Long
|
Zijun Long, Zaiqiao Meng, Gerardo Aragon Camarasa, Richard McCreadie
|
LaCViT: A Label-aware Contrastive Training Framework for Vision
Transformers
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Vision Transformers have been incredibly effective when tackling computer
vision tasks due to their ability to model long feature dependencies. By using
large-scale training data and various self-supervised signals (e.g., masked
random patches), vision transformers provide state-of-the-art performance on
several benchmarking datasets, such as ImageNet-1k and CIFAR-10. However, these
vision transformers pretrained over general large-scale image corpora could
only produce an anisotropic representation space, limiting their
generalizability and transferability to the target downstream tasks. In this
paper, we propose a simple and effective Label-aware Contrastive Training
framework LaCViT, which improves the isotropy of the pretrained representation
space for vision transformers, thereby enabling more effective transfer
learning amongst a wide range of image classification tasks. Through
experimentation over five standard image classification datasets, we
demonstrate that LaCViT-trained models outperform the original pretrained
baselines by around 9% absolute Accuracy@1, and consistent improvements can be
observed when applying LaCViT to our three evaluated vision transformers.
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 12:38:08 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 20:59:10 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Long",
"Zijun",
""
],
[
"Meng",
"Zaiqiao",
""
],
[
"Camarasa",
"Gerardo Aragon",
""
],
[
"McCreadie",
"Richard",
""
]
] |
new_dataset
| 0.99861 |
2305.01303
|
No\'e P\'erez-Higueras
|
No\'e P\'erez-Higueras and Roberto Otero and Fernando Caballero and
Luis Merino
|
HuNavSim: A ROS 2 Human Navigation Simulator for Benchmarking
Human-Aware Robot Navigation
|
Preprint version of the paper accepted in the RA-L Journal
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents the Human Navigation Simulator (HuNavSim), a novel
open-source tool for the simulation of different human-agent navigation
behaviors in scenarios with mobile robots. The tool, the first programmed under
the ROS 2 framework, can be employed along with different well-known robotics
simulators like Gazebo. The main goal is to ease the development and evaluation
of human-aware robot navigation systems in simulation. Besides a general
human-navigation model, HuNavSim includes, as a novelty, a rich set of
individual and realistic human navigation behaviors and a complete set of
metrics for social navigation benchmarking.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 10:26:51 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 14:13:47 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Sep 2023 13:15:44 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Pérez-Higueras",
"Noé",
""
],
[
"Otero",
"Roberto",
""
],
[
"Caballero",
"Fernando",
""
],
[
"Merino",
"Luis",
""
]
] |
new_dataset
| 0.998572 |
2305.07748
|
Francesco Roscia
|
Francesco Roscia, Michele Focchi, Andrea Del Prete, Darwin G.
Caldwell, and Claudio Semini
|
Reactive Landing Controller for Quadruped Robots
|
8 pages, 5 figures, 2 tables, submitted to RA-L, accompanying video at
https://youtu.be/KnmNbhkOKWI
|
IEEE Robotics and Automation Letters (RA-L), 2023
| null |
10.1109/LRA.2023.3313919
|
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Quadruped robots are machines intended for challenging and harsh
environments. Despite the progress in locomotion strategy, safely recovering
from unexpected falls or planned drops is still an open problem. It is further
made more difficult when high horizontal velocities are involved. In this work,
we propose an optimization-based reactive Landing Controller that uses only
proprioceptive measures for torque-controlled quadruped robots that free-fall
on a flat horizontal ground, knowing neither the distance to the landing
surface nor the flight time. Based on an estimate of the Center of Mass
horizontal velocity, the method uses the Variable Height Springy Inverted
Pendulum model for continuously recomputing the feet position while the robot
is falling. In this way, the quadruped is ready to attain a successful landing
in all directions, even in the presence of significant horizontal velocities.
The method is demonstrated to dramatically enlarge the region of horizontal
velocities that can be dealt with by a naive approach that keeps the feet still
during the airborne stage. To the best of our knowledge, this is the first time
that a quadruped robot can successfully recover from falls with horizontal
velocities up to 3 m/s in simulation. Experiments prove that the used platform,
Go1, can successfully attain a stable standing configuration from falls with
various horizontal velocities and different angular perturbations.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 20:20:29 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 10:16:06 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 17:21:08 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Roscia",
"Francesco",
""
],
[
"Focchi",
"Michele",
""
],
[
"Del Prete",
"Andrea",
""
],
[
"Caldwell",
"Darwin G.",
""
],
[
"Semini",
"Claudio",
""
]
] |
new_dataset
| 0.99816 |
2307.09143
|
Yuki Kondo
|
Yuki Kondo, Norimichi Ukita, Takayuki Yamaguchi, Hao-Yu Hou, Mu-Yi
Shen, Chia-Chi Hsu, En-Ming Huang, Yu-Chen Huang, Yu-Cheng Xia, Chien-Yao
Wang, Chun-Yi Lee, Da Huo, Marc A. Kastner, Tingwei Liu, Yasutomo Kawanishi,
Takatsugu Hirayama, Takahiro Komamizu, Ichiro Ide, Yosuke Shinya, Xinyao Liu,
Guang Liang, Syusuke Yasui
|
MVA2023 Small Object Detection Challenge for Spotting Birds: Dataset,
Methods, and Results
|
This paper is included in the proceedings of the 18th International
Conference on Machine Vision Applications (MVA2023). It will be officially
published at a later date. Project page :
https://www.mva-org.jp/mva2023/challenge
|
2023 18th International Conference on Machine Vision and
Applications (MVA)
|
10.23919/MVA57639.2023.10215935
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Small Object Detection (SOD) is an important machine vision topic because (i)
a variety of real-world applications require object detection for distant
objects and (ii) SOD is a challenging task due to the noisy, blurred, and
less-informative image appearances of small objects. This paper proposes a new
SOD dataset consisting of 39,070 images including 137,121 bird instances, which
is called the Small Object Detection for Spotting Birds (SOD4SB) dataset. The
detail of the challenge with the SOD4SB dataset is introduced in this paper. In
total, 223 participants joined this challenge. This paper briefly introduces
the award-winning methods. The dataset, the baseline code, and the website for
evaluation on the public testset are publicly available.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 10:52:24 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Kondo",
"Yuki",
""
],
[
"Ukita",
"Norimichi",
""
],
[
"Yamaguchi",
"Takayuki",
""
],
[
"Hou",
"Hao-Yu",
""
],
[
"Shen",
"Mu-Yi",
""
],
[
"Hsu",
"Chia-Chi",
""
],
[
"Huang",
"En-Ming",
""
],
[
"Huang",
"Yu-Chen",
""
],
[
"Xia",
"Yu-Cheng",
""
],
[
"Wang",
"Chien-Yao",
""
],
[
"Lee",
"Chun-Yi",
""
],
[
"Huo",
"Da",
""
],
[
"Kastner",
"Marc A.",
""
],
[
"Liu",
"Tingwei",
""
],
[
"Kawanishi",
"Yasutomo",
""
],
[
"Hirayama",
"Takatsugu",
""
],
[
"Komamizu",
"Takahiro",
""
],
[
"Ide",
"Ichiro",
""
],
[
"Shinya",
"Yosuke",
""
],
[
"Liu",
"Xinyao",
""
],
[
"Liang",
"Guang",
""
],
[
"Yasui",
"Syusuke",
""
]
] |
new_dataset
| 0.999814 |
2307.09225
|
Chenyu Tang
|
Chenyu Tang, Wentian Yi, Edoardo Occhipinti, Yanning Dai, Shuo Gao,
and Luigi G. Occhipinti
|
Human Body Digital Twin: A Master Plan
|
3 figures, 2 boxes
| null | null | null |
cs.AI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
A human body digital twin (DT) is a virtual representation of an individual's
physiological state, created using real-time data from sensors and medical test
devices, with the purpose of simulating, predicting, and optimizing health
outcomes through advanced analytics and simulations. The human body DT has the
potential to revolutionize healthcare and wellness, but its responsible and
effective implementation requires consideration of various factors. This
article presents a comprehensive overview of the current status and future
prospects of the human body DT and proposes a five-level roadmap for its
development. The roadmap covers the development of various components, such as
wearable devices, data collection, data analysis, and decision-making systems.
The article also highlights the necessary support, security, cost, and ethical
considerations that must be addressed in order to ensure responsible and
effective implementation of the human body DT. The proposed roadmap provides a
framework for guiding future development and offers a unique perspective on the
future of the human body DT, facilitating new interdisciplinary research and
innovative solutions in this rapidly evolving field.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 12:57:35 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 19:57:52 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Tang",
"Chenyu",
""
],
[
"Yi",
"Wentian",
""
],
[
"Occhipinti",
"Edoardo",
""
],
[
"Dai",
"Yanning",
""
],
[
"Gao",
"Shuo",
""
],
[
"Occhipinti",
"Luigi G.",
""
]
] |
new_dataset
| 0.995561 |
2307.10475
|
Parth Patwa
|
S Suryavardan, Shreyash Mishra, Megha Chakraborty, Parth Patwa, Anku
Rani, Aman Chadha, Aishwarya Reganti, Amitava Das, Amit Sheth, Manoj
Chinnakotla, Asif Ekbal, Srijan Kumar
|
Findings of Factify 2: Multimodal Fake News Detection
|
Defactify2 @AAAI 2023
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With social media usage growing exponentially in the past few years, fake
news has also become extremely prevalent. The detrimental impact of fake news
emphasizes the need for research focused on automating the detection of false
information and verifying its accuracy. In this work, we present the outcome of
the Factify 2 shared task, which provides a multi-modal fact verification and
satire news dataset, as part of the DeFactify 2 workshop at AAAI'23. The data
calls for a comparison-based approach to the task by pairing social media
claims with supporting documents, with both text and image, divided into 5
classes based on multi-modal relations. In the second iteration of this task we
had over 60 participants and 9 final test-set submissions. The best
performances came from the use of DeBERTa for text and Swinv2 and CLIP for
image. The highest F1 score averaged for all five classes was 81.82%.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2023 22:14:49 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 18:51:05 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Suryavardan",
"S",
""
],
[
"Mishra",
"Shreyash",
""
],
[
"Chakraborty",
"Megha",
""
],
[
"Patwa",
"Parth",
""
],
[
"Rani",
"Anku",
""
],
[
"Chadha",
"Aman",
""
],
[
"Reganti",
"Aishwarya",
""
],
[
"Das",
"Amitava",
""
],
[
"Sheth",
"Amit",
""
],
[
"Chinnakotla",
"Manoj",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Kumar",
"Srijan",
""
]
] |
new_dataset
| 0.993167 |
2308.00802
|
Stergios Chatzikyriakidis
|
Stergios Chatzikyriakidis and Chatrine Qwaider and Ilias Kolokousis
and Christina Koula and Dimitris Papadakis and Efthymia Sakellariou
|
GRDD: A Dataset for Greek Dialectal NLP
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a dataset for the computational study of a number
of Modern Greek dialects. It consists of raw text data from four dialects of
Modern Greek, Cretan, Pontic, Northern Greek and Cypriot Greek. The dataset is
of considerable size, albeit imbalanced, and presents the first attempt to
create large scale dialectal resources of this type for Modern Greek dialects.
We then use the dataset to perform dialect identification. We experiment with
traditional ML algorithms, as well as simple DL architectures. The results show
very good performance on the task, potentially revealing that the dialects in
question have distinct enough characteristics allowing even simple ML models to
perform well on the task. Error analysis is performed for the top performing
algorithms showing that in a number of cases the errors are due to insufficient
dataset cleaning.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 19:34:18 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 14:43:45 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Chatzikyriakidis",
"Stergios",
""
],
[
"Qwaider",
"Chatrine",
""
],
[
"Kolokousis",
"Ilias",
""
],
[
"Koula",
"Christina",
""
],
[
"Papadakis",
"Dimitris",
""
],
[
"Sakellariou",
"Efthymia",
""
]
] |
new_dataset
| 0.999873 |
2308.09285
|
Hui Miao
|
Hui Miao, Yuanfang Guo and Yunhong Wang
|
RFDforFin: Robust Deep Forgery Detection for GAN-generated Fingerprint
Images
|
10 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of image generation technologies, the malicious
abuse of GAN-generated fingerprint images poses a significant threat to public
safety in certain circumstances. Although existing universal deep forgery
detection approaches can be applied to detect fake fingerprint images, they are
easily attacked and have poor robustness.
Meanwhile, there is no specifically designed deep forgery detection method for
fingerprint images. In this paper, we propose the first deep forgery detection
approach for fingerprint images, which combines unique ridge features of
fingerprint and generation artifacts of the GAN-generated images, to the best
of our knowledge. Specifically, we first construct a ridge stream, which
exploits the grayscale variations along the ridges to extract unique
fingerprint-specific features. Then, we construct a generation artifact stream,
in which the FFT-based spectrums of the input fingerprint images are exploited,
to extract more robust generation artifact features. At last, the unique ridge
features and generation artifact features are fused for binary classification
(i.e., real or fake). Comprehensive experiments demonstrate that our proposed
approach is effective and robust with low complexities.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 04:05:18 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 14:27:42 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Miao",
"Hui",
""
],
[
"Guo",
"Yuanfang",
""
],
[
"Wang",
"Yunhong",
""
]
] |
new_dataset
| 0.995379 |
2308.09392
|
Jehyun Lee
|
Jehyun Lee, Zhe Xin, Melanie Ng Pei See, Kanav Sabharwal, Giovanni
Apruzzese, Dinil Mon Divakaran
|
Attacking logo-based phishing website detectors with adversarial
perturbations
|
To appear in ESORICS 2023
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent times have witnessed the rise of anti-phishing schemes powered by deep
learning (DL). In particular, logo-based phishing detectors rely on DL models
from Computer Vision to identify logos of well-known brands on webpages, to
detect malicious webpages that imitate a given brand. For instance, Siamese
networks have demonstrated notable performance for these tasks, enabling the
corresponding anti-phishing solutions to detect even "zero-day" phishing
webpages. In this work, we take the next step of studying the robustness of
logo-based phishing detectors against adversarial ML attacks. We propose a
novel attack exploiting generative adversarial perturbations to craft
"adversarial logos" that evade phishing detectors. We evaluate our attacks
through: (i) experiments on datasets containing real logos, to evaluate the
robustness of state-of-the-art phishing detectors; and (ii) user studies to
gauge whether our adversarial logos can deceive human eyes. The results show
that our proposed attack is capable of crafting perturbed logos subtle enough
to evade various DL models, achieving an evasion rate of up to 95%. Moreover,
users are not able to spot significant differences between generated
adversarial logos and original ones.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 08:49:11 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 03:50:25 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Lee",
"Jehyun",
""
],
[
"Xin",
"Zhe",
""
],
[
"See",
"Melanie Ng Pei",
""
],
[
"Sabharwal",
"Kanav",
""
],
[
"Apruzzese",
"Giovanni",
""
],
[
"Divakaran",
"Dinil Mon",
""
]
] |
new_dataset
| 0.990712 |
2308.13442
|
Reza Azad
|
Reza Azad, Amirhossein Kazerouni, Alaa Sulaiman, Afshin Bozorgpour,
Ehsan Khodapanah Aghdam, Abin Jose, Dorit Merhof
|
Unlocking Fine-Grained Details with Wavelet-based High-Frequency
Enhancement in Transformers
|
Accepted in MICCAI 2023 workshop MLMI
|
MICCAI 2023 workshop
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Medical image segmentation is a critical task that plays a vital role in
diagnosis, treatment planning, and disease monitoring. Accurate segmentation of
anatomical structures and abnormalities from medical images can aid in the
early detection and treatment of various diseases. In this paper, we address
the local feature deficiency of the Transformer model by carefully re-designing
the self-attention map to produce accurate dense prediction in medical images.
To this end, we first apply the wavelet transformation to decompose the input
feature map into low-frequency (LF) and high-frequency (HF) subbands. The LF
segment is associated with coarse-grained features while the HF components
preserve fine-grained features such as texture and edge information. Next, we
reformulate the self-attention operation using the efficient Transformer to
perform both spatial and context attention on top of the frequency
representation. Furthermore, to intensify the importance of the boundary
information, we impose an additional attention map by creating a Gaussian
pyramid on top of the HF components. Moreover, we propose a multi-scale context
enhancement block within skip connections to adaptively model inter-scale
dependencies to overcome the semantic gap among stages of the encoder and
decoder modules. Through comprehensive experiments, we demonstrate the
effectiveness of our strategy on multi-organ and skin lesion segmentation
benchmarks. The implementation code will be available upon acceptance.
\href{https://github.com/mindflow-institue/WaveFormer}{GitHub}.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 15:42:19 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 18:41:16 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Azad",
"Reza",
""
],
[
"Kazerouni",
"Amirhossein",
""
],
[
"Sulaiman",
"Alaa",
""
],
[
"Bozorgpour",
"Afshin",
""
],
[
"Aghdam",
"Ehsan Khodapanah",
""
],
[
"Jose",
"Abin",
""
],
[
"Merhof",
"Dorit",
""
]
] |
new_dataset
| 0.996078 |
2309.02969
|
Christodoulos Peltekis
|
C. Peltekis, D. Filippas, G. Dimitrakopoulos, C. Nicopoulos
|
The Case for Asymmetric Systolic Array Floorplanning
|
CNNA 2023
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widespread proliferation of deep learning applications has triggered the
need to accelerate them directly in hardware. General Matrix Multiplication
(GEMM) kernels are elemental deep-learning constructs and they inherently map
onto Systolic Arrays (SAs). SAs are regular structures that are well-suited for
accelerating matrix multiplications. Typical SAs use a pipelined array of
Processing Elements (PEs), which communicate with local connections and
pre-orchestrated data movements. In this work, we show that the physical layout
of SAs should be asymmetric to minimize wirelength and improve energy
efficiency. The floorplan of the SA adjusts better to the asymmetric widths of
the horizontal and vertical data buses and their switching activity profiles.
It is demonstrated that such physically asymmetric SAs reduce interconnect
power by 9.1% when executing state-of-the-art Convolutional Neural Network
(CNN) layers, as compared to SAs of the same size but with a square (i.e.,
symmetric) layout. The savings in interconnect power translate, in turn, to
2.1% overall power savings.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 13:08:36 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 12:59:51 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Peltekis",
"C.",
""
],
[
"Filippas",
"D.",
""
],
[
"Dimitrakopoulos",
"G.",
""
],
[
"Nicopoulos",
"C.",
""
]
] |
new_dataset
| 0.978311 |
2309.04573
|
Shyam Nandan Rai
|
Shyam Nandan Rai, Fabio Cermelli, Barbara Caputo, Carlo Masone
|
Mask2Anomaly: Mask Transformer for Universal Open-set Segmentation
|
16 pages. arXiv admin note: substantial text overlap with
arXiv:2307.13316
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Segmenting unknown or anomalous object instances is a critical task in
autonomous driving applications, and it is approached traditionally as a
per-pixel classification problem. However, reasoning individually about each
pixel without considering their contextual semantics results in high
uncertainty around the objects' boundaries and numerous false positives. We
propose a paradigm change by shifting from a per-pixel classification to a mask
classification. Our mask-based method, Mask2Anomaly, demonstrates the
feasibility of integrating a mask-classification architecture to jointly
address anomaly segmentation, open-set semantic segmentation, and open-set
panoptic segmentation. Mask2Anomaly includes several technical novelties that
are designed to improve the detection of anomalies/unknown objects: i) a global
masked attention module to focus individually on the foreground and background
regions; ii) a mask contrastive learning that maximizes the margin between an
anomaly and known classes; iii) a mask refinement solution to reduce false
positives; and iv) a novel approach to mine unknown instances based on the
mask-architecture properties. By comprehensive qualitative and quantitative
evaluation, we show Mask2Anomaly achieves new state-of-the-art results across
the benchmarks of anomaly segmentation, open-set semantic segmentation, and
open-set panoptic segmentation.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 20:07:18 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 14:36:02 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Rai",
"Shyam Nandan",
""
],
[
"Cermelli",
"Fabio",
""
],
[
"Caputo",
"Barbara",
""
],
[
"Masone",
"Carlo",
""
]
] |
new_dataset
| 0.996672 |
2309.05519
|
Hao Fei
|
Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, Tat-Seng Chua
|
NExT-GPT: Any-to-Any Multimodal LLM
|
work in progress
| null | null | null |
cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While recently Multimodal Large Language Models (MM-LLMs) have made exciting
strides, they mostly fall prey to the limitation of only input-side multimodal
understanding, without the ability to produce content in multiple modalities.
As we humans always perceive the world and communicate with people through
various modalities, developing any-to-any MM-LLMs capable of accepting and
delivering content in any modality becomes essential to human-level AI. To fill
the gap, we present an end-to-end general-purpose any-to-any MM-LLM system,
NExT-GPT. We connect an LLM with multimodal adaptors and different diffusion
decoders, enabling NExT-GPT to perceive inputs and generate outputs in
arbitrary combinations of text, images, videos, and audio. By leveraging the
existing well-trained highly-performing encoders and decoders, NExT-GPT is
tuned with only a small number of parameters (1%) in certain projection layers,
which not only benefits low-cost training but also facilitates convenient
expansion to more potential modalities. Moreover, we introduce a
modality-switching instruction tuning (MosIT) and manually curate a
high-quality dataset for MosIT, based on which NExT-GPT is empowered with
complex cross-modal semantic understanding and content generation. Overall, our
research showcases the promising possibility of building an AI agent capable of
modeling universal modalities, paving the way for more human-like AI research
in the community. Project page: https://next-gpt.github.io/
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 15:02:25 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 16:49:34 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Wu",
"Shengqiong",
""
],
[
"Fei",
"Hao",
""
],
[
"Qu",
"Leigang",
""
],
[
"Ji",
"Wei",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
new_dataset
| 0.999309 |
2309.06229
|
Zimin Chen
|
Ye He, Zimin Chen and Claire Le Goues
|
PreciseBugCollector: Extensible, Executable and Precise Bug-fix
Collection
|
Accepted at the industry challenge track of ASE 2023
| null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Bug datasets are vital for enabling deep learning techniques to address
software maintenance tasks related to bugs. However, existing bug datasets
suffer from precision and scale limitations: they are either small-scale but
precise with manual validation or large-scale but imprecise with simple commit
message processing. In this paper, we introduce PreciseBugCollector, a precise,
multi-language bug collection approach that overcomes these two limitations.
PreciseBugCollector is based on two novel components: a) A bug tracker to map
the codebase repositories with external bug repositories to trace bug type
information, and b) A bug injector to generate project-specific bugs by
injecting noise into the correct codebases and then executing them against
their test suites to obtain test failure messages.
We implement PreciseBugCollector against three sources: 1) A bug tracker that
links to the National Vulnerability Database (NVD) to collect general-wise
vulnerabilities, 2) A bug tracker that links to OSS-Fuzz to collect
general-wise bugs, and 3) A bug injector based on 16 injection rules to
generate project-wise bugs. To date, PreciseBugCollector comprises 1057818 bugs
extracted from 2968 open-source projects. Of these, 12602 bugs are sourced from
bug repositories (NVD and OSS-Fuzz), while the remaining 1045216
project-specific bugs are generated by the bug injector. Considering the
challenge objectives, we argue that a bug injection approach is highly valuable
for the industrial setting, since project-specific bugs align with domain
knowledge, share the same codebase, and adhere to the coding style employed in
industrial projects.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 13:47:44 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2023 14:20:35 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"He",
"Ye",
""
],
[
"Chen",
"Zimin",
""
],
[
"Goues",
"Claire Le",
""
]
] |
new_dataset
| 0.99956 |
2309.06457
|
Wei Jiang
|
Wei Jiang and Hans D. Schotten
|
Opportunistic Reflection in Reconfigurable Intelligent Surface-Assisted
Wireless Networks
|
IEEE PIMRC 2023, Toronto, Canada. arXiv admin note: text overlap with
arXiv:2303.09183. text overlap with arXiv:2309.06326
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper focuses on multiple-access protocol design in a wireless network
assisted by multiple reconfigurable intelligent surfaces (RISs). By extending
the existing approaches in single-user or single-RIS cases, we present two
benchmark schemes for this multi-user multi-RIS scenario. Inspecting their
shortcomings, a simple but efficient method coined opportunistic multi-user
reflection (OMUR) is proposed. The key idea is to opportunistically select the
best user as the anchor for optimizing the RISs, and non-orthogonally
transmitting all users' signals simultaneously. A simplified version of OMUR
exploiting random phase shifts is also proposed to avoid the complexity of RIS
channel estimation.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 15:45:23 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Jiang",
"Wei",
""
],
[
"Schotten",
"Hans D.",
""
]
] |
new_dataset
| 0.988568 |
2309.06494
|
Matti Vahs
|
Matti Vahs and Jana Tumova
|
Non-smooth Control Barrier Functions for Stochastic Dynamical Systems
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Uncertainties arising in various control systems, such as robots that are
subject to unknown disturbances or environmental variations, pose significant
challenges for ensuring system safety, such as collision avoidance. At the same
time, safety specifications are getting more and more complex, e.g., by
composing multiple safety objectives through Boolean operators resulting in
non-smooth descriptions of safe sets. Control Barrier Functions (CBFs) have
emerged as a control technique to provably guarantee system safety. In most
settings, they rely on an assumption of having deterministic dynamics and
smooth safe sets. This paper relaxes these two assumptions by extending CBFs to
encompass control systems with stochastic dynamics and safe sets defined by
non-smooth functions. By explicitly considering the stochastic nature of system
dynamics and accommodating complex safety specifications, our method enables
the design of safe control strategies in uncertain and complex systems. We
provide formal guarantees on the safety of the system by leveraging the
theoretical foundations of stochastic CBFs and non-smooth safe sets. Numerical
simulations demonstrate the effectiveness of the approach in various scenarios.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 18:07:27 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Vahs",
"Matti",
""
],
[
"Tumova",
"Jana",
""
]
] |
new_dataset
| 0.988087 |
2309.06495
|
Wanling Gao
|
Fei Tang, Wanling Gao, Luzhou Peng, Jianfeng Zhan
|
AGIBench: A Multi-granularity, Multimodal, Human-referenced,
Auto-scoring Benchmark for Large Language Models
|
14 pages
| null | null | null |
cs.CL cs.AI cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large language models (LLMs) like ChatGPT have revealed amazing intelligence.
How to evaluate the question-solving abilities of LLMs and their degrees of
intelligence is a hot-spot but challenging issue. First, the question-solving
abilities are interlaced with different ability branches like understanding and
massive knowledge categories like mathematics. Second, the inputs of questions
are multimodal and may involve text and images. Third, the response format of
LLMs is diverse and thus poses great challenges for result extraction and
evaluation. In this paper, we propose AGIBench -- a multi-granularity,
multimodal, human-referenced, and auto-scoring benchmarking methodology for
LLMs. Instead of a collection of blended questions, AGIBench focuses on three
typical ability branches and adopts a four-tuple <ability branch, knowledge,
difficulty, modal> to label the attributes of each question. First, it supports
multi-granularity benchmarking, e.g., per-question, per-ability branch,
per-knowledge, per-modal, per-dataset, and per-difficulty level granularities.
Second, it contains multimodal input, including text and images. Third, it
classifies all the questions into five degrees of difficulty according to the
average accuracy rate of abundant educated humans (human-referenced). Fourth,
it adopts zero-shot learning to avoid introducing additional unpredictability
and provides an auto-scoring method to extract and judge the result. Finally,
it defines multi-dimensional metrics, including accuracy under the average,
worst, best, and majority voting cases, and repeatability. AGIBench is
publicly available from \url{https://www.benchcouncil.org/agibench}.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 13:43:37 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Tang",
"Fei",
""
],
[
"Gao",
"Wanling",
""
],
[
"Peng",
"Luzhou",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.97399 |
2309.06511
|
Aaditya Kharel
|
Aaditya Kharel, Manas Paranjape, Aniket Bera
|
DF-TransFusion: Multimodal Deepfake Detection via Lip-Audio
Cross-Attention and Facial Self-Attention
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the rise in manipulated media, deepfake detection has become an
imperative task for preserving the authenticity of digital content. In this
paper, we present a novel multi-modal audio-video framework designed to
concurrently process audio and video inputs for deepfake detection tasks. Our
model capitalizes on lip synchronization with input audio through a
cross-attention mechanism while extracting visual cues via a fine-tuned VGG-16
network. Subsequently, a transformer encoder network is employed to perform
facial self-attention. We conduct multiple ablation studies highlighting
different strengths of our approach. Our multi-modal methodology outperforms
state-of-the-art multi-modal deepfake detection techniques in terms of F-1 and
per-video AUC scores.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 18:37:05 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Kharel",
"Aaditya",
""
],
[
"Paranjape",
"Manas",
""
],
[
"Bera",
"Aniket",
""
]
] |
new_dataset
| 0.995637 |
2309.06513
|
Benjamin Reidys
|
Benjamin Reidys, Yuqi Xue, Daixuan Li, Bharat Sukhwani, Wen-mei Hwu,
Deming Chen, Sameh Asaad, Jian Huang
|
RackBlox: A Software-Defined Rack-Scale Storage System with
Network-Storage Co-Design
|
14 pages. Published in ACM SIGOPS 29th Symposium on
Operating Systems Principles (SOSP'23)
| null |
10.1145/3600006.3613170
| null |
cs.OS cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software-defined networking (SDN) and software-defined flash (SDF) have been
serving as the backbone of modern data centers. They are managed separately to
handle I/O requests. At first glance, this is a reasonable design by following
the rack-scale hierarchical design principles. However, it suffers from
suboptimal end-to-end performance, due to the lack of coordination between SDN
and SDF.
In this paper, we co-design the SDN and SDF stack by redefining the functions
of their control plane and data plane, and splitting them up within a new
architecture named RackBlox. RackBlox decouples the storage management
functions of flash-based solid-state drives (SSDs), and allows the SDN to track
and manage the states of SSDs in a rack. Therefore, we can enable the state
sharing between SDN and SDF, and facilitate global storage resource management.
RackBlox has three major components: (1) coordinated I/O scheduling, in which
it dynamically adjusts the I/O scheduling in the storage stack with the
measured and predicted network latency, such that it can coordinate the effort
of I/O scheduling across the network and storage stack for achieving
predictable end-to-end performance; (2) coordinated garbage collection (GC), in
which it will coordinate the GC activities across the SSDs in a rack to
minimize their impact on incoming I/O requests; (3) rack-scale wear leveling,
in which it enables global wear leveling among SSDs in a rack by periodically
swapping data, for achieving improved device lifetime for the entire rack. We
implement RackBlox using programmable SSDs and a programmable switch. Our experiments
demonstrate that RackBlox can reduce the tail latency of I/O requests by up to
5.8x over state-of-the-art rack-scale storage systems.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 18:42:08 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Reidys",
"Benjamin",
""
],
[
"Xue",
"Yuqi",
""
],
[
"Li",
"Daixuan",
""
],
[
"Sukhwani",
"Bharat",
""
],
[
"Hwu",
"Wen-mei",
""
],
[
"Chen",
"Deming",
""
],
[
"Asaad",
"Sameh",
""
],
[
"Huang",
"Jian",
""
]
] |
new_dataset
| 0.999236 |
2309.06521
|
John Daugman
|
John Daugman, Cathryn Downing, Oluwatobi Noah Akande, Oluwakemi
Christiana Abikoye
|
Ethnicity and Biometric Uniqueness: Iris Pattern Individuality in a West
African Database
|
8 pages, 8 Figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We conducted more than 1.3 million comparisons of iris patterns encoded from
images collected at two Nigerian universities, which constitute the newly
available African Human Iris (AFHIRIS) database. The purpose was to discover
whether ethnic differences in iris structure and appearance such as the
textural feature size, as contrasted with an all-Chinese image database or an
American database in which only 1.53% were of African-American heritage, made a
material difference for iris discrimination. We measured a reduction in entropy
for the AFHIRIS database due to the coarser iris features created by the thick
anterior layer of melanocytes, and we found stochastic parameters that
accurately model the relevant empirical distributions. Quantile-Quantile
analysis revealed that a very small change in operational decision thresholds
for the African database would compensate for the reduced entropy and generate
the same performance in terms of resistance to False Matches. We conclude that
despite demographic differences, individuality can be robustly discerned by
comparison of iris patterns in this West African population.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 18:51:28 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Daugman",
"John",
""
],
[
"Downing",
"Cathryn",
""
],
[
"Akande",
"Oluwatobi Noah",
""
],
[
"Abikoye",
"Oluwakemi Christiana",
""
]
] |
new_dataset
| 0.992443 |
2309.06547
|
Rohit Mohan
|
Ahmed Rida Sekkat, Rohit Mohan, Oliver Sawade, Elmar Matthes, and
Abhinav Valada
|
AmodalSynthDrive: A Synthetic Amodal Perception Dataset for Autonomous
Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike humans, who can effortlessly estimate the entirety of objects even
when partially occluded, modern computer vision algorithms still find this
aspect extremely challenging. Leveraging this amodal perception for autonomous
driving remains largely untapped due to the lack of suitable datasets. The
curation of these datasets is primarily hindered by significant annotation
costs and the difficulty of mitigating annotator subjectivity in accurately labeling occluded
regions. To address these limitations, we introduce AmodalSynthDrive, a
synthetic multi-task multi-modal amodal perception dataset. The dataset
provides multi-view camera images, 3D bounding boxes, LiDAR data, and odometry
for 150 driving sequences with over 1M object annotations in diverse traffic,
weather, and lighting conditions. AmodalSynthDrive supports multiple amodal
scene understanding tasks including the introduced amodal depth estimation for
enhanced spatial understanding. We evaluate several baselines for each of these
tasks to illustrate the challenges and set up public benchmarking servers. The
dataset is available at http://amodalsynthdrive.cs.uni-freiburg.de.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 19:46:15 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Sekkat",
"Ahmed Rida",
""
],
[
"Mohan",
"Rohit",
""
],
[
"Sawade",
"Oliver",
""
],
[
"Matthes",
"Elmar",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.999687 |
2309.06551
|
Diomidis Spinellis
|
Diomidis Spinellis
|
Commands as AI Conversations
|
5 pages
| null |
10.1109/MS.2023.3307170
| null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developers and data scientists often struggle to write command-line inputs,
even though graphical interfaces or tools like ChatGPT can assist. The
solution? "ai-cli," an open-source system inspired by GitHub Copilot that
converts natural language prompts into executable commands for various Linux
command-line tools. By tapping into OpenAI's API, which allows interaction
through JSON HTTP requests, "ai-cli" transforms user queries into actionable
command-line instructions. However, integrating AI assistance across multiple
command-line tools, especially in open source settings, can be complex.
Historically, operating systems could mediate, but individual tool
functionality and the lack of a unified approach have made centralized
integration challenging. The "ai-cli" tool, by bridging this gap through
dynamic loading and linking with each program's Readline library API, makes
command-line interfaces smarter and more user-friendly, opening avenues for
further enhancement and cross-platform applicability.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 19:52:27 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Spinellis",
"Diomidis",
""
]
] |
new_dataset
| 0.978424 |
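To make the prompt-to-command idea in the ai-cli entry above concrete, the sketch below sends a natural-language request to OpenAI's chat-completions endpoint as a JSON HTTP request and prints the returned shell command. It only illustrates the general pattern: the real ai-cli hooks into each program's Readline interface rather than running as a standalone script, and the model name and system prompt here are our own assumptions.

```python
import os
import requests  # third-party: pip install requests

def prompt_to_command(prompt: str) -> str:
    """Translate a natural-language request into a one-line shell command.
    Illustrative only; requires the OPENAI_API_KEY environment variable."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Reply with a single Linux shell command, no explanation."},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(prompt_to_command("list the five largest files under /var/log"))
```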
2309.06565
|
Takahiro Hirofuchi
|
Takahiro Hirofuchi, Takaaki Fukai, Akram Ben Ahmed, Ryousei Takano,
Kento Sato
|
METICULOUS: An FPGA-based Main Memory Emulator for System Software
Studies
| null | null | null | null |
cs.AR cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the scaling problem of the DRAM technology, non-volatile memory
devices, which are based on a different principle of operation than DRAM, are now
being intensively developed to expand the main memory of computers.
Disaggregated memory is also drawing attention as an emerging technology to
scale up the main memory. Although system software studies need to discuss
management mechanisms for the new main memory designs incorporating such
emerging memory systems, there are no feasible memory emulation mechanisms that
efficiently work for large-scale, privileged programs such as operating systems
and hypervisors. In this paper, we propose an FPGA-based main memory emulator
for system software studies on new main memory systems. It can emulate the main
memory incorporating multiple memory regions with different performance
characteristics. For the address region of each memory device, it emulates the
latencies, bandwidths and bit-flip error rates of read/write operations,
respectively. The emulator is implemented at the hardware module of an
off-the-shelf FPGA System-on-Chip board. Any privileged/unprivileged software
programs running on its powerful 64-bit CPU cores can access emulated main
memory devices at a practical speed through the exactly same interface as
normal DRAM main memory. We confirmed that the emulator transparently worked
for CPU cores and successfully changed the performance of a memory region
according to given emulation parameters; for example, the latencies measured by
CPU cores were exactly proportional to the latencies inserted by the emulator,
involving the minimum overhead of approximately 240 ns. As a preliminary use
case, we confirmed that the emulator allows us to change the bandwidth limit
and the inserted latency individually for unmodified software programs, making
discussions on latency sensitivity much easier.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 04:50:25 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Hirofuchi",
"Takahiro",
""
],
[
"Fukai",
"Takaaki",
""
],
[
"Ahmed",
"Akram Ben",
""
],
[
"Takano",
"Ryousei",
""
],
[
"Sato",
"Kento",
""
]
] |
new_dataset
| 0.997515 |
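A quick, hedged illustration of the kind of check behind the claim in the METICULOUS entry above, that measured latency tracks the inserted latency plus a fixed overhead of roughly 240 ns: fit a line to (inserted, measured) pairs and read off slope and intercept. The numbers below are invented for the example.

```python
import numpy as np

# Hypothetical calibration data: inserted emulation latency vs. latency
# measured from the CPU side, both in nanoseconds.
inserted_ns = np.array([0, 200, 400, 800, 1600, 3200])
measured_ns = np.array([242, 438, 641, 1043, 1838, 3441])

# Least-squares line fit; polyfit returns [slope, intercept] for degree 1.
slope, intercept = np.polyfit(inserted_ns, measured_ns, deg=1)
print(f"measured ~= {slope:.2f} * inserted + {intercept:.0f} ns")
# A slope close to 1 and an intercept near 240 ns would match the paper's claim.
```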
2309.06574
|
Jingsong Lv
|
Jingsong Lv, Hongyang Chen, Yao Qi, Lei Yu
|
Circle Feature Graphormer: Can Circle Features Stimulate Graph
Transformer?
|
3 pages, 2 figures, 1 table, 31 references, manuscript in preparation
| null | null | null |
cs.SI cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce two local graph features for missing link
prediction tasks on ogbl-citation2. We define the features as Circle Features,
which are borrowed from the concept of circle of friends. We propose the
detailed computing formulas for the above features. Firstly, we define the
first circle feature as a modified swing for general graphs, which is adapted from
bipartite graphs. Secondly, we define the second circle feature as bridge, which
indicates the importance of two nodes to different circles of friends. In
addition, we are the first to propose using the above features as biases to enhance graph
transformer neural networks, such that the graph self-attention mechanism can be
improved. We implement a Circled Feature aware Graph transformer (CFG) model
based on SIEG network, which utilizes a double tower structure to capture both
global and local structure features. Experimental results show that CFG
achieves the state-of-the-art performance on dataset ogbl-citation2.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 03:58:26 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Lv",
"Jingsong",
""
],
[
"Chen",
"Hongyang",
""
],
[
"Qi",
"Yao",
""
],
[
"Yu",
"Lei",
""
]
] |
new_dataset
| 0.997774 |
2309.06597
|
Enna Sachdeva
|
Enna Sachdeva, Nakul Agarwal, Suhas Chundi, Sean Roelofs, Jiachen Li,
Behzad Dariush, Chiho Choi, Mykel Kochenderfer
|
Rank2Tell: A Multimodal Driving Dataset for Joint Importance Ranking and
Reasoning
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The widespread adoption of commercial autonomous vehicles (AVs) and advanced
driver assistance systems (ADAS) may largely depend on their acceptance by
society, for which their perceived trustworthiness and interpretability to
riders are crucial. In general, this task is challenging because modern
autonomous systems software relies heavily on black-box artificial intelligence
models. Towards this goal, this paper introduces a novel dataset, Rank2Tell, a
multi-modal ego-centric dataset for Ranking the importance level and Telling
the reason for the importance. Using various closed and open-ended visual
question answering, the dataset provides dense annotations of various semantic,
spatial, temporal, and relational attributes of various important objects in
complex traffic scenarios. The dense annotations and unique attributes of the
dataset make it a valuable resource for researchers working on visual scene
understanding and related fields. Further, we introduce a joint model for
importance level ranking and natural language caption generation to benchmark
our dataset and demonstrate performance with quantitative evaluations.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 20:51:07 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Sachdeva",
"Enna",
""
],
[
"Agarwal",
"Nakul",
""
],
[
"Chundi",
"Suhas",
""
],
[
"Roelofs",
"Sean",
""
],
[
"Li",
"Jiachen",
""
],
[
"Dariush",
"Behzad",
""
],
[
"Choi",
"Chiho",
""
],
[
"Kochenderfer",
"Mykel",
""
]
] |
new_dataset
| 0.999201 |
2309.06608
|
Matthew Edwards
|
Joshua Clough and Matthew Edwards
|
Pump, Dump, and then What? The Long-Term Impact of Cryptocurrency
Pump-and-Dump Schemes
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The pump and dump scheme is a form of market manipulation attack in which
coordinated actors drive up the price of an asset in order to sell at a higher
price. Due in part to a lack of enforcement, these schemes are widespread
within the cryptocurrency marketplace, but the negative impact of these events
on the coins they target is not yet fully understood. Drawing upon a novel
dataset of pump events extracted from Telegram channels, an order of magnitude
larger than the nearest comparable dataset in the literature, we explore the
differing tactics of pumping channels and the long-term impact of pump and dump
schemes across 765 coins. We find that, despite a short-term positive impact in
some cases, the long-term impact of pump and dump schemes on the targeted
assets is negative, amounting to an average 30% relative drop in price a year
after the pump event.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 21:23:50 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Clough",
"Joshua",
""
],
[
"Edwards",
"Matthew",
""
]
] |
new_dataset
| 0.999606 |
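To make the headline statistic of the pump-and-dump entry above concrete (an average 30% relative price drop one year after the pump), the sketch below computes the per-event relative change from a table of prices at pump time and one year later. The column names and numbers are assumptions for illustration, not the authors' schema or data.

```python
import pandas as pd

# Hypothetical per-event records: price at pump time and one year later (USD).
events = pd.DataFrame({
    "coin": ["AAA", "BBB", "CCC"],
    "price_at_pump": [0.050, 1.20, 0.0031],
    "price_1y_later": [0.031, 0.95, 0.0019],
})

# Relative change one year after the pump, per event.
events["relative_change_1y"] = (
    events["price_1y_later"] / events["price_at_pump"] - 1.0
)
print(events)
print(f"mean relative change after one year: {events['relative_change_1y'].mean():.1%}")
```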
2309.06633
|
Louis Navarre
|
Louis Navarre, Olivier Pereira, Olivier Bonaventure
|
MCQUIC: Multicast and unicast in a single transport protocol
|
13 pages
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multicast enables efficient one-to-many communications. Several applications
benefit from its scalability properties, e.g., live-streaming and large-scale
software updates. Historically, multicast applications have used specialized
transport protocols. The flexibility of the recently standardized QUIC protocol
opens the possibility of providing both unicast and multicast services to
applications with a single transport protocol. We present MCQUIC, an extended
version of the QUIC protocol that supports multicast communications. We show
how QUIC features and built-in security can be leveraged for multicast
transport. We present the design of MCQUIC and implement it in Cloudflare
quiche. We assess its performance through benchmarks and in emulated networks
under realistic scenarios. We also demonstrate MCQUIC in a campus network. By
coupling QUIC with our multicast extension, applications can rely on multicast
for efficiency with the possibility to fall back on unicast in case of
incompatible network conditions.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 22:49:22 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Navarre",
"Louis",
""
],
[
"Pereira",
"Olivier",
""
],
[
"Bonaventure",
"Olivier",
""
]
] |
new_dataset
| 0.991247 |
2309.06680
|
Palaash Agrawal
|
Palaash Agrawal, Haidi Azaman, Cheston Tan
|
STUPD: A Synthetic Dataset for Spatial and Temporal Relation Reasoning
|
Submitted to Neurips Dataset track. 24 pages including citations and
appendix
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding relations between objects is crucial for understanding the
semantics of a visual scene. It is also an essential step in order to bridge
visual and language models. However, current state-of-the-art computer vision
models still lack the ability to perform spatial reasoning well. Existing
datasets mostly cover a relatively small number of spatial relations, all of
which are static relations that do not intrinsically involve motion. In this
paper, we propose the Spatial and Temporal Understanding of Prepositions
Dataset (STUPD) -- a large-scale video dataset for understanding static and
dynamic spatial relationships derived from prepositions of the English
language. The dataset contains 150K visual depictions (videos and images),
consisting of 30 distinct spatial prepositional senses, in the form of object
interaction simulations generated synthetically using Unity3D. In addition to
spatial relations, we also propose 50K visual depictions across 10 temporal
relations, consisting of videos depicting event/time-point interactions. To our
knowledge, no dataset exists that represents temporal relations through visual
settings. In this dataset, we also provide 3D information about object
interactions such as frame-wise coordinates, and descriptions of the objects
used. The goal of this synthetic dataset is to help models perform better in
visual relationship detection in real-world settings. We demonstrate an
increase in the performance of various models over 2 real-world datasets
(ImageNet-VidVRD and Spatial Senses) when pretrained on the STUPD dataset, in
comparison to other pretraining datasets.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 02:35:59 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Agrawal",
"Palaash",
""
],
[
"Azaman",
"Haidi",
""
],
[
"Tan",
"Cheston",
""
]
] |
new_dataset
| 0.999766 |
2309.06682
|
Jiawei Xu
|
Karen Li, Shuhang Hou, Matyas Negash, Jiawei Xu, Edward Jeffs, Diego
S. D'Antonio, David Salda\~na
|
A Novel Low-Cost, Recyclable, Easy-to-Build Robot Blimp For Transporting
Supplies in Hard-to-Reach Locations
|
IEEE Global Humanitarian Technology Conference (GHTC 2023)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Rural communities in remote areas often encounter significant challenges when
it comes to accessing emergency healthcare services and essential supplies due
to a lack of adequate transportation infrastructure. The situation is further
exacerbated by poorly maintained, damaged, or flooded roads, making it arduous
for rural residents to obtain the necessary aid in critical situations. Limited
budgets and technological constraints pose additional obstacles, hindering the
prompt response of local rescue teams during emergencies. The transportation of
crucial resources, such as medical supplies and food, plays a vital role in
saving lives in these situations. In light of these obstacles, our objective is
to improve accessibility and alleviate the suffering of vulnerable populations
by automating transportation tasks using low-cost robotic systems. We propose a
low-cost, easy-to-build blimp robot (UAV) that can significantly enhance the
efficiency and effectiveness of local emergency responses.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 02:41:22 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Li",
"Karen",
""
],
[
"Hou",
"Shuhang",
""
],
[
"Negash",
"Matyas",
""
],
[
"Xu",
"Jiawei",
""
],
[
"Jeffs",
"Edward",
""
],
[
"D'Antonio",
"Diego S.",
""
],
[
"Saldaña",
"David",
""
]
] |
new_dataset
| 0.99966 |
2309.06696
|
Greg Bodwin
|
Greg Bodwin, Bernhard Haeupler, Merav Parter
|
Fault-Tolerant Spanners against Bounded-Degree Edge Failures: Linearly
More Faults, Almost For Free
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a new and stronger notion of fault-tolerant graph structures whose
size bounds depend on the degree of the failing edge set, rather than the total
number of faults. For a subset of faulty edges $F \subseteq G$, the
faulty-degree $\deg(F)$ is the largest number of faults in $F$ incident to any
given vertex. We design new fault-tolerant structures with size comparable to
previous constructions, but which tolerate every fault set of small
faulty-degree $\deg(F)$, rather than only fault sets of small size $|F|$. Our
main results are:
- New FT-Certificates: For every $n$-vertex graph $G$ and degree threshold
$f$, one can compute a connectivity certificate $H \subseteq G$ with $|E(H)| =
\widetilde{O}(fn)$ edges that has the following guarantee: for any edge set $F$
with faulty-degree $\deg(F)\leq f$ and every vertex pair $u,v$, it holds that
$u$ and $v$ are connected in $H \setminus F$ iff they are connected in $G
\setminus F$. This bound on $|E(H)|$ is nearly tight. Since our certificates
handle some fault sets of size up to $|F|=O(fn)$, prior work did not imply any
nontrivial upper bound for this problem, even when $f=1$.
- New FT-Spanners: We show that every $n$-vertex graph $G$ admits a
$(2k-1)$-spanner $H$ with $|E(H)| = O_k(f^{1-1/k} n^{1+1/k})$ edges, which
tolerates any fault set $F$ of faulty-degree at most $f$. This bound on
$|E(H)|$ is optimal up to its hidden dependence on $k$, and it is close to the
bound of $O_k(|F|^{1/2} n^{1+1/k} + |F|n)$ that is known for the case where the
total number of faults is $|F|$ [Bodwin, Dinitz, Robelle SODA '22]. Our proof
of this theorem is non-constructive, but by following a proof strategy of
Dinitz and Robelle [PODC '20], we show that the runtime can be made polynomial
by paying an additional $\text{polylog } n$ factor in spanner size.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 03:38:55 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Bodwin",
"Greg",
""
],
[
"Haeupler",
"Bernhard",
""
],
[
"Parter",
"Merav",
""
]
] |
new_dataset
| 0.998118 |
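A small worked example, ours rather than the paper's, helps show why the faulty-degree parameter in the entry above is a much weaker requirement than the fault-set size:

```latex
% Faulty-degree: the largest number of faults in F incident to any vertex.
\deg(F) \;=\; \max_{v \in V(G)} \bigl|\{\, e \in F : v \in e \,\}\bigr|.
% If F is a perfect matching of an n-vertex graph, every vertex meets exactly
% one edge of F, so \deg(F) = 1 while |F| = n/2. The new FT-certificate for
% f = 1 must therefore already tolerate some fault sets of linear size, which
% is why bounds parameterized only by |F| give no nontrivial guarantee here.
```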
2309.06698
|
Arda Uzunoglu
|
Arda Uzuno\u{g}lu and G\"ozde G\"ul \c{S}ahin
|
Benchmarking Procedural Language Understanding for Low-Resource
Languages: A Case Study on Turkish
|
9 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding procedural natural language (e.g., step-by-step instructions)
is a crucial step to execution and planning. However, while there are ample
corpora and downstream tasks available in English, the field lacks such
resources for most languages. To address this gap, we conduct a case study on
Turkish procedural texts. We first expand the number of tutorials in Turkish
wikiHow from 2,000 to 52,000 using automated translation tools, where the
translation quality and loyalty to the original meaning are validated by a team
of experts on a random set. Then, we generate several downstream tasks on the
corpus, such as linking actions, goal inference, and summarization. To tackle
these tasks, we implement strong baseline models via fine-tuning large
language-specific models such as TR-BART and BERTurk, as well as multilingual
models such as mBART, mT5, and XLM. We find that language-specific models
consistently outperform their multilingual models by a significant margin
across most procedural language understanding (PLU) tasks. We release our
corpus, downstream tasks and the baseline models at https://github.com/GGLAB-KU/turkish-plu.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 03:42:28 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Uzunoğlu",
"Arda",
""
],
[
"Şahin",
"Gözde Gül",
""
]
] |
new_dataset
| 0.980824 |
2309.06719
|
Siyao Zhang
|
Siyao Zhang, Daocheng Fu, Zhao Zhang, Bin Yu and Pinlong Cai
|
TrafficGPT: Viewing, Processing and Interacting with Traffic Foundation
Models
| null | null | null | null |
cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the promotion of ChatGPT to the public, large language models (LLMs) indeed
showcase remarkable common sense, reasoning, and planning skills, frequently
providing insightful guidance. These capabilities hold significant promise for
their application in urban traffic management and control. However, LLMs
struggle with addressing traffic issues, especially processing numerical data
and interacting with simulations, limiting their potential in solving
traffic-related challenges. In parallel, specialized traffic foundation models
exist but are typically designed for specific tasks with limited input-output
interactions. Combining these models with LLMs presents an opportunity to
enhance their capacity for tackling complex traffic-related problems and
providing insightful suggestions. To bridge this gap, we present TrafficGPT, a
fusion of ChatGPT and traffic foundation models. This integration yields the
following key enhancements: 1) empowering ChatGPT with the capacity to view,
analyze, process traffic data, and provide insightful decision support for
urban transportation system management; 2) facilitating the intelligent
deconstruction of broad and complex tasks and sequential utilization of traffic
foundation models for their gradual completion; 3) aiding human decision-making
in traffic control through natural language dialogues; and 4) enabling
interactive feedback and solicitation of revised outcomes. By seamlessly
intertwining large language models and traffic expertise, TrafficGPT not only
advances traffic management but also offers a novel approach to leveraging AI
capabilities in this domain. The TrafficGPT demo can be found in
https://github.com/lijlansg/TrafficGPT.git.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 04:47:43 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Zhang",
"Siyao",
""
],
[
"Fu",
"Daocheng",
""
],
[
"Zhang",
"Zhao",
""
],
[
"Yu",
"Bin",
""
],
[
"Cai",
"Pinlong",
""
]
] |
new_dataset
| 0.966628 |
2309.06723
|
Qinghua Liu
|
Qinghua Liu, Meng Ge, Zhizheng Wu, Haizhou Li
|
PIAVE: A Pose-Invariant Audio-Visual Speaker Extraction Network
|
Interspeech 2023
|
Proc. INTERSPEECH 2023, 3719-3723
|
10.21437/Interspeech.2023-889
| null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
It is common in everyday spoken communication that we look at the turning
head of a talker to listen to his/her voice. Humans see the talker to listen
better, and so do machines. However, previous studies on audio-visual speaker
extraction have not effectively handled the varying talking face. This paper
studies how to take full advantage of the varying talking face. We propose a
Pose-Invariant Audio-Visual Speaker Extraction Network (PIAVE) that
incorporates an additional pose-invariant view to improve audio-visual speaker
extraction. Specifically, we generate the pose-invariant view from each
original pose orientation, which enables the model to receive a consistent
frontal view of the talker regardless of his/her head pose, therefore, forming
a multi-view visual input for the speaker. Experiments on the multi-view MEAD
and in-the-wild LRS3 dataset demonstrate that PIAVE outperforms the
state-of-the-art and is more robust to pose variations.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 04:54:44 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Liu",
"Qinghua",
""
],
[
"Ge",
"Meng",
""
],
[
"Wu",
"Zhizheng",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.984948 |
2309.06725
|
Kyle Johnson
|
Kyle Johnson, Vicente Arroyos, Am\'elie Ferran, Tilboon Elberier, Raul
Villanueva, Dennis Yin, Alberto Aliseda, Sawyer Fuller, Vikram Iyer,
Shyamnath Gollakota
|
Solar-powered shape-changing origami microfliers
|
This is the author's version of the work. It is posted here by
permission of the AAAS for personal use, not for redistribution. The
definitive version was published in Science Robotics on September 13, 2023.
DOI: 10.1126/scirobotics.adg4276
| null |
10.1126/scirobotics.adg4276
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Using wind to disperse microfliers that fall like seeds and leaves can help
automate large-scale sensor deployments. Here, we present battery-free
microfliers that can change shape in mid-air to vary their dispersal distance.
We design origami microfliers using bi-stable leaf-out structures and uncover
an important property: a simple change in the shape of these origami structures
causes two dramatically different falling behaviors. When unfolded and flat,
the microfliers exhibit a tumbling behavior that increases lateral displacement
in the wind. When folded inward, their orientation is stabilized, resulting in
a downward descent that is less influenced by wind. To electronically
transition between these two shapes, we designed a low-power electromagnetic
actuator that produces peak forces of up to 200 millinewtons within 25
milliseconds while powered by solar cells. We fabricated a circuit directly on
the folded origami structure that includes a programmable microcontroller,
Bluetooth radio, solar power harvesting circuit, a pressure sensor to estimate
altitude and a temperature sensor. Outdoor evaluations show that our 414
milligram origami microfliers are able to electronically change their shape
mid-air, travel up to 98 meters in a light breeze, and wirelessly transmit data
via Bluetooth up to 60 meters away, using only power collected from the sun.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 05:00:47 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Johnson",
"Kyle",
""
],
[
"Arroyos",
"Vicente",
""
],
[
"Ferran",
"Amélie",
""
],
[
"Elberier",
"Tilboon",
""
],
[
"Villanueva",
"Raul",
""
],
[
"Yin",
"Dennis",
""
],
[
"Aliseda",
"Alberto",
""
],
[
"Fuller",
"Sawyer",
""
],
[
"Iyer",
"Vikram",
""
],
[
"Gollakota",
"Shyamnath",
""
]
] |
new_dataset
| 0.998186 |
2309.06742
|
Yihui Huang
|
Yihui Huang, Ningjiang Chen
|
MTD: Multi-Timestep Detector for Delayed Streaming Perception
|
12 pages, accepted by PRCV 2023 (The 6th Chinese Conference on
Pattern Recognition and Computer Vision)
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving systems require real-time environmental perception to
ensure user safety and experience. Streaming perception is a task of reporting
the current state of the world, which is used to evaluate the delay and
accuracy of autonomous driving systems. In real-world applications, factors
such as hardware limitations and high temperatures inevitably cause delays in
autonomous driving systems, resulting in the offset between the model output
and the world state. To solve this problem, this paper proposes the
Multi-Timestep Detector (MTD), an end-to-end detector which uses dynamic
routing for multi-branch future prediction, giving the model the ability to resist
delay fluctuations. A Delay Analysis Module (DAM) is proposed to optimize the
existing delay sensing method, continuously monitoring the model inference
stack and calculating the delay trend. Moreover, a novel Timestep Branch Module
(TBM) is constructed, which includes static flow and adaptive flow to
adaptively predict specific timesteps according to the delay trend. The
proposed method has been evaluated on the Argoverse-HD dataset, and the
experimental results show that it has achieved state-of-the-art performance
across various delay settings.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 06:23:58 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Huang",
"Yihui",
""
],
[
"Chen",
"Ningjiang",
""
]
] |
new_dataset
| 0.999505 |
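The Delay Analysis Module in the MTD entry above is described only at a high level. Purely to illustrate the underlying idea (monitor inference latency, estimate a trend, and choose which future timestep to predict), here is a hedged sketch; the class name, the exponential moving average, and the branch-selection rule are our own stand-ins, not the paper's implementation.

```python
import time

class DelayTrendMonitor:
    """Track inference latency and map its trend to a future-timestep index.

    Illustrative only: the real Delay Analysis Module in MTD may use a
    different estimator and a different branch-selection rule.
    """
    def __init__(self, alpha: float = 0.2, frame_interval_s: float = 1 / 30):
        self.alpha = alpha
        self.frame_interval_s = frame_interval_s
        self.ema_latency_s = None

    def update(self, latency_s: float) -> None:
        # Exponential moving average of the observed inference latency.
        if self.ema_latency_s is None:
            self.ema_latency_s = latency_s
        else:
            self.ema_latency_s = (
                self.alpha * latency_s + (1 - self.alpha) * self.ema_latency_s
            )

    def timestep_branch(self) -> int:
        """Number of frames the output is expected to lag the world state."""
        if self.ema_latency_s is None:
            return 1
        return max(1, round(self.ema_latency_s / self.frame_interval_s))

monitor = DelayTrendMonitor()
for _ in range(5):
    start = time.perf_counter()
    time.sleep(0.07)                      # stand-in for one model inference
    monitor.update(time.perf_counter() - start)
print("predict", monitor.timestep_branch(), "frame(s) ahead")
```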
2309.06750
|
Tengyang Chen
|
Tengyang Chen and Jiangtao Ren
|
MFL-YOLO: An Object Detection Model for Damaged Traffic Signs
|
11 pages, 8 figures, 4 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic signs are important facilities to ensure traffic safety and smooth
flow, but may be damaged due to many reasons, which poses a great safety
hazard. Therefore, it is important to study a method to detect damaged traffic
signs. Existing object detection techniques for damaged traffic signs are still
absent. Since damaged traffic signs are closer in appearance to normal ones, it
is difficult to capture the detailed local damage features of damaged traffic
signs using traditional object detection methods. In this paper, we propose an
improved object detection method based on YOLOv5s, namely MFL-YOLO (Mutual
Feature Levels Loss enhanced YOLO). We designed a simple cross-level loss
function so that each level of the model has its own role, which is beneficial
for the model to be able to learn more diverse features and improve the fine
granularity. The method can be applied as a plug-and-play module and it does
not increase the structural complexity or the computational complexity while
improving the accuracy. We also replaced the traditional convolution and CSP
with the GSConv and VoVGSCSP in the neck of YOLOv5s to reduce the scale and
computational complexity. Compared with YOLOv5s, our MFL-YOLO improves F1 score
and mAP by 4.3 and 5.1, while reducing the FLOPs by 8.9%. The Grad-CAM heat
map visualization shows that our model can better focus on the local details of
the damaged traffic signs. In addition, we also conducted experiments on
CCTSDB2021 and TT100K to further validate the generalization of our model.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 06:46:27 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Chen",
"Tengyang",
""
],
[
"Ren",
"Jiangtao",
""
]
] |
new_dataset
| 0.999465 |
2309.06802
|
Sacha Lewin
|
Sacha Lewin, Maxime Vandegar, Thomas Hoyoux, Olivier Barnich, Gilles
Louppe
|
Dynamic NeRFs for Soccer Scenes
|
Accepted at the 6th International ACM Workshop on Multimedia Content
Analysis in Sports. 8 pages, 9 figures. Project page:
https://soccernerfs.isach.be
| null |
10.1145/3606038.3616158
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The long-standing problem of novel view synthesis has many applications,
notably in sports broadcasting. Photorealistic novel view synthesis of soccer
actions, in particular, is of enormous interest to the broadcast industry. Yet
only a few industrial solutions have been proposed, and even fewer that achieve
near-broadcast quality of the synthetic replays. Except for their setup of
multiple static cameras around the playfield, the best proprietary systems
disclose close to no information about their inner workings. Leveraging
multiple static cameras for such a task indeed presents a challenge rarely
tackled in the literature, for a lack of public datasets: the reconstruction of
a large-scale, mostly static environment, with small, fast-moving elements.
Recently, the emergence of neural radiance fields has induced stunning progress
in many novel view synthesis applications, leveraging deep learning principles
to produce photorealistic results in the most challenging settings. In this
work, we investigate the feasibility of basing a solution to the task on
dynamic NeRFs, i.e., neural models purposed to reconstruct general dynamic
content. We compose synthetic soccer environments and conduct multiple
experiments using them, identifying key components that help reconstruct soccer
scenes with dynamic NeRFs. We show that, although this approach cannot fully
meet the quality requirements for the target application, it suggests promising
avenues toward a cost-efficient, automatic solution. We also make our
dataset and code publicly available, with the goal of encouraging further efforts
from the research community on the task of novel view synthesis for dynamic
soccer scenes. For code, data, and video results, please see
https://soccernerfs.isach.be.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 08:50:00 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Lewin",
"Sacha",
""
],
[
"Vandegar",
"Maxime",
""
],
[
"Hoyoux",
"Thomas",
""
],
[
"Barnich",
"Olivier",
""
],
[
"Louppe",
"Gilles",
""
]
] |
new_dataset
| 0.997503 |
2309.06806
|
Xiangliang Kong
|
Xiangliang Kong and Ohad Elishco
|
Bounds and Constructions for Generalized Batch Codes
|
25 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Private information retrieval (PIR) codes and batch codes are two important
types of codes that are designed for coded distributed storage systems and
private information retrieval protocols. These codes have been the focus of
much attention in recent years, as they enable efficient and secure storage and
retrieval of data in distributed systems.
In this paper, we introduce a new class of codes called \emph{$(s,t)$-batch
codes}. These codes are a type of storage codes that can handle any multi-set
of $t$ requests, comprised of $s$ distinct information symbols. Importantly,
PIR codes and batch codes are special cases of $(s,t)$-batch codes.
The main goal of this paper is to explore the relationship between the number
of redundancy symbols and the $(s,t)$-batch code property. Specifically, we
establish a lower bound on the number of redundancy symbols required and
present several constructions of $(s,t)$-batch codes. Furthermore, we extend
this property to the case where each request is a linear combination of
information symbols, which we refer to as \emph{functional $(s,t)$-batch
codes}. Specifically, we demonstrate that simplex codes are asymptotically
optimal functional $(s,t)$-batch codes, in terms of the number of redundancy
symbols required, under certain parameter regime.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 08:52:49 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Kong",
"Xiangliang",
""
],
[
"Elishco",
"Ohad",
""
]
] |
new_dataset
| 0.99965 |
2309.06819
|
Lo\"ic Azzalini
|
Lo\"ic J. Azzalini and Dario Izzo
|
Tracking Particles Ejected From Active Asteroid Bennu With Event-Based
Vision
|
6 pages, 3 figures, presented at the XXVII Italian Association of
Aeronautics and Astronautics (AIDAA) Congress, 4-7 September 2023, Padova
Italy
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Early detection and tracking of ejecta in the vicinity of small solar system
bodies is crucial to guarantee spacecraft safety and support scientific
observation. During the visit of active asteroid Bennu, the OSIRIS-REx
spacecraft relied on the analysis of images captured by onboard navigation
cameras to detect particle ejection events, which ultimately became one of the
mission's scientific highlights. To increase the scientific return of similar
time-constrained missions, this work proposes an event-based solution that is
dedicated to the detection and tracking of centimetre-sized particles. Unlike a
standard frame-based camera, the pixels of an event-based camera independently
trigger events indicating whether the scene brightness has increased or
decreased at that time and location in the sensor plane. As a result of the
sparse and asynchronous spatiotemporal output, event cameras combine very high
dynamic range and temporal resolution with low-power consumption, which could
complement existing onboard imaging techniques. This paper motivates the use of
a scientific event camera by reconstructing the particle ejection episodes
reported by the OSIRIS-REx mission in a photorealistic scene generator and in
turn, simulating event-based observations. The resulting streams of
spatiotemporal data support future work on event-based multi-object tracking.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 09:07:42 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Azzalini",
"Loïc J.",
""
],
[
"Izzo",
"Dario",
""
]
] |
new_dataset
| 0.997881 |
2309.06824
|
Zengqiang Yan
|
Xian Lin, Yangyang Xiang, Li Zhang, Xin Yang, Zengqiang Yan, and Li Yu
|
SAMUS: Adapting Segment Anything Model for Clinically-Friendly and
Generalizable Ultrasound Image Segmentation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Segment anything model (SAM), an eminent universal image segmentation model,
has recently gathered considerable attention within the domain of medical image
segmentation. Despite the remarkable performance of SAM on natural images, it
grapples with significant performance degradation and limited generalization
when confronted with medical images, particularly with those involving objects
of low contrast, faint boundaries, intricate shapes, and diminutive sizes. In
this paper, we propose SAMUS, a universal model tailored for ultrasound image
segmentation. In contrast to previous SAM-based universal models, SAMUS pursues
not only better generalization but also lower deployment cost, rendering it
more suitable for clinical applications. Specifically, based on SAM, a parallel
CNN branch is introduced to inject local features into the ViT encoder through
cross-branch attention for better medical image segmentation. Then, a position
adapter and a feature adapter are developed to adapt SAM from natural to
medical domains and from requiring large-size inputs (1024x1024) to small-size
inputs (256x256) for more clinically friendly deployment. A comprehensive
ultrasound dataset, comprising about 30k images and 69k masks and covering six
object categories, is collected for verification. Extensive comparison
experiments demonstrate SAMUS's superiority against the state-of-the-art
task-specific models and universal foundation models under both task-specific
evaluation and generalization evaluation. Moreover, SAMUS is deployable on
entry-level GPUs, as it has been liberated from the constraints of long
sequence encoding. The code, data, and models will be released at
https://github.com/xianlin7/SAMUS.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 09:15:20 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Lin",
"Xian",
""
],
[
"Xiang",
"Yangyang",
""
],
[
"Zhang",
"Li",
""
],
[
"Yang",
"Xin",
""
],
[
"Yan",
"Zengqiang",
""
],
[
"Yu",
"Li",
""
]
] |
new_dataset
| 0.989322 |
2309.06844
|
Dimitar Dimitrov
|
Georgi Pachov, Dimitar Dimitrov, Ivan Koychev, Preslav Nakov
|
Gpachov at CheckThat! 2023: A Diverse Multi-Approach Ensemble for
Subjectivity Detection in News Articles
| null | null | null | null |
cs.CL cs.AI cs.MM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The wide-spread use of social networks has given rise to subjective,
misleading, and even false information on the Internet. Thus, subjectivity
detection can play an important role in ensuring the objectivity and the
quality of a piece of information. This paper presents the solution built by
the Gpachov team for the CLEF-2023 CheckThat! lab Task~2 on subjectivity
detection. Three different research directions are explored. The first one is
based on fine-tuning a sentence embeddings encoder model and dimensionality
reduction. The second one explores a sample-efficient few-shot learning model.
The third one evaluates fine-tuning a multilingual transformer on an altered
dataset, using data from multiple languages. Finally, the three approaches are
combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on
the test set and achieving 2nd place on the English subtask.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 09:49:20 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Pachov",
"Georgi",
""
],
[
"Dimitrov",
"Dimitar",
""
],
[
"Koychev",
"Ivan",
""
],
[
"Nakov",
"Preslav",
""
]
] |
new_dataset
| 0.981756 |
2309.06882
|
Martin Pil\'at
|
Kate\v{r}ina Mackov\'a, Martin Pil\'at
|
ProMap: Datasets for Product Mapping in E-commerce
| null | null | null | null |
cs.LG cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of product mapping is to decide, whether two listings from two
different e-shops describe the same products. Existing datasets of matching and
non-matching pairs of products, however, often suffer from incomplete product
information or contain only very distant non-matching products. Therefore,
while predictive models trained on these datasets achieve good results on them,
in practice, they are unusable as they cannot distinguish very similar but
non-matching pairs of products. This paper introduces two new datasets for
product mapping: ProMapCz consisting of 1,495 Czech product pairs and ProMapEn
consisting of 1,555 English product pairs of matching and non-matching products
manually scraped from two pairs of e-shops. The datasets contain both images
and textual descriptions of the products, including their specifications,
making them one of the most complete datasets for product mapping.
Additionally, the non-matching products were selected in two phases, creating
two types of non-matches -- close non-matches and medium non-matches. Even the
medium non-matches are pairs of products that are much more similar than
non-matches in other datasets -- for example, they still need to have the same
brand and similar name and price. After simple data preprocessing, several
machine learning algorithms were trained on these and two other datasets to
demonstrate the complexity and completeness of ProMap datasets. ProMap datasets
are presented as a golden standard for further research of product mapping
filling the gaps in existing ones.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 11:16:52 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Macková",
"Kateřina",
""
],
[
"Pilát",
"Martin",
""
]
] |
new_dataset
| 0.999848 |
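As a minimal illustration of the kind of "simple preprocessing plus standard machine learning" baseline the ProMap entry above refers to, the sketch below scores candidate product pairs by TF-IDF cosine similarity of their names and thresholds the score. The toy records, the character n-gram settings, and the threshold are assumptions; this is not the authors' pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for ProMap-style records: (name in e-shop A, name in e-shop B, match?).
pairs = [
    ("Canon EOS 250D body", "Canon EOS 250D tělo", 1),
    ("Canon EOS 250D body", "Canon EOS 2000D body", 0),
]
names_a = [a for a, _, _ in pairs]
names_b = [b for _, b, _ in pairs]

# Character n-grams cope better with small spelling and language differences.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit(names_a + names_b)
sims = cosine_similarity(vec.transform(names_a), vec.transform(names_b)).diagonal()

THRESHOLD = 0.5  # assumed; would normally be tuned on a validation split
for (a, b, label), sim in zip(pairs, sims):
    pred = "match" if sim > THRESHOLD else "non-match"
    print(f"{a!r} vs {b!r}: sim={sim:.2f}, predicted={pred}, gold={label}")
```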
2309.06888
|
Konrad Abicht
|
Konrad Abicht
|
OWL Reasoners still useable in 2023
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In a systematic literature and software review, over 100 OWL reasoners/systems
were analyzed to see if they would still be usable in 2023. This has never been
done in this capacity. OWL reasoners still play an important role in knowledge
organisation and management, but the last comprehensive surveys/studies are
more than 8 years old. The result of this work is a comprehensive list of 95
standalone OWL reasoners and systems using an OWL reasoner. For each item,
information on project pages, source code repositories and related
documentation was gathered. The raw research data is provided in a Github
repository for anyone to use.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 11:22:42 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Abicht",
"Konrad",
""
]
] |
new_dataset
| 0.993706 |
2309.06895
|
Jaeyo Shin
|
Junha Hyung, Jaeyo Shin, and Jaegul Choo
|
MagiCapture: High-Resolution Multi-Concept Portrait Customization
|
8 pages, 7 figures
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale text-to-image models including Stable Diffusion are capable of
generating high-fidelity photorealistic portrait images. There is an active
research area dedicated to personalizing these models, aiming to synthesize
specific subjects or styles using provided sets of reference images. However,
despite the plausible results from these personalization methods, they tend to
produce images that often fall short of realism and are not yet on a
commercially viable level. This is particularly noticeable in portrait image
generation, where any unnatural artifact in human faces is easily discernible
due to our inherent human bias. To address this, we introduce MagiCapture, a
personalization method for integrating subject and style concepts to generate
high-resolution portrait images using just a few subject and style references.
For instance, given a handful of random selfies, our fine-tuned model can
generate high-quality portrait images in specific styles, such as passport or
profile photos. The main challenge with this task is the absence of ground
truth for the composed concepts, leading to a reduction in the quality of the
final output and an identity shift of the source subject. To address these
issues, we present a novel Attention Refocusing loss coupled with auxiliary
priors, both of which facilitate robust learning within this weakly supervised
learning setting. Our pipeline also includes additional post-processing steps
to ensure the creation of highly realistic outputs. MagiCapture outperforms
other baselines in both quantitative and qualitative evaluations and can also
be generalized to other non-human objects.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 11:37:04 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Hyung",
"Junha",
""
],
[
"Shin",
"Jaeyo",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.998622 |
2309.06933
|
Namhyuk Ahn
|
Namhyuk Ahn, Junsoo Lee, Chunggi Lee, Kunhee Kim, Daesik Kim,
Seung-Hun Nam, Kibeom Hong
|
DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion
Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent progress in large-scale text-to-image models has yielded remarkable
accomplishments, finding various applications in the art domain. However,
expressing unique characteristics of an artwork (e.g. brushwork, colortone, or
composition) with text prompts alone may encounter limitations due to the
inherent constraints of verbal description. To this end, we introduce
DreamStyler, a novel framework designed for artistic image synthesis,
proficient in both text-to-image synthesis and style transfer. DreamStyler
optimizes a multi-stage textual embedding with a context-aware text prompt,
resulting in prominent image quality. In addition, with content and style
guidance, DreamStyler exhibits flexibility to accommodate a range of style
references. Experimental results demonstrate its superior performance across
multiple scenarios, suggesting its promising potential in artistic product
creation.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 13:13:29 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Ahn",
"Namhyuk",
""
],
[
"Lee",
"Junsoo",
""
],
[
"Lee",
"Chunggi",
""
],
[
"Kim",
"Kunhee",
""
],
[
"Kim",
"Daesik",
""
],
[
"Nam",
"Seung-Hun",
""
],
[
"Hong",
"Kibeom",
""
]
] |
new_dataset
| 0.991678 |
2309.07009
|
Konstantinos Kogkalidis
|
Konstantinos Kogkalidis, Stergios Chatzikyriakidis, Eirini
Chrysovalantou Giannikouri, Vassiliki Katsouli, Christina Klironomou,
Christina Koula, Dimitris Papadakis, Thelka Pasparaki, Erofili Psaltaki,
Efthymia Sakellariou, Hara Soupiona
|
OYXOY: A Modern NLP Test Suite for Modern Greek
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper serves as a foundational step towards the development of a
linguistically motivated and technically relevant evaluation suite for Greek
NLP. We initiate this endeavor by introducing four expert-verified evaluation
tasks, specifically targeted at natural language inference, word sense
disambiguation (through example comparison or sense selection) and metaphor
detection. More than language-adapted replicas of existing tasks, we contribute
two innovations which will resonate with the broader resource and evaluation
community. Firstly, our inference dataset is the first of its kind, marking not
just \textit{one}, but rather \textit{all} possible inference labels,
accounting for possible shifts due to e.g. ambiguity or polysemy. Secondly, we
demonstrate a cost-efficient method to obtain datasets for under-resourced
languages. Using ChatGPT as a language-neutral parser, we transform the
Dictionary of Standard Modern Greek into a structured format, from which we
derive the other three tasks through simple projections. Alongside each task,
we conduct experiments using currently available state of the art machinery.
Our experimental baselines affirm the challenging nature of our tasks and
highlight the need for expedited progress in order for the Greek NLP ecosystem
to keep pace with contemporary mainstream research.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 15:00:56 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Kogkalidis",
"Konstantinos",
""
],
[
"Chatzikyriakidis",
"Stergios",
""
],
[
"Giannikouri",
"Eirini Chrysovalantou",
""
],
[
"Katsouli",
"Vassiliki",
""
],
[
"Klironomou",
"Christina",
""
],
[
"Koula",
"Christina",
""
],
[
"Papadakis",
"Dimitris",
""
],
[
"Pasparaki",
"Thelka",
""
],
[
"Psaltaki",
"Erofili",
""
],
[
"Sakellariou",
"Efthymia",
""
],
[
"Soupiona",
"Hara",
""
]
] |
new_dataset
| 0.999744 |
2309.07028
|
Marianne Bossema
|
Marianne Bossema, Rob Saunders, Somaya Ben Allouch
|
Human-Machine Co-Creativity with Older Adults -- A Learning Community to
Study Explainable Dialogues
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This position paper is part of a long-term research project on human-machine
co-creativity with older adults. The goal is to investigate how robots and
AI-generated content can contribute to older adults' creative experiences, with
a focus on collaborative drawing and painting. The research has recently
started, and current activities are centred around literature studies,
interviews with seniors and artists, and developing initial prototypes. In
addition, a course "Drawing with Robots", is being developed to establish
collaboration between human and machine learners: older adults, artists,
students, researchers, and artificial agents. We present this course as a
learning community and as an opportunity for studying how explainable AI and
creative dialogues can be intertwined in human-machine co-creativity with older
adults.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 15:33:29 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Bossema",
"Marianne",
""
],
[
"Saunders",
"Rob",
""
],
[
"Allouch",
"Somaya Ben",
""
]
] |
new_dataset
| 0.996016 |
2309.07045
|
Zhexin Zhang
|
Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong
Long, Xiao Liu, Xuanyu Lei, Jie Tang, Minlie Huang
|
SafetyBench: Evaluating the Safety of Large Language Models with
Multiple Choice Questions
|
15 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of Large Language Models (LLMs), increasing
attention has been paid to their safety concerns. Consequently, evaluating the
safety of LLMs has become an essential task for facilitating the broad
applications of LLMs. Nevertheless, the absence of comprehensive safety
evaluation benchmarks poses a significant impediment to effectively assess and
enhance the safety of LLMs. In this work, we present SafetyBench, a
comprehensive benchmark for evaluating the safety of LLMs, which comprises
11,435 diverse multiple choice questions spanning across 7 distinct categories
of safety concerns. Notably, SafetyBench also incorporates both Chinese and
English data, facilitating the evaluation in both languages. Our extensive
tests over 25 popular Chinese and English LLMs in both zero-shot and few-shot
settings reveal a substantial performance advantage for GPT-4 over its
counterparts, and there is still significant room for improving the safety of
current LLMs. We believe SafetyBench will enable fast and comprehensive
evaluation of LLMs' safety, and foster the development of safer LLMs. Data and
evaluation guidelines are available at https://github.com/thu-coai/SafetyBench.
Submission entrance and leaderboard are available at
https://llmbench.ai/safety.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 15:56:50 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Zhang",
"Zhexin",
""
],
[
"Lei",
"Leqi",
""
],
[
"Wu",
"Lindong",
""
],
[
"Sun",
"Rui",
""
],
[
"Huang",
"Yongkang",
""
],
[
"Long",
"Chong",
""
],
[
"Liu",
"Xiao",
""
],
[
"Lei",
"Xuanyu",
""
],
[
"Tang",
"Jie",
""
],
[
"Huang",
"Minlie",
""
]
] |
new_dataset
| 0.985817 |
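As a small, hedged illustration of how a multiple-choice benchmark like the SafetyBench entry above can be scored automatically (extract the chosen option letter from a model's free-form reply and compare it with the key), here is a sketch. The regular expression and the per-category breakdown are our own choices, not the benchmark's official evaluation script.

```python
import re
from collections import defaultdict

def extract_choice(reply: str):
    """Pull the first standalone option letter (A-D) out of a model reply."""
    m = re.search(r"\b([A-D])\b", reply.strip().upper())
    return m.group(1) if m else None

def accuracy_by_category(items):
    """items: iterable of (category, gold_letter, model_reply) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for category, gold, reply in items:
        totals[category] += 1
        hits[category] += int(extract_choice(reply) == gold)
    return {c: hits[c] / totals[c] for c in totals}

demo = [
    ("offensiveness", "B", "The safest choice is B."),
    ("offensiveness", "A", "Answer: C"),
    ("privacy", "D", "D, because it avoids sharing personal data."),
]
print(accuracy_by_category(demo))
```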
2309.07051
|
Sicheng Yang
|
Sicheng Yang, Zilin Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang,
Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, changpeng yang, Zonghong Dai
|
UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons
|
16 pages, 11 figures, ACM MM 2023
| null |
10.1145/3581783.3612503
| null |
cs.HC cs.AI cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The automatic co-speech gesture generation draws much attention in computer
animation. Previous works designed network structures on individual datasets,
which resulted in a lack of data volume and generalizability across different
motion capture standards. In addition, it is a challenging task due to the weak
correlation between speech and gestures. To address these problems, we present
UnifiedGesture, a novel diffusion model-based speech-driven gesture synthesis
approach, trained on multiple gesture datasets with different skeletons.
Specifically, we first present a retargeting network to learn latent
homeomorphic graphs for different motion capture standards, unifying the
representations of various gestures while extending the dataset. We then
capture the correlation between speech and gestures based on a diffusion model
architecture using cross-local attention and self-attention to generate better
speech-matched and realistic gestures. To further align speech and gesture and
increase diversity, we incorporate reinforcement learning on the discrete
gesture units with a learned reward function. Extensive experiments show that
UnifiedGesture outperforms recent approaches on speech-driven gesture
generation in terms of CCA, FGD, and human-likeness. All code, pre-trained
models, databases, and demos are available to the public at
https://github.com/YoungSeng/UnifiedGesture.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 16:07:25 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Yang",
"Sicheng",
""
],
[
"Wang",
"Zilin",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Li",
"Minglei",
""
],
[
"Zhang",
"Zhensong",
""
],
[
"Huang",
"Qiaochu",
""
],
[
"Hao",
"Lei",
""
],
[
"Xu",
"Songcen",
""
],
[
"Wu",
"Xiaofei",
""
],
[
"yang",
"changpeng",
""
],
[
"Dai",
"Zonghong",
""
]
] |
new_dataset
| 0.997494 |
2309.07066
|
Yufei Zhu
|
Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Luigi Palmieri, Kai O.
Arras, Achim J. Lilienthal, Martin Magnusson
|
CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion
Prediction
|
Accepted to the 2023 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Human motion prediction is important for mobile service robots and
intelligent vehicles to operate safely and smoothly around people. The more
accurate predictions are, particularly over extended periods of time, the
better a system can, e.g., assess collision risks and plan ahead. In this
paper, we propose to exploit maps of dynamics (MoDs, a class of general
representations of place-dependent spatial motion patterns, learned from prior
observations) for long-term human motion prediction (LHMP). We present a new
MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data
efficient, explainable, and insensitive to errors from an upstream tracking
system. Our approach uses CLiFF-map, a specific MoD trained with human motion
data recorded in the same environment. We bias a constant velocity prediction
with samples from the CLiFF-map to generate multi-modal trajectory predictions.
On two public datasets, we show that this algorithm outperforms the state of the
art for predictions over very extended periods of time, achieving 45% more
accurate prediction performance at 50s compared to the baseline.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 16:26:48 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Zhu",
"Yufei",
""
],
[
"Rudenko",
"Andrey",
""
],
[
"Kucner",
"Tomasz P.",
""
],
[
"Palmieri",
"Luigi",
""
],
[
"Arras",
"Kai O.",
""
],
[
"Lilienthal",
"Achim J.",
""
],
[
"Magnusson",
"Martin",
""
]
] |
new_dataset
| 0.959718 |
2309.07084
|
Yiran Qin
|
Yiran Qin, Chaoqun Wang, Zijian Kang, Ningning Ma, Zhen Li, Ruimao
Zhang
|
SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection
|
Accepted to ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel training strategy called SupFusion, which
provides an auxiliary feature level supervision for effective LiDAR-Camera
fusion and significantly boosts detection performance. Our strategy involves a
data enhancement method named Polar Sampling, which densifies sparse objects
and trains an assistant model to generate high-quality features as the
supervision. These features are then used to train the LiDAR-Camera fusion
model, where the fusion feature is optimized to simulate the generated
high-quality features. Furthermore, we propose a simple yet effective deep
fusion module, which consistently achieves superior performance compared with
previous fusion methods under the SupFusion strategy. In such a manner, our proposal
shares the following advantages. Firstly, SupFusion introduces auxiliary
feature-level supervision which could boost LiDAR-Camera detection performance
without introducing extra inference costs. Secondly, the proposed deep fusion
could continuously improve the detector's abilities. Our proposed SupFusion and
deep fusion module is plug-and-play, we make extensive experiments to
demonstrate its effectiveness. Specifically, we gain around 2% 3D mAP
improvements on KITTI benchmark based on multiple LiDAR-Camera 3D detectors.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 16:52:23 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Qin",
"Yiran",
""
],
[
"Wang",
"Chaoqun",
""
],
[
"Kang",
"Zijian",
""
],
[
"Ma",
"Ningning",
""
],
[
"Li",
"Zhen",
""
],
[
"Zhang",
"Ruimao",
""
]
] |
new_dataset
| 0.974388 |
2309.07104
|
Derek Gloudemans
|
Derek Gloudemans, Xinxuan Lu, Shepard Xia, Daniel B. Work
|
Polygon Intersection-over-Union Loss for Viewpoint-Agnostic Monocular 3D
Vehicle Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Monocular 3D object detection is a challenging task because depth information
is difficult to obtain from 2D images. A subset of viewpoint-agnostic monocular
3D detection methods also do not explicitly leverage scene homography or
geometry during training, meaning that a model trained in this way can detect
objects in images from arbitrary viewpoints. Such works predict the projections
of the 3D bounding boxes on the image plane to estimate the location of the 3D
boxes, but these projections are not rectangular, so the calculation of IoU
between these projected polygons is not straightforward. This work proposes an
efficient, fully differentiable algorithm for the calculation of IoU between
two convex polygons, which can be utilized to compute the IoU between two 3D
bounding box footprints viewed from an arbitrary angle. We test the performance
of the proposed polygon IoU loss (PIoU loss) on three state-of-the-art
viewpoint-agnostic 3D detection models. Experiments demonstrate that the
proposed PIoU loss converges faster than L1 loss and that in 3D detection
models, a combination of PIoU loss and L1 loss gives better results than L1
loss alone (+1.64% AP70 for MonoCon on cars, +0.18% AP70 for RTM3D on cars, and
+0.83%/+2.46% AP50/AP25 for MonoRCNN on cyclists).
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 17:25:06 GMT"
}
] | 2023-09-14T00:00:00 |
[
[
"Gloudemans",
"Derek",
""
],
[
"Lu",
"Xinxuan",
""
],
[
"Xia",
"Shepard",
""
],
[
"Work",
"Daniel B.",
""
]
] |
new_dataset
| 0.98114 |
2110.14185
|
Muneeb Ahmad
|
Muneeb Ahmad, Soo Young Shin
|
Massive MIMO NOMA with Wavelet Pulse Shaping to Minimize Undesired
Channel Interference
|
8 pages, 3 figures, ICT Express (Accepted 9 June 2022)
|
ICT Express, 2023, 9(4), pp.635-641
|
10.1016/j.icte.2022.06.005
| null |
cs.IT cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, a wavelet-OFDM-based non-orthogonal multiple access (NOMA)
scheme combined with a massive MIMO system for 6G networks is proposed. For mMIMO
transmissions, the proposed system could enhance the performance by utilizing
wavelets to compensate for channel impairments on the transmitted signal.
Performance measures include spectral efficiency, symbol error rate (SER), and
peak-to-average power ratio (PAPR). Simulation results prove that the proposed system
outperforms the conventional OFDM based NOMA systems.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 05:34:29 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2022 01:56:23 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Ahmad",
"Muneeb",
""
],
[
"Shin",
"Soo Young",
""
]
] |
new_dataset
| 0.98271 |
2210.08202
|
Changwoon Choi
|
Changwoon Choi, Juhyeon Kim, Young Min Kim
|
IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields
|
Computer Graphics Forum (Pacific Graphics 2023)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose IBL-NeRF, which decomposes the neural radiance fields (NeRF) of
large-scale indoor scenes into intrinsic components. Recent approaches further
decompose the baked radiance of the implicit volume into intrinsic components
such that one can partially approximate the rendering equation. However, they
are limited to representing isolated objects with a shared environment
lighting, and suffer from computational burden to aggregate rays with Monte
Carlo integration. In contrast, our prefiltered radiance field extends the
original NeRF formulation to capture the spatial variation of lighting within
the scene volume, in addition to surface properties. Specifically, the scenes
of diverse materials are decomposed into intrinsic components for rendering,
namely, albedo, roughness, surface normal, irradiance, and prefiltered
radiance. All of the components are inferred as neural images from MLP, which
can model large-scale general scenes. Especially the prefiltered radiance
effectively models the volumetric light field, and captures spatial variation
beyond a single environment light. The prefiltering aggregates rays in a set of
predefined neighborhood sizes such that we can replace the costly Monte Carlo
integration of global illumination with a simple query from a neural image. By
adopting NeRF, our approach inherits superior visual quality and multi-view
consistency for synthesized images as well as the intrinsic components. We
demonstrate the performance on scenes with complex object layouts and light
configurations, which could not be processed in any of the previous works.
|
[
{
"version": "v1",
"created": "Sat, 15 Oct 2022 05:38:55 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 01:36:47 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Choi",
"Changwoon",
""
],
[
"Kim",
"Juhyeon",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.983311 |
2210.10424
|
Zikang Yuan
|
Zikang Yuan, Fengtian Lang, Tianle Xu, Xin Yang
|
SR-LIO: LiDAR-Inertial Odometry with Sweep Reconstruction
|
Submitted to ICRA
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel LiDAR-Inertial odometry (LIO), named SR-LIO,
based on an iterated extended Kalman filter (iEKF) framework. We adapt the
sweep reconstruction method, which segments and reconstructs raw input sweeps
from spinning LiDAR to obtain reconstructed sweeps with higher frequency. We
found that such a method can effectively reduce the time interval for each
iterated state update, improving the state estimation accuracy and enabling the
usage of iEKF framework for fusing high-frequency IMU and low-frequency LiDAR.
To prevent inaccurate trajectory caused by multiple distortion correction to a
particular point, we further propose to perform distortion correction for each
segment. Experimental results on four public datasets demonstrate that our
SR-LIO outperforms all existing state-of-the-art methods on accuracy, and
reducing the time interval of iterated state update via the proposed sweep
reconstruction can improve the accuracy and frequency of estimated states. The
source code of SR-LIO is publicly available for the development of the
community.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 09:44:37 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 08:09:27 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Yuan",
"Zikang",
""
],
[
"Lang",
"Fengtian",
""
],
[
"Xu",
"Tianle",
""
],
[
"Yang",
"Xin",
""
]
] |
new_dataset
| 0.991456 |
2211.00323
|
Jinghe Wang
|
Jinghe Wang, Wankai Tang, Jing Cheng Liang, Lei Zhang, Jun Yan Dai,
Xiao Li, Shi Jin, Qiang Cheng, and Tie Jun Cui
|
Reconfigurable Intelligent Surface: Power Consumption Modeling and
Practical Measurement Validation
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The reconfigurable intelligent surface (RIS) has received a lot of interest
because of its capacity to reconfigure the wireless communication environment
in a cost- and energy-efficient way. However, the realistic power consumption
modeling and measurement validation of RIS have received far too little
attention. Therefore, in this work, we model the power consumption of RIS and
conduct measurement validations using various RISs to fill this vacancy.
Firstly, we propose a practical power consumption model of RIS. The RIS
hardware is divided into three basic parts: the FPGA control board, the drive
circuits, and the RIS unit cells. The power consumption of the first two parts
is modeled as $P_{\text {static}}$ and that of the last part is modeled as
$P_{\text {units}}$. Expressions of $P_{\text {static}}$ and $P_{\text
{units}}$ vary amongst different types of RISs. Secondly, we conduct
measurements on various RISs to validate the proposed model. Five different
RISs including the PIN diode, varactor diode, and RF switch types are measured,
and measurement results validate the generality and applicability of the
proposed power consumption model of RIS. Finally, we summarize the measurement
results and discuss the approaches to achieve the low-power-consumption design
of RIS-assisted wireless communication systems.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 08:22:08 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 07:15:58 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wang",
"Jinghe",
""
],
[
"Tang",
"Wankai",
""
],
[
"Liang",
"Jing Cheng",
""
],
[
"Zhang",
"Lei",
""
],
[
"Dai",
"Jun Yan",
""
],
[
"Li",
"Xiao",
""
],
[
"Jin",
"Shi",
""
],
[
"Cheng",
"Qiang",
""
],
[
"Cui",
"Tie Jun",
""
]
] |
new_dataset
| 0.9901 |
2211.05838
|
Ataberk Olgun
|
Ataberk Olgun, Hasan Hassan, A. Giray Ya\u{g}l{\i}k\c{c}{\i}, Yahya
Can Tu\u{g}rul, Lois Orosa, Haocong Luo, Minesh Patel, O\u{g}uz Ergin, Onur
Mutlu
|
DRAM Bender: An Extensible and Versatile FPGA-based Infrastructure to
Easily Test State-of-the-art DRAM Chips
|
Extended version of paper that is to appear in IEEE Transactions on
Computer-Aided Design of Integrated Circuits and Systems (TCAD)
| null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
To understand and improve DRAM performance, reliability, security and energy
efficiency, prior works study characteristics of commodity DRAM chips.
Unfortunately, state-of-the-art open source infrastructures capable of
conducting such studies are obsolete, poorly supported, or difficult to use, or
their inflexibility limits the types of studies they can conduct.
We propose DRAM Bender, a new FPGA-based infrastructure that enables
experimental studies on state-of-the-art DRAM chips. DRAM Bender offers three
key features at the same time. First, DRAM Bender enables directly interfacing
with a DRAM chip through its low-level interface. This allows users to issue
DRAM commands in arbitrary order and with finer-grained time intervals compared
to other open source infrastructures. Second, DRAM Bender exposes easy-to-use
C++ and Python programming interfaces, allowing users to quickly and easily
develop different types of DRAM experiments. Third, DRAM Bender is easily
extensible. The modular design of DRAM Bender allows extending it to (i)
support existing and emerging DRAM interfaces, and (ii) run on new commercial
or custom FPGA boards with little effort.
To demonstrate that DRAM Bender is a versatile infrastructure, we conduct
three case studies, two of which lead to new observations about the DRAM
RowHammer vulnerability. In particular, we show that data patterns supported by
DRAM Bender uncovers a larger set of bit-flips on a victim row compared to the
data patterns commonly used by prior work. We demonstrate the extensibility of
DRAM Bender by implementing it on five different FPGAs with DDR4 and DDR3
support. DRAM Bender is freely and openly available at
https://github.com/CMU-SAFARI/DRAM-Bender.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 19:43:03 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 12:52:54 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 05:27:04 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Jul 2023 05:51:42 GMT"
},
{
"version": "v5",
"created": "Tue, 12 Sep 2023 10:58:51 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Olgun",
"Ataberk",
""
],
[
"Hassan",
"Hasan",
""
],
[
"Yağlıkçı",
"A. Giray",
""
],
[
"Tuğrul",
"Yahya Can",
""
],
[
"Orosa",
"Lois",
""
],
[
"Luo",
"Haocong",
""
],
[
"Patel",
"Minesh",
""
],
[
"Ergin",
"Oğuz",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.999413 |
2211.07440
|
Sergio Romero-Tapiador
|
Sergio Romero-Tapiador, Ruben Tolosana, Aythami Morales, Isabel
Espinosa-Salinas, Gala Freixer, Julian Fierrez, Ruben Vera-Rodriguez, Enrique
Carrillo de Santa Pau, Ana Ram\'irez de Molina and Javier Ortega-Garcia
|
Leveraging Automatic Personalised Nutrition: Food Image Recognition
Benchmark and Dataset based on Nutrition Taxonomy
|
10 pages, 3 figures, 4 tables
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Leading a healthy lifestyle has become one of the most challenging goals in
today's society due to our sedentary lifestyle and poor eating habits. As a
result, national and international organisms have made numerous efforts to
promote healthier food diets and physical activity habits. However, these
recommendations are sometimes difficult to follow in our daily life and they
are also based on a general population. As a consequence, a new area of
research, personalised nutrition, has been conceived focusing on individual
solutions through smart devices and Artificial Intelligence (AI) methods.
This study presents the AI4Food-NutritionDB database, the first nutrition
database that considers food images and a nutrition taxonomy based on
recommendations by national and international organizations. In addition, four
different categorisation levels are considered following nutrition experts: 6
nutritional levels, 19 main categories (e.g., "Meat"), 73 subcategories (e.g.,
"White Meat"), and 893 final food products (e.g., "Chicken"). The
AI4Food-NutritionDB opens the doors to new food computing approaches in terms
of food intake frequency, quality, and categorisation. Also, in addition to the
database, we propose a standard experimental protocol and benchmark including
three tasks based on the nutrition taxonomy (i.e., category, subcategory, and
final product) to be used for the research community. Finally, we also release
our Deep Learning models trained with the AI4Food-NutritionDB, which can be
used as pre-trained models, achieving accurate recognition results with
challenging food image databases.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 15:14:50 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 14:07:13 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Romero-Tapiador",
"Sergio",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Morales",
"Aythami",
""
],
[
"Espinosa-Salinas",
"Isabel",
""
],
[
"Freixer",
"Gala",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Pau",
"Enrique Carrillo de Santa",
""
],
[
"de Molina",
"Ana Ramírez",
""
],
[
"Ortega-Garcia",
"Javier",
""
]
] |
new_dataset
| 0.985329 |
2211.15501
|
Maithili Patel
|
Maithili Patel, Sonia Chernova
|
Proactive Robot Assistance via Spatio-Temporal Object Modeling
| null |
205:881-891, 2023
| null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proactive robot assistance enables a robot to anticipate and provide for a
user's needs without being explicitly asked. We formulate proactive assistance
as the problem of the robot anticipating temporal patterns of object movements
associated with everyday user routines, and proactively assisting the user by
placing objects to adapt the environment to their needs. We introduce a
generative graph neural network to learn a unified spatio-temporal predictive
model of object dynamics from temporal sequences of object arrangements. We
additionally contribute the Household Object Movements from Everyday Routines
(HOMER) dataset, which tracks household objects associated with human
activities of daily living across 50+ days for five simulated households. Our
model outperforms the leading baseline in predicting object movement, correctly
predicting locations for 11.1% more objects and wrongly predicting locations
for 11.5% fewer objects used by the human user.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 16:20:50 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Patel",
"Maithili",
""
],
[
"Chernova",
"Sonia",
""
]
] |
new_dataset
| 0.99955 |
2302.09108
|
Shashank Nag
|
Shashank Nag, Gourav Datta, Souvik Kundu, Nitin Chandrachoodan, Peter
A. Beerel
|
ViTA: A Vision Transformer Inference Accelerator for Edge Applications
|
Accepted at ISCAS 2023
|
2023 IEEE International Symposium on Circuits and Systems (ISCAS),
Monterey, CA, USA, 2023, pp. 1-5
|
10.1109/ISCAS46773.2023.10181988
| null |
cs.AR cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision Transformer models, such as ViT, Swin Transformer, and
Transformer-in-Transformer, have recently gained significant traction in
computer vision tasks due to their ability to capture the global relation
between features which leads to superior performance. However, they are
compute-heavy and difficult to deploy in resource-constrained edge devices.
Existing hardware accelerators, including those for the closely-related BERT
transformer models, do not target highly resource-constrained environments. In
this paper, we address this gap and propose ViTA - a configurable hardware
accelerator for inference of vision transformer models, targeting
resource-constrained edge computing devices and avoiding repeated off-chip
memory accesses. We employ a head-level pipeline and inter-layer MLP
optimizations, and can support several commonly used vision transformer models
with changes solely in our control logic. We achieve nearly 90% hardware
utilization efficiency on most vision transformer models, report a power of
0.88W when synthesised with a clock of 150 MHz, and get reasonable frame rates
- all of which makes ViTA suitable for edge applications.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 19:35:36 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Nag",
"Shashank",
""
],
[
"Datta",
"Gourav",
""
],
[
"Kundu",
"Souvik",
""
],
[
"Chandrachoodan",
"Nitin",
""
],
[
"Beerel",
"Peter A.",
""
]
] |
new_dataset
| 0.992035 |
2302.14627
|
Nallappabhavithran G
|
NallappaBhavithran G, Selvakumar R
|
DNA digital data storage and retrieval using algebraic codes
|
7 pages, 3 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
DNA is a promising storage medium, but its stability and occurrence of Indel
errors pose a significant challenge. The relative occurrence of Guanine(G) and
Cytosine(C) in DNA is crucial for its longevity, and reverse complementary base
pairs should be avoided to prevent the formation of a secondary structure in
DNA strands. We overcome these challenges by selecting appropriate group
homomorphisms. For storing and retrieving information in DNA strings we use
kernel code and the Varshamov-Tenengolts algorithm. The Varshamov-Tenengolts
algorithm corrects single indel errors. Additionally, we construct codes of any
desired length (n) while calculating their reverse complement distance based on
the value of n.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 11:06:48 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 11:23:55 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 06:44:08 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"G",
"NallappaBhavithran",
""
],
[
"R",
"Selvakumar",
""
]
] |
new_dataset
| 0.984394 |
2303.00299
|
Jinghe Wang
|
Jinghe Wang, Wankai Tang, Shi Jin, Xiao Li, and Michail Matthaiou
|
Static Power Consumption Modeling and Measurement of Reconfigurable
Intelligent Surfaces
|
arXiv admin note: substantial text overlap with arXiv:2211.00323
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surfaces (RISs) are anticipated to transform
wireless communication in a way that is both economical and energy efficient.
Revealing the practical power consumption characteristics of RISs can provide
an essential toolkit for the optimal design of RIS-assisted wireless
communication systems and energy efficiency performance evaluation. Based on
our previous work that modeled the dynamic power consumption of RISs, we
henceforth concentrate more on static power consumption. We first divide the
RIS hardware into three basic parts: the FPGA control board, the drive
circuits, and the RIS unit cells. The first two parts are mainly to be
investigated and the last part has been modeled as the dynamic power
consumption in the previous work. In this work, the power consumption of the
FPGA control board is regarded as a constant value, whereas that of the drive
circuits varies with the number of control signals and their self-power
consumption characteristics. Therefore, we model the power
consumption of the drive circuits of various kinds of RISs, i.e., PIN
diode-/Varactor diode-/RF switch-based RIS. Finally, the measurement results
and typical value of static power consumption are illustrated and discussed.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 07:48:18 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wang",
"Jinghe",
""
],
[
"Tang",
"Wankai",
""
],
[
"Jin",
"Shi",
""
],
[
"Li",
"Xiao",
""
],
[
"Matthaiou",
"Michail",
""
]
] |
new_dataset
| 0.976625 |
2303.10042
|
Vanessa Wirth
|
Vanessa Wirth, Anna-Maria Liphardt, Birte Coppers, Johanna Br\"aunig,
Simon Heinrich, Sigrid Leyendecker, Arnd Kleyer, Georg Schett, Martin
Vossiek, Bernhard Egger, Marc Stamminger
|
ShaRPy: Shape Reconstruction and Hand Pose Estimation from RGB-D with
Uncertainty
|
Accepted at ICCVW (CVAMD) 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite their potential, markerless hand tracking technologies are not yet
applied in practice to the diagnosis or monitoring of the activity in
inflammatory musculoskeletal diseases. One reason is that the focus of most
methods lies in the reconstruction of coarse, plausible poses, whereas in the
clinical context, accurate, interpretable, and reliable results are required.
Therefore, we propose ShaRPy, the first RGB-D Shape Reconstruction and hand
Pose tracking system, which provides uncertainty estimates of the computed
pose, e.g., when a finger is hidden or its estimate is inconsistent with the
observations in the input, to guide clinical decision-making. Besides pose,
ShaRPy approximates a personalized hand shape, promoting a more realistic and
intuitive understanding of its digital twin. Our method requires only a
light-weight setup with a single consumer-level RGB-D camera yet it is able to
distinguish similar poses with only small joint angle deviations in a
metrically accurate space. This is achieved by combining a data-driven dense
correspondence predictor with traditional energy minimization. To bridge the
gap between interactive visualization and biomedical simulation we leverage a
parametric hand model in which we incorporate biomedical constraints and
optimize for both, its pose and hand shape. We evaluate ShaRPy on a keypoint
detection benchmark and show qualitative results of hand function assessments
for activity monitoring of musculoskeletal diseases.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 15:12:25 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 13:08:53 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wirth",
"Vanessa",
""
],
[
"Liphardt",
"Anna-Maria",
""
],
[
"Coppers",
"Birte",
""
],
[
"Bräunig",
"Johanna",
""
],
[
"Heinrich",
"Simon",
""
],
[
"Leyendecker",
"Sigrid",
""
],
[
"Kleyer",
"Arnd",
""
],
[
"Schett",
"Georg",
""
],
[
"Vossiek",
"Martin",
""
],
[
"Egger",
"Bernhard",
""
],
[
"Stamminger",
"Marc",
""
]
] |
new_dataset
| 0.97671 |
2303.13592
|
Zheng-Xin Yong
|
Zheng-Xin Yong, Ruochen Zhang, Jessica Zosa Forde, Skyler Wang, Arjun
Subramonian, Holy Lovenia, Samuel Cahyawijaya, Genta Indra Winata, Lintang
Sutawika, Jan Christian Blaise Cruz, Yin Lin Tan, Long Phan, Rowena Garcia,
Thamar Solorio, Alham Fikri Aji
|
Prompting Multilingual Large Language Models to Generate Code-Mixed
Texts: The Case of South East Asian Languages
|
Updating Authors
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
While code-mixing is a common linguistic practice in many parts of the world,
collecting high-quality and low-cost code-mixed data remains a challenge for
natural language processing (NLP) research. The recent proliferation of Large
Language Models (LLMs) compels one to ask: how capable are these systems in
generating code-mixed data? In this paper, we explore prompting multilingual
LLMs in a zero-shot manner to generate code-mixed data for seven languages in
South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese,
Tamil, and Singlish. We find that publicly available multilingual
instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of
producing texts with phrases or clauses from different languages. ChatGPT
exhibits inconsistent capabilities in generating code-mixed texts, wherein its
performance varies depending on the prompt template and language pairing. For
instance, ChatGPT generates fluent and natural Singlish texts (an English-based
creole spoken in Singapore), but for English-Tamil language pair, the system
mostly produces grammatically incorrect or semantically meaningless utterances.
Furthermore, it may erroneously introduce languages not specified in the
prompt. Based on our investigation, existing multilingual LLMs exhibit a wide
range of proficiency in code-mixed data generation for SEA languages. As such,
we advise against using LLMs in this context without extensive human checks.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 18:16:30 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 14:59:26 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Sep 2023 03:20:41 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Sep 2023 16:35:30 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Yong",
"Zheng-Xin",
""
],
[
"Zhang",
"Ruochen",
""
],
[
"Forde",
"Jessica Zosa",
""
],
[
"Wang",
"Skyler",
""
],
[
"Subramonian",
"Arjun",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Sutawika",
"Lintang",
""
],
[
"Cruz",
"Jan Christian Blaise",
""
],
[
"Tan",
"Yin Lin",
""
],
[
"Phan",
"Long",
""
],
[
"Garcia",
"Rowena",
""
],
[
"Solorio",
"Thamar",
""
],
[
"Aji",
"Alham Fikri",
""
]
] |
new_dataset
| 0.955723 |
2305.06456
|
Zhengyi Luo
|
Zhengyi Luo, Jinkun Cao, Alexander Winkler, Kris Kitani, Weipeng Xu
|
Perpetual Humanoid Control for Real-time Simulated Avatars
|
ICCV 2023. Project page: https://zhengyiluo.github.io/PHC/
| null | null | null |
cs.CV cs.GR cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a physics-based humanoid controller that achieves high-fidelity
motion imitation and fault-tolerant behavior in the presence of noisy input
(e.g. pose estimates from video or generated from language) and unexpected
falls. Our controller scales up to learning ten thousand motion clips without
using any external stabilizing forces and learns to naturally recover from
fail-state. Given reference motion, our controller can perpetually control
simulated avatars without requiring resets. At its core, we propose the
progressive multiplicative control policy (PMCP), which dynamically allocates
new network capacity to learn harder and harder motion sequences. PMCP allows
efficient scaling for learning from large-scale motion databases and adding new
tasks, such as fail-state recovery, without catastrophic forgetting. We
demonstrate the effectiveness of our controller by using it to imitate noisy
poses from video-based pose estimators and language-based motion generators in
a live and real-time multi-person avatar use case.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 20:51:37 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 22:05:21 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 19:05:13 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Luo",
"Zhengyi",
""
],
[
"Cao",
"Jinkun",
""
],
[
"Winkler",
"Alexander",
""
],
[
"Kitani",
"Kris",
""
],
[
"Xu",
"Weipeng",
""
]
] |
new_dataset
| 0.99584 |
2306.17061
|
Haocong Luo
|
Haocong Luo, Ataberk Olgun, A. Giray Ya\u{g}l{\i}k\c{c}{\i}, Yahya Can
Tu\u{g}rul, Steve Rhyner, Meryem Banu Cavlak, Jo\"el Lindegger, Mohammad
Sadrosadati, Onur Mutlu
|
RowPress: Amplifying Read Disturbance in Modern DRAM Chips
|
Extended version of the paper "RowPress: Amplifying Read Disturbance
in Modern DRAM Chips" at the 50th Annual International Symposium on Computer
Architecture (ISCA), 2023
| null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Memory isolation is critical for system reliability, security, and safety.
Unfortunately, read disturbance can break memory isolation in modern DRAM
chips. For example, RowHammer is a well-studied read-disturb phenomenon where
repeatedly opening and closing (i.e., hammering) a DRAM row many times causes
bitflips in physically nearby rows.
This paper experimentally demonstrates and analyzes another widespread
read-disturb phenomenon, RowPress, in real DDR4 DRAM chips. RowPress breaks
memory isolation by keeping a DRAM row open for a long period of time, which
disturbs physically nearby rows enough to cause bitflips. We show that RowPress
amplifies DRAM's vulnerability to read-disturb attacks by significantly
reducing the number of row activations needed to induce a bitflip by one to two
orders of magnitude under realistic conditions. In extreme cases, RowPress
induces bitflips in a DRAM row when an adjacent row is activated only once. Our
detailed characterization of 164 real DDR4 DRAM chips shows that RowPress 1)
affects chips from all three major DRAM manufacturers, 2) gets worse as DRAM
technology scales down to smaller node sizes, and 3) affects a different set of
DRAM cells from RowHammer and behaves differently from RowHammer as temperature
and access pattern changes.
We demonstrate in a real DDR4-based system with RowHammer protection that 1)
a user-level program induces bitflips by leveraging RowPress while conventional
RowHammer cannot do so, and 2) a memory controller that adaptively keeps the
DRAM row open for a longer period of time based on access pattern can
facilitate RowPress-based attacks. To prevent bitflips due to RowPress, we
describe and evaluate a new methodology that adapts existing RowHammer
mitigation techniques to also mitigate RowPress with low additional performance
overhead. We open source all our code and data to facilitate future research on
RowPress.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 16:09:56 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 14:01:07 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 16:27:19 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Luo",
"Haocong",
""
],
[
"Olgun",
"Ataberk",
""
],
[
"Yağlıkçı",
"A. Giray",
""
],
[
"Tuğrul",
"Yahya Can",
""
],
[
"Rhyner",
"Steve",
""
],
[
"Cavlak",
"Meryem Banu",
""
],
[
"Lindegger",
"Joël",
""
],
[
"Sadrosadati",
"Mohammad",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.995122 |
2307.10344
|
Roberto Murcio
|
Roberto Murcio, Nilufer Sari Aslam and Joana Barros
|
Post-pandemic mobility patterns in London
|
version 2 - Case of study added
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding human mobility is crucial for urban and transport studies in
cities. People's daily activities provide valuable insight, such as where
people live, work, shop, leisure or eat during midday or after-work hours.
However, such activities are changed due to travel behaviours after COVID-19 in
cities. This study examines the mobility patterns captured from mobile phone
apps to explore the behavioural patterns established since the COVID-19
lockdowns triggered a series of changes in urban environments.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2023 22:41:47 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 18:08:56 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Murcio",
"Roberto",
""
],
[
"Aslam",
"Nilufer Sari",
""
],
[
"Barros",
"Joana",
""
]
] |
new_dataset
| 0.998031 |
2307.10705
|
Quang Huy Che
|
Quang Huy Che and Dinh Phuc Nguyen and Minh Quan Pham and Duc Khai Lam
|
TwinLiteNet: An Efficient and Lightweight Model for Driveable Area and
Lane Segmentation in Self-Driving Cars
|
Accepted by MAPR 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Semantic segmentation is a common task in autonomous driving to understand
the surrounding environment. Driveable Area Segmentation and Lane Detection are
particularly important for safe and efficient navigation on the road. However,
original semantic segmentation models are computationally expensive and require
high-end hardware, which is not feasible for embedded systems in autonomous
vehicles. This paper proposes a lightweight model for driveable area and
lane line segmentation. TwinLiteNet is designed to be computationally cheap but achieves accurate
and efficient segmentation results. We evaluate TwinLiteNet on the BDD100K
dataset and compare it with modern models. Experimental results show that our
TwinLiteNet performs similarly to existing approaches, requiring significantly
fewer computational resources. Specifically, TwinLiteNet achieves a mIoU score
of 91.3% for the Drivable Area task and 31.08% IoU for the Lane Detection task
with only 0.4 million parameters and achieves 415 FPS on GPU RTX A5000.
Furthermore, TwinLiteNet can run in real time on embedded devices with limited
computing power, especially since it achieves 60 FPS on the Jetson Xavier NX, making
it an ideal solution for self-driving vehicles. Code is available at
https://github.com/chequanghuy/TwinLiteNet.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 08:53:47 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 04:52:57 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jul 2023 08:23:19 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Sep 2023 07:21:05 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Che",
"Quang Huy",
""
],
[
"Nguyen",
"Dinh Phuc",
""
],
[
"Pham",
"Minh Quan",
""
],
[
"Lam",
"Duc Khai",
""
]
] |
new_dataset
| 0.988387 |
2307.14991
|
Jiyang Zhang
|
Jiyang Zhang, Pengyu Nie, Junyi Jessy Li, Milos Gligoric
|
Multilingual Code Co-Evolution Using Large Language Models
|
FSE 2023 (camera ready)
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Many software projects implement APIs and algorithms in multiple programming
languages. Maintaining such projects is tiresome, as developers have to ensure
that any change (e.g., a bug fix or a new feature) is being propagated, timely
and without errors, to implementations in other programming languages. In the
world of ever-changing software, using rule-based translation tools (i.e.,
transpilers) or machine learning models for translating code from one language
to another provides limited value. Translating each time the entire codebase
from one language to another is not the way developers work. In this paper, we
target a novel task: translating code changes from one programming language to
another using large language models (LLMs). We design and implement the first
LLM, dubbed Codeditor, to tackle this task. Codeditor explicitly models code
changes as edit sequences and learns to correlate changes across programming
languages. To evaluate Codeditor, we collect a corpus of 6,613 aligned code
changes from 8 pairs of open-source software projects implementing similar
functionalities in two programming languages (Java and C#). Results show that
Codeditor outperforms the state-of-the-art approaches by a large margin on all
commonly used automatic metrics. Our work also reveals that Codeditor is
complementary to the existing generation-based models, and their combination
ensures even greater performance.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 16:37:30 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 19:37:27 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Zhang",
"Jiyang",
""
],
[
"Nie",
"Pengyu",
""
],
[
"Li",
"Junyi Jessy",
""
],
[
"Gligoric",
"Milos",
""
]
] |
new_dataset
| 0.997124 |
2308.01040
|
Xinfeng Li
|
Xinfeng Li, Chen Yan, Xuancun Lu, Zihan Zeng, Xiaoyu Ji, Wenyuan Xu
|
Inaudible Adversarial Perturbation: Manipulating the Recognition of User
Speech in Real Time
|
Accepted by NDSS Symposium 2024. Please cite this paper as "Xinfeng
Li, Chen Yan, Xuancun Lu, Zihan Zeng, Xiaoyu Ji, Wenyuan Xu. Inaudible
Adversarial Perturbation: Manipulating the Recognition of User Speech in Real
Time. In Network and Distributed System Security (NDSS) Symposium 2024."
| null | null | null |
cs.CR cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Automatic speech recognition (ASR) systems have been shown to be vulnerable
to adversarial examples (AEs). Recent successes all assume that users will not
notice or disrupt the attack process despite the existence of music/noise-like
sounds and spontaneous responses from voice assistants. Nonetheless, in
practical user-present scenarios, user awareness may nullify existing attack
attempts that launch unexpected sounds or ASR usage. In this paper, we seek to
bridge the gap in existing research and extend the attack to user-present
scenarios. We propose VRIFLE, an inaudible adversarial perturbation (IAP)
attack via ultrasound delivery that can manipulate ASRs as a user speaks. The
inherent differences between audible sounds and ultrasounds make IAP delivery
face unprecedented challenges such as distortion, noise, and instability. In
this regard, we design a novel ultrasonic transformation model to enhance the
crafted perturbation to be physically effective and even survive long-distance
delivery. We further enable VRIFLE's robustness by adopting a series of
augmentation on user and real-world variations during the generation process.
In this way, VRIFLE features an effective real-time manipulation of the ASR
output from different distances and under any speech of users, with an
alter-and-mute strategy that suppresses the impact of user disruption. Our
extensive experiments in both digital and physical worlds verify VRIFLE's
effectiveness under various configurations, robustness against six kinds of
defenses, and universality in a targeted manner. We also show that VRIFLE can
be delivered with a portable attack device and even everyday-life loudspeakers.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 09:32:17 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 04:32:11 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 04:14:48 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Li",
"Xinfeng",
""
],
[
"Yan",
"Chen",
""
],
[
"Lu",
"Xuancun",
""
],
[
"Zeng",
"Zihan",
""
],
[
"Ji",
"Xiaoyu",
""
],
[
"Xu",
"Wenyuan",
""
]
] |
new_dataset
| 0.996677 |
2308.02756
|
Jitesh Joshi
|
Jitesh Joshi, Katherine Wang, Youngjun Cho
|
PhysioKit: Open-source, Low-cost Physiological Computing Toolkit for
Single and Multi-user Studies
|
25 pages, 8 figures, 4 tables
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of physiological sensors opens new opportunities to explore
interactions, conduct experiments and evaluate the user experience with
continuous monitoring of bodily functions. Commercial devices, however, can be
costly or limit access to raw waveform data, while low-cost sensors are
effort-intensive to set up. To address these challenges, we introduce
PhysioKit, an open-source, low-cost physiological computing toolkit. PhysioKit
provides a one-stop pipeline consisting of (i) a sensing and data acquisition
layer that can be configured in a modular manner per research needs, (ii) a
software application layer that enables data acquisition, real-time
visualization and machine learning (ML)-enabled signal quality assessment. This
also supports basic visual biofeedback configurations and synchronized
acquisition for co-located or remote multi-user settings. In a validation study
with 16 participants, PhysioKit shows strong agreement with research-grade
sensors in measuring heart rate and heart rate variability metrics.
Furthermore, we report usability survey results from 10 small-project teams (44
individual members in total) who used PhysioKit for 4-6 weeks, providing
insights into its use cases and research benefits. Lastly, we discuss the
extensibility and potential impact of the toolkit on the research community.
|
[
{
"version": "v1",
"created": "Sat, 5 Aug 2023 00:54:29 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 17:03:31 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Joshi",
"Jitesh",
""
],
[
"Wang",
"Katherine",
""
],
[
"Cho",
"Youngjun",
""
]
] |
new_dataset
| 0.997992 |
2308.04904
|
Tengchuan Kou
|
Tengchuan Kou, Xiaohong Liu, Wei Sun, Jun Jia, Xiongkuo Min, Guangtao
Zhai, Ning Liu
|
StableVQA: A Deep No-Reference Quality Assessment Model for Video
Stability
| null | null |
10.1145/3581783.3611860
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video shakiness is an unpleasant distortion of User Generated Content (UGC)
videos, which is usually caused by the unstable hold of cameras. In recent
years, many video stabilization algorithms have been proposed, yet no specific
and accurate metric enables comprehensively evaluating the stability of videos.
Indeed, most existing quality assessment models evaluate video quality as a
whole without specifically taking the subjective experience of video stability
into consideration. Therefore, these models cannot measure the video stability
explicitly and precisely when severe shakes are present. In addition, there is
no publicly available large-scale video database that includes shaky videos of
various degrees with the corresponding subjective scores, which hinders the
development of Video Quality Assessment for Stability (VQA-S). To this end, we
build a new database named StableDB that contains 1,952 diversely-shaky UGC
videos, where each video has a Mean Opinion Score (MOS) on the degree of video
stability rated by 34 subjects. Moreover, we elaborately design a novel VQA-S
model named StableVQA, which consists of three feature extractors to acquire
the optical flow, semantic, and blur features respectively, and a regression
layer to predict the final stability score. Extensive experiments demonstrate
that the StableVQA achieves a higher correlation with subjective opinions than
the existing VQA-S models and generic VQA models. The database and codes are
available at https://github.com/QMME/StableVQA.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 12:04:36 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Aug 2023 03:52:49 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Kou",
"Tengchuan",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Sun",
"Wei",
""
],
[
"Jia",
"Jun",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Liu",
"Ning",
""
]
] |
new_dataset
| 0.983055 |
2308.10161
|
Qiao Yan
|
Qiao Yan, Yihan Wang
|
ThermRad: A Multi-modal Dataset for Robust 3D Object Detection under
Challenging Conditions
|
At this time, we have not reached a definitive agreement regarding
the ownership and copyright of this dataset. Due to the unresolved issue
regarding the dataset, I am writing to formally request the withdrawal of our
paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust 3D object detection in extreme weather and illumination conditions is
a challenging task. While radars and thermal cameras are known for their
resilience to these conditions, few studies have been conducted on
radar-thermal fusion due to the lack of corresponding datasets. To address this
gap, we first present a new multi-modal dataset called ThermRad, which includes
a 3D LiDAR, a 4D radar, an RGB camera and a thermal camera. This dataset is
unique because it includes data from all four sensors in extreme weather
conditions, providing a valuable resource for future research in this area. To
validate the robustness of 4D radars and thermal cameras for 3D object
detection in challenging weather conditions, we propose a new multi-modal
fusion method called RTDF-RCNN, which leverages the complementary strengths of
4D radars and thermal cameras to boost object detection performance. To further
prove the effectiveness of our proposed framework, we re-implement
state-of-the-art (SOTA) 3D detectors on our dataset as benchmarks for
evaluation. Our method achieves significant enhancements in detecting cars,
pedestrians, and cyclists, with improvements of over 7.98%, 24.27%, and 27.15%,
respectively, while achieving comparable results to LiDAR-based approaches. Our
contributions in both the ThermRad dataset and the new multi-modal fusion
method provide a new approach to robust 3D object detection in adverse weather
and illumination conditions. The ThermRad dataset will be released.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 04:34:30 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 07:38:50 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 09:45:02 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Yan",
"Qiao",
""
],
[
"Wang",
"Yihan",
""
]
] |
new_dataset
| 0.999894 |
2308.16139
|
Jan Egger
|
Jianning Li, Antonio Pepe, Christina Gsaxner, Gijs Luijten, Yuan Jin,
Narmada Ambigapathy, Enrico Nasca, Naida Solak, Gian Marco Melito, Viet Duc
Vu, Afaque R. Memon, Xiaojun Chen, Jan Stefan Kirschke, Ezequiel de la Rosa,
Patrick Ferdinand Christ, Hongwei Bran Li, David G. Ellis, Michele R.
Aizenberg, Sergios Gatidis, Thomas K\"ustner, Nadya Shusharina, Nicholas
Heller, Vincent Andrearczyk, Adrien Depeursinge, Mathieu Hatt, Anjany
Sekuboyina, Maximilian L\"offler, Hans Liebl, Reuben Dorent, Tom Vercauteren,
Jonathan Shapey, Aaron Kujawa, Stefan Cornelissen, Patrick Langenhuizen,
Achraf Ben-Hamadou, Ahmed Rekik, Sergi Pujades, Edmond Boyer, Federico
Bolelli, Costantino Grana, Luca Lumetti, Hamidreza Salehi, Jun Ma, Yao Zhang,
Ramtin Gharleghi, Susann Beier, Arcot Sowmya, Eduardo A. Garza-Villarreal,
Thania Balducci, Diego Angeles-Valdez, Roberto Souza, Leticia Rittner,
Richard Frayne, Yuanfeng Ji, Soumick Chatterjee, Florian Dubost, Stefanie
Schreiber, Hendrik Mattern, Oliver Speck, Daniel Haehn, Christoph John,
Andreas N\"urnberger, Jo\~ao Pedrosa, Carlos Ferreira, Guilherme Aresta,
Ant\'onio Cunha, Aur\'elio Campilho, Yannick Suter, Jose Garcia, Alain
Lalande, Emmanuel Audenaert, Claudia Krebs, Timo Van Leeuwen, Evie Vereecke,
Rainer R\"ohrig, Frank H\"olzle, Vahid Badeli, Kathrin Krieger, Matthias
Gunzer, Jianxu Chen, Amin Dada, Miriam Balzer, Jana Fragemann, Frederic
Jonske, Moritz Rempe, Stanislav Malorodov, Fin H. Bahnsen, Constantin
Seibold, Alexander Jaus, Ana Sofia Santos, Mariana Lindo, Andr\'e Ferreira,
Victor Alves, Michael Kamp, Amr Abourayya, Felix Nensa, Fabian H\"orst,
Alexander Brehmer, Lukas Heine, Lars E. Podleska, Matthias A. Fink, Julius
Keyl, Konstantinos Tserpes, Moon-Sung Kim, Shireen Elhabian, Hans Lamecker,
D\v{z}enan Zuki\'c, Beatriz Paniagua, Christian Wachinger, Martin Urschler,
Luc Duong, Jakob Wasserthal, Peter F. Hoyer, Oliver Basu, Thomas Maal, Max J.
H. Witjes, Ti-chiun Chang, Seyed-Ahmad Ahmadi, Ping Luo, Bjoern Menze,
Mauricio Reyes, Christos Davatzikos, Behrus Puladi, Jens Kleesiek, Jan Egger
|
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer
Vision
|
21 pages
| null | null | null |
cs.CV cs.DB cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present MedShapeNet, a large collection of anatomical shapes (e.g., bones,
organs, vessels) and 3D surgical instrument models. Prior to the deep learning
era, the broad application of statistical shape models (SSMs) in medical image
analysis is evidence that shapes have been commonly used to describe medical
data. Nowadays, however, state-of-the-art (SOTA) deep learning algorithms in
medical imaging are predominantly voxel-based. In computer vision, on the
contrary, shapes (including, voxel occupancy grids, meshes, point clouds and
implicit surface models) are preferred data representations in 3D, as seen from
the numerous shape-related publications in premier vision conferences, such as
the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), as
well as the increasing popularity of ShapeNet (about 51,300 models) and
Princeton ModelNet (127,915 models) in computer vision research. MedShapeNet is
created as an alternative to these commonly used shape benchmarks to facilitate
the translation of data-driven vision algorithms to medical applications, and
it extends the opportunities to adapt SOTA vision algorithms to solve critical
medical problems. Besides, the majority of the medical shapes in MedShapeNet
are modeled directly on the imaging data of real patients, and therefore it
complements well existing shape benchmarks comprising computer-aided design
(CAD) models. MedShapeNet currently includes more than 100,000 medical shapes,
and provides annotations in the form of paired data. It is therefore also a
freely available repository of 3D models for extended reality (virtual reality
- VR, augmented reality - AR, mixed reality - MR) and medical 3D printing. This
white paper describes in detail the motivations behind MedShapeNet, the shape
acquisition procedures, the use cases, as well as the usage of the online shape
search portal: https://medshapenet.ikim.nrw/
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 16:52:20 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 07:26:50 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Sep 2023 09:37:47 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Li",
"Jianning",
""
],
[
"Pepe",
"Antonio",
""
],
[
"Gsaxner",
"Christina",
""
],
[
"Luijten",
"Gijs",
""
],
[
"Jin",
"Yuan",
""
],
[
"Ambigapathy",
"Narmada",
""
],
[
"Nasca",
"Enrico",
""
],
[
"Solak",
"Naida",
""
],
[
"Melito",
"Gian Marco",
""
],
[
"Vu",
"Viet Duc",
""
],
[
"Memon",
"Afaque R.",
""
],
[
"Chen",
"Xiaojun",
""
],
[
"Kirschke",
"Jan Stefan",
""
],
[
"de la Rosa",
"Ezequiel",
""
],
[
"Christ",
"Patrick Ferdinand",
""
],
[
"Li",
"Hongwei Bran",
""
],
[
"Ellis",
"David G.",
""
],
[
"Aizenberg",
"Michele R.",
""
],
[
"Gatidis",
"Sergios",
""
],
[
"Küstner",
"Thomas",
""
],
[
"Shusharina",
"Nadya",
""
],
[
"Heller",
"Nicholas",
""
],
[
"Andrearczyk",
"Vincent",
""
],
[
"Depeursinge",
"Adrien",
""
],
[
"Hatt",
"Mathieu",
""
],
[
"Sekuboyina",
"Anjany",
""
],
[
"Löffler",
"Maximilian",
""
],
[
"Liebl",
"Hans",
""
],
[
"Dorent",
"Reuben",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Shapey",
"Jonathan",
""
],
[
"Kujawa",
"Aaron",
""
],
[
"Cornelissen",
"Stefan",
""
],
[
"Langenhuizen",
"Patrick",
""
],
[
"Ben-Hamadou",
"Achraf",
""
],
[
"Rekik",
"Ahmed",
""
],
[
"Pujades",
"Sergi",
""
],
[
"Boyer",
"Edmond",
""
],
[
"Bolelli",
"Federico",
""
],
[
"Grana",
"Costantino",
""
],
[
"Lumetti",
"Luca",
""
],
[
"Salehi",
"Hamidreza",
""
],
[
"Ma",
"Jun",
""
],
[
"Zhang",
"Yao",
""
],
[
"Gharleghi",
"Ramtin",
""
],
[
"Beier",
"Susann",
""
],
[
"Sowmya",
"Arcot",
""
],
[
"Garza-Villarreal",
"Eduardo A.",
""
],
[
"Balducci",
"Thania",
""
],
[
"Angeles-Valdez",
"Diego",
""
],
[
"Souza",
"Roberto",
""
],
[
"Rittner",
"Leticia",
""
],
[
"Frayne",
"Richard",
""
],
[
"Ji",
"Yuanfeng",
""
],
[
"Chatterjee",
"Soumick",
""
],
[
"Dubost",
"Florian",
""
],
[
"Schreiber",
"Stefanie",
""
],
[
"Mattern",
"Hendrik",
""
],
[
"Speck",
"Oliver",
""
],
[
"Haehn",
"Daniel",
""
],
[
"John",
"Christoph",
""
],
[
"Nürnberger",
"Andreas",
""
],
[
"Pedrosa",
"João",
""
],
[
"Ferreira",
"Carlos",
""
],
[
"Aresta",
"Guilherme",
""
],
[
"Cunha",
"António",
""
],
[
"Campilho",
"Aurélio",
""
],
[
"Suter",
"Yannick",
""
],
[
"Garcia",
"Jose",
""
],
[
"Lalande",
"Alain",
""
],
[
"Audenaert",
"Emmanuel",
""
],
[
"Krebs",
"Claudia",
""
],
[
"Van Leeuwen",
"Timo",
""
],
[
"Vereecke",
"Evie",
""
],
[
"Röhrig",
"Rainer",
""
],
[
"Hölzle",
"Frank",
""
],
[
"Badeli",
"Vahid",
""
],
[
"Krieger",
"Kathrin",
""
],
[
"Gunzer",
"Matthias",
""
],
[
"Chen",
"Jianxu",
""
],
[
"Dada",
"Amin",
""
],
[
"Balzer",
"Miriam",
""
],
[
"Fragemann",
"Jana",
""
],
[
"Jonske",
"Frederic",
""
],
[
"Rempe",
"Moritz",
""
],
[
"Malorodov",
"Stanislav",
""
],
[
"Bahnsen",
"Fin H.",
""
],
[
"Seibold",
"Constantin",
""
],
[
"Jaus",
"Alexander",
""
],
[
"Santos",
"Ana Sofia",
""
],
[
"Lindo",
"Mariana",
""
],
[
"Ferreira",
"André",
""
],
[
"Alves",
"Victor",
""
],
[
"Kamp",
"Michael",
""
],
[
"Abourayya",
"Amr",
""
],
[
"Nensa",
"Felix",
""
],
[
"Hörst",
"Fabian",
""
],
[
"Brehmer",
"Alexander",
""
],
[
"Heine",
"Lukas",
""
],
[
"Podleska",
"Lars E.",
""
],
[
"Fink",
"Matthias A.",
""
],
[
"Keyl",
"Julius",
""
],
[
"Tserpes",
"Konstantinos",
""
],
[
"Kim",
"Moon-Sung",
""
],
[
"Elhabian",
"Shireen",
""
],
[
"Lamecker",
"Hans",
""
],
[
"Zukić",
"Dženan",
""
],
[
"Paniagua",
"Beatriz",
""
],
[
"Wachinger",
"Christian",
""
],
[
"Urschler",
"Martin",
""
],
[
"Duong",
"Luc",
""
],
[
"Wasserthal",
"Jakob",
""
],
[
"Hoyer",
"Peter F.",
""
],
[
"Basu",
"Oliver",
""
],
[
"Maal",
"Thomas",
""
],
[
"Witjes",
"Max J. H.",
""
],
[
"Chang",
"Ti-chiun",
""
],
[
"Ahmadi",
"Seyed-Ahmad",
""
],
[
"Luo",
"Ping",
""
],
[
"Menze",
"Bjoern",
""
],
[
"Reyes",
"Mauricio",
""
],
[
"Davatzikos",
"Christos",
""
],
[
"Puladi",
"Behrus",
""
],
[
"Kleesiek",
"Jens",
""
],
[
"Egger",
"Jan",
""
]
] |
new_dataset
| 0.999796 |
2308.16349
|
Kilichbek Haydarov
|
Kilichbek Haydarov, Xiaoqian Shen, Avinash Madasu, Mahmoud Salem,
Li-Jia Li, Gamaleldin Elsayed, Mohamed Elhoseiny
|
Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning
Based on Visually Grounded Conversations
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Affective Visual Dialog, an emotion explanation and reasoning
task as a testbed for research on understanding the formation of emotions in
visually grounded conversations. The task involves three skills: (1)
Dialog-based Question Answering, (2) Dialog-based Emotion Prediction, and (3)
Affective emotion explanation generation based on the dialog. Our key
contribution is the collection of a large-scale dataset, dubbed AffectVisDial,
consisting of 50K 10-turn visually grounded dialogs as well as concluding
emotion attributions and dialog-informed textual emotion explanations,
resulting in a total of 27,180 working hours. We explain our design decisions
in collecting the dataset and introduce the questioner and answerer tasks that
are associated with the participants in the conversation. We train and
demonstrate solid Affective Visual Dialog baselines adapted from
state-of-the-art models. Remarkably, the responses generated by our models show
promising emotional reasoning abilities in response to visually grounded
conversations. Our project page is available at
https://affective-visual-dialog.github.io.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 22:50:32 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 04:37:37 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Haydarov",
"Kilichbek",
""
],
[
"Shen",
"Xiaoqian",
""
],
[
"Madasu",
"Avinash",
""
],
[
"Salem",
"Mahmoud",
""
],
[
"Li",
"Li-Jia",
""
],
[
"Elsayed",
"Gamaleldin",
""
],
[
"Elhoseiny",
"Mohamed",
""
]
] |
new_dataset
| 0.999338 |
2309.03378
|
Radu Tudor Ionescu
|
Codrut Rotaru, Nicolae-Catalin Ristea, Radu Tudor Ionescu
|
RoDia: A New Dataset for Romanian Dialect Identification from Speech
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialect identification is a critical task in speech processing and language
technology, enhancing various applications such as speech recognition, speaker
verification, and many others. While most research studies have been dedicated
to dialect identification in widely spoken languages, limited attention has
been given to dialect identification in low-resource languages, such as
Romanian. To address this research gap, we introduce RoDia, the first dataset
for Romanian dialect identification from speech. The RoDia dataset includes a
varied compilation of speech samples from five distinct regions of Romania,
covering both urban and rural environments, totaling 2 hours of manually
annotated speech data. Along with our dataset, we introduce a set of
competitive models to be used as baselines for future research. The top scoring
model achieves a macro F1 score of 59.83% and a micro F1 score of 62.08%,
indicating that the task is challenging. We thus believe that RoDia is a
valuable resource that will stimulate research aiming to address the challenges
of Romanian dialect identification. We publicly release our dataset and code at
https://github.com/codrut2/RoDia.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 21:56:24 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 14:07:54 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Rotaru",
"Codrut",
""
],
[
"Ristea",
"Nicolae-Catalin",
""
],
[
"Ionescu",
"Radu Tudor",
""
]
] |
new_dataset
| 0.9999 |
2309.03905
|
Renrui Zhang
|
Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao,
Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, Xudong Lu, Shuai Ren, Yafei
Wen, Xiaoxin Chen, Xiangyu Yue, Hongsheng Li, Yu Qiao
|
ImageBind-LLM: Multi-modality Instruction Tuning
|
Code is available at https://github.com/OpenGVLab/LLaMA-Adapter
| null | null | null |
cs.MM cs.CL cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ImageBind-LLM, a multi-modality instruction tuning method of large
language models (LLMs) via ImageBind. Existing works mainly focus on language
and image instruction tuning, different from which, our ImageBind-LLM can
respond to multi-modality conditions, including audio, 3D point clouds, video,
and their embedding-space arithmetic by only image-text alignment training.
During training, we adopt a learnable bind network to align the embedding space
between LLaMA and ImageBind's image encoder. Then, the image features
transformed by the bind network are added to word tokens of all layers in
LLaMA, which progressively injects visual instructions via an attention-free
and zero-initialized gating mechanism. Aided by the joint embedding of
ImageBind, the simple image-text training enables our model to exhibit superior
multi-modality instruction-following capabilities. During inference, the
multi-modality inputs are fed into the corresponding ImageBind encoders, and
processed by a proposed visual cache model for further cross-modal embedding
enhancement. The training-free cache model retrieves from three million image
features extracted by ImageBind, which effectively mitigates the
training-inference modality discrepancy. Notably, with our approach,
ImageBind-LLM can respond to instructions of diverse modalities and demonstrate
significant language generation quality. Code is released at
https://github.com/OpenGVLab/LLaMA-Adapter.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 17:59:45 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 20:25:16 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Han",
"Jiaming",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Gao",
"Peng",
""
],
[
"Xu",
"Peng",
""
],
[
"Xiao",
"Han",
""
],
[
"Zhang",
"Kaipeng",
""
],
[
"Liu",
"Chris",
""
],
[
"Wen",
"Song",
""
],
[
"Guo",
"Ziyu",
""
],
[
"Lu",
"Xudong",
""
],
[
"Ren",
"Shuai",
""
],
[
"Wen",
"Yafei",
""
],
[
"Chen",
"Xiaoxin",
""
],
[
"Yue",
"Xiangyu",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.999457 |
2309.04198
|
Du Yanrui
|
Yanrui Du, Sendong Zhao, Muzhen Cai, Jianyu Chen, Haochun Wang, Yuhan
Chen, Haoqiang Guo, Bing Qin
|
The CALLA Dataset: Probing LLMs' Interactive Knowledge Acquisition from
Chinese Medical Literature
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The application of Large Language Models (LLMs) to the medical domain has
stimulated the interest of researchers. Recent studies have focused on
constructing Instruction Fine-Tuning (IFT) data through medical knowledge
graphs to enrich the interactive medical knowledge of LLMs. However, the
medical literature serving as a rich source of medical knowledge remains
unexplored. Our work introduces the CALLA dataset to probe LLMs' interactive
knowledge acquisition from Chinese medical literature. It assesses the
proficiency of LLMs in mastering medical knowledge through a free-dialogue
fact-checking task. We identify a phenomenon called the ``fact-following
response'', where LLMs tend to affirm facts mentioned in questions and display
a reluctance to challenge them. To eliminate the inaccurate evaluation caused
by this phenomenon, for the golden fact, we artificially construct test data
from two perspectives: one consistent with the fact and one inconsistent with
the fact. Drawing from the probing experiment on the CALLA dataset, we conclude
that IFT data highly correlated with the medical literature corpus serves as a
potent catalyst for LLMs, enabling themselves to skillfully employ the medical
knowledge acquired during the pre-training phase within interactive scenarios,
enhancing accuracy. Furthermore, we design a framework for automatically
constructing IFT data based on medical literature and discuss some real-world
applications.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 08:20:46 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 13:51:14 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Du",
"Yanrui",
""
],
[
"Zhao",
"Sendong",
""
],
[
"Cai",
"Muzhen",
""
],
[
"Chen",
"Jianyu",
""
],
[
"Wang",
"Haochun",
""
],
[
"Chen",
"Yuhan",
""
],
[
"Guo",
"Haoqiang",
""
],
[
"Qin",
"Bing",
""
]
] |
new_dataset
| 0.998697 |
2309.04266
|
Naoto Sato
|
Naoto Sato and Ryota Katsube
|
Locating Buggy Segments in Quantum Program Debugging
| null | null | null | null |
cs.SE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
When a bug is detected by testing a quantum program on a quantum computer, we
want to determine its detailed location to fix it. To locate the bug, the
quantum program is divided into several segments and each segment is tested.
However, to prepare a quantum state that is input to a segment, it is necessary
to execute all the segments ahead of that segment in a quantum computer. This
means that the cost of testing each segment depends on its location. We can
also locate a buggy segment only if it is confirmed that there are no bugs in
all segments ahead of that buggy segment. Since a quantum program is tested
statistically on the basis of measurement results, there is a tradeoff between
testing accuracy and cost. Although these characteristics are unique to quantum
programs and complicate locating bugs, they have not been investigated. We
suggest for the first time that these characteristics should be considered to
efficiently locate bugs. We are also the first to propose a bug-locating method
that takes these characteristics into account. The results from experiments
indicate that the bug-locating cost that is represented as the number of
executed quantum gates can be reduced with the proposed method compared with
naive methods.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 11:25:04 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 13:44:57 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Sato",
"Naoto",
""
],
[
"Katsube",
"Ryota",
""
]
] |
new_dataset
| 0.999785 |
2309.04408
|
Khandaker Foysal Haque
|
Khandaker Foysal Haque, Francesca Meneghello, Francesco Restuccia
|
Wi-BFI: Extracting the IEEE 802.11 Beamforming Feedback Information from
Commercial Wi-Fi Devices
|
To be presented at ACM WiNTECH, Madrid, Spain, October 6, 2023
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, researchers have shown that the beamforming feedback angles (BFAs)
used for Wi-Fi multiple-input multiple-output (MIMO) operations can be
effectively leveraged as a proxy of the channel frequency response (CFR) for
different purposes. Examples are passive human activity recognition and device
fingerprinting. However, even though the BFAs report frames are sent in clear
text, there is not yet a unified open-source tool to extract and decode the
BFAs from the frames. To fill this gap, we developed Wi-BFI, the first tool
that allows retrieving Wi-Fi BFAs and reconstructing the beamforming feedback
information (BFI) - a compressed representation of the CFR - from the BFAs
frames captured over the air. The tool supports BFAs extraction within both
IEEE 802.11ac and 802.11ax networks operating on radio channels with
160/80/40/20 MHz bandwidth. Both multi-user and single-user MIMO feedback can
be decoded through Wi-BFI. The tool supports real-time and offline extraction
and storage of BFAs and BFI. The real-time mode also includes a visual
representation of the channel state that continuously updates based on the
collected data. Wi-BFI code is open source and the tool is also available as a
pip package.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 16:12:27 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 17:23:08 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Haque",
"Khandaker Foysal",
""
],
[
"Meneghello",
"Francesca",
""
],
[
"Restuccia",
"Francesco",
""
]
] |
new_dataset
| 0.990367 |
2309.04801
|
Ole-Christoffer Granmo
|
Ole-Christoffer Granmo
|
TMComposites: Plug-and-Play Collaboration Between Specialized Tsetlin
Machines
|
8 pages, 6 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tsetlin Machines (TMs) provide a fundamental shift from arithmetic-based to
logic-based machine learning. Supporting convolution, they deal successfully
with image classification datasets like MNIST, Fashion-MNIST, and CIFAR-2.
However, the TM struggles with getting state-of-the-art performance on CIFAR-10
and CIFAR-100, representing more complex tasks. This paper introduces
plug-and-play collaboration between specialized TMs, referred to as TM
Composites. The collaboration relies on a TM's ability to specialize during
learning and to assess its competence during inference. When teaming up, the
most confident TMs make the decisions, relieving the uncertain ones. In this
manner, a TM Composite becomes more competent than its members, benefiting from
their specializations. The collaboration is plug-and-play in that members can
be combined in any way, at any time, without fine-tuning. We implement three TM
specializations in our empirical evaluation: Histogram of Gradients, Adaptive
Gaussian Thresholding, and Color Thermometers. The resulting TM Composite
increases accuracy on Fashion-MNIST by two percentage points, CIFAR-10 by
twelve points, and CIFAR-100 by nine points, yielding new state-of-the-art
results for TMs. Overall, we envision that TM Composites will enable an
ultra-low energy and transparent alternative to state-of-the-art deep learning
on more tasks and datasets.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 14:00:39 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 15:00:36 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Granmo",
"Ole-Christoffer",
""
]
] |
new_dataset
| 0.999842 |
2309.04914
|
Guoan Xu
|
Guoan Xu, Wenjing Jia, Tao Wu, Ligeng Chen
|
MFPNet: Multi-scale Feature Propagation Network For Lightweight Semantic
Segmentation
|
5 pages, 3 figures, 5 tables, conference
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In contrast to the abundant research focusing on large-scale models, the
progress in lightweight semantic segmentation appears to be advancing at a
comparatively slower pace. However, existing compact methods often suffer from
limited feature representation capability due to the shallowness of their
networks. In this paper, we propose a novel lightweight segmentation
architecture, called Multi-scale Feature Propagation Network (MFPNet), to
address the dilemma. Specifically, we design a robust Encoder-Decoder structure
featuring symmetrical residual blocks that consist of flexible bottleneck
residual modules (BRMs) to explore deep and rich multi-scale semantic context.
Furthermore, taking benefit from their capacity to model latent long-range
contextual relationships, we leverage Graph Convolutional Networks (GCNs) to
facilitate multi-scale feature propagation between the BRM blocks. When
evaluated on benchmark datasets, our proposed approach shows superior
segmentation results.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 02:02:29 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 05:08:47 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Xu",
"Guoan",
""
],
[
"Jia",
"Wenjing",
""
],
[
"Wu",
"Tao",
""
],
[
"Chen",
"Ligeng",
""
]
] |
new_dataset
| 0.994026 |
2309.05073
|
Jiong Wang
|
Jiong Wang, Fengyu Yang, Wenbo Gou, Bingliang Li, Danqi Yan, Ailing
Zeng, Yijun Gao, Junle Wang, Ruimao Zhang
|
FreeMan: Towards Benchmarking 3D Human Pose Estimation in the Wild
|
18 pages, 9 figures. Project page:
https://wangjiongw.github.io/freeman/ ; API:
https://github.com/wangjiongw/FreeMan_API
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Estimating the 3D structure of the human body from natural scenes is a
fundamental aspect of visual perception. This task carries great importance for
fields like AIGC and human-robot interaction. In practice, 3D human pose
estimation in real-world settings is a critical initial step in solving this
problem. However, the current datasets, often collected under controlled
laboratory conditions using complex motion capture equipment and unvarying
backgrounds, are insufficient. The absence of real-world datasets is stalling
the progress of this crucial task. To facilitate the development of 3D pose
estimation, we present FreeMan, the first large-scale, real-world multi-view
dataset. FreeMan was captured by synchronizing 8 smartphones across diverse
scenarios. It comprises 11M frames from 8000 sequences, viewed from different
perspectives. These sequences cover 40 subjects across 10 different scenarios,
each with varying lighting conditions. We have also established an automated,
precise labeling pipeline that allows for large-scale processing efficiently.
We provide comprehensive evaluation baselines for a range of tasks, underlining
the significant challenges posed by FreeMan. Further evaluations of standard
indoor/outdoor human sensing datasets reveal that FreeMan offers robust
representation transferability in real and complex scenes. FreeMan is now
publicly available at https://wangjiongw.github.io/freeman.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 16:42:11 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 15:39:30 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wang",
"Jiong",
""
],
[
"Yang",
"Fengyu",
""
],
[
"Gou",
"Wenbo",
""
],
[
"Li",
"Bingliang",
""
],
[
"Yan",
"Danqi",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Gao",
"Yijun",
""
],
[
"Wang",
"Junle",
""
],
[
"Zhang",
"Ruimao",
""
]
] |
new_dataset
| 0.998353 |
2309.05396
|
Haoxu Wang
|
Haoxu Wang and Fan Yu and Xian Shi and Yuezhang Wang and Shiliang
Zhang and Ming Li
|
SlideSpeech: A Large-Scale Slide-Enriched Audio-Visual Corpus
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Modal automatic speech recognition (ASR) techniques aim to leverage
additional modalities to improve the performance of speech recognition systems.
While existing approaches primarily focus on video or contextual information,
the utilization of extra supplementary textual information has been overlooked.
Recognizing the abundance of online conference videos with slides, which
provide rich domain-specific information in the form of text and images, we
release SlideSpeech, a large-scale audio-visual corpus enriched with slides.
The corpus contains 1,705 videos, 1,000+ hours, with 473 hours of high-quality
transcribed speech. Moreover, the corpus contains a significant amount of
real-time synchronized slides. In this work, we present the pipeline for
constructing the corpus and propose baseline methods for utilizing text
information in the visual slide context. Through the application of keyword
extraction and contextual ASR methods in the benchmark system, we demonstrate
the potential of improving speech recognition performance by incorporating
textual information from supplementary video slides.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 11:56:44 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 03:08:34 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wang",
"Haoxu",
""
],
[
"Yu",
"Fan",
""
],
[
"Shi",
"Xian",
""
],
[
"Wang",
"Yuezhang",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Li",
"Ming",
""
]
] |
new_dataset
| 0.999499 |
2309.05665
|
Ziwen Zhuang
|
Ziwen Zhuang, Zipeng Fu, Jianren Wang, Christopher Atkeson, Soeren
Schwertfeger, Chelsea Finn, Hang Zhao
|
Robot Parkour Learning
|
CoRL 2023 (Oral). Project website at https://robot-parkour.github.io
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parkour is a grand challenge for legged locomotion that requires robots to
overcome various obstacles rapidly in complex environments. Existing methods
can generate either diverse but blind locomotion skills or vision-based but
specialized skills by using reference animal data or complex rewards. However,
autonomous parkour requires robots to learn generalizable skills that are both
vision-based and diverse to perceive and react to various scenarios. In this
work, we propose a system for learning a single end-to-end vision-based parkour
policy of diverse parkour skills using a simple reward without any reference
motion data. We develop a reinforcement learning method inspired by direct
collocation to generate parkour skills, including climbing over high obstacles,
leaping over large gaps, crawling beneath low barriers, squeezing through thin
slits, and running. We distill these skills into a single vision-based parkour
policy and transfer it to a quadrupedal robot using its egocentric depth
camera. We demonstrate that our system can empower two different low-cost
robots to autonomously select and execute appropriate parkour skills to
traverse challenging real-world environments.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 17:59:17 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 03:01:55 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Zhuang",
"Ziwen",
""
],
[
"Fu",
"Zipeng",
""
],
[
"Wang",
"Jianren",
""
],
[
"Atkeson",
"Christopher",
""
],
[
"Schwertfeger",
"Soeren",
""
],
[
"Finn",
"Chelsea",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.997985 |
2309.05769
|
Kenneth Odoh E
|
Kenneth Odoh
|
Tortoise: An Authenticated Encryption Scheme
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We present Tortoise, an experimental nonce-based authenticated encryption
scheme modeled on the Synthetic Counter-in-Tweak framework to convert any block
cipher into Authenticated Encryption with Associated Data. Our work supports
two modes: nonce-respecting and nonce-misuse-resistant. \textbf{Source code}
available at
\url{https://github.com/kenluck2001/cipherResearch/tree/main/src/tortoise}.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 18:55:07 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Odoh",
"Kenneth",
""
]
] |
new_dataset
| 0.998356 |
2309.05810
|
Hongge Chen
|
Hongge Chen, Zhao Chen, Gregory P. Meyer, Dennis Park, Carl Vondrick,
Ashish Shrivastava, Yuning Chai
|
SHIFT3D: Synthesizing Hard Inputs For Tricking 3D Detectors
|
Accepted by ICCV 2023
| null | null | null |
cs.CV cs.CR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SHIFT3D, a differentiable pipeline for generating 3D shapes that
are structurally plausible yet challenging to 3D object detectors. In
safety-critical applications like autonomous driving, discovering such novel
challenging objects can offer insight into unknown vulnerabilities of 3D
detectors. By representing objects with a signed distance function (SDF), we
show that gradient error signals allow us to smoothly deform the shape or pose
of a 3D object in order to confuse a downstream 3D detector. Importantly, the
objects generated by SHIFT3D physically differ from the baseline object yet
retain a semantically recognizable shape. Our approach provides interpretable
failure modes for modern 3D object detectors, and can aid in preemptive
discovery of potential safety risks within 3D perception systems before these
risks become critical failures.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 20:28:18 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Chen",
"Hongge",
""
],
[
"Chen",
"Zhao",
""
],
[
"Meyer",
"Gregory P.",
""
],
[
"Park",
"Dennis",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Shrivastava",
"Ashish",
""
],
[
"Chai",
"Yuning",
""
]
] |
new_dataset
| 0.995906 |
2309.05818
|
Ahmad Sebaq
|
Yara Ali Alnaggar, Ahmad Sebaq, Karim Amer, ElSayed Naeem, Mohamed
Elhelw
|
Rice Plant Disease Detection and Diagnosis using Deep Convolutional
Neural Networks and Multispectral Imaging
| null | null |
10.1007/978-3-031-21595-7
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Rice is considered a strategic crop in Egypt as it is regularly consumed in
the Egyptian people's diet. Even though Egypt is the highest rice producer in
Africa with a share of 6 million tons per year, it still imports rice to
satisfy its local needs due to production loss, especially due to rice disease.
Rice blast disease is responsible for 30% loss in rice production worldwide.
Therefore, it is crucial to limit yield damage by detecting rice
crop diseases in their early stages. This paper introduces a public
multispectral and RGB images dataset and a deep learning pipeline for rice
plant disease detection using multi-modal data. The collected multispectral
images consist of Red, Green and Near-Infrared channels and we show that using
multispectral along with RGB channels as input achieves a higher F1 accuracy
compared to using RGB input only.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 20:51:21 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Alnaggar",
"Yara Ali",
""
],
[
"Sebaq",
"Ahmad",
""
],
[
"Amer",
"Karim",
""
],
[
"Naeem",
"ElSayed",
""
],
[
"Elhelw",
"Mohamed",
""
]
] |
new_dataset
| 0.995809 |
2309.05900
|
Gustavo Olague Dr.
|
Gustavo Olague, Roberto Pineda, Gerardo Ibarra-Vazquez, Matthieu
Olague, Axel Martinez, Sambit Bakshi, Jonathan Vargas and Isnardo Reducindo
|
Adversarial Attacks Assessment of Salient Object Detection via Symbolic
Learning
|
14 pages, 8 figures, 6 tables, IEEE Transactions on Emerging Topics
in Computing, Accepted for publication
| null | null | null |
cs.CV cs.CR cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Machine learning is at the center of mainstream technology and outperforms
classical approaches to handcrafted feature design. Aside from its learning
process for artificial feature extraction, it has an end-to-end paradigm from
input to output, reaching outstandingly accurate results. However, security
concerns about its robustness to malicious and imperceptible perturbations have
drawn attention since its prediction can be changed entirely. Salient object
detection is a research area where deep convolutional neural networks have
proven effective but whose trustworthiness represents a significant issue
requiring analysis and solutions to hackers' attacks. Brain programming is a
kind of symbolic learning in the vein of good old-fashioned artificial
intelligence. This work provides evidence that symbolic learning robustness is
crucial in designing reliable visual attention systems since it can withstand
even the most intense perturbations. We test this evolutionary computation
methodology against several adversarial attacks and noise perturbations using
standard databases and a real-world problem of a shorebird called the Snowy
Plover portraying a visual attention task. We compare our methodology with five
different deep learning approaches, proving that they do not match the symbolic
paradigm regarding robustness. All neural networks suffer significant
performance losses, while brain programming stands its ground and remains
unaffected. Also, by studying the Snowy Plover, we remark on the importance of
security in surveillance activities regarding wildlife protection and
conservation.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 01:03:43 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Olague",
"Gustavo",
""
],
[
"Pineda",
"Roberto",
""
],
[
"Ibarra-Vazquez",
"Gerardo",
""
],
[
"Olague",
"Matthieu",
""
],
[
"Martinez",
"Axel",
""
],
[
"Bakshi",
"Sambit",
""
],
[
"Vargas",
"Jonathan",
""
],
[
"Reducindo",
"Isnardo",
""
]
] |
new_dataset
| 0.982734 |