id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
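Each row below is one arXiv submission ending with a model prediction and its probability. As a minimal sketch of working with records in this schema (assuming the dump is exported as JSON Lines; the file name `arxiv_predictions.jsonl` and the pandas-based loading are illustrative assumptions, not part of this dataset card):

```python
import pandas as pd

# Hypothetical path: the actual distribution format of this dump is not specified here.
RECORDS_PATH = "arxiv_predictions.jsonl"

# Each record carries the columns listed above: id, submitter, authors, title, comments,
# journal-ref, doi, report-no, categories, license, abstract, versions, update_date,
# authors_parsed, prediction, probability.
df = pd.read_json(RECORDS_PATH, lines=True)

# Example query: high-confidence "new_dataset" predictions in the cs.CV category.
subset = df[
    (df["prediction"] == "new_dataset")
    & (df["probability"] >= 0.99)
    & df["categories"].str.contains("cs.CV", regex=False)
]
print(subset[["id", "title", "probability"]].head())
```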
2309.05953
|
Yufei Li
|
Yufei Li, Yanchi Liu, Haoyu Wang, Zhengzhang Chen, Wei Cheng, Yuncong
Chen, Wenchao Yu, Haifeng Chen, Cong Liu
|
GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection
|
Accepted by ICKG 2023
| null | null | null |
cs.LG cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Logs play a crucial role in system monitoring and debugging by recording
valuable system information, including events and states. Although various
methods have been proposed to detect anomalies in log sequences, they often
overlook the significance of considering relations among system components,
such as services and users, which can be identified from log contents.
Understanding these relations is vital for detecting anomalies and their
underlying causes. To address this issue, we introduce GLAD, a Graph-based Log
Anomaly Detection framework designed to detect relational anomalies in system
logs. GLAD incorporates log semantics, relational patterns, and sequential
patterns into a unified framework for anomaly detection. Specifically, GLAD
first introduces a field extraction module that utilizes prompt-based few-shot
learning to identify essential fields from log contents. Then GLAD constructs
dynamic log graphs for sliding windows by interconnecting extracted fields and
log events parsed by the log parser. These graphs represent events and fields
as nodes and their relations as edges. Subsequently, GLAD utilizes a
temporal-attentive graph edge anomaly detection model for identifying anomalous
relations in these dynamic log graphs. This model employs a Graph Neural
Network (GNN)-based encoder enhanced with transformers to capture content,
structural and temporal features. We evaluate our proposed method on three
datasets, and the results demonstrate the effectiveness of GLAD in detecting
anomalies indicated by varying relational patterns.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 04:21:30 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Li",
"Yufei",
""
],
[
"Liu",
"Yanchi",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Chen",
"Zhengzhang",
""
],
[
"Cheng",
"Wei",
""
],
[
"Chen",
"Yuncong",
""
],
[
"Yu",
"Wenchao",
""
],
[
"Chen",
"Haifeng",
""
],
[
"Liu",
"Cong",
""
]
] |
new_dataset
| 0.979414 |
2309.05964
|
Xuelin Cao
|
Xuelin Cao, Bo Yang, Chongwen Huang, George C. Alexandropoulos, Chau
Yuen, Zhu Han, H. Vincent Poor, Lajos Hanzo
|
Massive Access of Static and Mobile Users via Reconfigurable Intelligent
Surfaces: Protocol Design and Performance Analysis
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The envisioned wireless networks of the future entail the provisioning of
massive numbers of connections, heterogeneous data traffic, ultra-high spectral
efficiency, and low latency services. This vision is spurring research
activities focused on defining a next generation multiple access (NGMA)
protocol that can accommodate massive numbers of users in different resource
blocks, thereby, achieving higher spectral efficiency and increased
connectivity compared to conventional multiple access schemes. In this article,
we present a multiple access scheme for NGMA in wireless communication systems
assisted by multiple reconfigurable intelligent surfaces (RISs). In this
regard, considering the practical scenario of static users operating together
with mobile ones, we first study the interplay of the design of NGMA schemes
and RIS phase configuration in terms of efficiency and complexity. Based on
this, we then propose a multiple access framework for RIS-assisted
communication systems, and we also design a medium access control (MAC)
protocol incorporating RISs. In addition, we give a detailed performance
analysis of the designed RIS-assisted MAC protocol. Our extensive simulation
results demonstrate that the proposed MAC design outperforms the benchmarks in
terms of system throughput and access fairness, and also reveal a trade-off
relationship between the system throughput and fairness.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 05:18:09 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Cao",
"Xuelin",
""
],
[
"Yang",
"Bo",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Yuen",
"Chau",
""
],
[
"Han",
"Zhu",
""
],
[
"Poor",
"H. Vincent",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.963974 |
2309.05987
|
Xuefeng Wei
|
Xuefeng Wei, Xuan Zhou
|
FLDNet: A Foreground-Aware Network for Polyp Segmentation Leveraging
Long-Distance Dependencies
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the close association between colorectal cancer and polyps, the
diagnosis and identification of colorectal polyps play a critical role in the
detection and surgical intervention of colorectal cancer. In this context, the
automatic detection and segmentation of polyps from various colonoscopy images
has emerged as a significant problem that has attracted broad attention.
Current polyp segmentation techniques face several challenges: firstly, polyps
vary in size, texture, color, and pattern; secondly, the boundaries between
polyps and mucosa are usually blurred. Moreover, existing studies have focused on
learning the local features of polyps while ignoring the long-range
dependencies of the features, as well as the local context and global
contextual information of the combined features. To address these challenges,
we propose FLDNet (Foreground-Long-Distance Network), a Transformer-based
neural network that captures long-distance dependencies for accurate polyp
segmentation. Specifically, the proposed model consists of three main modules:
a pyramid-based Transformer encoder, a local context module, and a
foreground-Aware module. Multilevel features with long-distance dependency
information are first captured by the pyramid-based transformer encoder. On the
high-level features, the local context module obtains the local characteristics
related to the polyps by constructing different local context information. The
coarse map obtained by decoding the reconstructed highest-level features guides
the feature fusion process in the foreground-Aware module of the high-level
features to achieve foreground enhancement of the polyps. Our proposed method,
FLDNet, was evaluated using seven metrics on common datasets and demonstrated
superiority over state-of-the-art methods on widely-used evaluation measures.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 06:32:42 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wei",
"Xuefeng",
""
],
[
"Zhou",
"Xuan",
""
]
] |
new_dataset
| 0.988298 |
2309.05993
|
Zhengsong Jiang
|
Zhengsong Jiang, Guohui Tian, Yongcheng Cui, Tiantian Liu, Yu Gu,
Yifei Wang
|
Digital Twin System for Home Service Robot Based on Motion Simulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to improve the task execution capability of a home service robot, and
to cope with the problem that purely physical robot platforms cannot sense the
environment and make decisions online, a method for building a digital twin
system for home service robots based on motion simulation is proposed. A
reliable mapping of the home service robot and its working environment from
physical space to digital space is achieved in three dimensions: geometric,
physical and functional. In this system, a digital space-oriented URDF file
parser is designed and implemented for the automatic construction of the robot
geometric model. Next, the physical model is constructed from the kinematic
equations of the robot and an improved particle swarm optimization algorithm is
proposed for the inverse kinematic solution. In addition, to adapt to the home
environment, functional attributes are used to describe household objects, thus
improving the semantic description of the digital space for the real home
environment. Finally, geometric model consistency verification,
physical model validity verification and virtual-reality consistency
verification show that the digital twin system designed in this paper can
construct the robot geometric model accurately and completely, complete the
operation of household objects successfully, and is therefore
effective and practical.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 06:48:30 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Jiang",
"Zhengsong",
""
],
[
"Tian",
"Guohui",
""
],
[
"Cui",
"Yongcheng",
""
],
[
"Liu",
"Tiantian",
""
],
[
"Gu",
"Yu",
""
],
[
"Wang",
"Yifei",
""
]
] |
new_dataset
| 0.996031 |
2309.06000
|
Shuoqi Chen
|
Shuoqi Chen, Aaron Roth
|
Gait Design of a Novel Arboreal Concertina Locomotion for Snake-like
Robots
|
4 pages, 3 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel strategy for a snake robot to move straight
up a cylindrical surface. Prior works on pole-climbing for a snake robot mainly
utilized a rolling helix gait, and although proven to be efficient, it does not
resemble movements made by a natural snake. We take inspiration from nature
and seek to imitate the Arboreal Concertina Locomotion (ACL) from real-life
serpents. In order to represent the 3D curves that make up the key motion
patterns of ACL, we establish a set of parametric equations that identify
periodic functions, which produce a sequence of backbone curves. We then build
up the gait equation using the curvature integration method, and finally, we
propose a simple motion estimation strategy using virtual chassis and non-slip
model assumptions. We present experimental results using a 20-DOF snake robot
traversing outside of a straight pipe.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 06:57:47 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Chen",
"Shuoqi",
""
],
[
"Roth",
"Aaron",
""
]
] |
new_dataset
| 0.998392 |
2309.06027
|
Adrien Cassagne
|
Clara Ciocan (ALSOC), Mathuran Kandeepan (ALSOC), Adrien Cassagne
(ALSOC), Jeremie Vaubaillon (IMCCE), Fabian Zander (USQ), Lionel Lacassagne
(ALSOC)
|
A new meteor detection application robust to camera movements
|
in French language, Groupe de Recherche et d'{\'E}tudes de Traitement
du Signal et des Images (GRETSI), Aug 2023, Grenoble, France
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents a new tool for the automatic detection of meteors. Fast
Meteor Detection Toolbox (FMDT) is able to detect meteor sightings by analyzing
videos acquired by cameras onboard weather balloons or within airplanes with
stabilization. The challenge consists in designing a processing chain composed
of simple algorithms, that are robust to the high fluctuation of the videos and
that satisfy the constraints on power consumption (10 W) and real-time
processing (25 frames per second).
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 07:56:55 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Ciocan",
"Clara",
"",
"ALSOC"
],
[
"Kandeepan",
"Mathuran",
"",
"ALSOC"
],
[
"Cassagne",
"Adrien",
"",
"ALSOC"
],
[
"Vaubaillon",
"Jeremie",
"",
"IMCCE"
],
[
"Zander",
"Fabian",
"",
"USQ"
],
[
"Lacassagne",
"Lionel",
"",
"ALSOC"
]
] |
new_dataset
| 0.9993 |
2309.06046
|
Jeroen Galjaard
|
Jeroen M. Galjaard, Robert Birke, Juan Perez, Lydia Y. Chen
|
BatMan-CLR: Making Few-shots Meta-Learners Resilient Against Label Noise
|
10 pages, 3 figures
| null | null | null |
cs.LG cs.AI cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The negative impact of label noise is well studied in classical supervised
learning yet remains an open research question in meta-learning. Meta-learners
aim to adapt to unseen learning tasks by learning a good initial model in
meta-training and consecutively fine-tuning it according to new tasks during
meta-testing. In this paper, we present the first extensive analysis of the
impact of varying levels of label noise on the performance of state-of-the-art
meta-learners, specifically gradient-based $N$-way $K$-shot learners. We show
that the accuracy of Reptile, iMAML, and foMAML drops by up to 42% on the
Omniglot and CifarFS datasets when meta-training is affected by label noise. To
strengthen the resilience against label noise, we propose two sampling
techniques, namely manifold (Man) and batch manifold (BatMan), which transform
the noisy supervised learners into semi-supervised ones to increase the utility
of noisy labels. We first construct manifold samples of $N$-way
$2$-contrastive-shot tasks through augmentation, learning the embedding via a
contrastive loss in meta-training, and then perform classification through
zeroing on the embedding in meta-testing. We show that our approach can
effectively mitigate the impact of meta-training label noise. Even with 60%
wrong labels, BatMan and Man can limit the meta-testing accuracy drop to
$2.5$, $9.4$, and $1.1$ percentage points, respectively, with existing
meta-learners across the Omniglot, CifarFS, and MiniImagenet datasets.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 08:30:35 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Galjaard",
"Jeroen M.",
""
],
[
"Birke",
"Robert",
""
],
[
"Perez",
"Juan",
""
],
[
"Chen",
"Lydia Y.",
""
]
] |
new_dataset
| 0.997396 |
2309.06077
|
Tommaso Bianchi
|
Massimiliano Baldo, Tommaso Bianchi, Mauro Conti, Alessio Trevisan,
Federico Turrin
|
HoneyEVSE: An Honeypot to emulate Electric Vehicle Supply Equipments
|
15 pages
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To fight climate change, new "green" technologies are emerging, most of them
using electricity as a power source. Among the solutions, Electric Vehicles
(EVs) represent a central asset in the future transport system. EVs require a
complex infrastructure to enable the so-called Vehicle-to-Grid (V2G) paradigm
to manage the charging process between the smart grid and the EV. In this
paradigm, the Electric Vehicle Supply Equipment (EVSE), or charging station, is
the end device that authenticates the vehicle and delivers the power to charge
it. However, since an EVSE is publicly exposed and connected to the Internet,
recent works show how an attacker with physical tampering and remote access can
target an EVSE, exposing the security of the entire infrastructure and the
final user. For this reason, it is important to develop novel strategies to
secure such infrastructures. In this paper we present HoneyEVSE, the first
honeypot conceived to simulate an EVSE. HoneyEVSE can simulate with high
fidelity the EV charging process and, at the same time, enables a user to
interact with it through a dashboard. Furthermore, based on other charging
columns exposed on the Internet, we emulate the login and device information
pages to increase user engagement. We exposed HoneyEVSE for 30 days to the
Internet to assess its capability and measured the interaction received with
its Shodan Honeyscore. Results show that HoneyEVSE can successfully evade the
Shodan honeyscore metric while attracting a high number of interactions on the
exposed services.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 09:15:07 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Baldo",
"Massimiliano",
""
],
[
"Bianchi",
"Tommaso",
""
],
[
"Conti",
"Mauro",
""
],
[
"Trevisan",
"Alessio",
""
],
[
"Turrin",
"Federico",
""
]
] |
new_dataset
| 0.999619 |
2309.06121
|
Peter Mosses
|
Peter D. Mosses
|
Online Name-Based Navigation for Software Meta-languages
|
6 pages, incl. 5 figures, to be published in Proceedings of the 16th
ACM SIGPLAN International Conference on Software Language Engineering (SLE
'23), October 23--24, 2023, Cascais, Portugal
| null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Software language design and implementation often involve specifications
written in various esoteric meta-languages. Language workbenches generally
include support for precise name-based navigation when browsing language
specifications locally, but such support is lacking when browsing the same
specifications online in code repositories.
This paper presents a technique to support precise name-based navigation of
language specifications in online repositories using ordinary web browsers. The
idea is to generate hyperlinked twins: websites where verbatim copies of
specification text are enhanced with hyperlinks between name references and
declarations. By generating hyperlinks directly from the name binding analysis
used internally in a language workbench, online navigation in hyperlinked twins
is automatically consistent with local navigation.
The presented technique has been implemented for the Spoofax language
workbench, and used to generate hyperlinked twin websites from various language
specifications in Spoofax meta-languages. However, the applicability of the
technique is not limited to Spoofax, and developers of other language
workbenches could presumably implement similar tooling, to make their language
specifications more accessible to those who do not have the workbench
installed.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 10:44:01 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Mosses",
"Peter D.",
""
]
] |
new_dataset
| 0.998092 |
2309.06130
|
Mohammed Guermal
|
Mohammed Guermal, Francois Bremond, Rui Dai, Abid Ali
|
JOADAA: joint online action detection and action anticipation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Action anticipation involves forecasting future actions by connecting past
events to future ones. However, this reasoning ignores the real-life hierarchy
of events which is considered to be composed of three main parts: past,
present, and future. We argue that considering these three main parts and their
dependencies could improve performance. On the other hand, online action
detection is the task of predicting actions in a streaming manner. In this
case, one has access only to the past and present information. Therefore, in
online action detection (OAD) the existing approaches miss semantics or future
information which limits their performance. To sum up, for both of these tasks,
the complete set of knowledge (past-present-future) is missing, which makes it
challenging to infer action dependencies, therefore having low performances. To
address this limitation, we propose to fuse both tasks into a single uniform
architecture. By combining action anticipation and online action detection, our
approach can cover the missing dependencies of future information in online
action detection. This method, referred to as JOADAA, presents a uniform model
that jointly performs action anticipation and online action detection. We
validate our proposed model on three challenging datasets: THUMOS'14, which is
a sparsely annotated dataset with one action per time step, CHARADES, and
Multi-THUMOS, two densely annotated datasets with more complex scenarios.
JOADAA achieves SOTA results on these benchmarks for both tasks.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 11:17:25 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Guermal",
"Mohammed",
""
],
[
"Bremond",
"Francois",
""
],
[
"Dai",
"Rui",
""
],
[
"Ali",
"Abid",
""
]
] |
new_dataset
| 0.997882 |
2309.06141
|
Xiaoxiao Miao
|
Xiaoxiao Miao, Xin Wang, Erica Cooper, Junichi Yamagishi, Nicholas
Evans, Massimiliano Todisco, Jean-Fran\c{c}ois Bonastre, Mickael Rouvier
|
SynVox2: Towards a privacy-friendly VoxCeleb2 dataset
|
conference
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The success of deep learning in speaker recognition relies heavily on the use
of large datasets. However, the data-hungry nature of deep learning methods has
already been questioned on account of the ethical, privacy, and legal concerns
that arise when using large-scale datasets of natural speech collected from
real human speakers. For example, the widely-used VoxCeleb2 dataset for speaker
recognition is no longer accessible from the official website. To mitigate
these concerns, this work presents an initiative to generate a privacy-friendly
synthetic VoxCeleb2 dataset that ensures the quality of the generated speech in
terms of privacy, utility, and fairness. We also discuss the challenges of
using synthetic data for the downstream task of speaker verification.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 11:28:07 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Miao",
"Xiaoxiao",
""
],
[
"Wang",
"Xin",
""
],
[
"Cooper",
"Erica",
""
],
[
"Yamagishi",
"Junichi",
""
],
[
"Evans",
"Nicholas",
""
],
[
"Todisco",
"Massimiliano",
""
],
[
"Bonastre",
"Jean-François",
""
],
[
"Rouvier",
"Mickael",
""
]
] |
new_dataset
| 0.998758 |
2309.06196
|
Ralf Gundelach
|
Ralf Gundelach and Dominik Herrmann
|
Cookiescanner: An Automated Tool for Detecting and Evaluating GDPR
Consent Notices on Websites
|
8 pages. For source code, see
https://github.com/UBA-PSI/cookiescanner . For dataset, see
https://zenodo.org/record/7884468
|
ARES '23: Proceedings of the 18th International Conference on
Availability, Reliability and Security. August 2023. Article No.: 148
|
10.1145/3600160.3605000
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The enforcement of the GDPR led to the widespread adoption of consent
notices, colloquially known as cookie banners. Studies have shown that many
website operators do not comply with the law and track users prior to any
interaction with the consent notice, or attempt to trick users into giving
consent through dark patterns. Previous research has relied on manually curated
filter lists or automated detection methods limited to a subset of websites,
making research on GDPR compliance of consent notices tedious or limited. We
present \emph{cookiescanner}, an automated scanning tool that detects and
extracts consent notices via various methods and checks if they offer a decline
option or use color diversion. We evaluated cookiescanner on a random sample of
the top 10,000 websites listed by Tranco. We found that manually curated filter
lists have the highest precision but recall fewer consent notices than our
keyword-based methods. Our BERT model achieves high precision for English
notices, which is in line with previous work, but suffers from low recall due
to insufficient candidate extraction. While the automated detection of decline
options proved to be challenging due to the dynamic nature of many sites,
detecting instances of different colors of the buttons was successful in most
cases. Besides systematically evaluating our various detection techniques, we
have manually annotated 1,000 websites to provide a ground-truth baseline,
which has not existed previously. Furthermore, we release our code and the
annotated dataset in the interest of reproducibility and repeatability.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 13:04:00 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Gundelach",
"Ralf",
""
],
[
"Herrmann",
"Dominik",
""
]
] |
new_dataset
| 0.993434 |
2309.06197
|
Laurenz Reichardt
|
Laurenz Reichardt, Nikolas Ebert, Oliver Wasenm\"uller
|
360$^\circ$ from a Single Camera: A Few-Shot Approach for LiDAR
Segmentation
|
ICCV Workshop 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning applications on LiDAR data suffer from a strong domain gap when
applied to different sensors or tasks. In order for these methods to obtain
similar accuracy on different data in comparison to values reported on public
benchmarks, a large-scale annotated dataset is necessary. However, in practical
applications, labeled data is costly and time-consuming to obtain. Such factors
have triggered various research in label-efficient methods, but a large gap
remains to their fully-supervised counterparts. Thus, we propose ImageTo360, an
effective and streamlined few-shot approach to label-efficient LiDAR
segmentation. Our method utilizes an image teacher network to generate semantic
predictions for LiDAR data within a single camera view. The teacher is used to
pretrain the LiDAR segmentation student network, prior to optional fine-tuning
on 360$^\circ$ data. Our method is implemented in a modular manner on the point
level and as such is generalizable to different architectures. We improve over
the current state-of-the-art results for label-efficient methods and even
surpass some traditional fully-supervised segmentation networks.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 13:04:41 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Reichardt",
"Laurenz",
""
],
[
"Ebert",
"Nikolas",
""
],
[
"Wasenmüller",
"Oliver",
""
]
] |
new_dataset
| 0.99864 |
2309.06207
|
Qianliang Wu
|
Qianliang Wu, Yaqing Ding, Lei Luo, Chuanwei Zhou, Jin Xie, Jian Yang
|
SGFeat: Salient Geometric Feature for Point Cloud Registration
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point Cloud Registration (PCR) is a critical and challenging task in computer
vision. One of the primary difficulties in PCR is identifying salient and
meaningful points that exhibit consistent semantic and geometric properties
across different scans. Previous methods have encountered challenges with
ambiguous matching due to the similarity among patch blocks throughout the
entire point cloud and the lack of consideration for efficient global geometric
consistency. To address these issues, we propose a new framework that includes
several novel techniques. Firstly, we introduce a semantic-aware geometric
encoder that combines object-level and patch-level semantic information. This
encoder significantly improves registration recall by reducing ambiguity in
patch-level superpoint matching. Additionally, we incorporate a prior knowledge
approach that utilizes an intrinsic shape signature to identify salient points.
This enables us to extract the most salient super points and meaningful dense
points in the scene. Secondly, we introduce an innovative transformer that
encodes High-Order (HO) geometric features. These features are crucial for
identifying salient points within initial overlap regions while considering
global high-order geometric consistency. To optimize this high-order
transformer further, we introduce an anchor node selection strategy. By
encoding inter-frame triangle or polyhedron consistency features based on these
anchor nodes, we can effectively learn high-order geometric features of salient
super points. These high-order features are then propagated to dense points and
utilized by a Sinkhorn matching module to identify key correspondences for
successful registration. In our experiments conducted on well-known datasets
such as 3DMatch/3DLoMatch and KITTI, our approach has shown promising results,
highlighting the effectiveness of our novel method.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 13:21:12 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wu",
"Qianliang",
""
],
[
"Ding",
"Yaqing",
""
],
[
"Luo",
"Lei",
""
],
[
"Zhou",
"Chuanwei",
""
],
[
"Xie",
"Jin",
""
],
[
"Yang",
"Jian",
""
]
] |
new_dataset
| 0.999386 |
2309.06217
|
Xiaopeng Li
|
Xiaopeng Li, Fan Yan, Xiangyu Zhao, Yichao Wang, Bo Chen, Huifeng Guo,
Ruiming Tang
|
HAMUR: Hyper Adapter for Multi-Domain Recommendation
|
Accepted by CIKM'2023
| null |
10.1145/3583780.3615137
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Domain Recommendation (MDR) has gained significant attention in recent
years, as it leverages data from multiple domains to enhance their performance
concurrently. However, current MDR models are confronted with two limitations.
Firstly, the majority of these models adopt an approach that explicitly shares
parameters between domains, leading to mutual interference among them.
Secondly, due to the distribution differences among domains, the utilization of
static parameters in existing methods limits their flexibility to adapt to
diverse domains. To address these challenges, we propose a novel model Hyper
Adapter for Multi-Domain Recommendation (HAMUR). Specifically, HAMUR consists
of two components: (1) a domain-specific adapter, designed as a pluggable module
that can be seamlessly integrated into various existing multi-domain backbone
models, and (2) a domain-shared hyper-network, which implicitly captures shared
information among domains and dynamically generates the parameters for the
adapter. We conduct extensive experiments on two public datasets using various
backbone networks. The experimental results validate the effectiveness and
scalability of the proposed model.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 13:34:33 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Li",
"Xiaopeng",
""
],
[
"Yan",
"Fan",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Wang",
"Yichao",
""
],
[
"Chen",
"Bo",
""
],
[
"Guo",
"Huifeng",
""
],
[
"Tang",
"Ruiming",
""
]
] |
new_dataset
| 0.998352 |
2309.06267
|
Wei Yan
|
Wei Yan and Yunghsiang S. Han
|
A Complete Proof of an Important Theorem for Variable-to-Variable Length
Codes
|
arXiv admin note: substantial text overlap with arXiv:2204.07398
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Variable-to-variable length (VV) codes are a class of lossless source coding.
As their name implies, VV codes encode a variable-length sequence of source
symbols into a variable-length codeword. This paper will give a complete proof
of an important theorem for variable-to-variable length codes.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 14:30:18 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Yan",
"Wei",
""
],
[
"Han",
"Yunghsiang S.",
""
]
] |
new_dataset
| 0.996202 |
2309.06276
|
Zhengrong Xue
|
Yuerong Li, Zhengrong Xue, Huazhe Xu
|
OTAS: Unsupervised Boundary Detection for Object-Centric Temporal Action
Segmentation
|
Accepted to WACV 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal action segmentation is typically achieved by discovering the
dramatic variances in global visual descriptors. In this paper, we explore the
merits of local features by proposing the unsupervised framework of
Object-centric Temporal Action Segmentation (OTAS). Broadly speaking, OTAS
consists of self-supervised global and local feature extraction modules as well
as a boundary selection module that fuses the features and detects salient
boundaries for action segmentation. As a second contribution, we discuss the
pros and cons of existing frame-level and boundary-level evaluation metrics.
Through extensive experiments, we find OTAS is superior to the previous
state-of-the-art method by $41\%$ on average in terms of our recommended F1
score. Surprisingly, OTAS even outperforms the ground-truth human annotations
in the user study. Moreover, OTAS is efficient enough to allow real-time
inference.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 14:37:41 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Li",
"Yuerong",
""
],
[
"Xue",
"Zhengrong",
""
],
[
"Xu",
"Huazhe",
""
]
] |
new_dataset
| 0.998405 |
2309.06284
|
Yin Wang
|
Yin Wang, Zhiying Leng, Frederick W. B. Li, Shun-Cheng Wu, Xiaohui
Liang
|
Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion
Model
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-driven human motion generation in computer vision is both significant
and challenging. However, current methods are limited to producing either
deterministic or imprecise motion sequences, failing to effectively control the
temporal and spatial relationships required to conform to a given text
description. In this work, we propose a fine-grained method for generating
high-quality, conditional human motion sequences supporting precise text
description. Our approach consists of two key components: 1) a
linguistics-structure assisted module that constructs accurate and complete
language features to fully utilize text information; and 2) a context-aware
progressive reasoning module that learns neighborhood and overall semantic
linguistics features from shallow and deep graph neural networks to achieve a
multi-step inference. Experiments show that our approach outperforms
text-driven motion generation methods on HumanML3D and KIT test sets and
generates better visually confirmed motion to the text conditions.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 14:43:47 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Wang",
"Yin",
""
],
[
"Leng",
"Zhiying",
""
],
[
"Li",
"Frederick W. B.",
""
],
[
"Wu",
"Shun-Cheng",
""
],
[
"Liang",
"Xiaohui",
""
]
] |
new_dataset
| 0.982866 |
2309.06285
|
Jerrin Bright
|
Bavesh Balaji, Jerrin Bright, Harish Prakash, Yuhao Chen, David A
Clausi and John Zelek
|
Jersey Number Recognition using Keyframe Identification from
Low-Resolution Broadcast Videos
|
Accepted in the 6th International Workshop on Multimedia Content
Analysis in Sports (MMSports'23) @ ACM Multimedia
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Player identification is a crucial component in vision-driven soccer
analytics, enabling various downstream tasks such as player assessment, in-game
analysis, and broadcast production. However, automatically detecting jersey
numbers from player tracklets in videos presents challenges due to motion blur,
low resolution, distortions, and occlusions. Existing methods, utilizing
Spatial Transformer Networks, CNNs, and Vision Transformers, have shown success
in image data but struggle with real-world video data, where jersey numbers are
not visible in most of the frames. Hence, identifying frames that contain the
jersey number is a key sub-problem to tackle. To address these issues, we
propose a robust keyframe identification module that extracts frames containing
essential high-level information about the jersey number. A spatio-temporal
network is then employed to model spatial and temporal context and predict the
probabilities of jersey numbers in the video. Additionally, we adopt a
multi-task loss function to predict the probability distribution of each digit
separately. Extensive evaluations on the SoccerNet dataset demonstrate that
incorporating our proposed keyframe identification module results in a
significant 37.81% and 37.70% increase in the accuracies of 2 different test
sets with domain gaps. These results highlight the effectiveness and importance
of our approach in tackling the challenges of automatic jersey number detection
in sports videos.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 14:43:50 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Balaji",
"Bavesh",
""
],
[
"Bright",
"Jerrin",
""
],
[
"Prakash",
"Harish",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Clausi",
"David A",
""
],
[
"Zelek",
"John",
""
]
] |
new_dataset
| 0.999787 |
2309.06313
|
Maria Priisalu
|
Maria Priisalu
|
Semantic and Articulated Pedestrian Sensing Onboard a Moving Vehicle
| null | null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
It is difficult to perform 3D reconstruction from on-vehicle gathered video
due to the large forward motion of the vehicle. Even object detection and human
sensing models perform significantly worse on onboard videos when compared to
standard benchmarks, because objects often appear farther from the camera than
in standard object detection benchmarks, image quality is often
degraded by motion blur, and occlusions occur frequently. This has led to the
popularisation of traffic data-specific benchmarks. Recently Light Detection
And Ranging (LiDAR) sensors have become popular to directly estimate depths
without the need to perform 3D reconstructions. However, LiDAR-based methods
still lack in articulated human detection at a distance when compared to
image-based methods. We hypothesize that benchmarks targeted at articulated
human sensing from LiDAR data could bring about increased research in human
sensing and prediction in traffic and could lead to improved traffic safety for
pedestrians.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 15:24:26 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Priisalu",
"Maria",
""
]
] |
new_dataset
| 0.995524 |
2309.06352
|
Joseph Prince Mathew
|
Joseph Prince Mathew, Dinesh Karri, James Yang, Kevin Zhu, Yojan
Gautam, Kentaro Nojima-Schmunk, Daigo Shishika, Ningshi Yao and Cameron
Nowzari
|
Lighter-Than-Air Autonomous Ball Capture and Scoring Robot -- Design,
Development, and Deployment
|
10 pages, 13 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the full end-to-end design of our primary scoring agent
in an aerial autonomous robotics competition from April 2023. As open-ended
robotics competitions become more popular, we wish to begin documenting
successful team designs and approaches. The intended audience of this paper is
not only any future or potential participant in this particular national Defend
The Republic (DTR) competition, but rather anyone thinking about designing
their first robot or system to be entered in a competition with clear goals.
Future DTR participants can and should either build on the ideas here, or find
new alternate strategies that can defeat the most successful design from last time.
For students interested in robotics competitions who are not DTR participants,
identifying the minimum viable system needed to be competitive is still
important for managing time and prioritizing first the tasks that are crucial to
competition success.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 16:16:47 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Mathew",
"Joseph Prince",
""
],
[
"Karri",
"Dinesh",
""
],
[
"Yang",
"James",
""
],
[
"Zhu",
"Kevin",
""
],
[
"Gautam",
"Yojan",
""
],
[
"Nojima-Schmunk",
"Kentaro",
""
],
[
"Shishika",
"Daigo",
""
],
[
"Yao",
"Ningshi",
""
],
[
"Nowzari",
"Cameron",
""
]
] |
new_dataset
| 0.959825 |
2309.06419
|
Zhengliang Liu
|
Zhengliang Liu, Yiwei Li, Peng Shu, Aoxiao Zhong, Longtao Yang, Chao
Ju, Zihao Wu, Chong Ma, Jie Luo, Cheng Chen, Sekeun Kim, Jiang Hu, Haixing
Dai, Lin Zhao, Dajiang Zhu, Jun Liu, Wei Liu, Dinggang Shen, Tianming Liu,
Quanzheng Li, and Xiang Li
|
Radiology-Llama2: Best-in-Class Large Language Model for Radiology
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces Radiology-Llama2, a large language model specialized
for radiology through a process known as instruction tuning. Radiology-Llama2
is based on the Llama2 architecture and further trained on a large dataset of
radiology reports to generate coherent and clinically useful impressions from
radiological findings. Quantitative evaluations using ROUGE metrics on the
MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves
state-of-the-art performance compared to other generative language models, with
a Rouge-1 score of 0.4834 on MIMIC-CXR and 0.4185 on OpenI. Additional
assessments by radiology experts highlight the model's strengths in
understandability, coherence, relevance, conciseness, and clinical utility. The
work illustrates the potential of localized language models designed and tuned
for specialized domains like radiology. When properly evaluated and deployed,
such models can transform fields like radiology by automating rote tasks and
enhancing human expertise.
|
[
{
"version": "v1",
"created": "Tue, 29 Aug 2023 17:44:28 GMT"
}
] | 2023-09-13T00:00:00 |
[
[
"Liu",
"Zhengliang",
""
],
[
"Li",
"Yiwei",
""
],
[
"Shu",
"Peng",
""
],
[
"Zhong",
"Aoxiao",
""
],
[
"Yang",
"Longtao",
""
],
[
"Ju",
"Chao",
""
],
[
"Wu",
"Zihao",
""
],
[
"Ma",
"Chong",
""
],
[
"Luo",
"Jie",
""
],
[
"Chen",
"Cheng",
""
],
[
"Kim",
"Sekeun",
""
],
[
"Hu",
"Jiang",
""
],
[
"Dai",
"Haixing",
""
],
[
"Zhao",
"Lin",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Liu",
"Jun",
""
],
[
"Liu",
"Wei",
""
],
[
"Shen",
"Dinggang",
""
],
[
"Liu",
"Tianming",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.958354 |
1912.10298
|
Chaitanya Rahalkar
|
Chaitanya Rahalkar and Dhaval Gujar
|
Content Addressed P2P File System for the Web with Blockchain-Based
Meta-Data Integrity
|
Inaccuracies and inconsistencies in paper
| null | null | null |
cs.NI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
With the exponentially scaled World Wide Web, the standard HTTP protocol has
started showing its limitations. With the increased amount of data duplication
& accidental deletion of files on the Internet, the P2P file system called IPFS
completely changes the way files are stored. IPFS is a file storage protocol
allowing files to be stored on decentralized systems. In the HTTP client-server
protocol, files are downloaded from a single source. With files stored on a
decentralized network, IPFS allows packet retrieval from multiple sources,
simultaneously saving considerable bandwidth. IPFS uses a content-addressed
block storage model with content-addressed hyperlinks. Large amounts of data are
addressable with IPFS via immutable and permanent IPFS links, with
meta-data stored as Blockchain transactions. This timestamps and secures the
data, instead of having to put it on the chain itself. Our paper proposes a
model that uses the decentralized file storage system of IPFS, and the
integrity preservation properties of the Blockchain, to store and distribute
data on the Web.
|
[
{
"version": "v1",
"created": "Sat, 21 Dec 2019 17:11:31 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2020 03:00:40 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Sep 2023 17:43:55 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Rahalkar",
"Chaitanya",
""
],
[
"Gujar",
"Dhaval",
""
]
] |
new_dataset
| 0.996549 |
2005.12522
|
Jerry Wei
|
Jerry Wei, Chengyu Huang, Soroush Vosoughi, Jason Wei
|
What Are People Asking About COVID-19? A Question Classification Dataset
|
Published in Proceedings of the 1st Workshop on NLP for COVID-19 at
ACL 2020
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present COVID-Q, a set of 1,690 questions about COVID-19 from 13 sources,
which we annotate into 15 question categories and 207 question clusters. The
most common questions in our dataset asked about transmission, prevention, and
societal effects of COVID, and we found that many questions that appeared in
multiple sources were not answered by any FAQ websites of reputable
organizations such as the CDC and FDA. We post our dataset publicly at
https://github.com/JerryWeiAI/COVID-Q. For classifying questions into 15
categories, a BERT baseline scored 58.1% accuracy when trained on 20 examples
per category, and for a question clustering task, a BERT + triplet loss
baseline achieved 49.5% accuracy. We hope COVID-Q can help either for direct
use in developing applied systems or as a domain-specific resource for model
evaluation.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2020 05:41:58 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Sep 2020 01:16:53 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Sep 2023 21:44:52 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wei",
"Jerry",
""
],
[
"Huang",
"Chengyu",
""
],
[
"Vosoughi",
"Soroush",
""
],
[
"Wei",
"Jason",
""
]
] |
new_dataset
| 0.988827 |
2006.03051
|
Jerry Wei
|
Jerry Wei
|
NewB: 200,000+ Sentences for Political Bias Detection
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Newspaper Bias Dataset (NewB), a text corpus of more than
200,000 sentences from eleven news sources regarding Donald Trump. While
previous datasets have labeled sentences as either liberal or conservative,
NewB covers the political views of eleven popular media sources, capturing more
nuanced political viewpoints than a traditional binary classification system
does. We train two state-of-the-art deep learning models to predict the news
source of a given sentence from eleven newspapers and find that a recurrent
neural network achieved top-1, top-3, and top-5 accuracies of 33.3%, 61.4%, and
77.6%, respectively, significantly outperforming a baseline logistic regression
model's accuracies of 18.3%, 42.6%, and 60.8%. Using the news source label of
sentences, we analyze the top n-grams with our model to gain meaningful insight
into the portrayal of Trump by media sources. We hope that the public release of
our dataset will encourage further research in using natural language
processing to analyze more complex political biases.
Our dataset is posted at https://github.com/JerryWeiAI/NewB .
|
[
{
"version": "v1",
"created": "Thu, 4 Jun 2020 18:21:50 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 22:05:42 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wei",
"Jerry",
""
]
] |
new_dataset
| 0.999755 |
2101.04248
|
Ajay Bangalore Harish
|
Ajay B. Harish and Abhishek Rajendra Prasad
|
Photo2CAD: Automated 3D solid reconstruction from 2D drawings using
OpenCV
| null | null | null | null |
cs.CG math.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study showcases the utilisation of OpenCV for extracting features from
photos of 2D engineering drawings. These features are then employed to
reconstruct 3D CAD models in SCAD format and generate 3D point cloud data
similar to LIDAR scans. Many historical mechanical, aerospace, and civil
engineering designs exist only as drawings, lacking software-generated CAD or
BIM models. While 2D to 3D conversion itself is not novel, the novelty of this
work is in the usage of simple photos rather than scans or electronic
documentation of 2D drawings. The method can also use scanned drawing data.
While the approach is effective for simple shapes, it currently does not
address hidden lines in CAD drawings. The Python Jupyter notebook codes
developed for this purpose are accessible through GitHub.
|
[
{
"version": "v1",
"created": "Tue, 12 Jan 2021 00:57:24 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 23:28:48 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Harish",
"Ajay B.",
""
],
[
"Prasad",
"Abhishek Rajendra",
""
]
] |
new_dataset
| 0.999675 |
2110.04149
|
Ho-Chun Herbert Chang
|
Ho-Chun Herbert Chang, Becky Pham, Emilio Ferrara
|
KPop Fandoms drive COVID-19 Public Health Messaging on Social Media
|
12 pages, 2 figures, 2 tables
| null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
We examine an unexpected but significant source of positive public health
messaging during the COVID-19 pandemic -- K-pop fandoms. Leveraging more than 7
million tweets related to mask-wearing and K-pop between March 2020 and
December 2021, we analyzed the online spread of the hashtag \#WearAMask and
vaccine-related tweets amid anti-mask sentiments and public health
misinformation. Analyses reveal the South Korean boyband BTS as one of the most
significant drivers of health discourse. Tweets from health agencies and
prominent figures that mentioned K-pop generate 111 times more online responses
compared to tweets that did not. These tweets also elicited strong responses
from South America, Southeast Asia, and rural States -- areas often neglected
in Twitter-based messaging by mainstream social media campaigns. Network and
temporal analysis show increased use from right-leaning elites over time.
Mechanistically, strong levels of parasocial engagement and connectedness allow
sustained activism in the community. Our results suggest that public health
institutions may leverage pre-existing audience markets to synergistically
diffuse and target under-served communities both domestically and globally,
especially during health crises such as COVID-19.
|
[
{
"version": "v1",
"created": "Thu, 7 Oct 2021 17:55:27 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 14:51:54 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Chang",
"Ho-Chun Herbert",
""
],
[
"Pham",
"Becky",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
new_dataset
| 0.995018 |
2112.00234
|
Kaihao Zhang
|
Kaihao Zhang, Tao Wang, Wenhan Luo, Boheng Chen, Wenqi Ren, Bjorn
Stenger, Wei Liu, Hongdong Li, Ming-Hsuan Yang
|
MC-Blur: A Comprehensive Benchmark for Image Deblurring
|
To appear in IEEE TCSVT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blur artifacts can seriously degrade the visual quality of images, and
numerous deblurring methods have been proposed for specific scenarios. However,
in most real-world images, blur is caused by different factors, e.g., motion
and defocus. In this paper, we address how different deblurring methods perform
in the case of multiple types of blur. For in-depth performance evaluation, we
construct a new large-scale multi-cause image deblurring dataset (called
MC-Blur), including real-world and synthesized blurry images with mixed factors
of blurs. The images in the proposed MC-Blur dataset are collected using
different techniques: averaging sharp images captured by a 1000-fps high-speed
camera, convolving Ultra-High-Definition (UHD) sharp images with large-size
kernels, adding defocus to images, and capturing real-world blurry images with
various camera models. Based on the MC-Blur dataset, we conduct extensive
benchmarking studies to compare SOTA methods in different scenarios, analyze
their efficiency, and investigate the built dataset's capacity. These
benchmarking results provide a comprehensive overview of the advantages and
limitations of current deblurring methods, and reveal the advances of our
dataset.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 02:10:42 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 03:59:04 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 10:13:21 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhang",
"Kaihao",
""
],
[
"Wang",
"Tao",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Chen",
"Boheng",
""
],
[
"Ren",
"Wenqi",
""
],
[
"Stenger",
"Bjorn",
""
],
[
"Liu",
"Wei",
""
],
[
"Li",
"Hongdong",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] |
new_dataset
| 0.999533 |
2203.04802
|
Fu Li
|
Fu Li, Hao Yu, Ivan Shugurov, Benjamin Busam, Shaowu Yang, Slobodan
Ilic
|
NeRF-Pose: A First-Reconstruct-Then-Regress Approach for
Weakly-supervised 6D Object Pose Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Pose estimation of 3D objects in monocular images is a fundamental and
long-standing problem in computer vision. Existing deep learning approaches for
6D pose estimation typically rely on the assumption of availability of 3D
object models and 6D pose annotations. However, precise annotation of 6D poses
in real data is intricate, time-consuming and not scalable, while synthetic
data scales well but lacks realism. To avoid these problems, we present a
weakly-supervised reconstruction-based pipeline, named NeRF-Pose, which needs
only 2D object segmentation and known relative camera poses during training.
Following the first-reconstruct-then-regress idea, we first reconstruct the
objects from multiple views in the form of an implicit neural representation.
Then, we train a pose regression network to predict pixel-wise 2D-3D
correspondences between images and the reconstructed model. At inference, the
approach only needs a single image as input. A NeRF-enabled PnP+RANSAC
algorithm is used to estimate stable and accurate pose from the predicted
correspondences. Experiments on LineMod and LineMod-Occlusion show that the
proposed method has state-of-the-art accuracy in comparison to the best 6D pose
estimation methods in spite of being trained only with weak labels. Besides, we
extend the Homebrewed DB dataset with more real training images to support the
weakly supervised task and achieve compelling results on this dataset. The
extended dataset and code will be released soon.
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 15:28:02 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Sep 2023 04:49:33 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Li",
"Fu",
""
],
[
"Yu",
"Hao",
""
],
[
"Shugurov",
"Ivan",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Yang",
"Shaowu",
""
],
[
"Ilic",
"Slobodan",
""
]
] |
new_dataset
| 0.987133 |
2204.13304
|
Huanqi Cao
|
Huanqi Cao, Shizhi Tang, Qianchao Zhu, Bowen Yu, Wenguang Chen
|
Mat2Stencil: A Modular Matrix-Based DSL for Explicit and Implicit
Matrix-Free PDE Solvers on Structured Grid
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Partial differential equation (PDE) solvers are extensively utilized across
numerous scientific and engineering fields. However, achieving high performance
and scalability often necessitates intricate and low-level programming,
particularly when leveraging deterministic sparsity patterns in structured
grids.
In this paper, we propose an innovative domain-specific language (DSL),
Mat2Stencil, with its compiler, for PDE solvers on structured grids.
Mat2Stencil introduces a structured sparse matrix abstraction, facilitating
modular, flexible, and easy-to-use expression of solvers across a broad
spectrum, encompassing components such as Jacobi or Gauss-Seidel
preconditioners, incomplete LU or Cholesky decompositions, and multigrid
methods built upon them. Our DSL compiler subsequently generates matrix-free
code consisting of generalized stencils through multi-stage programming. The
code allows spatial loop-carried dependence in the form of quasi-affine loops,
in addition to Jacobi-style stencils that are embarrassingly parallel over the spatial
dimensions. We further propose a novel automatic parallelization technique for
the spatially dependent loops, which offers a compile-time deterministic task
partitioning for threading, calculates necessary inter-thread synchronization
automatically, and generates an efficient multi-threaded implementation with
fine-grained synchronization.
Implementing 4 benchmarking programs, 3 of them being the pseudo-applications
in NAS Parallel Benchmarks with $6.3\%$ of the lines of code and 1 being matrix-free
High Performance Conjugate Gradients with $16.4\%$ of the lines of code, we achieve up
to $1.67\times$ and on average $1.03\times$ performance compared to manual
implementations.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 06:47:02 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Sep 2023 15:59:52 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Cao",
"Huanqi",
""
],
[
"Tang",
"Shizhi",
""
],
[
"Zhu",
"Qianchao",
""
],
[
"Yu",
"Bowen",
""
],
[
"Chen",
"Wenguang",
""
]
] |
new_dataset
| 0.998029 |
2205.15279
|
Vijanti Ramautar MSc
|
Vijanti Ramautar and Sergio Espa\~na
|
The openESEA Modelling Language for Ethical, Social and Environmental
Accounting: Technical Report
| null | null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the years ethical, social and environmental accounting (ESEA) has become
a common practice among responsible organisations. ESEA entails assessing and
reporting organisations' performance on environmental, social and governance
topics. In this report, we present a textual grammar for specifying ESEA
methods. With the grammar, ESEA models can be created. Such models can be
interpreted by our open-source, model-driven tool, called openESEA. The report
presents the metamodel of the grammar, the grammar itself, and explanations of
each grammar primitive.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 15:44:04 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Sep 2023 13:57:31 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Ramautar",
"Vijanti",
""
],
[
"España",
"Sergio",
""
]
] |
new_dataset
| 0.988572 |
2207.01545
|
Jincen Jiang
|
Jincen Jiang, Xuequan Lu, Lizhi Zhao, Richard Dazeley, Meili Wang
|
Masked Autoencoders in 3D Point Cloud Representation Learning
|
Accepted to IEEE Transactions on Multimedia
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based Self-supervised Representation Learning methods learn
generic features from unlabeled datasets for providing useful network
initialization parameters for downstream tasks. Recently, self-supervised
learning based upon masking local surface patches for 3D point cloud data has
been under-explored. In this paper, we propose masked Autoencoders in 3D point
cloud representation learning (abbreviated as MAE3D), a novel autoencoding
paradigm for self-supervised learning. We first split the input point cloud
into patches and mask a portion of them, then use our Patch Embedding Module to
extract the features of unmasked patches. Secondly, we employ patch-wise MAE3D
Transformers to learn both local features of point cloud patches and high-level
contextual relationships between patches and complete the latent
representations of masked patches. We use our Point Cloud Reconstruction Module
with multi-task loss to complete the incomplete point cloud as a result. We
conduct self-supervised pre-training on ShapeNet55 with the point cloud
completion pre-text task and fine-tune the pre-trained model on ModelNet40 and
ScanObjectNN (PB\_T50\_RS, the hardest variant). Comprehensive experiments
demonstrate that the local features extracted by our MAE3D from point cloud
patches are beneficial for downstream classification tasks, soundly
outperforming state-of-the-art methods ($93.4\%$ and $86.2\%$ classification
accuracy, respectively).
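As an illustration of the masking stage only, here is a minimal sketch (not the authors' implementation), assuming patches are built from the k nearest neighbours of randomly sampled centre points and a fixed fraction of patches is hidden from the encoder:
```python
import numpy as np

def make_patches(points, num_patches=8, patch_size=32, seed=0):
    """Group a point cloud (N, 3) into patches of k nearest neighbours
    around randomly sampled centre points."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), num_patches, replace=False)]
    d = np.linalg.norm(points[None, :, :] - centres[:, None, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :patch_size]   # (num_patches, patch_size)
    return points[idx]                            # (num_patches, patch_size, 3)

def mask_patches(patches, mask_ratio=0.6, seed=0):
    """Randomly mask a fraction of patches; the encoder only sees the rest."""
    rng = np.random.default_rng(seed)
    n = len(patches)
    masked = rng.choice(n, int(round(mask_ratio * n)), replace=False)
    visible = np.setdiff1d(np.arange(n), masked)
    return patches[visible], patches[masked]

cloud = np.random.default_rng(1).normal(size=(1024, 3)).astype(np.float32)
visible, masked = mask_patches(make_patches(cloud))
print(visible.shape, masked.shape)   # (3, 32, 3) (5, 32, 3)
```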
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 16:13:27 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 11:33:58 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Jiang",
"Jincen",
""
],
[
"Lu",
"Xuequan",
""
],
[
"Zhao",
"Lizhi",
""
],
[
"Dazeley",
"Richard",
""
],
[
"Wang",
"Meili",
""
]
] |
new_dataset
| 0.955441 |
2209.04436
|
Ruslan Rakhimov
|
Egor Burkov, Ruslan Rakhimov, Aleksandr Safin, Evgeny Burnaev, Victor
Lempitsky
|
Multi-NeuS: 3D Head Portraits from Single Image with Neural Implicit
Functions
| null |
in IEEE Access, vol. 11, pp. 95681-95691, 2023
|
10.1109/ACCESS.2023.3309412
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present an approach for the reconstruction of textured 3D meshes of human
heads from one or few views. Since such few-shot reconstruction is
underconstrained, it requires prior knowledge which is hard to impose on
traditional 3D reconstruction algorithms. In this work, we rely on the recently
introduced 3D representation, neural implicit functions, which, being based on
neural networks, allows us to naturally learn priors about human heads from
data, and is directly convertible to
textured mesh. Namely, we extend NeuS, a state-of-the-art neural implicit
function formulation, to represent multiple objects of a class (human heads in
our case) simultaneously. The underlying neural net architecture is designed to
learn the commonalities among these objects and to generalize to unseen ones.
Our model is trained on just a hundred smartphone videos and does not require
any scanned 3D data. Afterwards, the model can fit novel heads in the few-shot
or one-shot modes with good results.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 21:09:24 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 21:33:11 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Burkov",
"Egor",
""
],
[
"Rakhimov",
"Ruslan",
""
],
[
"Safin",
"Aleksandr",
""
],
[
"Burnaev",
"Evgeny",
""
],
[
"Lempitsky",
"Victor",
""
]
] |
new_dataset
| 0.970475 |
2209.09979
|
Roberto Rossi
|
Roberto Rossi
|
jsdp: a Java Stochastic DP Library
|
8 pages
| null | null | null |
cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic Programming is a framework for modelling and solving problems of
decision making under uncertainty. Stochastic Dynamic Programming is a branch
of Stochastic Programming that takes a "functional equation" approach to the
discovery of optimal policies. By leveraging constructs - lambda expressions,
functional interfaces, collections and aggregate operators - implemented in
Java to operationalise the MapReduce framework, jsdp provides a general purpose
library for modelling and solving Stochastic Dynamic Programs.
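jsdp itself is a Java library; purely to illustrate the functional-equation view described above, the following Python sketch solves a toy finite-horizon inventory problem by backward recursion (all names and numbers are hypothetical and unrelated to the jsdp API):
```python
from functools import lru_cache

# Toy finite-horizon inventory problem: each period we order q units,
# random demand d occurs with probability DEMAND[d], unmet demand is lost.
T = 3
DEMAND = {0: 0.3, 1: 0.5, 2: 0.2}
HOLDING, PENALTY, ORDER_COST = 1.0, 4.0, 2.0
MAX_STOCK = 4

@lru_cache(maxsize=None)
def value(t, stock):
    """Functional equation: V_t(s) = min_q E[stage cost + V_{t+1}(s')]."""
    if t == T:
        return 0.0
    best = float("inf")
    for q in range(MAX_STOCK - stock + 1):          # feasible order quantities
        expected = ORDER_COST * q
        for d, p in DEMAND.items():
            next_stock = max(stock + q - d, 0)
            stage = HOLDING * next_stock + PENALTY * max(d - stock - q, 0)
            expected += p * (stage + value(t + 1, next_stock))
        best = min(best, expected)
    return best

print(round(value(0, 0), 3))   # optimal expected cost from an empty inventory
```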
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 20:20:20 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 12:27:33 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Sep 2023 13:39:53 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Rossi",
"Roberto",
""
]
] |
new_dataset
| 0.978291 |
2301.00387
|
S.M. Dhannya
|
S.M. Dhannya, N.S. Narayanaswamy, K.K. Nisha
|
Exactly Hittable Interval Graphs
|
17 pages. arXiv admin note: text overlap with arXiv:1707.05071
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Given a set system $\mathcal{X} = \{\mathcal{U},\mathcal{S}\}$, where
$\mathcal{U}$ is a set of elements and $\mathcal{S}$ is a set of subsets of
$\mathcal{U}$, an exact hitting set $\mathcal{U}'$ is a subset of $\mathcal{U}$
such that each subset in $\mathcal{S}$ contains exactly one element in
$\mathcal{U}'$. We refer to a set system as exactly hittable if it has an exact
hitting set. In this paper, we study interval graphs which have intersection
models that are exactly hittable. We refer to these interval graphs as exactly
hittable interval graphs (EHIG). We present a forbidden structure
characterization for EHIG. We also show that the class of proper interval
graphs is a strict subclass of EHIG. Finally, we give an algorithm that runs in
polynomial time to recognize graphs belonging to the class of EHIG.
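To make the definition concrete, a small brute-force sketch is given below (illustrative only; it is exponential in the size of the universe and unrelated to the polynomial-time recognition algorithm of the paper):
```python
from itertools import combinations

def is_exact_hitting_set(candidate, sets):
    """True iff every set contains exactly one element of the candidate."""
    return all(len(candidate & s) == 1 for s in sets)

def find_exact_hitting_set(universe, sets):
    """Brute-force search over all subsets of the universe; only meant to
    make the definition concrete, not an efficient algorithm."""
    for r in range(len(universe) + 1):
        for cand in combinations(universe, r):
            if is_exact_hitting_set(set(cand), sets):
                return set(cand)
    return None

U = {1, 2, 3, 4}
S = [{1, 2}, {2, 3}, {3, 4}]
print(find_exact_hitting_set(U, S))   # a valid exact hitting set, e.g. {1, 3}
```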
|
[
{
"version": "v1",
"created": "Sun, 1 Jan 2023 11:33:28 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 01:49:31 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Dhannya",
"S. M.",
""
],
[
"Narayanaswamy",
"N. S.",
""
],
[
"Nisha",
"K. K.",
""
]
] |
new_dataset
| 0.980059 |
2301.12503
|
Haohe Liu
|
Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic,
Wenwu Wang, Mark D. Plumbley
|
AudioLDM: Text-to-Audio Generation with Latent Diffusion Models
|
Accepted by ICML 2023. Demo and implementation at
https://audioldm.github.io. Evaluation toolbox at
https://github.com/haoheliu/audioldm_eval
| null | null | null |
cs.SD cs.AI cs.MM eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-audio (TTA) system has recently gained attention for its ability to
synthesize general audio based on text descriptions. However, previous studies
in TTA have limited generation quality with high computational costs. In this
study, we propose AudioLDM, a TTA system that is built on a latent space to
learn the continuous audio representations from contrastive language-audio
pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs
with audio embedding while providing text embedding as a condition during
sampling. By learning the latent representations of audio signals and their
compositions without modeling the cross-modal relationship, AudioLDM is
advantageous in both generation quality and computational efficiency. Trained
on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA
performance measured by both objective and subjective metrics (e.g., Fréchet
distance). Moreover, AudioLDM is the first TTA system that enables various
text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion.
Our implementation and demos are available at https://audioldm.github.io.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 17:48:17 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 19:40:59 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Sep 2023 15:27:58 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Liu",
"Haohe",
""
],
[
"Chen",
"Zehua",
""
],
[
"Yuan",
"Yi",
""
],
[
"Mei",
"Xinhao",
""
],
[
"Liu",
"Xubo",
""
],
[
"Mandic",
"Danilo",
""
],
[
"Wang",
"Wenwu",
""
],
[
"Plumbley",
"Mark D.",
""
]
] |
new_dataset
| 0.992865 |
2302.07854
|
Alexander Norcliffe MSc MSci BA
|
Alexander Norcliffe, Lev Proleev, Diana Mincu, Fletcher Lee Hartsell,
Katherine Heller, Subhrajit Roy
|
Benchmarking Continuous Time Models for Predicting Multiple Sclerosis
Progression
|
32 pages, 2 figures, 17 tables, published in TMLR 2023
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multiple sclerosis is a disease that affects the brain and spinal cord; it can
lead to severe disability and has no known cure. The majority of prior work
in machine learning for multiple sclerosis has been centered around using
Magnetic Resonance Imaging scans or laboratory tests; these modalities are both
expensive to acquire and can be unreliable. In a recent paper it was shown that
disease progression can be predicted effectively using performance outcome
measures and demographic data. In our work we build on this to investigate the
modeling side, using continuous time models to predict progression. We
benchmark four continuous time models using a publicly available multiple
sclerosis dataset. We find that the best continuous model is often able to
outperform the best benchmarked discrete time model. We also carry out an
extensive ablation to discover the sources of performance gains, we find that
standardizing existing features leads to a larger performance increase than
interpolating missing features.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 18:45:32 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Sep 2023 23:04:15 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Norcliffe",
"Alexander",
""
],
[
"Proleev",
"Lev",
""
],
[
"Mincu",
"Diana",
""
],
[
"Hartsell",
"Fletcher Lee",
""
],
[
"Heller",
"Katherine",
""
],
[
"Roy",
"Subhrajit",
""
]
] |
new_dataset
| 0.961119 |
2303.06053
|
Si-An Chen
|
Si-An Chen, Chun-Liang Li, Nate Yoder, Sercan O. Arik, Tomas Pfister
|
TSMixer: An All-MLP Architecture for Time Series Forecasting
| null |
Transactions on Machine Learning Research (TMLR), 09/2023
| null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Real-world time-series datasets are often multivariate with complex dynamics.
To capture this complexity, high capacity architectures like recurrent- or
attention-based sequential deep learning models have become popular. However,
recent work demonstrates that simple univariate linear models can outperform
such deep learning models on several commonly used academic benchmarks.
Extending them, in this paper, we investigate the capabilities of linear models
for time-series forecasting and present Time-Series Mixer (TSMixer), a novel
architecture designed by stacking multi-layer perceptrons (MLPs). TSMixer is
based on mixing operations along both the time and feature dimensions to
extract information efficiently. On popular academic benchmarks, the
simple-to-implement TSMixer is comparable to specialized state-of-the-art
models that leverage the inductive biases of specific benchmarks. On the
challenging and large scale M5 benchmark, a real-world retail dataset, TSMixer
demonstrates superior performance compared to the state-of-the-art
alternatives. Our results underline the importance of efficiently utilizing
cross-variate and auxiliary information for improving the performance of time
series forecasting. We present various analyses to shed light into the
capabilities of TSMixer. The design paradigms utilized in TSMixer are expected
to open new horizons for deep learning-based time series forecasting. The
implementation is available at
https://github.com/google-research/google-research/tree/master/tsmixer
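As a rough illustration of the mixing idea (not a faithful reimplementation, which also uses two-layer MLPs, normalization and dropout), the following sketch alternates a time-mixing and a feature-mixing step with residual connections:
```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def mixer_block(x, w_time, w_feat):
    """x: (time, features). Mix along time, then along features, with residuals."""
    x = x + relu(w_time @ x)   # time mixing:    (T, T) @ (T, F) -> (T, F)
    x = x + relu(x @ w_feat)   # feature mixing: (T, F) @ (F, F) -> (T, F)
    return x

T, F = 16, 4                   # lookback length and number of variates
x = rng.normal(size=(T, F))
w_time = rng.normal(size=(T, T)) / np.sqrt(T)
w_feat = rng.normal(size=(F, F)) / np.sqrt(F)
print(mixer_block(x, w_time, w_feat).shape)   # (16, 4)
```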
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 16:41:24 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 06:30:08 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jun 2023 14:56:28 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Sep 2023 09:02:26 GMT"
},
{
"version": "v5",
"created": "Mon, 11 Sep 2023 11:19:49 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Chen",
"Si-An",
""
],
[
"Li",
"Chun-Liang",
""
],
[
"Yoder",
"Nate",
""
],
[
"Arik",
"Sercan O.",
""
],
[
"Pfister",
"Tomas",
""
]
] |
new_dataset
| 0.996222 |
2303.06460
|
Wenchao Li
|
Wenchao Li, Zhan Wang, Yun Wang, Di Weng, Liwenhan Xie, Siming Chen,
Haidong Zhang, Huamin Qu
|
GeoCamera: Telling Stories in Geographic Visualizations with Camera
Movements
|
15 pages. Published as a conference paper at the ACM Conference on
Human Factors in Computing Systems (CHI) 2023
| null |
10.1145/3544548.3581470
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In geographic data videos, camera movements are frequently used and combined
to present information from multiple perspectives. However, creating and
editing camera movements requires significant time and professional skills.
This work aims to lower the barrier of crafting diverse camera movements for
geographic data videos. First, we analyze a corpus of 66 geographic data videos
and derive a design space of camera movements with a dimension for geospatial
targets and one for narrative purposes. Based on the design space, we propose a
set of adaptive camera shots and further develop an interactive tool called
GeoCamera. This interactive tool allows users to flexibly design camera
movements for geographic visualizations. We verify the expressiveness of our
tool through case studies and evaluate its usability with a user study. The
participants find that the tool facilitates the design of camera movements.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 17:20:39 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 13:39:21 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Sep 2023 08:51:02 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Li",
"Wenchao",
""
],
[
"Wang",
"Zhan",
""
],
[
"Wang",
"Yun",
""
],
[
"Weng",
"Di",
""
],
[
"Xie",
"Liwenhan",
""
],
[
"Chen",
"Siming",
""
],
[
"Zhang",
"Haidong",
""
],
[
"Qu",
"Huamin",
""
]
] |
new_dataset
| 0.999139 |
2303.12074
|
Sherwin Bahmani
|
Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan,
Gordon Wetzstein, Leonidas Guibas, Andrea Tagliasacchi
|
CC3D: Layout-Conditioned Generation of Compositional 3D Scenes
|
ICCV 2023; Webpage: https://sherwinbahmani.github.io/cc3d/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we introduce CC3D, a conditional generative model that
synthesizes complex 3D scenes conditioned on 2D semantic scene layouts, trained
using single-view images. Different from most existing 3D GANs that limit their
applicability to aligned single objects, we focus on generating complex scenes
with multiple objects, by modeling the compositional nature of 3D scenes. By
devising a 2D layout-based approach for 3D synthesis and implementing a new 3D
field representation with a stronger geometric inductive bias, we have created
a 3D GAN that is both efficient and of high quality, while allowing for a more
controllable generation process. Our evaluations on synthetic 3D-FRONT and
real-world KITTI-360 datasets demonstrate that our model generates scenes of
improved visual and geometric quality in comparison to previous works.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 17:59:02 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 19:27:42 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Bahmani",
"Sherwin",
""
],
[
"Park",
"Jeong Joon",
""
],
[
"Paschalidou",
"Despoina",
""
],
[
"Yan",
"Xingguang",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Tagliasacchi",
"Andrea",
""
]
] |
new_dataset
| 0.996942 |
2303.17368
|
Haiyi Mei
|
Zhitao Yang, Zhongang Cai, Haiyi Mei, Shuai Liu, Zhaoxi Chen, Weiye
Xiao, Yukun Wei, Zhongfei Qing, Chen Wei, Bo Dai, Wayne Wu, Chen Qian, Dahua
Lin, Ziwei Liu, Lei Yang
|
SynBody: Synthetic Dataset with Layered Human Models for 3D Human
Perception and Modeling
|
Accepted by ICCV 2023. Project webpage: https://synbody.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthetic data has emerged as a promising source for 3D human research as it
offers low-cost access to large-scale human datasets. To advance the diversity
and annotation quality of human models, we introduce a new synthetic dataset,
SynBody, with three appealing features: 1) a clothed parametric human model
that can generate a diverse range of subjects; 2) the layered human
representation that naturally offers high-quality 3D annotations to support
multiple tasks; 3) a scalable system for producing realistic data to facilitate
real-world tasks. The dataset comprises 1.2M images with corresponding accurate
3D annotations, covering 10,000 human body models, 1,187 actions, and various
viewpoints. The dataset includes two subsets for human pose and shape
estimation as well as human neural rendering. Extensive experiments on SynBody
indicate that it substantially enhances both SMPL and SMPL-X estimation.
Furthermore, the incorporation of layered annotations offers a valuable
training resource for investigating the Human Neural Radiance Fields (NeRF).
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 13:30:12 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 17:06:27 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Yang",
"Zhitao",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Mei",
"Haiyi",
""
],
[
"Liu",
"Shuai",
""
],
[
"Chen",
"Zhaoxi",
""
],
[
"Xiao",
"Weiye",
""
],
[
"Wei",
"Yukun",
""
],
[
"Qing",
"Zhongfei",
""
],
[
"Wei",
"Chen",
""
],
[
"Dai",
"Bo",
""
],
[
"Wu",
"Wayne",
""
],
[
"Qian",
"Chen",
""
],
[
"Lin",
"Dahua",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Yang",
"Lei",
""
]
] |
new_dataset
| 0.999746 |
2304.02129
|
Justin Yim
|
Justin K. Yim, Jiming Ren, David Ologan, Selvin Garcia Gonzalez, Aaron
M. Johnson
|
Proprioception and reaction for walking among entanglements
|
2023 IEEE/RSJ International Conference on Intelligent Robots and
Systems
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Entanglements like vines and branches in natural settings or cords and pipes
in human spaces prevent mobile robots from accessing many environments. Legged
robots should be effective in these settings, and more so than wheeled or
tracked platforms, but naive controllers quickly become entangled and stuck. In
this paper we present a method for proprioception aimed specifically at the
task of sensing entanglements of a robot's legs as well as a reaction strategy
to disentangle legs during their swing phase as they advance to their next
foothold. We demonstrate that our proprioception and reaction strategy enables
traversal of entanglements of many stiffnesses and geometries, succeeding in 14
out of 16 trials in laboratory tests, as well as in a natural outdoor
environment.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 21:24:58 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Sep 2023 03:03:45 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Yim",
"Justin K.",
""
],
[
"Ren",
"Jiming",
""
],
[
"Ologan",
"David",
""
],
[
"Gonzalez",
"Selvin Garcia",
""
],
[
"Johnson",
"Aaron M.",
""
]
] |
new_dataset
| 0.9995 |
2304.04321
|
Ran Gong
|
Ran Gong, Jiangyong Huang, Yizhou Zhao, Haoran Geng, Xiaofeng Gao,
Qingyang Wu, Wensi Ai, Ziheng Zhou, Demetri Terzopoulos, Song-Chun Zhu,
Baoxiong Jia, Siyuan Huang
|
ARNOLD: A Benchmark for Language-Grounded Task Learning With Continuous
States in Realistic 3D Scenes
|
The first two authors contributed equally; 20 pages; 17 figures;
project available: https://arnold-benchmark.github.io/ ICCV 2023
| null | null | null |
cs.AI cs.CL cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the continuous states of objects is essential for task learning
and planning in the real world. However, most existing task learning benchmarks
assume discrete (e.g., binary) object goal states, which poses challenges for
the learning of complex tasks and transferring learned policy from simulated
environments to the real world. Furthermore, state discretization limits a
robot's ability to follow human instructions based on the grounding of actions
and states. To tackle these challenges, we present ARNOLD, a benchmark that
evaluates language-grounded task learning with continuous states in realistic
3D scenes. ARNOLD is comprised of 8 language-conditioned tasks that involve
understanding object states and learning policies for continuous goals. To
promote language-instructed learning, we provide expert demonstrations with
template-generated language descriptions. We assess task performance by
utilizing the latest language-conditioned policy learning models. Our results
indicate that current models for language-conditioned manipulations continue to
experience significant challenges in novel goal-state generalizations, scene
generalizations, and object generalizations. These findings highlight the need
to develop new algorithms that address this gap and underscore the potential
for further research in this area. Project website:
https://arnold-benchmark.github.io.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 21:42:57 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 11:27:53 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Gong",
"Ran",
""
],
[
"Huang",
"Jiangyong",
""
],
[
"Zhao",
"Yizhou",
""
],
[
"Geng",
"Haoran",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Wu",
"Qingyang",
""
],
[
"Ai",
"Wensi",
""
],
[
"Zhou",
"Ziheng",
""
],
[
"Terzopoulos",
"Demetri",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Huang",
"Siyuan",
""
]
] |
new_dataset
| 0.999776 |
2304.06197
|
Arjun Mani
|
Arjun Mani, Ishaan Preetam Chandratreya, Elliot Creager, Carl
Vondrick, Richard Zemel
|
SURFSUP: Learning Fluid Simulation for Novel Surfaces
|
Website: https://surfsup.cs.columbia.edu/
| null | null | null |
cs.LG physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling the mechanics of fluid in complex scenes is vital to applications in
design, graphics, and robotics. Learning-based methods provide fast and
differentiable fluid simulators, however most prior work is unable to
accurately model how fluids interact with genuinely novel surfaces not seen
during training. We introduce SURFSUP, a framework that represents objects
implicitly using signed distance functions (SDFs), rather than an explicit
representation of meshes or particles. This continuous representation of
geometry enables more accurate simulation of fluid-object interactions over
long time periods while simultaneously making computation more efficient.
Moreover, SURFSUP trained on simple shape primitives generalizes considerably
out-of-distribution, even to complex real-world scenes and objects. Finally, we
show we can invert our model to design simple objects to manipulate fluid flow.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 00:17:38 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 18:32:16 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Mani",
"Arjun",
""
],
[
"Chandratreya",
"Ishaan Preetam",
""
],
[
"Creager",
"Elliot",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Zemel",
"Richard",
""
]
] |
new_dataset
| 0.999244 |
2304.12635
|
Jian Gao
|
Jian Gao, Xin Cao, Xin Yao, Gong Zhang, Wei Wang
|
LMSFC: A Novel Multidimensional Index based on Learned Monotonic Space
Filling Curves
|
Extended Version. Accepted by VLDB 2023
| null |
10.14778/3603581.3603598
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently proposed learned indexes have attracted much attention as they
can adapt to the actual data and query distributions to attain better search
efficiency. Based on this technique, several existing works build up indexes
for multi-dimensional data and achieve improved query performance. A common
paradigm of these works is to (i) map multi-dimensional data points to a
one-dimensional space using a fixed space-filling curve (SFC) or its variant
and (ii) then apply the learned indexing techniques. We notice that the first
step typically uses a fixed SFC method, such as row-major order and z-order. It
definitely limits the potential of learned multi-dimensional indexes to adapt
variable data distributions via different query workloads. In this paper, we
propose a novel idea of learning a space-filling curve that is carefully
designed and actively optimized for efficient query processing. We also
identify innovative offline and online optimization opportunities common to
SFC-based learned indexes and offer optimal and/or heuristic solutions.
Experimental results demonstrate that our proposed method, LMSFC, outperforms
state-of-the-art non-learned or learned methods across three commonly used
real-world datasets and diverse experimental settings.
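For reference, the fixed z-order baseline mentioned in step (i) can be computed by bit interleaving; the sketch below shows this baseline only, not the learned, query-optimized curve proposed in the paper:
```python
def interleave_bits(x, y, bits=16):
    """Morton (z-order) code: interleave the bits of two coordinates so that
    points close in 2D tend to stay close on the resulting 1D curve."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

points = [(3, 5), (3, 6), (10, 2)]
for p in sorted(points, key=lambda p: interleave_bits(*p)):
    print(p, format(interleave_bits(*p), "b"))
```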
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 08:04:49 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 08:53:11 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2023 03:14:31 GMT"
},
{
"version": "v4",
"created": "Sat, 9 Sep 2023 08:37:16 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Gao",
"Jian",
""
],
[
"Cao",
"Xin",
""
],
[
"Yao",
"Xin",
""
],
[
"Zhang",
"Gong",
""
],
[
"Wang",
"Wei",
""
]
] |
new_dataset
| 0.989535 |
2304.13000
|
Simiao Ren
|
Simiao Ren, Francesco Luzi, Saad Lahrichi, Kaleb Kassaw, Leslie M.
Collins, Kyle Bradbury, Jordan M. Malof
|
Segment anything, from space?
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, the first foundation model developed specifically for image
segmentation tasks was released, termed the "Segment Anything Model" (SAM).
SAM can segment objects in input imagery based on cheap input prompts, such as
one (or more) points, a bounding box, or a mask. The authors examined the
\textit{zero-shot} image segmentation accuracy of SAM on a large number of
vision benchmark tasks and found that SAM usually achieved recognition accuracy
similar to, or sometimes exceeding, vision models that had been trained on the
target tasks. The impressive generalization of SAM for segmentation has major
implications for vision researchers working on natural imagery. In this work,
we examine whether SAM's performance extends to overhead imagery problems and
help guide the community's response to its development. We examine SAM's
performance on a set of diverse and widely studied benchmark tasks. We find
that SAM does often generalize well to overhead imagery, although it fails in
some cases due to the unique characteristics of overhead imagery and its common
target objects. We report on these unique systematic failure cases for remote
sensing imagery that may comprise useful future research for the community.
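For context, prompting SAM with a single point on an overhead tile looks roughly like the sketch below, assuming the publicly released segment-anything package; the checkpoint path, image and prompt point are placeholders, and this is not the authors' evaluation code:
```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint path; a downloaded SAM checkpoint is required.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = np.zeros((1024, 1024, 3), dtype=np.uint8)   # stand-in for an overhead tile
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 512]]),             # one positive point prompt
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)
```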
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 17:14:36 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 14:05:25 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Sep 2023 19:42:02 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Ren",
"Simiao",
""
],
[
"Luzi",
"Francesco",
""
],
[
"Lahrichi",
"Saad",
""
],
[
"Kassaw",
"Kaleb",
""
],
[
"Collins",
"Leslie M.",
""
],
[
"Bradbury",
"Kyle",
""
],
[
"Malof",
"Jordan M.",
""
]
] |
new_dataset
| 0.998168 |
2305.04032
|
Kechi Zhang
|
Kechi Zhang, Huangzhao Zhang, Ge Li, Jia Li, Zhuo Li, Zhi Jin
|
ToolCoder: Teach Code Generation Models to use API search tools
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically generating source code from natural language descriptions has
been a growing field of research in recent years. However, current large-scale
code generation models often encounter difficulties when selecting appropriate
APIs for specific contexts. These models may generate APIs that do not meet
requirements or refer to non-existent APIs in third-party libraries, especially
for lesser-known or private libraries. Inspired by the process of human
developers using tools to search APIs, we propose ToolCoder, a novel approach
that integrates API search tools with existing models to assist in code
generation and API selection. To teach our model to use tools, we introduce an
automated data annotation method using ChatGPT to add tool usage information
into the source code data and fine-tune code generation models. During
inference, we integrate API search tools into the generation process so that
our model can automatically use the search tool to get suggestions when
selecting an API. Our experimental results demonstrate that ToolCoder exhibits
excellent performance and generalization across five public and private library
code generation benchmarks, with at least 6.21\% improvement on average pass@1
metrics and 9.64\% improvement on average pass@10 metrics compared to
state-of-the-art methods. Furthermore, we show that our relatively small
ToolCoder model is comparable to one of the current best models, GPT-3.5,
highlighting the potential of incorporating programming tools into the code
generation process.
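The reported pass@1 and pass@10 numbers are commonly computed with the unbiased pass@k estimator popularized by the HumanEval evaluation; a small sketch is given below (the per-problem counts are hypothetical, and the authors' exact evaluation script may differ):
```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n generations is correct, given c correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-problem counts: (n generated samples, c passing samples).
results = [(20, 3), (20, 0), (20, 12)]
for k in (1, 10):
    avg = sum(pass_at_k(n, c, k) for n, c in results) / len(results)
    print(f"pass@{k} = {avg:.3f}")
```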
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 12:45:28 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 19:34:50 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Aug 2023 10:45:26 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Aug 2023 12:16:47 GMT"
},
{
"version": "v5",
"created": "Mon, 11 Sep 2023 06:33:46 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhang",
"Kechi",
""
],
[
"Zhang",
"Huangzhao",
""
],
[
"Li",
"Ge",
""
],
[
"Li",
"Jia",
""
],
[
"Li",
"Zhuo",
""
],
[
"Jin",
"Zhi",
""
]
] |
new_dataset
| 0.983805 |
2305.04087
|
Kechi Zhang
|
Kechi Zhang, Zhuo Li, Jia Li, Ge Li, Zhi Jin
|
Self-Edit: Fault-Aware Code Editor for Code Generation
|
Accepted by ACL2023
| null | null | null |
cs.SE cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have demonstrated an impressive ability to
generate codes on competitive programming tasks. However, with limited sample
numbers, LLMs still suffer from poor accuracy. Inspired by the process of human
programming, we propose a generate-and-edit approach named Self-Edit that
utilizes execution results of the generated code from LLMs to improve the code
quality on the competitive programming task. We execute the generated code on
the example test case provided in the question and wrap execution results into
a supplementary comment. Utilizing this comment as guidance, our fault-aware
code editor is employed to correct errors in the generated code. We perform
extensive evaluations across two competitive programming datasets with nine
different LLMs. Compared to directly generating from LLMs, our approach can
improve the average of pass@1 by 89\% on APPS-dev, 31\% on APPS-test, and 48\%
on HumanEval over nine popular code generation LLMs with parameter sizes
ranging from 110M to 175B. Compared to other post-processing methods, our
method demonstrates superior accuracy and efficiency.
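A minimal sketch of the execute-and-comment step is shown below (function names, feedback format and the buggy candidate are hypothetical, not the authors' code): the candidate program is run on the example test case and the verdict is wrapped into a supplementary comment for the fault-aware editor.
```python
import subprocess
import sys
import textwrap

def execution_comment(candidate_code, test_input, expected_output, timeout=5):
    """Run the generated program on the example test case and wrap the
    result into a comment that a fault-aware editor model can condition on."""
    try:
        run = subprocess.run([sys.executable, "-c", candidate_code],
                             input=test_input, text=True,
                             capture_output=True, timeout=timeout)
        if run.returncode != 0:
            verdict = f"runtime error:\n{run.stderr.strip()}"
        elif run.stdout.strip() == expected_output.strip():
            verdict = "passed the example test case"
        else:
            verdict = f"wrong answer: got {run.stdout.strip()!r}, expected {expected_output!r}"
    except subprocess.TimeoutExpired:
        verdict = "time limit exceeded"
    return "# Execution feedback:\n" + textwrap.indent(verdict, "# ")

candidate = "a, b = map(int, input().split())\nprint(a - b)"   # buggy: should add
print(execution_comment(candidate, "2 3\n", "5"))
```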
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 16:12:19 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 07:00:47 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jun 2023 04:38:07 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Aug 2023 12:20:27 GMT"
},
{
"version": "v5",
"created": "Mon, 11 Sep 2023 06:27:53 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhang",
"Kechi",
""
],
[
"Li",
"Zhuo",
""
],
[
"Li",
"Jia",
""
],
[
"Li",
"Ge",
""
],
[
"Jin",
"Zhi",
""
]
] |
new_dataset
| 0.964128 |
2305.08455
|
Jordy Van Landeghem
|
Jordy Van Landeghem, Rub\'en Tito, {\L}ukasz Borchmann, Micha{\l}
Pietruszka, Pawe{\l} J\'oziak, Rafa{\l} Powalski, Dawid Jurkiewicz, Micka\"el
Coustaty, Bertrand Ackaert, Ernest Valveny, Matthew Blaschko, Sien Moens,
Tomasz Stanis{\l}awek
|
Document Understanding Dataset and Evaluation (DUDE)
|
Accepted at ICCV 2023
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We call on the Document AI (DocAI) community to reevaluate current
methodologies and embrace the challenge of creating more practically-oriented
benchmarks. Document Understanding Dataset and Evaluation (DUDE) seeks to
remediate the halted research progress in understanding visually-rich documents
(VRDs). We present a new dataset with novelties related to types of questions,
answers, and document layouts based on multi-industry, multi-domain, and
multi-page VRDs of various origins and dates. Moreover, we are pushing the
boundaries of current methods by creating multi-task and multi-domain
evaluation setups that more accurately simulate real-world situations where
powerful generalization and adaptation under low-resource settings are desired.
DUDE aims to set a new standard as a more practical, long-standing benchmark
for the community, and we hope that it will lead to future extensions and
contributions that address real-world challenges. Finally, our work illustrates
the importance of finding more efficient ways to model language, images, and
layout in DocAI.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 08:54:32 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 10:06:57 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 10:36:41 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Van Landeghem",
"Jordy",
""
],
[
"Tito",
"Rubén",
""
],
[
"Borchmann",
"Łukasz",
""
],
[
"Pietruszka",
"Michał",
""
],
[
"Józiak",
"Paweł",
""
],
[
"Powalski",
"Rafał",
""
],
[
"Jurkiewicz",
"Dawid",
""
],
[
"Coustaty",
"Mickaël",
""
],
[
"Ackaert",
"Bertrand",
""
],
[
"Valveny",
"Ernest",
""
],
[
"Blaschko",
"Matthew",
""
],
[
"Moens",
"Sien",
""
],
[
"Stanisławek",
"Tomasz",
""
]
] |
new_dataset
| 0.997833 |
2305.10346
|
Igor Sfiligoi
|
Igor Sfiligoi, Daniel McDonald, Rob Knight and Frank W\"urthwein
|
Testing GitHub projects on custom resources using unprivileged
Kubernetes runners
|
5 pages, 1 figure, To be published in proceedings of PEARC23
|
Practice and Experience in Advanced Research Computing (PEARC
'23). Association for Computing Machinery, New York, NY, USA, 332-335. (2023)
|
10.1145/3569951.3597586
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
GitHub is a popular repository for hosting software projects, both due to
ease of use and the seamless integration with its testing environment. Native
GitHub Actions make it easy for software developers to validate new commits and
have confidence that new code does not introduce major bugs. The freely
available test environments are limited to only a few popular setups but can be
extended with custom Action Runners. Our team had access to a Kubernetes
cluster with GPU accelerators, so we explored the feasibility of automatically
deploying GPU-providing runners there. All available Kubernetes-based setups,
however, require cluster-admin level privileges. To address this problem, we
developed a simple custom setup that operates in a completely unprivileged
manner. In this paper we provide a summary description of the setup and our
experience using it in the context of two Knight lab projects on the Prototype
National Research Platform system.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 16:31:41 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Sfiligoi",
"Igor",
""
],
[
"McDonald",
"Daniel",
""
],
[
"Knight",
"Rob",
""
],
[
"Würthwein",
"Frank",
""
]
] |
new_dataset
| 0.972311 |
2305.18365
|
Taicheng Guo
|
Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo,
Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
|
What can Large Language Models do in chemistry? A comprehensive
benchmark on eight tasks
|
Add extra LLMs experiments; more baselines and more investigations on
SELFIES, label interpretation, etc
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) with strong abilities in natural language
processing tasks have emerged and have been applied in various kinds of areas
such as science, finance and software engineering. However, the capability of
LLMs to advance the field of chemistry remains unclear. In this paper, rather
than pursuing state-of-the-art performance, we aim to evaluate capabilities of
LLMs in a wide range of tasks across the chemistry domain. We identify three
key chemistry-related capabilities including understanding, reasoning and
explaining to explore in LLMs and establish a benchmark containing eight
chemistry tasks. Our analysis draws on widely recognized datasets facilitating
a broad exploration of the capacities of LLMs within the context of practical
chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are
evaluated for each chemistry task in zero-shot and few-shot in-context learning
settings with carefully selected demonstration examples and specially crafted
prompts. Our investigation found that GPT-4 outperformed other models and LLMs
exhibit different competitive levels in eight chemistry tasks. In addition to
the key findings from the comprehensive benchmark analysis, our work provides
insights into the limitation of current LLMs and the impact of in-context
learning settings on LLMs' performance across various chemistry tasks. The code
and datasets used in this study are available at
https://github.com/ChemFoundationModels/ChemLLMBench.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 14:17:33 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Sep 2023 16:37:36 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Guo",
"Taicheng",
""
],
[
"Guo",
"Kehan",
""
],
[
"Nan",
"Bozhao",
""
],
[
"Liang",
"Zhenwen",
""
],
[
"Guo",
"Zhichun",
""
],
[
"Chawla",
"Nitesh V.",
""
],
[
"Wiest",
"Olaf",
""
],
[
"Zhang",
"Xiangliang",
""
]
] |
new_dataset
| 0.991725 |
2307.06868
|
Markus Heinrichs
|
Markus Heinrichs, Aydin Sezgin, Rainer Kronberger
|
Open Source Reconfigurable Intelligent Surface for the Frequency Range
of 5 GHz WiFi
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable Intelligent Surfaces (RIS) have been identified as a potential
ingredient to enhance the performance of contemporary wireless communication
and sensing systems. Yet, most of the existing devices are either costly or not
available for reproduction. To close this gap, a Reconfigurable Intelligent
Surface for the frequency range of 5 GHz WiFi is presented in this work. We
describe the designed unit cell, which is optimized for the full frequency
range of 5.15 to 5.875 GHz. Standard FR4 substrate is used for cost
optimization. The measured reflection coefficient of a rectangular RIS
prototype with 256 elements is used for RF performance evaluation. Fabrication
data and firmware source code are made open source, which makes RIS more
available in real measurement setups.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 17:33:53 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 16:59:54 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Heinrichs",
"Markus",
""
],
[
"Sezgin",
"Aydin",
""
],
[
"Kronberger",
"Rainer",
""
]
] |
new_dataset
| 0.999681 |
2308.06791
|
Yongxin Shao
|
Yongxin Shao and Aihong Tan and Zhetao Sun and Enhui Zheng and
Tianhong Yan
|
PV-SSD: A Projection and Voxel-based Double Branch Single-Stage 3D
Object Detector
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LIDAR-based 3D object detection and classification is crucial for autonomous
driving. However, inference in real-time from extremely sparse 3D data poses a
formidable challenge. To address this issue, a common approach is to project
point clouds onto a bird's-eye or perspective view, effectively converting them
into an image-like data format. However, this excessive compression of point
cloud data often leads to the loss of information. This paper proposes a 3D
object detector based on voxel and projection double branch feature extraction
(PV-SSD) to address the problem of information loss. We add a voxel feature
input containing rich local semantic information, which is fully fused with the
projected features in the feature extraction stage to reduce the local
information loss caused by projection. Good performance is achieved compared
to previous work. In addition, this paper makes the following
contributions: 1) a voxel feature extraction method with variable receptive
fields is proposed; 2) a feature point sampling method by weight sampling is
used to filter out the feature points that are more conducive to the detection
task; 3) the MSSFA module is proposed based on the SSFA module. To verify the
effectiveness of our method, we designed comparison experiments.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 15:30:02 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 07:49:41 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Sep 2023 15:01:31 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Shao",
"Yongxin",
""
],
[
"Tan",
"Aihong",
""
],
[
"Sun",
"Zhetao",
""
],
[
"Zheng",
"Enhui",
""
],
[
"Yan",
"Tianhong",
""
]
] |
new_dataset
| 0.975802 |
2308.14448
|
Yicheng Zhong
|
Yicheng Zhong, Huawei Wei, Peiji Yang, Zhisheng Wang
|
ExpCLIP: Bridging Text and Facial Expressions via Semantic Alignment
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of stylized speech-driven facial animation is to create
animations that encapsulate specific emotional expressions. Existing methods
often depend on pre-established emotional labels or facial expression
templates, which may limit the necessary flexibility for accurately conveying
user intent. In this research, we introduce a technique that enables the
control of arbitrary styles by leveraging natural language as emotion prompts.
This technique presents benefits in terms of both flexibility and
user-friendliness. To realize this objective, we initially construct a
Text-Expression Alignment Dataset (TEAD), wherein each facial expression is
paired with several prompt-like descriptions. We propose an innovative automatic
annotation method, supported by Large Language Models (LLMs), to expedite the
dataset construction, thereby eliminating the substantial expense of manual
annotation. Following this, we utilize TEAD to train a CLIP-based model, termed
ExpCLIP, which encodes text and facial expressions into semantically aligned
style embeddings. The embeddings are subsequently integrated into the facial
animation generator to yield expressive and controllable facial animations.
Given the limited diversity of facial emotions in existing speech-driven facial
animation training data, we further introduce an effective Expression Prompt
Augmentation (EPA) mechanism to enable the animation generator to support
unprecedented richness in style control. Comprehensive experiments illustrate
that our method accomplishes expressive facial animation generation and offers
enhanced flexibility in effectively conveying the desired style.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 09:35:13 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 08:56:32 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhong",
"Yicheng",
""
],
[
"Wei",
"Huawei",
""
],
[
"Yang",
"Peiji",
""
],
[
"Wang",
"Zhisheng",
""
]
] |
new_dataset
| 0.999704 |
2309.01199
|
Jiaxin Jiang
|
Jiaxin Jiang, Byron Choi, Xin Huang, Jianliang Xu and Sourav S
Bhowmick
|
DKWS: A Distributed System for Keyword Search on Massive Graphs
(Complete Version)
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the unstructuredness and the lack of schemas of graphs, such as
knowledge graphs, social networks, and RDF graphs, keyword search for querying
such graphs has been proposed. As graphs have become voluminous, large-scale
distributed processing has attracted much interest from the database research
community. While there have been several distributed systems, distributed
querying techniques for keyword search are still limited. This paper proposes a
novel distributed keyword search system called DKWS. First, we present a
monotonic property of keyword search algorithms that guarantees correct
parallelization. Second, we present a keyword search algorithm as monotonic
backward and forward search phases. Moreover, we propose new tight bounds for
pruning nodes being searched. Third, we propose a notify-push paradigm and a
PINE programming model of DKWS. The notify-push paradigm allows asynchronously
exchanging the upper bounds of matches across the workers and the coordinator
in DKWS. The PINE programming model naturally fits keyword search algorithms,
as they have distinguished phases, to allow preemptive searches to mitigate
staleness in a distributed system. Finally, we investigate the performance and
effectiveness of DKWS through experiments using real-world datasets. We find
that DKWS is up to two orders of magnitude faster than related techniques, and
its communication costs are 7.6 times smaller than those of other techniques.
|
[
{
"version": "v1",
"created": "Sun, 3 Sep 2023 15:14:12 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Sep 2023 08:41:30 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Jiang",
"Jiaxin",
""
],
[
"Choi",
"Byron",
""
],
[
"Huang",
"Xin",
""
],
[
"Xu",
"Jianliang",
""
],
[
"Bhowmick",
"Sourav S",
""
]
] |
new_dataset
| 0.997079 |
2309.01237
|
Alexander Kolpakov
|
Alexander Kolpakov, Aidan Rocke
|
The Information Geometry of UMAP
|
8 pages, 2 figures; Github repo
(https://github.com/sashakolpakov/info-geometry-umap)
| null | null | null |
cs.CG cs.DM cs.IT math.GT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this note we highlight some connections of UMAP to the basic principles of
Information Geometry. Originally, UMAP was derived from Category Theory
observations. However, we posit that it also has a natural geometric
interpretation.
|
[
{
"version": "v1",
"created": "Sun, 3 Sep 2023 18:10:00 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Sep 2023 09:31:41 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Kolpakov",
"Alexander",
""
],
[
"Rocke",
"Aidan",
""
]
] |
new_dataset
| 0.99897 |
2309.01361
|
Yasir Latif
|
Yasir Latif, Peter Anastasiou, Yonhon Ng, Zebb Prime, Tien-Fu Lu,
Matthew Tetlow, Robert Mahony, Tat-Jun Chin
|
High Frequency, High Accuracy Pointing onboard Nanosats using
Neuromorphic Event Sensing and Piezoelectric Actuation
| null | null | null | null |
cs.ET cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As satellites become smaller, the ability to maintain stable pointing
decreases as external forces acting on the satellite come into play. At the
same time, reaction wheels used in the attitude determination and control
system (ADCS) introduce high frequency jitter which can disrupt pointing
stability. For space domain awareness (SDA) tasks that track objects tens of
thousands of kilometres away, the pointing accuracy offered by current
nanosats, typically in the range of 10 to 100 arcseconds, is not sufficient. In
this work, we develop a novel payload that utilises a neuromorphic event sensor
(for high frequency and highly accurate relative attitude estimation) paired in
a closed loop with a piezoelectric stage (for active attitude corrections) to
provide highly stable sensor-specific pointing. Event sensors are especially
suited for space applications due to their desirable characteristics of low
power consumption, asynchronous operation, and high dynamic range. We use the
event sensor to first estimate a reference background star field from which
instantaneous relative attitude is estimated at high frequency. The
piezoelectric stage works in a closed control loop with the event sensor to
perform attitude corrections based on the discrepancy between the current and
desired attitude. Results in a controlled setting show that we can achieve a
pointing accuracy in the range of 1-5 arcseconds using our novel payload at an
operating frequency of up to 50Hz using a prototype built from
commercial-off-the-shelf components. Further details can be found at
https://ylatif.github.io/ultrafinestabilisation
|
[
{
"version": "v1",
"created": "Mon, 4 Sep 2023 05:05:15 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 07:14:45 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 03:50:57 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Latif",
"Yasir",
""
],
[
"Anastasiou",
"Peter",
""
],
[
"Ng",
"Yonhon",
""
],
[
"Prime",
"Zebb",
""
],
[
"Lu",
"Tien-Fu",
""
],
[
"Tetlow",
"Matthew",
""
],
[
"Mahony",
"Robert",
""
],
[
"Chin",
"Tat-Jun",
""
]
] |
new_dataset
| 0.978799 |
2309.01380
|
Soumya Jahagirdar
|
Soumya Jahagirdar, Minesh Mathew, Dimosthenis Karatzas, C. V. Jawahar
|
Understanding Video Scenes through Text: Insights from Text-based Video
Question Answering
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Researchers have extensively studied the field of vision and language,
discovering that both visual and textual content is crucial for understanding
scenes effectively. Particularly, comprehending text in videos holds great
significance, requiring both scene text understanding and temporal reasoning.
This paper focuses on exploring two recently introduced datasets, NewsVideoQA
and M4-ViteVQA, which aim to address video question answering based on textual
content. The NewsVideoQA dataset contains question-answer pairs related to the
text in news videos, while M4-ViteVQA comprises question-answer pairs from
diverse categories like vlogging, traveling, and shopping. We provide an
analysis of the formulation of these datasets on various levels, exploring the
degree of visual understanding and multi-frame comprehension required for
answering the questions. Additionally, the study includes experimentation with
BERT-QA, a text-only model, which demonstrates comparable performance to the
original methods on both datasets, indicating the shortcomings in the
formulation of these datasets. Furthermore, we also look into the domain
adaptation aspect by examining the effectiveness of training on M4-ViteVQA and
evaluating on NewsVideoQA and vice-versa, thereby shedding light on the
challenges and potential benefits of out-of-domain training.
|
[
{
"version": "v1",
"created": "Mon, 4 Sep 2023 06:11:00 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 07:01:24 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Jahagirdar",
"Soumya",
""
],
[
"Mathew",
"Minesh",
""
],
[
"Karatzas",
"Dimosthenis",
""
],
[
"Jawahar",
"C. V.",
""
]
] |
new_dataset
| 0.999702 |
2309.01940
|
Lingyue Fu Miss
|
Lingyue Fu, Huacan Chai, Shuang Luo, Kounianhua Du, Weiming Zhang,
Longteng Fan, Jiayi Lei, Renting Rui, Jianghao Lin, Yuchen Fang, Yifan Liu,
Jingkuan Wang, Siyuan Qi, Kangning Zhang, Weinan Zhang, Yong Yu
|
CodeApex: A Bilingual Programming Evaluation Benchmark for Large
Language Models
|
21 pages
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the emergence of Large Language Models (LLMs), there has been a
significant improvement in the programming capabilities of models, attracting
growing attention from researchers. We propose CodeApex, a bilingual benchmark
dataset focusing on the programming comprehension and code generation abilities
of LLMs. CodeApex comprises three types of multiple-choice questions:
conceptual understanding, commonsense reasoning, and multi-hop reasoning,
designed to evaluate LLMs on programming comprehension tasks. Additionally,
CodeApex utilizes algorithmic questions and corresponding test cases to assess
the code quality generated by LLMs. We evaluate 14 state-of-the-art LLMs,
including both general-purpose and specialized models. GPT exhibits the best
programming capabilities, achieving approximate accuracies of 50% and 56% on
the two tasks, respectively. There is still significant room for improvement in
programming tasks. We hope that CodeApex can serve as a reference for
evaluating the coding capabilities of LLMs, further promoting their development
and growth. Datasets are released at https://github.com/APEXLAB/CodeApex.git.
CodeApex submission website is https://apex.sjtu.edu.cn/codeapex/.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 04:12:01 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 15:36:11 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Sep 2023 13:32:38 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Fu",
"Lingyue",
""
],
[
"Chai",
"Huacan",
""
],
[
"Luo",
"Shuang",
""
],
[
"Du",
"Kounianhua",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Fan",
"Longteng",
""
],
[
"Lei",
"Jiayi",
""
],
[
"Rui",
"Renting",
""
],
[
"Lin",
"Jianghao",
""
],
[
"Fang",
"Yuchen",
""
],
[
"Liu",
"Yifan",
""
],
[
"Wang",
"Jingkuan",
""
],
[
"Qi",
"Siyuan",
""
],
[
"Zhang",
"Kangning",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Yu",
"Yong",
""
]
] |
new_dataset
| 0.999803 |
2309.01961
|
Pyunghwan Ahn
|
Taehoon Kim, Pyunghwan Ahn, Sangyun Kim, Sihaeng Lee, Mark Marsden,
Alessandra Sala, Seung Hwan Kim, Bohyung Han, Kyoung Mu Lee, Honglak Lee,
Kyounghoon Bae, Xiangyu Wu, Yi Gao, Hailiang Zhang, Yang Yang, Weili Guo,
Jianfeng Lu, Youngtaek Oh, Jae Won Cho, Dong-jin Kim, In So Kweon, Junmo Kim,
Wooyoung Kang, Won Young Jhoo, Byungseok Roh, Jonghwan Mun, Solgil Oh, Kenan
Emir Ak, Gwang-Gook Lee, Yan Xu, Mingwei Shen, Kyomin Hwang, Wonsik Shin,
Kamin Lee, Wonhark Park, Dongkwan Lee, Nojun Kwak, Yujin Wang, Yimu Wang,
Tiancheng Gu, Xingchang Lv, Mingmao Sun
|
NICE: CVPR 2023 Challenge on Zero-shot Image Captioning
|
Tech report, project page https://nice.lgresearch.ai/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this report, we introduce NICE (New frontiers for zero-shot Image
Captioning Evaluation) project and share the results and outcomes of 2023
challenge. This project is designed to challenge the computer vision community
to develop robust image captioning models that advance the state-of-the-art
both in terms of accuracy and fairness. Through the challenge, the image
captioning models were tested using a new evaluation dataset that includes a
large variety of visual concepts from many domains. There was no specific
training data provided for the challenge, and therefore the challenge entries
were required to adapt to new types of image descriptions that had not been
seen during training. This report includes information on the newly proposed
NICE dataset, evaluation methods, challenge results, and technical details of
top-ranking entries. We expect that the outcomes of the challenge will
contribute to the improvement of AI models on various vision-language tasks.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 05:32:19 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 06:13:34 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Sep 2023 02:15:30 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Kim",
"Taehoon",
""
],
[
"Ahn",
"Pyunghwan",
""
],
[
"Kim",
"Sangyun",
""
],
[
"Lee",
"Sihaeng",
""
],
[
"Marsden",
"Mark",
""
],
[
"Sala",
"Alessandra",
""
],
[
"Kim",
"Seung Hwan",
""
],
[
"Han",
"Bohyung",
""
],
[
"Lee",
"Kyoung Mu",
""
],
[
"Lee",
"Honglak",
""
],
[
"Bae",
"Kyounghoon",
""
],
[
"Wu",
"Xiangyu",
""
],
[
"Gao",
"Yi",
""
],
[
"Zhang",
"Hailiang",
""
],
[
"Yang",
"Yang",
""
],
[
"Guo",
"Weili",
""
],
[
"Lu",
"Jianfeng",
""
],
[
"Oh",
"Youngtaek",
""
],
[
"Cho",
"Jae Won",
""
],
[
"Kim",
"Dong-jin",
""
],
[
"Kweon",
"In So",
""
],
[
"Kim",
"Junmo",
""
],
[
"Kang",
"Wooyoung",
""
],
[
"Jhoo",
"Won Young",
""
],
[
"Roh",
"Byungseok",
""
],
[
"Mun",
"Jonghwan",
""
],
[
"Oh",
"Solgil",
""
],
[
"Ak",
"Kenan Emir",
""
],
[
"Lee",
"Gwang-Gook",
""
],
[
"Xu",
"Yan",
""
],
[
"Shen",
"Mingwei",
""
],
[
"Hwang",
"Kyomin",
""
],
[
"Shin",
"Wonsik",
""
],
[
"Lee",
"Kamin",
""
],
[
"Park",
"Wonhark",
""
],
[
"Lee",
"Dongkwan",
""
],
[
"Kwak",
"Nojun",
""
],
[
"Wang",
"Yujin",
""
],
[
"Wang",
"Yimu",
""
],
[
"Gu",
"Tiancheng",
""
],
[
"Lv",
"Xingchang",
""
],
[
"Sun",
"Mingmao",
""
]
] |
new_dataset
| 0.998396 |
2309.04138
|
Daegyu Lim
|
Daegyu Lim, Myeong-Ju Kim, Junhyeok Cha, Donghyeon Kim, Jaeheung Park
|
Proprioceptive External Torque Learning for Floating Base Robot and its
Applications to Humanoid Locomotion
|
Accepted by 2023 IROS conference
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The estimation of external joint torque and contact wrench is essential for
achieving stable locomotion of humanoids and safety-oriented robots. Although
the contact wrench on the foot of humanoids can be measured using a
force-torque sensor (FTS), FTS increases the cost, inertia, complexity, and
failure possibility of the system. This paper introduces a method for learning
external joint torque solely using proprioceptive sensors (encoders and IMUs)
for a floating base robot. For learning, the GRU network is used and random
walking data is collected. Real robot experiments demonstrate that the network
can estimate the external torque and contact wrench with significantly smaller
errors compared to the model-based method, momentum observer (MOB) with
friction modeling. The study also validates that the estimated contact wrench
can be utilized for zero moment point (ZMP) feedback control, enabling stable
walking. Moreover, even when the robot's feet and the inertia of the upper body
are changed, the trained network shows consistent performance with a
model-based calibration. This result demonstrates the possibility of removing
FTS on the robot, which reduces the disadvantages of hardware sensors. The
summary video is available at https://youtu.be/gT1D4tOiKpo.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 05:33:56 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Lim",
"Daegyu",
""
],
[
"Kim",
"Myeong-Ju",
""
],
[
"Cha",
"Junhyeok",
""
],
[
"Kim",
"Donghyeon",
""
],
[
"Park",
"Jaeheung",
""
]
] |
new_dataset
| 0.991088 |
2309.04389
|
Jinyuan Wang
|
Jinyuan Wang, Hai Zhao, Zhong Wang, Zeyang Zhu, Jinhao Xie, Yong Yu,
Yongjian Fei, Yue Huang and Dawei Cheng
|
CSPRD: A Financial Policy Retrieval Dataset for Chinese Stock Market
| null | null | null | null |
cs.CL cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, great advances in pre-trained language models (PLMs) have
sparked considerable research focus and achieved promising performance on the
approach of dense passage retrieval, which aims at retrieving relative passages
from massive corpus with given questions. However, most of existing datasets
mainly benchmark the models with factoid queries of general commonsense, while
specialised fields such as finance and economics remain unexplored due to the
deficiency of large-scale and high-quality datasets with expert annotations. In
this work, we propose a new task, policy retrieval, by introducing the Chinese
Stock Policy Retrieval Dataset (CSPRD), which provides 700+ prospectus passages
labeled by experienced experts with relevant articles from 10k+ entries in our
collected Chinese policy corpus. Experiments on lexical, embedding and
fine-tuned bi-encoder models show the effectiveness of our proposed CSPRD yet
also suggest ample potential for improvement. Our best-performing baseline
achieves 56.1% MRR@10, 28.5% NDCG@10, 37.5% Recall@10 and 80.6% Precision@10 on
dev set.
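The reported ranking metrics follow standard definitions; as a minimal, illustrative helper (not the paper's evaluation code), MRR@10 can be computed from per-query ranked passage lists as follows:

```python
# Illustrative helper (not from the paper): MRR@10 over ranked retrieval results.
# `ranked_ids` holds, per query, passage ids ordered by decreasing score;
# `relevant_ids` holds the set of gold passages for that query.
def mrr_at_k(ranked_ids, relevant_ids, k=10):
    total = 0.0
    for ranked, gold in zip(ranked_ids, relevant_ids):
        rr = 0.0
        for rank, pid in enumerate(ranked[:k], start=1):
            if pid in gold:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_ids)

# Example: one query where the first relevant passage appears at rank 2.
print(mrr_at_k([[5, 3, 9]], [{3}], k=10))  # 0.5
```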
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 15:40:54 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 05:19:16 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wang",
"Jinyuan",
""
],
[
"Zhao",
"Hai",
""
],
[
"Wang",
"Zhong",
""
],
[
"Zhu",
"Zeyang",
""
],
[
"Xie",
"Jinhao",
""
],
[
"Yu",
"Yong",
""
],
[
"Fei",
"Yongjian",
""
],
[
"Huang",
"Yue",
""
],
[
"Cheng",
"Dawei",
""
]
] |
new_dataset
| 0.99954 |
2309.04505
|
Asmaa Shati
|
Asmaa Shati, Ghulam Mubashar Hassan and Amitava Datta
|
COVID-19 Detection System: A Comparative Analysis of System Performance
Based on Acoustic Features of Cough Audio Signals
|
8 pages, 3 figures
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A wide range of respiratory diseases, such as cold and flu, asthma, and
COVID-19, affect people's daily lives worldwide. In medical practice,
respiratory sounds are widely used in medical services to diagnose various
respiratory illnesses and lung disorders. Traditional diagnosis based on such
sounds requires specialized knowledge, which can be costly and reliant on human
expertise. Recently, cough audio recordings have been used to automate the
process of detecting respiratory conditions. This research aims to examine
various acoustic features that enhance the performance of machine learning (ML)
models in detecting COVID-19 from cough signals. This study investigates the
efficacy of three feature extraction techniques, including Mel Frequency
Cepstral Coefficients (MFCC), Chroma, and Spectral Contrast features, on two ML
algorithms, Support Vector Machine (SVM) and Multilayer Perceptron (MLP), and
thus proposes an efficient COVID-19 detection system. The proposed system
provides a practical solution and demonstrates state-of-the-art
classification performance on the COUGHVID and Virufy datasets for COVID-19
detection.
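A minimal sketch of this kind of pipeline is shown below. It is not the authors' implementation; the file paths and labels are hypothetical placeholders, and it simply pools MFCC, Chroma, and Spectral Contrast features per recording before fitting an SVM:

```python
# Illustrative feature pipeline (assumed paths/labels, not the paper's code).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cough_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    # Pool each feature over time (mean) and concatenate into one vector.
    return np.concatenate([f.mean(axis=1) for f in (mfcc, chroma, contrast)])

# Hypothetical training data: wav paths and 0/1 COVID labels.
paths, labels = ["cough_001.wav", "cough_002.wav"], [0, 1]
X = np.stack([cough_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
```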
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 08:33:24 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Shati",
"Asmaa",
""
],
[
"Hassan",
"Ghulam Mubashar",
""
],
[
"Datta",
"Amitava",
""
]
] |
new_dataset
| 0.979765 |
2309.04542
|
SaiKiran Tedla
|
SaiKiran Tedla, Beixuan Yang, Michael S. Brown
|
Examining Autoexposure for Challenging Scenes
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autoexposure (AE) is a critical step applied by camera systems to ensure
properly exposed images. While current AE algorithms are effective in well-lit
environments with constant illumination, these algorithms still struggle in
environments with bright light sources or scenes with abrupt changes in
lighting. A significant hurdle in developing new AE algorithms for challenging
environments, especially those with time-varying lighting, is the lack of
suitable image datasets. To address this issue, we have captured a new 4D
exposure dataset that provides a large solution space (i.e., shutter speed
range from 1/500 to 15 seconds) over a temporal sequence with moving objects,
bright lights, and varying lighting. In addition, we have designed a software
platform to allow AE algorithms to be used in a plug-and-play manner with the
dataset. Our dataset and associated platform enable repeatable evaluation of
different AE algorithms and provide a much-needed starting point to develop
better AE methods. We examine several existing AE strategies using our dataset
and show that most users prefer a simple saliency method for challenging
lighting conditions.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 18:12:39 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Tedla",
"SaiKiran",
""
],
[
"Yang",
"Beixuan",
""
],
[
"Brown",
"Michael S.",
""
]
] |
new_dataset
| 0.999741 |
2309.04566
|
Yun Wen
|
Yun Wen (1), Gaojie Chen (1), Sisai Fang (2), Zheng Chu (1), Pei Xiao
(1) and Rahim Tafazolli (1) ((1) Institute for Communication Systems (ICS),
5GIC & 6GIC, University of Surrey (2) School of Engineering, University of
Leicester)
|
STAR-RIS-Assisted-Full-Duplex Jamming Design for Secure Wireless
Communications System
|
12 pages, 7 figures
| null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical layer security (PLS) technologies are expected to play an important
role in the next-generation wireless networks, by providing secure
communication to protect critical and sensitive information from illegitimate
devices. In this paper, we propose a novel secure communication scheme where
the legitimate receiver uses full-duplex (FD) technology to transmit jamming
signals with the assistance of simultaneous transmitting and reflecting
reconfigurable intelligent surface (STAR-RIS), which can operate under the energy
splitting (ES) model and the mode switching (MS) model, to interfere with the
undesired reception by the eavesdropper. We aim to maximize the secrecy
capacity by jointly optimizing the FD beamforming vectors, amplitudes and phase
shift coefficients for the ES-RIS, and mode selection and phase shift
coefficients for the MS-RIS. With above optimization, the proposed scheme can
concentrate the jamming signals on the eavesdropper while simultaneously
eliminating the self-interference (SI) in the desired receiver. To tackle the
coupling effect of multiple variables, we propose an alternating optimization
algorithm to solve the problem iteratively. Furthermore, we handle the
non-convexity of the problem by the successive convex approximation (SCA)
scheme for the beamforming optimizations, amplitudes and phase shifts
optimizations for the ES-RIS, as well as the phase shifts optimizations for the
MS-RIS. In addition, we adopt a semi-definite relaxation (SDR) and Gaussian
randomization process to overcome the difficulty introduced by the binary
nature of mode optimization of the MS-RIS. Simulation results validate the
performance of our proposed schemes as well as the efficacy of adopting both
types of STAR-RISs in enhancing secure communications when compared to the
traditional self-interference cancellation technology.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 19:36:02 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wen",
"Yun",
""
],
[
"Chen",
"Gaojie",
""
],
[
"Fang",
"Sisai",
""
],
[
"Chu",
"Zheng",
""
],
[
"Xiao",
"Pei",
""
],
[
"Tafazolli",
"Rahim",
""
]
] |
new_dataset
| 0.9975 |
2309.04579
|
Xueyi Wang
|
Xueyi Wang
|
EGOFALLS: A visual-audio dataset and benchmark for fall detection using
egocentric cameras
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Falls are significant and often fatal for vulnerable populations such as the
elderly. Previous works have addressed the detection of falls by relying on
data captured by a single sensor, such as images or accelerometers. In this work, we
rely on multimodal descriptors extracted from videos captured by egocentric
cameras. Our proposed method includes a late decision fusion layer that builds
on top of the extracted descriptors. Furthermore, we collect a new dataset on
which we assess our proposed approach. We believe this is the first public
dataset of its kind. The dataset comprises 10,948 video samples by 14 subjects.
We conducted ablation experiments to assess the performance of individual
feature extractors, fusion of visual information, and fusion of both visual and
audio information. Moreover, we experimented with internal and external
cross-validation. Our results demonstrate that the fusion of audio and visual
information through late decision fusion improves detection performance, making
it a promising tool for fall prevention and mitigation.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 20:14:25 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wang",
"Xueyi",
""
]
] |
new_dataset
| 0.998696 |
2309.04590
|
Arpit Agarwal Mr.
|
Arpit Agarwal, Abhiroop Ajith, Chengtao Wen, Veniamin Stryzheus, Brian
Miller, Matthew Chen, Micah K. Johnson, Jose Luis Susa Rincon, Justinian
Rosca and Wenzhen Yuan
|
Robotic Defect Inspection with Visual and Tactile Perception for
Large-scale Components
|
This is a pre-print for International Conference on Intelligent
Robots and Systems 2023 publication
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In manufacturing processes, surface inspection is a key requirement for
quality assessment and damage localization. Due to this, automated surface
anomaly detection has become a promising area of research in various industrial
inspection systems. A particular challenge in industries with large-scale
components, like aircraft and heavy machinery, is inspecting large parts with
very small defect dimensions. Moreover, these parts can be of curved shapes. To
address this challenge, we present a 2-stage multi-modal inspection pipeline
with visual and tactile sensing. Our approach combines the best of both visual
and tactile sensing by identifying and localizing defects using a global view
(vision) and using the localized area for tactile scanning for identifying
remaining defects. To benchmark our approach, we propose a novel real-world
dataset with multiple metallic defect types per image, collected in the
production environments on real aerospace manufacturing parts, as well as
online robot experiments in two environments. Our approach is able to identify
85% of defects using Stage I and 100% of defects after Stage II. The dataset
is publicly available at https://zenodo.org/record/8327713
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 20:36:56 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Agarwal",
"Arpit",
""
],
[
"Ajith",
"Abhiroop",
""
],
[
"Wen",
"Chengtao",
""
],
[
"Stryzheus",
"Veniamin",
""
],
[
"Miller",
"Brian",
""
],
[
"Chen",
"Matthew",
""
],
[
"Johnson",
"Micah K.",
""
],
[
"Rincon",
"Jose Luis Susa",
""
],
[
"Rosca",
"Justinian",
""
],
[
"Yuan",
"Wenzhen",
""
]
] |
new_dataset
| 0.998654 |
2309.04662
|
Sneha Kudugunta
|
Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher
A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella,
Ankur Bapna, Orhan Firat
|
MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
|
Preprint
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MADLAD-400, a manually audited, general domain 3T token
monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss
the limitations revealed by self-auditing MADLAD-400, and the role data
auditing had in the dataset creation process. We then train and release a
10.7B-parameter multilingual machine translation model on 250 billion tokens
covering over 450 languages using publicly available data, and find that it is
competitive with models that are significantly larger, and report the results
on different domains. In addition, we train an 8B-parameter language model, and
assess the results on few-shot translation. We make the baseline models
available to the research community.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 02:34:01 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Kudugunta",
"Sneha",
""
],
[
"Caswell",
"Isaac",
""
],
[
"Zhang",
"Biao",
""
],
[
"Garcia",
"Xavier",
""
],
[
"Choquette-Choo",
"Christopher A.",
""
],
[
"Lee",
"Katherine",
""
],
[
"Xin",
"Derrick",
""
],
[
"Kusupati",
"Aditya",
""
],
[
"Stella",
"Romi",
""
],
[
"Bapna",
"Ankur",
""
],
[
"Firat",
"Orhan",
""
]
] |
new_dataset
| 0.999849 |
2309.04675
|
Takuro Fujii
|
Takuro Fujii and Shuhei Tarashima
|
BiLMa: Bidirectional Local-Matching for Text-based Person
Re-identification
|
Accepted at ICCVW 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-based person re-identification (TBPReID) aims to retrieve person images
represented by a given textual query. In this task, how to effectively align
images and texts globally and locally is a crucial challenge. Recent works have
obtained high performances by solving Masked Language Modeling (MLM) to align
image/text parts. However, they only performed uni-directional (i.e., from
image to text) local-matching, leaving room for improvement by introducing
opposite-directional (i.e., from text to image) local-matching. In this work,
we introduce the Bidirectional Local-Matching (BiLMa) framework, which jointly
optimizes MLM and Masked Image Modeling (MIM) in TBPReID model training. With
this framework, our model is trained so that the labels of randomly masked
image and text tokens are predicted from the unmasked tokens. In addition, to narrow
the semantic gap between image and text in MIM, we propose Semantic MIM
(SemMIM), in which the labels of masked image tokens are automatically given by
a state-of-the-art human parser. Experimental results demonstrate that our
BiLMa framework with SemMIM achieves state-of-the-art Rank@1 and mAP scores on
three benchmarks.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 04:01:24 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Fujii",
"Takuro",
""
],
[
"Tarashima",
"Shuhei",
""
]
] |
new_dataset
| 0.990257 |
2309.04710
|
Lin Shao
|
Gang Yang and Siyuan Luo and Lin Shao
|
Jade: A Differentiable Physics Engine for Articulated Rigid Bodies with
Intersection-Free Frictional Contact
| null | null | null | null |
cs.RO cs.AI cs.CV cs.GR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Jade, a differentiable physics engine for articulated rigid
bodies. Jade models contacts as the Linear Complementarity Problem (LCP).
Compared to existing differentiable simulations, Jade offers features including
intersection-free collision simulation and stable LCP solutions for multiple
frictional contacts. We use continuous collision detection to detect the time
of impact and adopt the backtracking strategy to prevent intersection between
bodies with complex geometry shapes. We derive the gradient calculation to
ensure the whole simulation process is differentiable under the backtracking
mechanism. We modify the popular Dantzig algorithm to get valid solutions under
multiple frictional contacts. We conduct extensive experiments to demonstrate
the effectiveness of our differentiable physics simulation over a variety of
contact-rich tasks.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 07:39:36 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Yang",
"Gang",
""
],
[
"Luo",
"Siyuan",
""
],
[
"Shao",
"Lin",
""
]
] |
new_dataset
| 0.983928 |
2309.04722
|
Shah Rukh Humayoun
|
Ilya Nemtsov, MST Jasmine Jahan, Chuting Yan, Shah Rukh Humayoun
|
TECVis: A Visual Analytics Tool to Compare People's Emotion Feelings
|
2 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Twitter is one of the popular social media platforms where people share news
or reactions towards an event or topic using short text messages called
"tweets". Emotion analysis in these tweets can play a vital role in
understanding people's feelings towards the underlying event or topic. In this
work, we present our visual analytics tool, called TECVis, that focuses on
providing comparison views of people's emotion feelings in tweets towards an
event or topic. The comparison is done based on geolocations or timestamps.
TECVis provides several interaction and filtering options for navigation and
better exploration of underlying tweet data for emotion feelings comparison.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 08:52:20 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Nemtsov",
"Ilya",
""
],
[
"Jahan",
"MST Jasmine",
""
],
[
"Yan",
"Chuting",
""
],
[
"Humayoun",
"Shah Rukh",
""
]
] |
new_dataset
| 0.984874 |
2309.04752
|
Ziqian Shao
|
Xuanxi Chen, Tao Wang, Ziqian Shao, Kaihao Zhang, Wenhan Luo, Tong Lu,
Zikun Liu, Tae-Kyun Kim, Hongdong Li
|
Deep Video Restoration for Under-Display Camera
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Images or videos captured by the Under-Display Camera (UDC) suffer from
severe degradation, such as saturation degeneration and color shift. While
restoration for UDC has been a critical task, existing works of UDC restoration
focus only on images. UDC video restoration (UDC-VR) has not been explored in
the community. In this work, we first propose a GAN-based generation pipeline
to simulate the realistic UDC degradation process. With the pipeline, we build
the first large-scale UDC video restoration dataset called PexelsUDC, which
includes two subsets named PexelsUDC-T and PexelsUDC-P corresponding to
different displays for UDC. Using the proposed dataset, we conduct extensive
benchmark studies on existing video restoration methods and observe their
limitations on the UDC-VR task. To this end, we propose a novel
transformer-based baseline method that adaptively enhances degraded videos. The
key components of the method are a spatial branch with local-aware
transformers, a temporal branch embedded with temporal transformers, and a
spatial-temporal fusion module. These components drive the model to fully
exploit spatial and temporal information for UDC-VR. Extensive experiments show
that our method achieves state-of-the-art performance on PexelsUDC. The
benchmark and the baseline method are expected to promote the progress of
UDC-VR in the community, which will be made public.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 10:48:06 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Chen",
"Xuanxi",
""
],
[
"Wang",
"Tao",
""
],
[
"Shao",
"Ziqian",
""
],
[
"Zhang",
"Kaihao",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Lu",
"Tong",
""
],
[
"Liu",
"Zikun",
""
],
[
"Kim",
"Tae-Kyun",
""
],
[
"Li",
"Hongdong",
""
]
] |
new_dataset
| 0.99858 |
2309.04791
|
Chengqian Li
|
Delin Feng, Chengqian Li, Yongqi Zhang, Chen Yu, and Soeren
Schwertfeger
|
osmAG: Hierarchical Semantic Topometric Area Graph Maps in the OSM
Format for Mobile Robotics
|
7 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maps are essential to mobile robotics tasks like localization and planning.
We propose the open street map (osm) XML based Area Graph file format to store
hierarchical, topometric semantic multi-floor maps of indoor and outdoor
environments, since currently no such format is popular within the robotics
community. Building on top of osm, we leverage its available open-source editing
tools and libraries, while adding the mobile robotics aspects needed: a
building-level obstacle representation together with very compact, topometric data that
facilitates planning algorithms. Through the use of common osm keys as well as
custom ones we leverage the power of semantic annotation to enable various
applications. For example, we support planning based on robot capabilities, to
take the locomotion mode and attributes in conjunction with the environment
information into account. The provided C++ library is integrated into ROS. We
evaluate the performance of osmAG using real data in a global path planning
application on a very large osmAG map, demonstrating its convenience and
effectiveness for mobile robots.
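Since osmAG builds on the standard osm XML structure, such maps can be read with stock tooling. The sketch below uses only the Python standard library; the semantic key it filters on ("osmAG:areaType") is a hypothetical placeholder, not an actual key defined by the format:

```python
# Minimal sketch of reading an osm XML file with the Python standard library.
# "map.osm" and the "osmAG:areaType" key are hypothetical placeholders.
import xml.etree.ElementTree as ET

tree = ET.parse("map.osm")
root = tree.getroot()

# osm files list nodes and ways as direct children of the <osm> root element.
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
         for n in root.findall("node")}

for way in root.findall("way"):
    tags = {t.get("k"): t.get("v") for t in way.findall("tag")}
    if "osmAG:areaType" in tags:    # placeholder semantic key
        ring = [nodes[nd.get("ref")] for nd in way.findall("nd")]
        print(way.get("id"), tags["osmAG:areaType"], len(ring), "vertices")
```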
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 13:36:24 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Feng",
"Delin",
""
],
[
"Li",
"Chengqian",
""
],
[
"Zhang",
"Yongqi",
""
],
[
"Yu",
"Chen",
""
],
[
"Schwertfeger",
"Soeren",
""
]
] |
new_dataset
| 0.998765 |
2309.04814
|
Xiuzhe Wu
|
Xiuzhe Wu, Pengfei Hu, Yang Wu, Xiaoyang Lyu, Yan-Pei Cao, Ying Shan,
Wenming Yang, Zhongqian Sun, Xiaojuan Qi
|
Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a
Short Video
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthesizing realistic videos according to a given speech is still an open
challenge. Previous works have been plagued by issues such as inaccurate lip
shape generation and poor image quality. The key reason is that only motions
and appearances on limited facial areas (e.g., lip area) are mainly driven by
the input speech. Therefore, directly learning a mapping function from speech
to the entire head image is prone to ambiguity, particularly when using a short
video for training. We thus propose a decomposition-synthesis-composition
framework named Speech to Lip (Speech2Lip) that disentangles speech-sensitive
and speech-insensitive motion/appearance to facilitate effective learning from
limited training data, resulting in the generation of natural-looking videos.
First, given a fixed head pose (i.e., canonical space), we present a
speech-driven implicit model for lip image generation which concentrates on
learning speech-sensitive motion and appearance. Next, to model the major
speech-insensitive motion (i.e., head movement), we introduce a geometry-aware
mutual explicit mapping (GAMEM) module that establishes geometric mappings
between different head poses. This allows us to paste generated lip images at
the canonical space onto head images with arbitrary poses and synthesize
talking videos with natural head movements. In addition, a Blend-Net and a
contrastive sync loss are introduced to enhance the overall synthesis
performance. Quantitative and qualitative results on three benchmarks
demonstrate that our model can be trained on a video of just a few minutes in
length and achieve state-of-the-art performance in both visual quality and
speech-visual synchronization. Code: https://github.com/CVMI-Lab/Speech2Lip.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 14:52:39 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wu",
"Xiuzhe",
""
],
[
"Hu",
"Pengfei",
""
],
[
"Wu",
"Yang",
""
],
[
"Lyu",
"Xiaoyang",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Shan",
"Ying",
""
],
[
"Yang",
"Wenming",
""
],
[
"Sun",
"Zhongqian",
""
],
[
"Qi",
"Xiaojuan",
""
]
] |
new_dataset
| 0.98646 |
2309.04820
|
Michael Hobley
|
Michael A. Hobley and Victor A. Prisacariu
|
ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class
Class-agnostic Counting
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Class-agnostic counting methods enumerate objects of an arbitrary class,
providing tremendous utility in many fields. Prior works have limited
usefulness as they require either a set of examples of the type to be counted
or that the image contains only a single type of object. A significant factor
in these shortcomings is the lack of a dataset to properly address counting in
settings with more than one kind of object present. To address these issues, we
propose the first Multi-class, Class-Agnostic Counting dataset (MCAC) and A
Blind Counter (ABC123), a method that can count multiple types of objects
simultaneously without using examples of the object type during training or inference.
ABC123 introduces a new paradigm where instead of requiring exemplars to guide
the enumeration, examples are found after the counting stage to help a user
understand the generated outputs. We show that ABC123 outperforms contemporary
methods on MCAC without the requirement of human in-the-loop annotations. We
also show that this performance transfers to FSC-147, the standard
class-agnostic counting dataset.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 15:18:46 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Hobley",
"Michael A.",
""
],
[
"Prisacariu",
"Victor A.",
""
]
] |
new_dataset
| 0.999068 |
2309.04839
|
Yujie Wang
|
Yujie Wang and Xiangru Xu
|
Safe Control of Euler-Lagrange Systems with Limited Model Information
|
Accepted to IEEE CDC 2023 and this is the extended version
| null | null | null |
cs.SY eess.SY math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new safe control framework for Euler-Lagrange (EL)
systems with limited model information, external disturbances, and measurement
uncertainties. The EL system is decomposed into two subsystems called the proxy
subsystem and the virtual tracking subsystem. An adaptive safe controller based
on barrier Lyapunov functions is designed for the virtual tracking subsystem to
ensure the boundedness of the safe velocity tracking error, and a safe
controller based on control barrier functions is designed for the proxy
subsystem to ensure controlled invariance of the safe set defined either in the
joint space or task space. Theorems that guarantee the safety of the proposed
controllers are provided. In contrast to existing safe control strategies for
EL systems, the proposed method requires much less model information and can
ensure safety rather than input-to-state safety. Simulation results are
provided to illustrate the effectiveness of the proposed method.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 16:57:31 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wang",
"Yujie",
""
],
[
"Xu",
"Xiangru",
""
]
] |
new_dataset
| 0.995354 |
2309.04840
|
Zixing Wang
|
Zixing Wang, Ahmed H. Qureshi
|
AnyPose: Anytime 3D Human Pose Forecasting via Neural Ordinary
Differential Equations
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anytime 3D human pose forecasting is crucial to synchronous real-world
human-machine interaction, where the term ``anytime" corresponds to predicting
human pose at any real-valued time step. However, to the best of our knowledge,
all the existing methods in human pose forecasting perform predictions at
preset, discrete time intervals. Therefore, we introduce AnyPose, a lightweight
continuous-time neural architecture that models human behavior dynamics with
neural ordinary differential equations. We validate our framework on the
Human3.6M, AMASS, and 3DPW dataset and conduct a series of comprehensive
analyses towards comparison with existing methods and the intersection of human
pose and neural ordinary differential equations. Our results demonstrate that
AnyPose achieves high accuracy in predicting future poses and requires
significantly less computation time than traditional methods when solving
anytime prediction tasks.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 16:59:57 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wang",
"Zixing",
""
],
[
"Qureshi",
"Ahmed H.",
""
]
] |
new_dataset
| 0.994838 |
2309.04859
|
Jintao Sun
|
Jintao Sun, Zeke Wang, Tao Lu, Wenzhi Chen
|
PyHGL: A Python-based Hardware Generation Language Framework
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hardware generation languages (HGLs) increase hardware design productivity by
creating parameterized modules and test benches. Unfortunately, existing tools
are not widely adopted due to several demerits, including limited support for
asynchronous circuits and unknown states, lack of concise and efficient
language features, and low integration of simulation and verification
functions. This paper introduces PyHGL, an open-source Python framework that
aims to provide a simple and unified environment for hardware generation,
simulation, and verification. PyHGL language is a syntactical superset of
Python, which greatly reduces the lines of code (LOC) and improves productivity
by providing unique features such as dynamic typing, vectorized operations, and
automatic port deduction. In addition, PyHGL integrates an event-driven
simulator that simulates the asynchronous behaviors of digital circuits using
three-state logic. We also propose an algorithm that eliminates the calculation
and transmission overhead of unknown state propagation for binary stimuli. The
results suggest that PyHGL code is up to 6.1x denser than traditional RTL and
generates high-quality synthesizable RTL code. Moreover, the optimized
simulator achieves 2.9x speed up and matches the performance of a commonly used
open-source logic simulator.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 18:28:41 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Sun",
"Jintao",
""
],
[
"Wang",
"Zeke",
""
],
[
"Lu",
"Tao",
""
],
[
"Chen",
"Wenzhi",
""
]
] |
new_dataset
| 0.999778 |
2309.04888
|
Long Chen
|
Long Chen, Weiwen Zhang, Yuli Wu, Martin Strauch, Dorit Merhof
|
Semi-supervised Instance Segmentation with a Learned Shape Prior
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To date, most instance segmentation approaches are based on supervised
learning that requires a considerable amount of annotated object contours as
training ground truth. Here, we propose a framework that searches for the
target object based on a shape prior. The shape prior model is learned with a
variational autoencoder that requires only a very limited amount of training
data: In our experiments, a few dozens of object shape patches from the target
dataset, as well as purely synthetic shapes, were sufficient to achieve results
en par with supervised methods with full access to training data on two out of
three cell segmentation datasets. Our method with a synthetic shape prior was
superior to pre-trained supervised models with access to limited
domain-specific training data on all three datasets. Since the learning of
prior models requires shape patches, whether real or synthetic data, we call
this framework semi-supervised learning.
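As a hedged illustration of such a shape prior (architecture and sizes are assumptions, not the paper's model), a small variational autoencoder over binary shape patches could be set up as follows:

```python
# Minimal VAE sketch over binary shape patches (illustrative sizes only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapeVAE(nn.Module):
    def __init__(self, patch=32, latent=16):
        super().__init__()
        d = patch * patch
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent), nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, d))
        self.patch = patch

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        logits = self.dec(z).view(-1, 1, self.patch, self.patch)
        return logits, mu, logvar

def vae_loss(logits, x, mu, logvar):
    rec = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

model = ShapeVAE()
x = (torch.rand(8, 1, 32, 32) > 0.5).float()   # stand-in binary shape patches
logits, mu, logvar = model(x)
loss = vae_loss(logits, x, mu, logvar)
```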
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 22:55:25 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Chen",
"Long",
""
],
[
"Zhang",
"Weiwen",
""
],
[
"Wu",
"Yuli",
""
],
[
"Strauch",
"Martin",
""
],
[
"Merhof",
"Dorit",
""
]
] |
new_dataset
| 0.984835 |
2309.04902
|
Aref Miri Rekavandi
|
Aref Miri Rekavandi, Shima Rashidi, Farid Boussaid, Stephen Hoefs,
Emre Akbas, Mohammed bennamoun
|
Transformers in Small Object Detection: A Benchmark and Survey of
State-of-the-Art
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Transformers have rapidly gained popularity in computer vision, especially in
the field of object recognition and detection. Upon examining the outcomes of
state-of-the-art object detection methods, we noticed that transformers
consistently outperformed well-established CNN-based detectors in almost every
video or image dataset. While transformer-based approaches remain at the
forefront of small object detection (SOD) techniques, this paper aims to
explore the performance benefits offered by such extensive networks and
identify potential reasons for their SOD superiority. Small objects have been
identified as one of the most challenging object types in detection frameworks
due to their low visibility. We aim to investigate potential strategies that
could enhance transformers' performance in SOD. This survey presents a taxonomy
of over 60 research studies on developed transformers for the task of SOD,
spanning the years 2020 to 2023. These studies encompass a variety of detection
applications, including small object detection in generic images, aerial
images, medical images, active millimeter images, underwater images, and
videos. We also compile and present a list of 12 large-scale datasets suitable
for SOD that were overlooked in previous studies and compare the performance of
the reviewed studies using popular metrics such as mean Average Precision
(mAP), Frames Per Second (FPS), number of parameters, and more. Researchers can
keep track of newer studies on our web page, which is available at
\url{https://github.com/arekavandi/Transformer-SOD}.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 00:08:29 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Rekavandi",
"Aref Miri",
""
],
[
"Rashidi",
"Shima",
""
],
[
"Boussaid",
"Farid",
""
],
[
"Hoefs",
"Stephen",
""
],
[
"Akbas",
"Emre",
""
],
[
"bennamoun",
"Mohammed",
""
]
] |
new_dataset
| 0.989922 |
2309.04918
|
Aryan Jadon
|
Shashank Kumar and Sachin Sharma and Aryan Jadon
|
Distributed Kafka Clusters: A Novel Approach to Global Message Ordering
|
6 Pages, 6 Figures
| null | null | null |
cs.DC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In contemporary distributed systems, logs are produced at an astounding rate,
generating terabytes of data within mere seconds. These logs, containing
pivotal details like system metrics, user actions, and diverse events, are
foundational to the system's consistent and accurate operations. Precise log
ordering becomes indispensable to avert potential ambiguities and discordances
in system functionalities. Apache Kafka, a prevalent distributed message queue,
offers significant solutions to various distributed log processing challenges.
However, it presents an inherent limitation while Kafka ensures the in-order
delivery of messages within a single partition to the consumer, it falls short
in guaranteeing a global order for messages spanning multiple partitions. This
research delves into innovative methodologies to achieve global ordering of
messages within a Kafka topic, aiming to bolster the integrity and consistency
of log processing in distributed systems. Our code is available on GitHub.
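One common way to realize such global ordering, shown here only as an illustrative sketch (the paper's specific methodologies may differ), is to have consumers buffer messages from all partitions and release them in the order of a producer-assigned global sequence number:

```python
# Illustrative consumer-side reordering by a producer-assigned global sequence
# number; this is one generic approach, not necessarily the paper's method.
import heapq

def globally_ordered(messages):
    """messages: iterable of (global_seq, partition, payload) tuples, already in
    order *within* each partition but interleaved across partitions."""
    heap, next_seq = [], 0
    for seq, partition, payload in messages:
        heapq.heappush(heap, (seq, partition, payload))
        # Release every buffered message whose sequence number is contiguous.
        while heap and heap[0][0] == next_seq:
            yield heapq.heappop(heap)
            next_seq += 1

interleaved = [(1, "p1", "b"), (0, "p0", "a"), (3, "p0", "d"), (2, "p1", "c")]
print([m[2] for m in globally_ordered(interleaved)])  # ['a', 'b', 'c', 'd']
```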
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 02:34:29 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Kumar",
"Shashank",
""
],
[
"Sharma",
"Sachin",
""
],
[
"Jadon",
"Aryan",
""
]
] |
new_dataset
| 0.98359 |
2309.04945
|
Haoran Lin
|
Haoran Lin and Lifeng Yan and Qixin Chang and Haitian Lu and Chenlin
Li and Quanjie He and Zeyu Song and Xiaohui Duan and Zekun Yin and Yuxuan Li
and Zhao Liu and Wei Xue and Haohuan Fu and Lin Gan and Guangwen Yang and
Weiguo Liu
|
O2ATH: An OpenMP Offloading Toolkit for the Sunway Heterogeneous
Manycore Platform
|
15 pages, 6 figures, 5 tables,
| null | null | null |
cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The next generation Sunway supercomputer employs the SW26010pro processor,
which features a specialized on-chip heterogeneous architecture. Applications
with significant hotspots can benefit from the great computation capacity
improvement of Sunway many-core architectures by carefully making intensive
manual many-core parallelization efforts. However, some legacy projects with
large codebases, such as CESM, ROMS and WRF, contain numerous lines of code and
do not have significant hotspots. The cost of manually porting such
applications to the Sunway architecture is almost unaffordable. To overcome
such a challenge, we have developed a toolkit named O2ATH. O2ATH forwards GNU
OpenMP runtime library calls to Sunway's Athread library, which greatly
simplifies the parallelization work on the Sunway architecture. O2ATH enables
users to write both MPE and CPE code in a single file, and parallelization can
be achieved by utilizing OpenMP directives and attributes. In practice, O2ATH
has helped us to port two large projects, CESM and ROMS, to the CPEs of the
next generation Sunway supercomputers via the OpenMP offload method. In the
experiments, kernel speedups range from 3 to 15 times, resulting in 3 to 6
times whole-application speedups. Furthermore, O2ATH requires significantly
fewer code modifications compared to manually crafting CPE functions. This
indicates that O2ATH can greatly enhance development efficiency when porting or
optimizing large software projects on Sunway supercomputers.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 06:30:52 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Lin",
"Haoran",
""
],
[
"Yan",
"Lifeng",
""
],
[
"Chang",
"Qixin",
""
],
[
"Lu",
"Haitian",
""
],
[
"Li",
"Chenlin",
""
],
[
"He",
"Quanjie",
""
],
[
"Song",
"Zeyu",
""
],
[
"Duan",
"Xiaohui",
""
],
[
"Yin",
"Zekun",
""
],
[
"Li",
"Yuxuan",
""
],
[
"Liu",
"Zhao",
""
],
[
"Xue",
"Wei",
""
],
[
"Fu",
"Haohuan",
""
],
[
"Gan",
"Lin",
""
],
[
"Yang",
"Guangwen",
""
],
[
"Liu",
"Weiguo",
""
]
] |
new_dataset
| 0.951204 |
2309.04976
|
Jiaying Guo
|
Jiaying Guo, Michael R. Jones, Soufiene Djahel, and Shen Wang
|
AVARS -- Alleviating Unexpected Urban Road Traffic Congestion using UAVs
| null | null | null | null |
cs.LG cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reducing unexpected urban traffic congestion caused by en-route events (e.g.,
road closures, car crashes, etc.) often requires fast and accurate reactions to
choose the best-fit traffic signals. Traditional traffic light control systems,
such as SCATS and SCOOT, are not efficient as their traffic data provided by
induction loops has a low update frequency (i.e., longer than 1 minute).
Moreover, the traffic light signal plans used by these systems are selected
from a limited set of candidate plans pre-programmed prior to unexpected
events' occurrence. Recent research demonstrates that camera-based traffic
light systems controlled by deep reinforcement learning (DRL) algorithms are
more effective in reducing traffic congestion, in which the cameras can provide
high-frequency high-resolution traffic data. However, these systems are costly
to deploy in big cities due to the excessive potential upgrades required to
road infrastructure. In this paper, we argue that Unmanned Aerial Vehicles
(UAVs) can play a crucial role in dealing with unexpected traffic congestion
because UAVs with onboard cameras can be economically deployed when and where
unexpected congestion occurs. Then, we propose a system called "AVARS" that
explores the potential of using UAVs to reduce unexpected urban traffic
congestion using DRL-based traffic light signal control. This approach is
validated on a widely used open-source traffic simulator with practical UAV
settings, including its traffic monitoring ranges and battery lifetime. Our
simulation results show that AVARS can effectively recover the unexpected
traffic congestion in Dublin, Ireland, back to its original un-congested level
within the typical battery life duration of a UAV.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 09:40:20 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Guo",
"Jiaying",
""
],
[
"Jones",
"Michael R.",
""
],
[
"Djahel",
"Soufiene",
""
],
[
"Wang",
"Shen",
""
]
] |
new_dataset
| 0.996265 |
2309.04977
|
Xuhao Pan
|
Yuan Meng, Xuhao Pan, Jun Chang and Yue Wang
|
RGAT: A Deeper Look into Syntactic Dependency Information for
Coreference Resolution
|
8 pages, 5 figures
|
2023 International Joint Conference on Neural Networks (IJCNN)
|
10.1109/IJCNN54540.2023.10191577
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Although syntactic information is beneficial for many NLP tasks, combining it
with contextual information between words to solve the coreference resolution
problem needs to be further explored. In this paper, we propose an end-to-end
parser that combines pre-trained BERT with a Syntactic Relation Graph Attention
Network (RGAT) to take a deeper look into the role of syntactic dependency
information for the coreference resolution task. In particular, the RGAT model
is first proposed, then used to understand the syntactic dependency graph and
learn better task-specific syntactic embeddings. An integrated architecture
incorporating BERT embeddings and syntactic embeddings is constructed to
generate blending representations for the downstream task. Our experiments on a
public Gendered Ambiguous Pronouns (GAP) dataset show that with supervised
learning of the syntactic dependency graph and without fine-tuning the entire
BERT, we increased the F1-score of the previous best model (RGCN-with-BERT)
from 80.3% to 82.5%, compared to the F1-score by single BERT embeddings from
78.5% to 82.5%. Experimental results on another public dataset - OntoNotes 5.0
demonstrate that the performance of the model is also improved by incorporating
syntactic dependency information learned from RGAT.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 09:46:38 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Meng",
"Yuan",
""
],
[
"Pan",
"Xuhao",
""
],
[
"Chang",
"Jun",
""
],
[
"Wang",
"Yue",
""
]
] |
new_dataset
| 0.993481 |
2309.05028
|
Liang Song
|
Liang Song, Guangming Wang, Jiuming Liu, Zhenyang Fu, Yanzi Miao, and
Hesheng
|
SC-NeRF: Self-Correcting Neural Radiance Field with Sparse Views
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent studies, the generalization of neural radiance fields for the novel
view synthesis task has been widely explored. However, existing methods are
limited to objects and indoor scenes. In this work, we extend the
generalization task to outdoor scenes, trained only on object-level datasets.
This approach presents two challenges. Firstly, the significant distributional
shift between training and testing scenes leads to black artifacts in rendering
results. Secondly, viewpoint changes in outdoor scenes cause ghosting or
missing regions in rendered images. To address these challenges, we propose a
geometric correction module and an appearance correction module based on
multi-head attention mechanisms. We normalize rendered depth and combine it
with light direction as query in the attention mechanism. Our network
effectively corrects varying scene structures and geometric features in outdoor
scenes, generalizing well from object-level to unseen outdoor scenes.
Additionally, we use appearance correction module to correct appearance
features, preventing rendering artifacts like blank borders and ghosting due to
viewpoint changes. By combining these modules, our approach successfully
tackles the challenges of outdoor scene generalization, producing high-quality
rendering results. When evaluated on four datasets (Blender, DTU, LLFF,
Spaces), our network outperforms previous methods. Notably, compared to
MVSNeRF, our network improves average PSNR from 19.369 to 25.989, SSIM from
0.838 to 0.889, and reduces LPIPS from 0.265 to 0.224 on Spaces outdoor scenes.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 13:55:41 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Song",
"Liang",
""
],
[
"Wang",
"Guangming",
""
],
[
"Liu",
"Jiuming",
""
],
[
"Fu",
"Zhenyang",
""
],
[
"Miao",
"Yanzi",
""
],
[
"Hesheng",
"",
""
]
] |
new_dataset
| 0.998526 |
2309.05058
|
Meng Cui
|
Meng Cui, Xubo Liu, Haohe Liu, Zhuangzhuang Du, Tao Chen, Guoping
Lian, Daoliang Li, Wenwu Wang
|
Multimodal Fish Feeding Intensity Assessment in Aquaculture
| null | null | null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Fish feeding intensity assessment (FFIA) aims to evaluate the intensity
change of fish appetite during the feeding process, which is vital in
industrial aquaculture applications. The main challenges surrounding FFIA are
two-fold. 1) robustness: existing work has mainly leveraged single-modality
(e.g., vision, audio) methods, which have a high sensitivity to input noise. 2)
efficiency: FFIA models are generally expected to be employed on devices. This
presents a challenge in terms of computational efficiency. In this work, we
first introduce an audio-visual dataset, called AV-FFIA. AV-FFIA consists of
27,000 labeled audio and video clips that capture different levels of fish
feeding intensity. To our knowledge, AV-FFIA is the first large-scale
multimodal dataset for FFIA research. Then, we introduce a multi-modal approach
for FFIA by leveraging single-modality pre-trained models and modality-fusion
methods, with benchmark studies on AV-FFIA. Our experimental results indicate
that the multi-modal approach substantially outperforms the single-modality
based approach, especially in noisy environments. While multimodal approaches
provide a performance gain for FFIA, they inherently increase the computational
cost. To overcome this issue, we further present a novel unified model, termed
as U-FFIA. U-FFIA is a single model capable of processing audio, visual, or
audio-visual modalities, by leveraging modality dropout during training and
knowledge distillation from single-modality pre-trained models. We demonstrate
that U-FFIA can achieve performance better than or on par with the
state-of-the-art modality-specific FFIA models, with significantly lower
computational overhead. Our proposed U-FFIA approach enables a more robust and
efficient method for FFIA, with the potential to contribute to improved
management practices and sustainability in aquaculture.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 15:52:56 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Cui",
"Meng",
""
],
[
"Liu",
"Xubo",
""
],
[
"Liu",
"Haohe",
""
],
[
"Du",
"Zhuangzhuang",
""
],
[
"Chen",
"Tao",
""
],
[
"Lian",
"Guoping",
""
],
[
"Li",
"Daoliang",
""
],
[
"Wang",
"Wenwu",
""
]
] |
new_dataset
| 0.962459 |
2309.05091
|
Zeyuan Huang
|
Zeyuan Huang, Qiang He, Kevin Maher, Xiaoming Deng, Yu-Kun Lai, Cuixia
Ma, Sheng-feng Qin, Yong-Jin Liu, and Hongan Wang
|
SpeechMirror: A Multimodal Visual Analytics System for Personalized
Reflection of Online Public Speaking Effectiveness
|
Main paper (11 pages, 6 figures) and Supplemental document (11 pages,
11 figures). Accepted by VIS 2023
| null | null | null |
cs.HC cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As communications are increasingly taking place virtually, the ability to
present well online is becoming an indispensable skill. Online speakers are
facing unique challenges in engaging with remote audiences. However, there has
been a lack of evidence-based analytical systems for people to comprehensively
evaluate online speeches and further discover possibilities for improvement.
This paper introduces SpeechMirror, a visual analytics system facilitating
reflection on a speech based on insights from a collection of online speeches.
The system estimates the impact of different speech techniques on effectiveness
and applies them to a speech to give users awareness of the performance of
speech techniques. A similarity recommendation approach based on speech factors
or script content supports guided exploration to expand knowledge of
presentation evidence and accelerate the discovery of speech delivery
possibilities. SpeechMirror provides intuitive visualizations and interactions
for users to understand speech factors. Among them, SpeechTwin, a novel
multimodal visual summary of speech, supports rapid understanding of critical
speech factors and comparison of different speech samples, and SpeechPlayer
augments the speech video by integrating visualization of the speaker's body
language with interaction, for focused analysis. The system utilizes
visualizations suited to the distinct nature of different speech factors for
user comprehension. The proposed system and visualization techniques were
evaluated with domain experts and amateurs, demonstrating usability for users
with low visualization literacy and its efficacy in assisting users to develop
insights for potential improvement.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 17:34:40 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Huang",
"Zeyuan",
""
],
[
"He",
"Qiang",
""
],
[
"Maher",
"Kevin",
""
],
[
"Deng",
"Xiaoming",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Ma",
"Cuixia",
""
],
[
"Qin",
"Sheng-feng",
""
],
[
"Liu",
"Yong-Jin",
""
],
[
"Wang",
"Hongan",
""
]
] |
new_dataset
| 0.985075 |
2309.05095
|
Atefeh Shahroudnejad
|
Tina Behrouzi, Atefeh Shahroudnejad, Payam Mousavi
|
MaskRenderer: 3D-Infused Multi-Mask Realistic Face Reenactment
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel end-to-end identity-agnostic face reenactment system,
MaskRenderer, that can generate realistic, high fidelity frames in real-time.
Although recent face reenactment works have shown promising results, there are
still significant challenges such as identity leakage and imitating mouth
movements, especially for large pose changes and occluded faces. MaskRenderer
tackles these problems by using (i) a 3DMM to model 3D face structure to better
handle pose changes, occlusion, and mouth movements compared to 2D
representations; (ii) a triplet loss function to embed the cross-reenactment
during training for better identity preservation; and (iii) multi-scale
occlusion, improving inpainting and restoring missing areas. Comprehensive
quantitative and qualitative experiments conducted on the VoxCeleb1 test set,
demonstrate that MaskRenderer outperforms state-of-the-art models on unseen
faces, especially when the Source and Driving identities are very different.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 17:41:46 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Behrouzi",
"Tina",
""
],
[
"Shahroudnejad",
"Atefeh",
""
],
[
"Mousavi",
"Payam",
""
]
] |
new_dataset
| 0.986907 |
2309.05098
|
Chengliang Zhong
|
Chengliang Zhong, Yuhang Zheng, Yupeng Zheng, Hao Zhao, Li Yi,
Xiaodong Mu, Ling Wang, Pengfei Li, Guyue Zhou, Chao Yang, Xinliang Zhang,
Jian Zhao
|
3D Implicit Transporter for Temporally Consistent Keypoint Discovery
|
ICCV2023 oral paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Keypoint-based representation has proven advantageous in various visual and
robotic tasks. However, the existing 2D and 3D methods for detecting keypoints
mainly rely on geometric consistency to achieve spatial alignment, neglecting
temporal consistency. To address this issue, the Transporter method was
introduced for 2D data, which reconstructs the target frame from the source
frame to incorporate both spatial and temporal information. However, the direct
application of the Transporter to 3D point clouds is infeasible due to their
structural differences from 2D images. Thus, we propose the first 3D version of
the Transporter, which leverages hybrid 3D representation, cross attention, and
implicit reconstruction. We apply this new learning system on 3D articulated
objects and nonrigid animals (humans and rodents) and show that learned
keypoints are spatio-temporally consistent. Additionally, we propose a
closed-loop control strategy that utilizes the learned keypoints for 3D object
manipulation and demonstrate its superior performance. Codes are available at
https://github.com/zhongcl-thu/3D-Implicit-Transporter.
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 17:59:48 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhong",
"Chengliang",
""
],
[
"Zheng",
"Yuhang",
""
],
[
"Zheng",
"Yupeng",
""
],
[
"Zhao",
"Hao",
""
],
[
"Yi",
"Li",
""
],
[
"Mu",
"Xiaodong",
""
],
[
"Wang",
"Ling",
""
],
[
"Li",
"Pengfei",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Yang",
"Chao",
""
],
[
"Zhang",
"Xinliang",
""
],
[
"Zhao",
"Jian",
""
]
] |
new_dataset
| 0.963489 |
2309.05128
|
Dimitrios Chatziparaschis
|
Dimitrios Chatziparaschis, Elia Scudiero, and Konstantinos Karydis
|
Robot-assisted Soil Apparent Electrical Conductivity Measurements in
Orchards
|
15 pages, 16 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soil apparent electrical conductivity (ECa) is a vital metric in Precision
Agriculture and Smart Farming, as it is used for optimal water content
management, geological mapping, and yield prediction. Several existing methods
seeking to estimate soil electrical conductivity are available, including
physical soil sampling, ground sensor installation and monitoring, and the use
of sensors that can obtain proximal ECa estimates. However, such methods can be
either very laborious and/or too costly for practical use over larger field
canopies. Robot-assisted ECa measurements, in contrast, may offer a scalable
and cost-effective solution. In this work, we present one such solution that
involves a ground mobile robot equipped with a customized and adjustable
platform to hold an Electromagnetic Induction (EMI) sensor to perform
semi-autonomous and on-demand ECa measurements under various field conditions.
The platform is designed to be easily re-configurable in terms of sensor
placement; results from testing for traversability and robot-to-sensor
interference across multiple case studies help establish appropriate tradeoffs
for sensor placement. Further, a developed simulation software package enables
rapid and accessible estimation of terrain traversability in relation to
desired EMI sensor placement. Extensive experimental evaluation across
different fields demonstrates that the obtained robot-assisted ECa measurements
show high linearity with respect to the ground truth (data collected manually
with a handheld EMI sensor), scoring more than $90\%$ in Pearson correlation
coefficient in both plot measurements and estimated ECa maps generated by
kriging interpolation. The proposed robotic solution supports autonomous
behavior development in the field since it utilizes the ROS navigation stack
along with the RTK GNSS positioning data and features various ranging sensors.
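The linearity check reported above is the standard Pearson correlation coefficient; a minimal example with made-up placeholder readings (not the authors' data or analysis script):

```python
# Illustrative check: robot-collected ECa plot values vs. handheld ground truth.
from scipy.stats import pearsonr

robot_eca    = [12.1, 15.4, 18.9, 22.3, 25.0]   # mS/m, hypothetical values
handheld_eca = [12.5, 15.0, 19.2, 22.8, 24.6]   # mS/m, hypothetical values

r, p_value = pearsonr(robot_eca, handheld_eca)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")  # high r indicates strong linearity
```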
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 20:23:00 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Chatziparaschis",
"Dimitrios",
""
],
[
"Scudiero",
"Elia",
""
],
[
"Karydis",
"Konstantinos",
""
]
] |
new_dataset
| 0.998761 |
2309.05139
|
Josselin Somerville Roberts
|
Josselin Somerville Roberts, Paul-Emile Giacomelli, Yoni Gozlan, Julia
Di
|
A Skeleton-based Approach For Rock Crack Detection Towards A Climbing
Robot Application
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Conventional wheeled robots are unable to traverse scientifically
interesting, but dangerous, cave environments. Multi-limbed climbing robot
designs, such as ReachBot, are able to grasp irregular surface features and
execute climbing motions to overcome obstacles, given suitable grasp locations.
To support grasp site identification, we present a method for detecting rock
cracks and edges, the SKeleton Intersection Loss (SKIL). SKIL is a loss
designed for thin object segmentation that leverages the skeleton of the label.
A dataset of rock face images was collected, manually annotated, and augmented
with generated data. A new group of metrics, LineAcc, has been proposed for
thin object segmentation such that the impact of the object width on the score
is minimized. In addition, the metric is less sensitive to translation which
can often lead to a score of zero when computing classical metrics such as Dice
on thin objects. Our fine-tuned models outperform previous methods on similar
thin object segmentation tasks such as blood vessel segmentation and show
promise for integration onto a robotic system.
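The exact SKIL and LineAcc definitions are not reproduced here; the sketch below only illustrates the general idea of skeleton-based scoring for thin structures, namely a clDice-style overlap between each mask and the skeleton of the other:

```python
# Illustrative skeleton-overlap score for thin structures (not the SKIL loss).
import numpy as np
from skimage.morphology import skeletonize

def skeleton_overlap_score(pred, label, eps=1e-8):
    """pred, label: binary 2D numpy arrays."""
    skel_pred, skel_label = skeletonize(pred > 0), skeletonize(label > 0)
    prec = (skel_pred & (label > 0)).sum() / (skel_pred.sum() + eps)
    sens = (skel_label & (pred > 0)).sum() / (skel_label.sum() + eps)
    return 2 * prec * sens / (prec + sens + eps)   # harmonic mean, clDice-style

label = np.zeros((64, 64), dtype=np.uint8); label[30:33, 5:60] = 1   # thin crack
pred  = np.zeros((64, 64), dtype=np.uint8); pred[31:34, 5:60] = 1    # shifted by 1 px
print(skeleton_overlap_score(pred, label))
```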
|
[
{
"version": "v1",
"created": "Sun, 10 Sep 2023 21:16:56 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Roberts",
"Josselin Somerville",
""
],
[
"Giacomelli",
"Paul-Emile",
""
],
[
"Gozlan",
"Yoni",
""
],
[
"Di",
"Julia",
""
]
] |
new_dataset
| 0.994376 |
2309.05174
|
Nicholas Mosier
|
Nicholas Mosier, Hamed Nemati, John C. Mitchell, Caroline Trippel
|
Serberus: Protecting Cryptographic Code from Spectres at Compile-Time
|
Authors' version; to appear in the Proceedings of the IEEE Symposium
on Security and Privacy (S&P) 2024
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Serberus, the first comprehensive mitigation for hardening
constant-time (CT) code against Spectre attacks (involving the PHT, BTB, RSB,
STL and/or PSF speculation primitives) on existing hardware. Serberus is based
on three insights. First, some hardware control-flow integrity (CFI)
protections restrict transient control-flow to the extent that it may be
comprehensively considered by software analyses. Second, conformance to the
accepted CT code discipline permits two code patterns that are unsafe in the
post-Spectre era. Third, once these code patterns are addressed, all Spectre
leakage of secrets in CT programs can be attributed to one of four classes of
taint primitives--instructions that can transiently assign a secret value to a
publicly-typed register. We evaluate Serberus on cryptographic primitives in
the OpenSSL, Libsodium, and HACL* libraries. Serberus introduces 21.3% runtime
overhead on average, compared to 24.9% for the next closest state-of-the-art
software mitigation, which is less secure.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 00:06:33 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Mosier",
"Nicholas",
""
],
[
"Nemati",
"Hamed",
""
],
[
"Mitchell",
"John C.",
""
],
[
"Trippel",
"Caroline",
""
]
] |
new_dataset
| 0.998523 |
2309.05251
|
Yiming Zhang
|
Yiming Zhang, ZeMing Gong, Angel X. Chang
|
Multi3DRefer: Grounding Text Description to Multiple 3D Objects
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the task of localizing a flexible number of objects in
real-world 3D scenes using natural language descriptions. Existing 3D visual
grounding tasks focus on localizing a unique object given a text description.
However, such a strict setting is unnatural as localizing potentially multiple
objects is a common need in real-world scenarios and robotic tasks (e.g.,
visual navigation and object rearrangement). To address this setting, we propose
Multi3DRefer, generalizing the ScanRefer dataset and task. Our dataset contains
61926 descriptions of 11609 objects, where zero, single or multiple target
objects are referenced by each description. We also introduce a new evaluation
metric and benchmark methods from prior work to enable further investigation of
multi-modal 3D scene understanding. Furthermore, we develop a better baseline
leveraging 2D features from CLIP by rendering object proposals online with
contrastive learning, which outperforms the state of the art on the ScanRefer
benchmark.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 06:03:39 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhang",
"Yiming",
""
],
[
"Gong",
"ZeMing",
""
],
[
"Chang",
"Angel X.",
""
]
] |
new_dataset
| 0.999805 |
2309.05261
|
Soumen Basu
|
Soumen Basu, Ashish Papanai, Mayank Gupta, Pankaj Gupta, Chetan Arora
|
Gall Bladder Cancer Detection from US Images with Only Image Level
Labels
|
Accepted at MICCAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Automated detection of Gallbladder Cancer (GBC) from Ultrasound (US) images
is an important problem, which has drawn increased interest from researchers.
However, most of these works use difficult-to-acquire information such as
bounding box annotations or additional US videos. In this paper, we focus on
GBC detection using only image-level labels. Such annotation is usually
available based on the diagnostic report of a patient, and does not require
additional annotation effort from the physicians. However, our analysis reveals
that it is difficult to train a standard image classification model for GBC
detection. This is due to the low inter-class variance (a malignant region
usually occupies only a small portion of a US image), high intra-class variance
(due to the US sensor capturing a 2D slice of a 3D object leading to large
viewpoint variations), and low training data availability. We posit that even
when only an image-level label is available, formulating the problem as
object detection (with bounding box output) helps a deep neural network (DNN)
model focus on the relevant region of interest. Since no bounding box
annotations are available for training, we pose the problem as weakly supervised
object detection (WSOD). Motivated by the recent success of transformer models
in object detection, we train one such model, DETR, using
multi-instance-learning (MIL) with self-supervised instance selection to suit
the WSOD task. Our proposed method demonstrates an improvement of AP and
detection sensitivity over the SOTA transformer-based and CNN-based WSOD
methods. Project page is at https://gbc-iitd.github.io/wsod-gbc
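A hedged sketch of the multi-instance-learning step, assuming a WSDDN-style two-branch aggregation over DETR-like query embeddings; the paper's self-supervised instance selection is omitted and all names and dimensions below are illustrative:

```python
# Per-query detection scores are pooled into one image-level prediction so that
# only image-level labels are needed for training.
import torch
import torch.nn as nn

class MILHead(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.cls_branch = nn.Linear(hidden_dim, num_classes)  # "what" score per query
        self.det_branch = nn.Linear(hidden_dim, num_classes)  # "where" score per query

    def forward(self, query_embeddings: torch.Tensor) -> torch.Tensor:
        # query_embeddings: (batch, num_queries, hidden_dim), e.g. DETR decoder outputs
        cls = self.cls_branch(query_embeddings).softmax(dim=-1)  # over classes
        det = self.det_branch(query_embeddings).softmax(dim=-2)  # over queries
        return (cls * det).sum(dim=1)                            # image-level scores

# Toy usage with random "decoder outputs" and image-level binary labels.
head = MILHead()
image_scores = head(torch.randn(4, 100, 256))            # (4, 2), values in [0, 1]
labels = nn.functional.one_hot(torch.tensor([1, 0, 1, 0]), 2).float()
loss = nn.functional.binary_cross_entropy(image_scores.clamp(1e-6, 1 - 1e-6), labels)
loss.backward()
```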
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 06:37:12 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Basu",
"Soumen",
""
],
[
"Papanai",
"Ashish",
""
],
[
"Gupta",
"Mayank",
""
],
[
"Gupta",
"Pankaj",
""
],
[
"Arora",
"Chetan",
""
]
] |
new_dataset
| 0.978897 |
2309.05269
|
Yide Qiu
|
Yide Qiu, Shaoxiang Ling, Tong Zhang, Bo Huang, Zhen Cui
|
UniKG: A Benchmark and Universal Embedding for Large-Scale Knowledge
Graphs
|
9 pages, 4 figures
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Irregular data in real-world are usually organized as heterogeneous graphs
(HGs) consisting of multiple types of nodes and edges. To explore useful
knowledge from real-world data, both the large-scale encyclopedic HG datasets
and corresponding effective learning methods are crucial, but have not been well
investigated. In this paper, we construct a large-scale HG benchmark dataset
named UniKG from Wikidata to facilitate knowledge mining and heterogeneous
graph representation learning. Overall, UniKG contains more than 77 million
multi-attribute entities and 2000 diverse association types, which
significantly surpasses the scale of existing HG datasets. To perform effective
learning on the large-scale UniKG, two key measures are taken, including (i)
the semantic alignment strategy for multi-attribute entities, which projects
the feature description of multi-attribute nodes into a common embedding space
to facilitate node aggregation in a large receptive field; (ii) a novel
plug-and-play anisotropy propagation module (APM) that learns effective
multi-hop anisotropy propagation kernels, extending methods for large-scale
homogeneous graphs to heterogeneous graphs. These two strategies enable
efficient information propagation among a tremendous number of multi-attribute
entities and, in the meantime, adaptively mine multi-attribute associations through the
multi-hop aggregation in large-scale HGs. We set up a node classification task
on our UniKG dataset, and evaluate multiple baseline methods which are
constructed by embedding our APM into large-scale homogeneous graph learning
methods. Our UniKG dataset and the baseline codes have been released at
https://github.com/Yide-Qiu/UniKG.
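A hedged sketch of the two measures, assuming per-type linear projections into a shared space and a learnable mixing of propagation hops; the true APM kernel parameterization is not specified here, so the module below is illustrative only:

```python
import torch
import torch.nn as nn

class SharedSpaceProjector(nn.Module):
    """Projects features of each node type into one common embedding space."""
    def __init__(self, in_dims: dict, out_dim: int):
        super().__init__()
        self.proj = nn.ModuleDict({t: nn.Linear(d, out_dim) for t, d in in_dims.items()})

    def forward(self, feats: dict) -> torch.Tensor:
        # feats maps node-type name -> (num_nodes_of_type, in_dim); nodes are concatenated.
        return torch.cat([self.proj[t](x) for t, x in feats.items()], dim=0)

class MultiHopPropagation(nn.Module):
    """Mixes several propagation hops with learnable (softmax-normalized) weights."""
    def __init__(self, num_hops: int = 3):
        super().__init__()
        self.hop_weights = nn.Parameter(torch.ones(num_hops + 1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) sparse, row-normalized adjacency over all node types.
        alphas = torch.softmax(self.hop_weights, dim=0)
        out, h = alphas[0] * x, x
        for k in range(1, len(alphas)):
            h = torch.sparse.mm(adj, h)   # one more propagation hop
            out = out + alphas[k] * h
        return out

# Toy usage: two node types projected to a 16-d shared space, then 3-hop propagation.
proj = SharedSpaceProjector({"entity": 8, "literal": 4}, out_dim=16)
x = proj({"entity": torch.randn(5, 8), "literal": torch.randn(3, 4)})   # (8, 16)
idx = torch.tensor([[0, 1, 2, 3, 4, 5, 6, 7], [1, 0, 3, 2, 5, 4, 7, 6]])
adj = torch.sparse_coo_tensor(idx, torch.full((8,), 1.0), (8, 8)).coalesce()
print(MultiHopPropagation()(x, adj).shape)
```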
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 06:56:42 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Qiu",
"Yide",
""
],
[
"Ling",
"Shaoxiang",
""
],
[
"Zhang",
"Tong",
""
],
[
"Huang",
"Bo",
""
],
[
"Cui",
"Zhen",
""
]
] |
new_dataset
| 0.996947 |
2309.05331
|
Abhinav Singh
|
Abhinav Singh, Landfried Kraatz, Pietro Incardona, Ivo F. Sbalzarini
|
A Distributed Algebra System for Time Integration on Parallel Computers
| null | null | null | null |
cs.MS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a distributed algebra system for efficient and compact
implementation of numerical time integration schemes on parallel computers and
graphics processing units (GPU). The software implementation combines the time
integration library Odeint from Boost with the OpenFPM framework for scalable
scientific computing. Implementing multi-stage, multi-step, or adaptive time
integration methods in distributed-memory parallel codes or on GPUs is
challenging. The present algebra system addresses this by making the time
integration methods from Odeint available in a concise template-expression
language for numerical simulations distributed and parallelized using OpenFPM.
This allows using state-of-the-art time integration schemes, or switching
between schemes, by changing one line of code, while maintaining parallel
scalability. This enables scalable time integration with compact code and
facilitates rapid rewriting and deployment of simulation algorithms. We
benchmark the present software for exponential and sigmoidal dynamics and
present an application example to the 3D Gray-Scott reaction-diffusion problem
on both CPUs and GPUs in only 60 lines of code.
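The system itself is C++ (Boost Odeint combined with OpenFPM); the Python analog below only illustrates the design point that the integrator is selected in one place, so switching schemes is a one-line change while the model code stays untouched:

```python
# Python stand-in for the "change one line to switch integrators" idea, using SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def sigmoidal_rhs(t, y, r=1.0, k=1.0):
    # Logistic ("sigmoidal") growth, one of the benchmark dynamics mentioned above.
    return r * y * (1.0 - y / k)

y0 = np.array([0.01])
# Switching the time integration scheme is a one-line change of the `method` argument.
sol = solve_ivp(sigmoidal_rhs, (0.0, 10.0), y0, method="RK45", dense_output=True)
# sol = solve_ivp(sigmoidal_rhs, (0.0, 10.0), y0, method="Radau", dense_output=True)
print(sol.y[:, -1])   # approaches the carrying capacity k = 1
```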
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 09:26:37 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Singh",
"Abhinav",
""
],
[
"Kraatz",
"Landfried",
""
],
[
"Incardona",
"Pietro",
""
],
[
"Sbalzarini",
"Ivo F.",
""
]
] |
new_dataset
| 0.98072 |
2309.05334
|
Eden Belouadah
|
Eden Belouadah, Arnaud Dapogny, Kevin Bailly
|
MultIOD: Rehearsal-free Multihead Incremental Object Detector
|
Under review at the WACV 2024 conference
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Class-Incremental learning (CIL) is the ability of artificial agents to
accommodate new classes as they appear in a stream. It is particularly
interesting in evolving environments where agents have limited access to memory
and computational resources. The main challenge of class-incremental learning
is catastrophic forgetting, the inability of neural networks to retain past
knowledge when learning new classes. Unfortunately, most existing
class-incremental object detectors are applied to two-stage algorithms such as
Faster-RCNN and rely on rehearsal memory to retain past knowledge. We believe
that the current benchmarks are not realistic, and more effort should be
dedicated to anchor-free and rehearsal-free object detection. In this context,
we propose MultIOD, a class-incremental object detector based on CenterNet. Our
main contributions are: (1) we propose a multihead feature pyramid and
multihead detection architecture to efficiently separate class representations,
(2) we employ transfer learning between classes learned initially and those
learned incrementally to tackle catastrophic forgetting, and (3) we use a
class-wise non-max-suppression as a post-processing technique to remove
redundant boxes. Without bells and whistles, our method outperforms a range of
state-of-the-art methods on two Pascal VOC datasets.
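A short sketch of the class-wise NMS post-processing step, using torchvision's batched_nms as a stand-in; the IoU threshold and box values are illustrative, not the authors' settings:

```python
# Class-wise NMS: boxes are suppressed only against boxes of the same class.
import torch
from torchvision.ops import batched_nms

def classwise_nms(boxes, scores, labels, iou_thresh: float = 0.5):
    """boxes: (N, 4) in (x1, y1, x2, y2); scores: (N,); labels: (N,) class ids."""
    keep = batched_nms(boxes, scores, labels, iou_thresh)
    return boxes[keep], scores[keep], labels[keep]

# Toy example: two overlapping boxes of the same class and one box of another class.
boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [0., 0., 10., 10.]])
scores = torch.tensor([0.9, 0.8, 0.7])
labels = torch.tensor([0, 0, 1])
kept_boxes, kept_scores, kept_labels = classwise_nms(boxes, scores, labels)
print(kept_labels)   # the duplicate class-0 box is suppressed; the class-1 box survives
```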
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 09:32:45 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Belouadah",
"Eden",
""
],
[
"Dapogny",
"Arnaud",
""
],
[
"Bailly",
"Kevin",
""
]
] |
new_dataset
| 0.996533 |
2309.05448
|
Haoran Chen
|
Haoran Chen, Kenneth Blomqvist, Francesco Milano and Roland Siegwart
|
Panoptic Vision-Language Feature Fields
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, methods have been proposed for 3D open-vocabulary semantic
segmentation. Such methods are able to segment scenes into arbitrary classes
given at run-time using their text description. In this paper, we propose, to
our knowledge, the first algorithm for open-vocabulary panoptic segmentation,
simultaneously performing both semantic and instance segmentation. Our
algorithm, Panoptic Vision-Language Feature Fields (PVLFF), learns a feature
field of the scene, jointly learning vision-language features and hierarchical
instance features through a contrastive loss function from 2D instance segment
proposals on input frames. Our method achieves comparable performance against
the state-of-the-art closed-set 3D panoptic systems on the HyperSim, ScanNet, and
Replica datasets, and outperforms current 3D open-vocabulary systems in terms of
semantic segmentation. We additionally ablate our method to demonstrate the
effectiveness of our model architecture. Our code will be available at
https://github.com/ethz-asl/autolabel.
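A hedged sketch of a segment-level contrastive objective in this spirit: features sampled from the same 2D instance proposal are pulled together and features from different proposals pushed apart; the margin and averaging below are assumptions, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def segment_contrastive_loss(feats: torch.Tensor, seg_ids: torch.Tensor, margin: float = 0.5):
    """feats: (N, D) per-pixel features; seg_ids: (N,) instance-proposal id per pixel."""
    feats = F.normalize(feats, dim=-1)
    sim = feats @ feats.T                                   # cosine similarity matrix
    same = (seg_ids[:, None] == seg_ids[None, :]).float()
    eye = torch.eye(len(feats), device=feats.device)
    pos = same - eye                                        # same segment, excluding self
    neg = 1.0 - same                                        # different segments
    pos_loss = ((1.0 - sim) * pos).sum() / pos.sum().clamp(min=1)
    neg_loss = (F.relu(sim - margin) * neg).sum() / neg.sum().clamp(min=1)
    return pos_loss + neg_loss

# Toy usage: 6 feature vectors sampled from 2 instance proposals.
feats = torch.randn(6, 32, requires_grad=True)
seg_ids = torch.tensor([0, 0, 0, 1, 1, 1])
loss = segment_contrastive_loss(feats, seg_ids)
loss.backward()
```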
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 13:41:27 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Chen",
"Haoran",
""
],
[
"Blomqvist",
"Kenneth",
""
],
[
"Milano",
"Francesco",
""
],
[
"Siegwart",
"Roland",
""
]
] |
new_dataset
| 0.994365 |
2309.05472
|
Titouan Parcollet
|
Titouan Parcollet, Ha Nguyen, Solene Evain, Marcely Zanon Boito,
Adrien Pupier, Salima Mdhaffar, Hang Le, Sina Alisamir, Natalia Tomashenko,
Marco Dinarelli, Shucong Zhang, Alexandre Allauzen, Maximin Coavoux, Yannick
Esteve, Mickael Rouvier, Jerome Goulian, Benjamin Lecouteux, Francois Portet,
Solange Rossato, Fabien Ringeval, Didier Schwab, Laurent Besacier
|
LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for
Self-supervised Representations of French Speech
|
Under submission at Computer Science and Language. Preprint allowed
| null | null | null |
cs.CL cs.AI cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised learning (SSL) is at the origin of unprecedented improvements
in many different domains including computer vision and natural language
processing. Speech processing has benefited drastically from SSL, as most of the
current domain-related tasks are now being approached with pre-trained models.
This work introduces LeBenchmark 2.0, an open-source framework for assessing and
building SSL-equipped French speech technologies. It includes documented,
large-scale and heterogeneous corpora with up to 14,000 hours of heterogeneous
speech, ten pre-trained SSL wav2vec 2.0 models containing from 26 million to
one billion learnable parameters shared with the community, and an evaluation
protocol made of six downstream tasks to complement existing benchmarks.
LeBenchmark 2.0 also presents unique perspectives on pre-trained SSL models for
speech with the investigation of frozen versus fine-tuned downstream models,
task-agnostic versus task-specific pre-trained models as well as a discussion
on the carbon footprint of large-scale model training.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 14:13:09 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Parcollet",
"Titouan",
""
],
[
"Nguyen",
"Ha",
""
],
[
"Evain",
"Solene",
""
],
[
"Boito",
"Marcely Zanon",
""
],
[
"Pupier",
"Adrien",
""
],
[
"Mdhaffar",
"Salima",
""
],
[
"Le",
"Hang",
""
],
[
"Alisamir",
"Sina",
""
],
[
"Tomashenko",
"Natalia",
""
],
[
"Dinarelli",
"Marco",
""
],
[
"Zhang",
"Shucong",
""
],
[
"Allauzen",
"Alexandre",
""
],
[
"Coavoux",
"Maximin",
""
],
[
"Esteve",
"Yannick",
""
],
[
"Rouvier",
"Mickael",
""
],
[
"Goulian",
"Jerome",
""
],
[
"Lecouteux",
"Benjamin",
""
],
[
"Portet",
"Francois",
""
],
[
"Rossato",
"Solange",
""
],
[
"Ringeval",
"Fabien",
""
],
[
"Schwab",
"Didier",
""
],
[
"Besacier",
"Laurent",
""
]
] |
new_dataset
| 0.988749 |
2309.05500
|
Ha Thanh Nguyen
|
Hai-Long Nguyen, Dieu-Quynh Nguyen, Hoang-Trung Nguyen, Thu-Trang
Pham, Huu-Dong Nguyen, Thach-Anh Nguyen, Thi-Hai-Yen Vuong, Ha-Thanh Nguyen
|
NeCo@ALQAC 2023: Legal Domain Knowledge Acquisition for Low-Resource
Languages through Data Enrichment
|
ISAILD@KSE 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, natural language processing has gained significant
popularity in various sectors, including the legal domain. This paper presents
NeCo Team's solutions to the Vietnamese text processing tasks provided in the
Automated Legal Question Answering Competition 2023 (ALQAC 2023), focusing on
legal domain knowledge acquisition for low-resource languages through data
enrichment. Our methods for the legal document retrieval task employ a
combination of similarity ranking and deep learning models, while for the
second task, which requires extracting an answer from a relevant legal article
in response to a question, we propose a range of adaptive techniques to handle
different question types. Our approaches achieve outstanding results on both
tasks of the competition, demonstrating the potential benefits and
effectiveness of question answering systems in the legal field, particularly
for low-resource languages.
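A minimal sketch of combining similarity ranking with a learned scorer for article retrieval; the fusion weight, the TF-IDF choice, and the dummy neural scorer below are assumptions for illustration only:

```python
# Hybrid retrieval: lexical similarity fused with an arbitrary neural relevance score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def hybrid_rank(query, articles, neural_scorer, alpha=0.6):
    vec = TfidfVectorizer().fit(articles + [query])
    lexical = cosine_similarity(vec.transform([query]), vec.transform(articles))[0]
    neural = [neural_scorer(query, a) for a in articles]   # e.g. a fine-tuned cross-encoder
    fused = [alpha * l + (1 - alpha) * n for l, n in zip(lexical, neural)]
    return sorted(range(len(articles)), key=lambda i: fused[i], reverse=True)

# Toy usage with a dummy scorer (word-overlap stand-in for a deep model).
articles = ["The penalty for theft is defined in Article 173.",
            "Contracts must be signed by both parties."]
dummy_scorer = lambda q, a: len(set(q.lower().split()) & set(a.lower().split())) / 10.0
print(hybrid_rank("What is the penalty for theft?", articles, dummy_scorer))
```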
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 14:43:45 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Nguyen",
"Hai-Long",
""
],
[
"Nguyen",
"Dieu-Quynh",
""
],
[
"Nguyen",
"Hoang-Trung",
""
],
[
"Pham",
"Thu-Trang",
""
],
[
"Nguyen",
"Huu-Dong",
""
],
[
"Nguyen",
"Thach-Anh",
""
],
[
"Vuong",
"Thi-Hai-Yen",
""
],
[
"Nguyen",
"Ha-Thanh",
""
]
] |
new_dataset
| 0.984815 |