Dataset Viewer

| Column | Type |
| --- | --- |
| id | string (length 9–10) |
| submitter | string (length 2–52), nullable |
| authors | string (length 4–6.51k) |
| title | string (length 4–246) |
| comments | string (length 1–523), nullable |
| journal-ref | string (length 4–345), nullable |
| doi | string (length 11–120), nullable |
| report-no | string (length 2–243), nullable |
| categories | string (length 5–98) |
| license | string (9 classes) |
| abstract | string (length 33–3.33k) |
| versions | list |
| update_date | timestamp[s] |
| authors_parsed | sequence |
| prediction | string (1 class) |
| probability | float64 (0.95–1) |
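The rows that follow are arXiv metadata records labelled by a classifier (`prediction`, `probability`). As a rough usage sketch, assuming this dump is published as a Hugging Face dataset (the repository id below is a placeholder, not the actual location of the data), the rows could be loaded and filtered like this:

```python
# Minimal sketch for loading and filtering a dump with the schema above.
# "user/arxiv-new-dataset-predictions" is a placeholder repository id; the
# column names follow the schema listed above.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-predictions", split="train")

# Keep rows that were flagged as describing a new dataset with high confidence.
confident = ds.filter(
    lambda row: row["prediction"] == "new_dataset" and row["probability"] >= 0.99
)

# Print a few (id, title, probability) triples for inspection.
for row in confident.select(range(min(5, len(confident)))):
    print(row["id"], "|", row["title"], "|", round(row["probability"], 3))
```

Each record below lists its fields in the column order given above, separated by `|`.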
2103.04904 | Laszlo Csirmaz | Laszlo Csirmaz, Franti\v{s}ek Mat\'u\v{s} and Carles Padr\'o | Bipartite secret sharing and staircases | To appear in Discrete Mathematics | null | null | null | cs.CR cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bipartite secret sharing schemes have a bipartite access structure in which
the set of participants is divided into two parts and all participants in the
same part play an equivalent role. Such a bipartite scheme can be described by
a \emph{staircase}: the collection of its minimal points. The complexity of a
scheme is the maximal share size relative to the secret size; and the
$\kappa$-complexity of an access structure is the best lower bound provided by
the entropy method. An access structure is $\kappa$-ideal if it has
$\kappa$-complexity 1. Motivated by the abundance of open problems in this
area, the main results can be summarized as follows. First, a new
characterization of $\kappa$-ideal multipartite access structures is given
which offers a straightforward and simple approach to describe ideal bipartite
and tripartite access structures. Second, the $\kappa$-complexity is determined
for a range of bipartite access structures, including those determined by two
points, staircases with equal widths and heights, and staircases with all
heights 1. Third, matching linear schemes are presented for some non-ideal
cases, including staircases where all heights are 1 and all widths are equal.
Finally, finding the Shannon complexity of a bipartite access structure can be
considered as a discrete submodular optimization problem. An interesting and
intriguing continuous version is defined which might give further insight to
the large-scale behavior of these optimization problems.
| [
{
"version": "v1",
"created": "Mon, 8 Mar 2021 17:09:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 14:19:21 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Csirmaz",
"Laszlo",
""
],
[
"Matúš",
"František",
""
],
[
"Padró",
"Carles",
""
]
]
| new_dataset | 0.98718 |
2211.11961 | Arghya Chakraborty | Arghya Chakraborty, Rahul Vaze | Online facility location with timed-requests and congestion | 32 pages, 6 figures | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | The classic online facility location problem deals with finding the optimal
set of facilities in an online fashion when demand requests arrive one at a
time and facilities need to be opened to service these requests. In this work,
we study two variants of the online facility location problem; (1) weighted
requests and (2) congestion. Both of these variants are motivated by their
applications to real life scenarios and the previously known results on online
facility location cannot be directly adapted to analyse them.
Weighted requests: In this variant, each demand request is a pair $(x,w)$
where $x$ is the standard location of the demand while $w$ is the corresponding
weight of the request. The cost of servicing request $(x,w)$ at facility $F$ is
$w\cdot d(x,F)$. For this variant, given $n$ requests, we present an online
algorithm attaining a competitive ratio of $\mathcal{O}(\log n)$ in the
secretarial model for the weighted requests and show that it is optimal.
Congestion: The congestion variant considers the case when there is an
additional congestion cost that grows with the number of requests served by
each facility. For this variant, when the congestion cost is a monomial, we
show that there exists an algorithm attaining a constant competitive ratio.
This constant is a function of the exponent of the monomial and the facility
opening cost but independent of the number of requests.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2022 02:50:51 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 15:49:18 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Chakraborty",
"Arghya",
""
],
[
"Vaze",
"Rahul",
""
]
]
| new_dataset | 0.994855 |
2212.00431 | Violetta Weger | Markus Grassl, Anna-Lena Horlemann, Violetta Weger | The Subfield Metric and its Application to Quantum Error Correction | null | null | 10.1142/S021949882550063X | null | cs.IT math.IT quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new weight and corresponding metric over finite extension
fields for asymmetric error correction. The weight distinguishes between
elements from the base field and the ones outside of it, which is motivated by
asymmetric quantum codes. We set up the theoretic framework for this weight and
metric, including upper and lower bounds, asymptotic behavior of random codes,
and we show the existence of an optimal family of codes achieving the
Singleton-type upper bound.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2022 11:02:31 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Grassl",
"Markus",
""
],
[
"Horlemann",
"Anna-Lena",
""
],
[
"Weger",
"Violetta",
""
]
]
| new_dataset | 0.985599 |
2302.11791 | Gyanendra Kumar Verma | Gyanendra K. Verma and R. K. Sharma | Additive complementary dual codes over $\mathbb{F}_{q^2}$ | There has been major changes in this manuscript we will submit new
one | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Shi et al. [Additive complementary dual codes over $\mathbb{F}_4$. Designs, Codes
and Cryptography, 2022.] studied additive codes over the finite field
$\mathbb{F}_4$ with respect to trace Hermitian and trace Euclidean inner
products. In this article, we define additive codes of length $n$ over the
finite field $\mathbb{F}_{q^2}$ as additive subgroups of $\mathbb{F}_{q^2}^n$,
where $q$ is a prime power. We associate an additive code with a matrix called
a generator matrix. We characterize trace Euclidean ACD and trace Hermitian ACD
codes in terms of generator matrices over the finite field $\mathbb{F}_{q^2}$.
Also, we construct these codes over $\mathbb{F}_{q^2}$ from linear LCD codes
over $\mathbb{F}_q$.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 06:12:14 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2023 17:38:14 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 09:08:46 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Verma",
"Gyanendra K.",
""
],
[
"Sharma",
"R. K.",
""
]
]
| new_dataset | 0.985605 |
2303.01338 | Amira Guesmi | Amira Guesmi, Muhammad Abdullah Hanif, and Muhammad Shafique | AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision
Systems | null | null | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-based perception modules are increasingly deployed in many
applications, especially autonomous vehicles and intelligent robots. These
modules are being used to acquire information about the surroundings and
identify obstacles. Hence, accurate detection and classification are essential
to reach appropriate decisions and take appropriate and safe actions at all
times. Current studies have demonstrated that "printed adversarial attacks",
known as physical adversarial attacks, can successfully mislead perception
models such as object detectors and image classifiers. However, most of these
physical attacks are based on noticeable and eye-catching patterns for
generated perturbations making them identifiable/detectable by human eye or in
test drives. In this paper, we propose a camera-based inconspicuous adversarial
attack (\textbf{AdvRain}) capable of fooling camera-based perception systems
over all objects of the same class. Unlike mask based fake-weather attacks that
require access to the underlying computing hardware or image memory, our attack
is based on emulating the effects of a natural weather condition (i.e.,
Raindrops) that can be printed on a translucent sticker, which is externally
placed over the lens of a camera. To accomplish this, we provide an iterative
process based on performing a random search aiming to identify critical
positions to make sure that the performed transformation is adversarial for a
target classifier. Our transformation is based on blurring predefined parts of
the captured image corresponding to the areas covered by the raindrop. We
achieve a drop in average model accuracy of more than $45\%$ and $40\%$ on
VGG19 for ImageNet and Resnet34 for Caltech-101, respectively, using only $20$
raindrops.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2023 15:14:46 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 11:55:37 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Guesmi",
"Amira",
""
],
[
"Hanif",
"Muhammad Abdullah",
""
],
[
"Shafique",
"Muhammad",
""
]
]
| new_dataset | 0.987071 |
2303.09234 | Yining Jiao | Yining Jiao, Carlton Zdanski, Julia Kimbell, Andrew Prince, Cameron
Worden, Samuel Kirse, Christopher Rutter, Benjamin Shields, William Dunn,
Jisan Mahmud, Marc Niethammer | NAISR: A 3D Neural Additive Model for Interpretable Shape Representation | 28 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep implicit functions (DIFs) have emerged as a powerful paradigm for many
computer vision tasks such as 3D shape reconstruction, generation,
registration, completion, editing, and understanding. However, given a set of
3D shapes with associated covariates there is at present no shape
representation method which allows to precisely represent the shapes while
capturing the individual dependencies on each covariate. Such a method would be
of high utility to researchers to discover knowledge hidden in a population of
shapes. For scientific shape discovery, we propose a 3D Neural Additive Model
for Interpretable Shape Representation ($\texttt{NAISR}$) which describes
individual shapes by deforming a shape atlas in accordance to the effect of
disentangled covariates. Our approach captures shape population trends and
allows for patient-specific predictions through shape transfer.
$\texttt{NAISR}$ is the first approach to combine the benefits of deep implicit
shape representations with an atlas deforming according to specified
covariates. We evaluate $\texttt{NAISR}$ with respect to shape reconstruction,
shape disentanglement, shape evolution, and shape transfer on three datasets:
1) $\textit{Starman}$, a simulated 2D shape dataset; 2) the ADNI hippocampus 3D
shape dataset; and 3) a pediatric airway 3D shape dataset. Our experiments
demonstrate that $\textit{Starman}$ achieves excellent shape reconstruction
performance while retaining interpretability. Our code is available at
$\href{https://github.com/uncbiag/NAISR}{https://github.com/uncbiag/NAISR}$.
| [
{
"version": "v1",
"created": "Thu, 16 Mar 2023 11:18:04 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Mar 2023 12:13:19 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 20:07:21 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 09:25:26 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Jiao",
"Yining",
""
],
[
"Zdanski",
"Carlton",
""
],
[
"Kimbell",
"Julia",
""
],
[
"Prince",
"Andrew",
""
],
[
"Worden",
"Cameron",
""
],
[
"Kirse",
"Samuel",
""
],
[
"Rutter",
"Christopher",
""
],
[
"Shields",
"Benjamin",
""
],
[
"Dunn",
"William",
""
],
[
"Mahmud",
"Jisan",
""
],
[
"Niethammer",
"Marc",
""
]
]
| new_dataset | 0.991346 |
2303.14655 | Ji Qi | Ji Qi, Jifan Yu, Teng Tu, Kunyu Gao, Yifan Xu, Xinyu Guan, Xiaozhi
Wang, Yuxiao Dong, Bin Xu, Lei Hou, Juanzi Li, Jie Tang, Weidong Guo, Hui
Liu, Yu Xu | GOAL: A Challenging Knowledge-grounded Video Captioning Benchmark for
Real-time Soccer Commentary Generation | Accepted by CIKM 2023 | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent emergence of video captioning models, how to generate
vivid, fine-grained video descriptions based on the background knowledge (i.e.,
long and informative commentary about the domain-specific scenes with
appropriate reasoning) is still far from being solved, which however has great
applications such as automatic sports narrative. In this paper, we present
GOAL, a benchmark of over 8.9k soccer video clips, 22k sentences, and 42k
knowledge triples for proposing a challenging new task setting as
Knowledge-grounded Video Captioning (KGVC). Moreover, we conduct experimental
adaption of existing methods to show the difficulty and potential directions
for solving this valuable and applicable task. Our data and code are available
at https://github.com/THU-KEG/goal.
| [
{
"version": "v1",
"created": "Sun, 26 Mar 2023 08:43:36 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 06:55:13 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Qi",
"Ji",
""
],
[
"Yu",
"Jifan",
""
],
[
"Tu",
"Teng",
""
],
[
"Gao",
"Kunyu",
""
],
[
"Xu",
"Yifan",
""
],
[
"Guan",
"Xinyu",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Xu",
"Bin",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
],
[
"Tang",
"Jie",
""
],
[
"Guo",
"Weidong",
""
],
[
"Liu",
"Hui",
""
],
[
"Xu",
"Yu",
""
]
]
| new_dataset | 0.988441 |
2303.15375 | Yan Sun | Yan Sun, Yifan Yuan, Zeduo Yu, Reese Kuper, Chihun Song, Jinghan
Huang, Houxiang Ji, Siddharth Agarwal, Jiaqi Lou, Ipoom Jeong, Ren Wang, Jung
Ho Ahn, Tianyin Xu, Nam Sung Kim | Demystifying CXL Memory with Genuine CXL-Ready Systems and Devices | This paper has been accepted by MICRO'23. Please refer to the
https://doi.org/10.1145/3613424.3614256 for the official version of this
paper | null | 10.1145/3613424.3614256 | null | cs.PF cs.AR | http://creativecommons.org/licenses/by/4.0/ | The ever-growing demands for memory with larger capacity and higher bandwidth
have driven recent innovations on memory expansion and disaggregation
technologies based on Compute eXpress Link (CXL). Especially, CXL-based memory
expansion technology has recently gained notable attention for its ability not
only to economically expand memory capacity and bandwidth but also to decouple
memory technologies from a specific memory interface of the CPU. However, since
CXL memory devices have not been widely available, they have been emulated
using DDR memory in a remote NUMA node. In this paper, for the first time, we
comprehensively evaluate a true CXL-ready system based on the latest
4th-generation Intel Xeon CPU with three CXL memory devices from different
manufacturers. Specifically, we run a set of microbenchmarks not only to
compare the performance of true CXL memory with that of emulated CXL memory but
also to analyze the complex interplay between the CPU and CXL memory in depth.
This reveals important differences between emulated CXL memory and true CXL
memory, some of which will compel researchers to revisit the analyses and
proposals from recent work. Next, we identify opportunities for
memory-bandwidth-intensive applications to benefit from the use of CXL memory.
Lastly, we propose a CXL-memory-aware dynamic page allocation policy, Caption
to more efficiently use CXL memory as a bandwidth expander. We demonstrate that
Caption can automatically converge to an empirically favorable percentage of
pages allocated to CXL memory, which improves the performance of
memory-bandwidth-intensive applications by up to 24% when compared to the
default page allocation policy designed for traditional NUMA systems.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2023 16:51:26 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 04:25:32 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Jul 2023 22:40:13 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 03:58:56 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Sun",
"Yan",
""
],
[
"Yuan",
"Yifan",
""
],
[
"Yu",
"Zeduo",
""
],
[
"Kuper",
"Reese",
""
],
[
"Song",
"Chihun",
""
],
[
"Huang",
"Jinghan",
""
],
[
"Ji",
"Houxiang",
""
],
[
"Agarwal",
"Siddharth",
""
],
[
"Lou",
"Jiaqi",
""
],
[
"Jeong",
"Ipoom",
""
],
[
"Wang",
"Ren",
""
],
[
"Ahn",
"Jung Ho",
""
],
[
"Xu",
"Tianyin",
""
],
[
"Kim",
"Nam Sung",
""
]
]
| new_dataset | 0.950731 |
2304.03752 | Jiaqi Wang | Jiaqi Wang, Pan Zhang, Tao Chu, Yuhang Cao, Yujie Zhou, Tong Wu, Bin
Wang, Conghui He, Dahua Lin | V3Det: Vast Vocabulary Visual Detection Dataset | ICCV 2023 Oral Camera Ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in detecting arbitrary objects in the real world are trained
and evaluated on object detection datasets with a relatively restricted
vocabulary. To facilitate the development of more general visual object
detection, we propose V3Det, a vast vocabulary visual detection dataset with
precisely annotated bounding boxes on massive images. V3Det has several
appealing properties: 1) Vast Vocabulary: It contains bounding boxes of objects
from 13,204 categories on real-world images, which is 10 times larger than the
existing large vocabulary object detection dataset, e.g., LVIS. 2) Hierarchical
Category Organization: The vast vocabulary of V3Det is organized by a
hierarchical category tree which annotates the inclusion relationship among
categories, encouraging the exploration of category relationships in vast and
open vocabulary object detection. 3) Rich Annotations: V3Det comprises
precisely annotated objects in 243k images and professional descriptions of
each category written by human experts and a powerful chatbot. By offering a
vast exploration space, V3Det enables extensive benchmarks on both vast and
open vocabulary object detection, leading to new observations, practices, and
insights for future research. It has the potential to serve as a cornerstone
dataset for developing more general visual perception systems. V3Det is
available at https://v3det.openxlab.org.cn/.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2023 17:45:35 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 12:18:14 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Wang",
"Jiaqi",
""
],
[
"Zhang",
"Pan",
""
],
[
"Chu",
"Tao",
""
],
[
"Cao",
"Yuhang",
""
],
[
"Zhou",
"Yujie",
""
],
[
"Wu",
"Tong",
""
],
[
"Wang",
"Bin",
""
],
[
"He",
"Conghui",
""
],
[
"Lin",
"Dahua",
""
]
]
| new_dataset | 0.999848 |
2304.04327 | Jinyi Ye | Jinyi Ye, Nikhil Jindal, Francesco Pierri, Luca Luceri | Online Networks of Support in Distressed Environments: Solidarity and
Mobilization during the Russian Invasion of Ukraine | Presented at ICWSM2023 Workshop "Data for the Wellbeing of Most
Vulnerable" | Proceedings of the ICWSM Workshops 2023 | 10.36190/2023.05 | null | cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite their drawbacks and unintended consequences, social media networks
have recently emerged as a crucial resource for individuals in distress,
particularly during times of crisis. These platforms serve as a means to seek
assistance and support, share reliable information, and appeal for action and
solidarity. In this paper, we examine the online networks of support during the
Russia-Ukraine conflict by analyzing four major social media networks: Twitter,
Facebook, Instagram, and YouTube. Using a large dataset of 68 million posts, we
explore the temporal patterns and interconnectedness between these platforms
and online support websites. Our analysis highlights the prevalence of
crowdsourcing and crowdfunding websites as the two main support platforms to
mobilize resources and solicit donations, revealing their purpose and contents,
and investigating different support-seeking and -receiving practices. Overall,
our study underscores the potential of social media in facilitating online
support in distressed environments through grassroots mobilization,
contributing to the growing body of research on the positive impact of online
platforms in promoting social good and protecting vulnerable populations during
times of crisis and conflict.
| [
{
"version": "v1",
"created": "Sun, 9 Apr 2023 23:27:59 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 22:17:40 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Oct 2023 21:59:32 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Ye",
"Jinyi",
""
],
[
"Jindal",
"Nikhil",
""
],
[
"Pierri",
"Francesco",
""
],
[
"Luceri",
"Luca",
""
]
]
| new_dataset | 0.999546 |
2304.08247 | Keno Bressem | Tianyu Han and Lisa C. Adams and Jens-Michalis Papaioannou and Paul
Grundmann and Tom Oberhauser and Alexander L\"oser and Daniel Truhn and Keno
K. Bressem | MedAlpaca -- An Open-Source Collection of Medical Conversational AI
Models and Training Data | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) like OpenAI's GPT series continue to make
strides, we witness the emergence of artificial intelligence applications in an
ever-expanding range of fields. In medicine, these LLMs hold considerable
promise for improving medical workflows, diagnostics, patient care, and
education. Yet, there is an urgent need for open-source models that can be
deployed on-premises to safeguard patient privacy. In our work, we present an
innovative dataset consisting of over 160,000 entries, specifically crafted to
fine-tune LLMs for effective medical applications. We investigate the impact of
fine-tuning these datasets on publicly accessible pre-trained LLMs, and
subsequently, we juxtapose the performance of pre-trained-only models against
the fine-tuned models concerning the examinations that future medical doctors
must pass to achieve certification.
| [
{
"version": "v1",
"created": "Fri, 14 Apr 2023 11:28:08 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 23:28:00 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Han",
"Tianyu",
""
],
[
"Adams",
"Lisa C.",
""
],
[
"Papaioannou",
"Jens-Michalis",
""
],
[
"Grundmann",
"Paul",
""
],
[
"Oberhauser",
"Tom",
""
],
[
"Löser",
"Alexander",
""
],
[
"Truhn",
"Daniel",
""
],
[
"Bressem",
"Keno K.",
""
]
]
| new_dataset | 0.999632 |
2305.11779 | Huitong Pan | Huitong Pan, Qi Zhang, Eduard Dragut, Cornelia Caragea, Longin Jan
Latecki | DMDD: A Large-Scale Dataset for Dataset Mentions Detection | Pre-MIT Press publication version. Submitted to TACL | Transactions of the Association for Computational Linguistics. 11
(2023) 1132-1146 | 10.1162/tacl_a_00592 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recognition of dataset names is a critical task for automatic information
extraction in scientific literature, enabling researchers to understand and
identify research opportunities. However, existing corpora for dataset mention
detection are limited in size and naming diversity. In this paper, we introduce
the Dataset Mentions Detection Dataset (DMDD), the largest publicly available
corpus for this task. DMDD consists of the DMDD main corpus, comprising 31,219
scientific articles with over 449,000 dataset mentions weakly annotated in the
format of in-text spans, and an evaluation set, which comprises of 450
scientific articles manually annotated for evaluation purposes. We use DMDD to
establish baseline performance for dataset mention detection and linking. By
analyzing the performance of various models on DMDD, we are able to identify
open problems in dataset mention detection. We invite the community to use our
dataset as a challenge to develop novel dataset mention detection models.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 16:18:00 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Pan",
"Huitong",
""
],
[
"Zhang",
"Qi",
""
],
[
"Dragut",
"Eduard",
""
],
[
"Caragea",
"Cornelia",
""
],
[
"Latecki",
"Longin Jan",
""
]
]
| new_dataset | 0.999761 |
2306.04018 | Zifeng Wang | Zifeng Wang and Brandon Theodorou and Tianfan Fu and Cao Xiao and
Jimeng Sun | PyTrial: Machine Learning Software and Benchmark for Clinical Trial
Applications | null | null | null | null | cs.AI q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Clinical trials are conducted to test the effectiveness and safety of
potential drugs in humans for regulatory approval. Machine learning (ML) has
recently emerged as a new tool to assist in clinical trials. Despite this
progress, there have been few efforts to document and benchmark ML4Trial
algorithms available to the ML research community. Additionally, the
accessibility to clinical trial-related datasets is limited, and there is a
lack of well-defined clinical tasks to facilitate the development of new
algorithms.
To fill this gap, we have developed PyTrial that provides benchmarks and
open-source implementations of a series of ML algorithms for clinical trial
design and operations. In this paper, we thoroughly investigate 34 ML
algorithms for clinical trials across 6 different tasks, including patient
outcome prediction, trial site selection, trial outcome prediction,
patient-trial matching, trial similarity search, and synthetic data generation.
We have also collected and prepared 23 ML-ready datasets as well as their
working examples in Jupyter Notebooks for quick implementation and testing.
PyTrial defines each task through a simple four-step process: data loading,
model specification, model training, and model evaluation, all achievable with
just a few lines of code. Furthermore, our modular API architecture empowers
practitioners to expand the framework to incorporate new algorithms and tasks
effortlessly. The code is available at https://github.com/RyanWangZf/PyTrial.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 21:19:03 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 05:55:10 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Wang",
"Zifeng",
""
],
[
"Theodorou",
"Brandon",
""
],
[
"Fu",
"Tianfan",
""
],
[
"Xiao",
"Cao",
""
],
[
"Sun",
"Jimeng",
""
]
]
| new_dataset | 0.999811 |
2306.08827 | Zhongkai Hao | Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu,
Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu | PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks
for Solving PDEs | null | null | null | null | cs.LG cs.NA math.NA physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While significant progress has been made on Physics-Informed Neural Networks
(PINNs), a comprehensive comparison of these methods across a wide range of
Partial Differential Equations (PDEs) is still lacking. This study introduces
PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a
diverse dataset, comprising over 20 distinct PDEs from various domains,
including heat conduction, fluid dynamics, biology, and electromagnetics. These
PDEs encapsulate key challenges inherent to real-world problems, such as
complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality.
PINNacle also offers a user-friendly toolbox, incorporating about 10
state-of-the-art PINN methods for systematic evaluation and comparison. We have
conducted extensive experiments with these methods, offering insights into
their strengths and weaknesses. In addition to providing a standardized means
of assessing performance, PINNacle also offers an in-depth analysis to guide
future research, particularly in areas such as domain decomposition methods and
loss reweighting for handling multi-scale problems and complex geometry. To the
best of our knowledge, it is the largest benchmark with a diverse and
comprehensive evaluation that will undoubtedly foster further research in
PINNs.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2023 02:49:05 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 06:33:52 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Hao",
"Zhongkai",
""
],
[
"Yao",
"Jiachen",
""
],
[
"Su",
"Chang",
""
],
[
"Su",
"Hang",
""
],
[
"Wang",
"Ziao",
""
],
[
"Lu",
"Fanzhi",
""
],
[
"Xia",
"Zeyu",
""
],
[
"Zhang",
"Yichi",
""
],
[
"Liu",
"Songming",
""
],
[
"Lu",
"Lu",
""
],
[
"Zhu",
"Jun",
""
]
]
| new_dataset | 0.999837 |
2306.13512 | Luca Lanzend\"orfer | Luca A. Lanzend\"orfer, Florian Gr\"otschla, Emil Funke, Roger
Wattenhofer | DISCO-10M: A Large-Scale Music Dataset | NeurIPS 2023 Track on Datasets and Benchmarks | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music datasets play a crucial role in advancing research in machine learning
for music. However, existing music datasets suffer from limited size,
accessibility, and lack of audio resources. To address these shortcomings, we
present DISCO-10M, a novel and extensive music dataset that surpasses the
largest previously available music dataset by an order of magnitude. To ensure
high-quality data, we implement a multi-stage filtering process. This process
incorporates similarities based on textual descriptions and audio embeddings.
Moreover, we provide precomputed CLAP embeddings alongside DISCO-10M,
facilitating direct application on various downstream tasks. These embeddings
enable efficient exploration of machine learning applications on the provided
data. With DISCO-10M, we aim to democratize and facilitate new research to help
advance the development of novel machine learning models for music.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2023 14:27:14 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 09:45:00 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Lanzendörfer",
"Luca A.",
""
],
[
"Grötschla",
"Florian",
""
],
[
"Funke",
"Emil",
""
],
[
"Wattenhofer",
"Roger",
""
]
]
| new_dataset | 0.999853 |
2306.13941 | Ehud Shapiro | Ehud Shapiro | Grassroots Social Networking: Serverless, Permissionless Protocols for
Twitter/LinkedIn/WhatsApp | null | null | 10.1145/3599696.3612898 | null | cs.DC cs.CY cs.MA cs.NI cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Offering a viable alternative architecture to centrally-controlled global
digital platforms for social networking is an open challenge. Here we present a
grassroots architecture for serverless, permissionless, peer-to-peer social
networks termed grassroots social networking. The architecture is geared for
roaming (address-changing) agents communicating over an unreliable network,
e.g., smartphones communicating via UDP. The architecture incorporates (i) a
decentralized social graph, where each member controls, maintains and stores
only their local neighbourhood in the graph; (ii) member-created feeds, with
authors and followers; and (iii) a novel grassroots dissemination protocol, in
which communication occurs only along the edges of the social graph. The
architecture realizes these components using the blocklace data structure -- a
distributed partially-ordered counterpart of the replicated totally-ordered
blockchain. We provide two example grassroots social networking protocols --
Twitter/LinkedIn-like and WhatsApp-like -- and address their safety, liveness,
privacy, and spam/deep-fake resistance, demonstrating how centrally-controlled
social networks could be supplanted by a grassroots architecture.
| [
{
"version": "v1",
"created": "Sat, 24 Jun 2023 11:43:17 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Shapiro",
"Ehud",
""
]
]
| new_dataset | 0.999485 |
2307.11932 | Isaac Kasahara | Isaac Kasahara, Shubham Agrawal, Selim Engin, Nikhil Chavan-Dafle,
Shuran Song, Volkan Isler | RIC: Rotate-Inpaint-Complete for Generalizable Scene Reconstruction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General scene reconstruction refers to the task of estimating the full 3D
geometry and texture of a scene containing previously unseen objects. In many
practical applications such as AR/VR, autonomous navigation, and robotics, only
a single view of the scene may be available, making the scene reconstruction
task challenging. In this paper, we present a method for scene reconstruction
by structurally breaking the problem into two steps: rendering novel views via
inpainting and 2D to 3D scene lifting. Specifically, we leverage the
generalization capability of large visual language models (Dalle-2) to inpaint
the missing areas of scene color images rendered from different views. Next, we
lift these inpainted images to 3D by predicting normals of the inpainted image
and solving for the missing depth values. By predicting for normals instead of
depth directly, our method allows for robustness to changes in depth
distributions and scale. With rigorous quantitative evaluation, we show that
our method outperforms multiple baselines while providing generalization to
novel objects and scenes.
| [
{
"version": "v1",
"created": "Fri, 21 Jul 2023 22:39:41 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 22:57:04 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Kasahara",
"Isaac",
""
],
[
"Agrawal",
"Shubham",
""
],
[
"Engin",
"Selim",
""
],
[
"Chavan-Dafle",
"Nikhil",
""
],
[
"Song",
"Shuran",
""
],
[
"Isler",
"Volkan",
""
]
]
| new_dataset | 0.987759 |
2309.00616 | Zhening Huang | Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, Joan
Lasenby | OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | 28 pages, 17 figures, 13 tables. Project page:
https://zheninghuang.github.io/OpenIns3D/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current 3D open-vocabulary scene understanding methods mostly utilize
well-aligned 2D images as the bridge to learn 3D features with language.
However, applying these approaches becomes challenging in scenarios where 2D
images are absent. In this work, we introduce a new pipeline, namely,
OpenIns3D, which requires no 2D image inputs, for 3D open-vocabulary scene
understanding at the instance level. The OpenIns3D framework employs a
"Mask-Snap-Lookup" scheme. The "Mask" module learns class-agnostic mask
proposals in 3D point clouds. The "Snap" module generates synthetic scene-level
images at multiple scales and leverages 2D vision language models to extract
interesting objects. The "Lookup" module searches through the outcomes of
"Snap" with the help of Mask2Pixel maps, which contain the precise
correspondence between 3D masks and synthetic images, to assign category names
to the proposed masks. This 2D input-free and flexible approach achieves
state-of-the-art results on a wide range of indoor and outdoor datasets by a
large margin. Moreover, OpenIns3D allows for effortless switching of 2D
detectors without re-training. When integrated with powerful 2D open-world
models such as ODISE and GroundingDINO, excellent results were observed on
open-vocabulary instance segmentation. When integrated with LLM-powered 2D
models like LISA, it demonstrates a remarkable capacity to process highly
complex text queries which require intricate reasoning and world knowledge.
Project page: https://zheninghuang.github.io/OpenIns3D/
| [
{
"version": "v1",
"created": "Fri, 1 Sep 2023 17:59:56 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 17:59:54 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 15:15:58 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Huang",
"Zhening",
""
],
[
"Wu",
"Xiaoyang",
""
],
[
"Chen",
"Xi",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Zhu",
"Lei",
""
],
[
"Lasenby",
"Joan",
""
]
]
| new_dataset | 0.998941 |
2309.06262 | Hao Yu | Hao Yu, Xu Cheng, Wei Peng, Weihao Liu, Guoying Zhao | Modality Unifying Network for Visible-Infrared Person Re-Identification | 11 pages, 5 figures. Accepted as the poster paper in ICCV2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visible-infrared person re-identification (VI-ReID) is a challenging task due
to large cross-modality discrepancies and intra-class variations. Existing
methods mainly focus on learning modality-shared representations by embedding
different modalities into the same feature space. As a result, the learned
feature emphasizes the common patterns across modalities while suppressing
modality-specific and identity-aware information that is valuable for Re-ID. To
address these issues, we propose a novel Modality Unifying Network (MUN) to
explore a robust auxiliary modality for VI-ReID. First, the auxiliary modality
is generated by combining the proposed cross-modality learner and
intra-modality learner, which can dynamically model the modality-specific and
modality-shared representations to alleviate both cross-modality and
intra-modality variations. Second, by aligning identity centres across the
three modalities, an identity alignment loss function is proposed to discover
the discriminative feature representations. Third, a modality alignment loss is
introduced to consistently reduce the distribution distance of visible and
infrared images by modality prototype modeling. Extensive experiments on
multiple public datasets demonstrate that the proposed method surpasses the
current state-of-the-art methods by a significant margin.
| [
{
"version": "v1",
"created": "Tue, 12 Sep 2023 14:22:22 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 12:30:08 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Yu",
"Hao",
""
],
[
"Cheng",
"Xu",
""
],
[
"Peng",
"Wei",
""
],
[
"Liu",
"Weihao",
""
],
[
"Zhao",
"Guoying",
""
]
]
| new_dataset | 0.99571 |
2309.09566 | Christian Choffrut | Christian Choffrut | Synchronous orders on the set of integers | null | null | null | null | cs.FL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A binary relation over a free monoid is synchronous if it can be recognized
by a synchronous automaton that reads its two tapes simultaneously. We consider
the case where the free monoid is generated by a single element (which makes it
isomorphic to the additive monoid of integers) and where the binary relation
recognized is a strict order. Our main results are: given such an automaton it
is possible to determine whether or not is has infinite chains or antichains;
we characterize the orders that are linear; given two linear synchronous orders
we show how to determine whether or not they are equivalent.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2023 08:20:57 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 08:36:01 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Choffrut",
"Christian",
""
]
]
| new_dataset | 0.994723 |
2309.12340 | Laura Schelenz | Laura Schelenz, Ingrid Stapf, Jessica Heesen | Security for Children in the Digital Society -- A Rights-based and
Research Ethics Approach | This version included false figures and technical difficulties made
it difficult to replace the current version with another one that does not
include the false figures | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | In this position paper, we present initial perspectives and research results
from the project "SIKID - Security for Children in the Digital World." The
project is situated in a German context with a focus on European frameworks for
the development of Artificial Intelligence and the protection of children from
security risks arising in the course of algorithm-mediated online
communication. The project strengthens networks of relevant stakeholders,
explores regulatory measures and informs policy makers, and develops a
children's rights approach to questions of security for children online while
also developing a research ethics approach for conducting research with
children on online harms such as cybergrooming and sexual violence against
children.
| [
{
"version": "v1",
"created": "Thu, 24 Aug 2023 08:13:02 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 10:15:38 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Schelenz",
"Laura",
""
],
[
"Stapf",
"Ingrid",
""
],
[
"Heesen",
"Jessica",
""
]
]
| new_dataset | 0.994339 |
2309.13573 | Yuhao Liang | Yuhao Liang, Mohan Shi, Fan Yu, Yangze Li, Shiliang Zhang, Zhihao Du,
Qian Chen, Lei Xie, Yanmin Qian, Jian Wu, Zhuo Chen, Kong Aik Lee, Zhijie
Yan, Hui Bu | The second multi-channel multi-party meeting transcription challenge
(M2MeT) 2.0): A benchmark for speaker-attributed ASR | 8 pages, Accepted by ASRU2023 | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | With the success of the first Multi-channel Multi-party Meeting Transcription
challenge (M2MeT), the second M2MeT challenge (M2MeT 2.0) held in ASRU2023
particularly aims to tackle the complex task of \emph{speaker-attributed ASR
(SA-ASR)}, which directly addresses the practical and challenging problem of
``who spoke what at when" at typical meeting scenario. We particularly
established two sub-tracks. The fixed training condition sub-track, where the
training data is constrained to predetermined datasets, but participants can
use any open-source pre-trained model. The open training condition sub-track,
which allows for the use of all available data and models without limitation.
In addition, we release a new 10-hour test set for challenge ranking. This
paper provides an overview of the dataset, track settings, results, and
analysis of submitted systems, as a benchmark to show the current state of
speaker-attributed ASR.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2023 07:51:52 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 11:35:58 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Liang",
"Yuhao",
""
],
[
"Shi",
"Mohan",
""
],
[
"Yu",
"Fan",
""
],
[
"Li",
"Yangze",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Du",
"Zhihao",
""
],
[
"Chen",
"Qian",
""
],
[
"Xie",
"Lei",
""
],
[
"Qian",
"Yanmin",
""
],
[
"Wu",
"Jian",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Lee",
"Kong Aik",
""
],
[
"Yan",
"Zhijie",
""
],
[
"Bu",
"Hui",
""
]
]
| new_dataset | 0.992567 |
2309.15630 | Linxin Song | Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou,
Irene Li | NLPBench: Evaluating Large Language Models on Solving NLP Problems | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent developments in large language models (LLMs) have shown promise in
enhancing the capabilities of natural language processing (NLP). Despite these
successes, there remains a dearth of research dedicated to the NLP
problem-solving abilities of LLMs. To fill the gap in this area, we present a
unique benchmarking dataset, NLPBench, comprising 378 college-level NLP
questions spanning various NLP topics sourced from Yale University's prior
final exams. NLPBench includes questions with context, in which multiple
sub-questions share the same public information, and diverse question types,
including multiple choice, short answer, and math. Our evaluation, centered on
LLMs such as GPT-3.5/4, PaLM-2, and LLAMA-2, incorporates advanced prompting
strategies like the chain-of-thought (CoT) and tree-of-thought (ToT). Our study
reveals that the effectiveness of the advanced prompting strategies can be
inconsistent, occasionally damaging LLM performance, especially in smaller
models like the LLAMA-2 (13b). Furthermore, our manual assessment illuminated
specific shortcomings in LLMs' scientific problem-solving skills, with
weaknesses in logical decomposition and reasoning notably affecting results.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 13:02:06 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 19:49:27 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Song",
"Linxin",
""
],
[
"Zhang",
"Jieyu",
""
],
[
"Cheng",
"Lechao",
""
],
[
"Zhou",
"Pengyuan",
""
],
[
"Zhou",
"Tianyi",
""
],
[
"Li",
"Irene",
""
]
]
| new_dataset | 0.999487 |
2309.16163 | Juhyeon Kim | Juhyeon Kim, Wojciech Jarosz, Ioannis Gkioulekas, Adithya Pediredla | Doppler Time-of-Flight Rendering | 18 pages, 28 Figures, SIGGRAPH Asia 2023 | null | 10.1145/3618335 | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF
rendering for dynamic scenes, with applications in simulating D-ToF cameras.
D-ToF cameras use high-frequency modulation of illumination and exposure, and
measure the Doppler frequency shift to compute the radial velocity of dynamic
objects. The time-varying scene geometry and high-frequency modulation
functions used in such cameras make it challenging to accurately and
efficiently simulate their measurements with existing ToF rendering algorithms.
We overcome these challenges in a twofold manner: To achieve accuracy, we
derive path integral expressions for D-ToF measurements under global
illumination and form unbiased Monte Carlo estimates of these integrals. To
achieve efficiency, we develop a tailored time-path sampling technique that
combines antithetic time sampling with correlated path sampling. We show
experimentally that our sampling technique achieves up to two orders of
magnitude lower variance compared to naive time-path sampling. We provide an
open-source simulator that serves as a digital twin for D-ToF imaging systems,
allowing imaging researchers, for the first time, to investigate the impact of
modulation functions, material properties, and global illumination on D-ToF
imaging performance.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 04:30:51 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 02:59:28 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 16:13:34 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Kim",
"Juhyeon",
""
],
[
"Jarosz",
"Wojciech",
""
],
[
"Gkioulekas",
"Ioannis",
""
],
[
"Pediredla",
"Adithya",
""
]
]
| new_dataset | 0.993425 |
2310.01889 | Hao Liu | Hao Liu, Matei Zaharia, Pieter Abbeel | Ring Attention with Blockwise Transformers for Near-Infinite Context | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformers have emerged as the architecture of choice for many
state-of-the-art AI models, showcasing exceptional performance across a wide
range of AI applications. However, the memory demands imposed by Transformers
limit their ability to handle long sequences, thereby creating challenges for
tasks involving extended sequences or long-term dependencies. We present a
distinct approach, Ring Attention, which leverages blockwise computation of
self-attention to distribute long sequences across multiple devices while
concurrently overlapping the communication of key-value blocks with the
computation of blockwise attention. By processing longer input sequences while
maintaining memory efficiency, Ring Attention enables training and inference of
sequences that are device count times longer than those of prior
memory-efficient Transformers, effectively eliminating the memory constraints
imposed by individual devices. Extensive experiments on language modeling tasks
demonstrate the effectiveness of Ring Attention in allowing large sequence
input size and improving performance.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 08:44:50 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 06:25:34 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Liu",
"Hao",
""
],
[
"Zaharia",
"Matei",
""
],
[
"Abbeel",
"Pieter",
""
]
]
| new_dataset | 0.99967 |
2310.02357 | Sergey Berezin | Sergey Berezin, Reza Farahbakhsh, Noel Crespi | On the definition of toxicity in NLP | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The fundamental problem in toxicity detection task lies in the fact that the
toxicity is ill-defined. This causes us to rely on subjective and vague data in
models' training, which results in non-robust and non-accurate results: garbage
in - garbage out.
This work suggests a new, stress-level-based definition of toxicity designed
to be objective and context-aware. On par with it, we also describe possible
ways of applying this new definition to dataset creation and model training.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 18:32:34 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 12:36:19 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Berezin",
"Sergey",
""
],
[
"Farahbakhsh",
"Reza",
""
],
[
"Crespi",
"Noel",
""
]
]
| new_dataset | 0.998204 |
2310.02601 | Ruiyuan Gao | Ruiyuan Gao, Kai Chen, Enze Xie, Lanqing Hong, Zhenguo Li, Dit-Yan
Yeung, Qiang Xu | MagicDrive: Street View Generation with Diverse 3D Geometry Control | Project Page: https://flymin.github.io/magicdrive | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in diffusion models have significantly enhanced the data
synthesis with 2D control. Yet, precise 3D control in street view generation,
crucial for 3D perception tasks, remains elusive. Specifically, utilizing
Bird's-Eye View (BEV) as the primary condition often leads to challenges in
geometry control (e.g., height), affecting the representation of object shapes,
occlusion patterns, and road surface elevations, all of which are essential to
perception data synthesis, especially for 3D object detection tasks. In this
paper, we introduce MagicDrive, a novel street view generation framework
offering diverse 3D geometry controls, including camera poses, road maps, and
3D bounding boxes, together with textual descriptions, achieved through
tailored encoding strategies. Besides, our design incorporates a cross-view
attention module, ensuring consistency across multiple camera views. With
MagicDrive, we achieve high-fidelity street-view synthesis that captures
nuanced 3D geometry and various scene descriptions, enhancing tasks like BEV
segmentation and 3D object detection.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 06:14:06 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 07:07:38 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Gao",
"Ruiyuan",
""
],
[
"Chen",
"Kai",
""
],
[
"Xie",
"Enze",
""
],
[
"Hong",
"Lanqing",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Yeung",
"Dit-Yan",
""
],
[
"Xu",
"Qiang",
""
]
]
| new_dataset | 0.980478 |
2310.02676 | Yujin Tang | Yujin Tang, Jiaming Zhou, Xiang Pan, Zeying Gong, Junwei Liang | PostRainBench: A comprehensive benchmark and a new model for
precipitation forecasting | 16 pages, 3 figures | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate precipitation forecasting is a vital challenge of both scientific
and societal importance. Data-driven approaches have emerged as a widely used
solution for addressing this challenge. However, solely relying on data-driven
approaches has limitations in modeling the underlying physics, making accurate
predictions difficult. Coupling AI-based post-processing techniques with
traditional Numerical Weather Prediction (NWP) methods offers a more effective
solution for improving forecasting accuracy. Despite previous post-processing
efforts, accurately predicting heavy rainfall remains challenging due to the
imbalanced precipitation data across locations and complex relationships
between multiple meteorological variables. To address these limitations, we
introduce the PostRainBench, a comprehensive multi-variable NWP post-processing
benchmark consisting of three datasets for NWP post-processing-based
precipitation forecasting. We propose CAMT, a simple yet effective Channel
Attention Enhanced Multi-task Learning framework with a specially designed
weighted loss function. Its flexible design allows for easy plug-and-play
integration with various backbones. Extensive experimental results on the
proposed benchmark show that our method outperforms state-of-the-art methods by
6.3%, 4.7%, and 26.8% in rain CSI on the three datasets respectively. Most
notably, our model is the first deep learning-based method to outperform
traditional Numerical Weather Prediction (NWP) approaches in extreme
precipitation conditions. It shows improvements of 15.6%, 17.4%, and 31.8% over
NWP predictions in heavy rain CSI on respective datasets. These results
highlight the potential impact of our model in reducing the severe consequences
of extreme weather events.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 09:27:39 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 02:49:36 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Tang",
"Yujin",
""
],
[
"Zhou",
"Jiaming",
""
],
[
"Pan",
"Xiang",
""
],
[
"Gong",
"Zeying",
""
],
[
"Liang",
"Junwei",
""
]
]
| new_dataset | 0.981038 |
2310.02800 | Yichao Yuan | Yichao Yuan, Haojie Ye, Sanketh Vedula, Wynn Kaza, Nishil Talati | Everest: GPU-Accelerated System For Mining Temporal Motifs | null | null | null | null | cs.SE cs.DC | http://creativecommons.org/licenses/by/4.0/ | Temporal motif mining is the task of finding the occurrences of subgraph
patterns within a large input temporal graph that obey the specified structural
and temporal constraints. Despite its utility in several critical application
domains that demand high performance (e.g., detecting fraud in financial
transaction graphs), the performance of existing software is limited on
commercial hardware platforms, in that it runs for tens of hours. This paper
presents Everest - a system that efficiently maps the workload of mining
(supports both enumeration and counting) temporal motifs to the highly parallel
GPU architecture. In particular, using an input temporal graph and a more
expressive user-defined temporal motif query definition compared to prior
works, Everest generates an execution plan and runtime primitives that optimize
the workload execution by exploiting the high compute throughput of a GPU.
Everest generates motif-specific mining code to reduce long-latency memory
accesses and frequent thread divergence operations. Everest incorporates novel
low-cost runtime mechanisms to enable load balancing to improve GPU hardware
utilization. To support large graphs that do not fit on GPU memory, Everest
also supports multi-GPU execution by intelligently partitioning the edge list
that prevents inter-GPU communication. Everest hides the implementation
complexity of presented optimizations away from the targeted system user for
better usability. Our evaluation shows that, using proposed optimizations,
Everest improves the performance of a baseline GPU implementation by 19x, on
average.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 13:21:04 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 00:53:02 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Yuan",
"Yichao",
""
],
[
"Ye",
"Haojie",
""
],
[
"Vedula",
"Sanketh",
""
],
[
"Kaza",
"Wynn",
""
],
[
"Talati",
"Nishil",
""
]
]
| new_dataset | 0.987687 |
2310.03033 | EPTCS | Andreea Postovan, M\u{a}d\u{a}lina Era\c{s}cu | Benchmarking Local Robustness of High-Accuracy Binary Neural Networks
for Enhanced Traffic Sign Recognition | In Proceedings FROM 2023, arXiv:2309.12959 | EPTCS 389, 2023, pp. 120-130 | 10.4204/EPTCS.389.10 | null | cs.CV cs.AI cs.LG cs.LO | http://creativecommons.org/licenses/by/4.0/ | Traffic signs play a critical role in road safety and traffic management for
autonomous driving systems. Accurate traffic sign classification is essential
but challenging due to real-world complexities like adversarial examples and
occlusions. To address these issues, binary neural networks offer promise in
constructing classifiers suitable for resource-constrained devices.
In our previous work, we proposed high-accuracy BNN models for traffic sign
recognition, focusing on compact size for limited computation and energy
resources. To evaluate their local robustness, this paper introduces a set of
benchmark problems featuring layers that challenge state-of-the-art
verification tools. These layers include binarized convolutions, max pooling,
batch normalization, fully connected. The difficulty of the verification
problem is given by the high number of network parameters (905k - 1.7 M), of
the input dimension (2.7k-12k), and of the number of regions (43) as well by
the fact that the neural networks are not sparse.
The proposed BNN models and local robustness properties can be checked at
https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks/traffic_signs_recognition.
The results of the 4th International Verification of Neural Networks
Competition (VNN-COMP'23) revealed the fact that 4, out of 7, solvers can
handle many of our benchmarks randomly selected (minimum is 6, maximum is 36,
out of 45). Surprisingly, tools output also wrong results or missing
counterexample (ranging from 1 to 4). Currently, our focus lies in exploring
the possibility of achieving a greater count of solved instances by extending
the allotted time (previously set at 8 minutes). Furthermore, we are intrigued
by the reasons behind the erroneous outcomes provided by the tools for certain
benchmarks.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 01:17:14 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Postovan",
"Andreea",
""
],
[
"Eraşcu",
"Mădălina",
""
]
]
| new_dataset | 0.981816 |
2310.03044 | Krzysztof Borowski Mr | Krzysztof Borowski, Bartosz Bali\'s | scg-cli -- a Tool Supporting Software Comprehension via Extraction and
Analysis of Semantic Code Graph | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present scg-cli, a command line tool facilitating software comprehension.
The tool extracts semantic information about code structure and dependencies
from Java and Scala projects, and structures it as a Semantic Code Graph
(SCG), an information model underlying scg-cli. The SCG data, once written into
a portable, open protobuf-based format, can be used by the scg-cli command line
tool to obtain project metrics, find the most critical code entities, and
compute project partitionings. The results of this analysis and the SCG data
can be exported for further investigation by external tools such as Gephi
software (visualization) and, notably, as a Jupyter Notebook environment with
helper APIs to enable advanced analysis of the project using data analytics
methods. We explain functionalities of the scg-cli tool and demonstrate its
capabilities by showing an example analysis of an open-source Java project
commons-io.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 19:04:51 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Borowski",
"Krzysztof",
""
],
[
"Baliś",
"Bartosz",
""
]
]
| new_dataset | 0.999305 |
2310.03046 | Jieyu Zhang | Jieyu Zhang, Ranjay Krishna, Ahmed H. Awadallah, Chi Wang | EcoAssistant: Using LLM Assistant More Affordably and Accurately | null | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, users ask Large language models (LLMs) as assistants to answer queries
that require external knowledge; they ask about the weather in a specific city,
about stock prices, and even about where specific locations are within their
neighborhood. These queries require the LLM to produce code that invokes
external APIs to answer the user's question, yet LLMs rarely produce correct
code on the first try, requiring iterative code refinement upon execution
results. In addition, using LLM assistants to support high query volumes can be
expensive. In this work, we contribute a framework, EcoAssistant, that enables
LLMs to answer code-driven queries more affordably and accurately. EcoAssistant
contains three components. First, it allows the LLM assistants to converse with
an automatic code executor to iteratively refine code or to produce answers
based on the execution results. Second, we use a hierarchy of LLM assistants,
which attempts to answer the query with weaker, cheaper LLMs before backing off
to stronger, expensive ones. Third, we retrieve solutions from past successful
queries as in-context demonstrations to help subsequent queries. Empirically,
we show that EcoAssistant offers distinct advantages for affordability and
accuracy, surpassing GPT-4 by 10 points of success rate with less than 50% of
GPT-4's cost.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 22:16:13 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Zhang",
"Jieyu",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Awadallah",
"Ahmed H.",
""
],
[
"Wang",
"Chi",
""
]
]
| new_dataset | 0.998191 |
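The EcoAssistant record above describes a hierarchy of LLM assistants that tries weaker, cheaper models first and backs off to stronger, more expensive ones. A minimal sketch of such a cascade, assuming hypothetical `ask_model` and `is_successful` callables (both names are illustrative and not taken from the paper's released code):

```python
from typing import Callable, Optional

def cascade_query(query: str,
                  models: list[str],
                  ask_model: Callable[[str, str], str],
                  is_successful: Callable[[str], bool]) -> Optional[str]:
    """Try models from cheapest to most expensive; return the first accepted answer."""
    for model in models:  # e.g. ["cheap-model", "expensive-model"], ordered by cost
        answer = ask_model(model, query)
        if is_successful(answer):
            return answer  # stop early: no need to pay for a stronger model
    return None  # no model produced an acceptable answer
```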
2310.03052 | Sangjun Park | Sangjun Park and JinYeong Bak | Memoria: Hebbian Memory Architecture for Human-Like Sequential
Processing | Under review as a conference paper at ICLR 2024. 20 pages, 9 figures,
5 tables | null | null | null | cs.LG cs.AI cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Transformers have demonstrated their success in various domains and tasks.
However, Transformers struggle with long input sequences due to their limited
capacity. While one solution is to increase input length, endlessly stretching
the length is unrealistic. Furthermore, humans selectively remember and use
only relevant information from inputs, unlike Transformers which process all
raw data from start to end. We introduce Memoria, a general memory network that
applies Hebbian theory, a major theory explaining human memory formation, to
enhance long-term dependencies in neural networks. Memoria
stores and retrieves information called engram at multiple memory levels of
working memory, short-term memory, and long-term memory, using connection
weights that change according to Hebb's rule. Through experiments with popular
Transformer-based models like BERT and GPT, we present that Memoria
significantly improves the ability to consider long-term dependencies in
various tasks. Results show that Memoria outperformed existing methodologies in
sorting, language modeling, and long text classification.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 09:40:46 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Park",
"Sangjun",
""
],
[
"Bak",
"JinYeong",
""
]
]
| new_dataset | 0.999049 |
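The Memoria record above stores engrams whose connection weights change according to Hebb's rule. A minimal rate-based Hebbian weight update in NumPy, given only to illustrate the rule itself rather than Memoria's actual architecture:

```python
import numpy as np

def hebbian_update(weights: np.ndarray, pre: np.ndarray, post: np.ndarray,
                   lr: float = 0.01) -> np.ndarray:
    """Hebb's rule: strengthen connections between co-active units.

    weights has shape (n_post, n_pre); pre and post are activation vectors.
    """
    return weights + lr * np.outer(post, pre)

# Tiny example: one co-activation strengthens exactly one connection.
w = hebbian_update(np.zeros((2, 2)), pre=np.array([1.0, 0.0]), post=np.array([0.0, 1.0]))
```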
2310.03147 | Jovan Jeromela | Jovan Jeromela | Context-Based Tweet Engagement Prediction | Submitted as a Diploma Thesis at TU Wien on 2023-05-25. Advisor:
Peter Knees. 198 pages | null | 10.34726/hss.2023.79627 | null | cs.IR cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | Twitter is currently one of the biggest social media platforms. Its users may
share, read, and engage with short posts called tweets. For the ACM Recommender
Systems Conference 2020, Twitter published a dataset around 70 GB in size for
the annual RecSys Challenge. In 2020, the RecSys Challenge invited
participating teams to create models that would predict engagement likelihoods
for given user-tweet combinations. The submitted models predicting like, reply,
retweet, and quote engagements were evaluated based on two metrics: area under
the precision-recall curve (PRAUC) and relative cross-entropy (RCE).
In this diploma thesis, we used the RecSys 2020 Challenge dataset and
evaluation procedure to investigate how well context alone may be used to
predict tweet engagement likelihood. In doing so, we employed the Spark engine
on TU Wien's Little Big Data Cluster to create scalable data preprocessing,
feature engineering, feature selection, and machine learning pipelines. We
manually created just under 200 additional features to describe tweet context.
The results indicate that features describing users' prior engagement history
and the popularity of hashtags and links in the tweet were the most
informative. We also found that factors such as the prediction algorithm,
training dataset size, training dataset sampling method, and feature selection
significantly affect the results. After comparing the best results of our
context-only prediction models with content-only models and with models
developed by the Challenge winners, we identified that the context-based models
underperformed in terms of the RCE score. This work thus concludes by situating
this discrepancy and proposing potential improvements to our implementation,
which is shared in a public git repository.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 08:36:57 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Jeromela",
"Jovan",
""
]
]
| new_dataset | 0.997772 |
2310.03205 | Kim Youwang | Kim Youwang and Lee Hyun and Kim Sung-Bin and Suekyeong Nam and
Janghoon Ju and Tae-Hyun Oh | A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized
Optimization | 9 pages, 7 figures, and 3 tables for the main paper. 8 pages, 6
figures and 3 tables for the appendix | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose NeuFace, a 3D face mesh pseudo annotation method on videos via
neural re-parameterized optimization. Despite the huge progress in 3D face
reconstruction methods, generating reliable 3D face labels for in-the-wild
dynamic videos remains challenging. Using NeuFace optimization, we annotate the
per-view/-frame accurate and consistent face meshes on large-scale face videos,
called the NeuFace-dataset. We investigate how neural re-parameterization helps
to reconstruct image-aligned facial details on 3D meshes via gradient analysis.
By exploiting the naturalness and diversity of 3D faces in our dataset, we
demonstrate the usefulness of our dataset for 3D face-related tasks: improving
the reconstruction accuracy of an existing 3D face reconstruction model and
learning 3D facial motion prior. Code and datasets will be available at
https://neuface-dataset.github.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 23:24:22 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Youwang",
"Kim",
""
],
[
"Hyun",
"Lee",
""
],
[
"Sung-Bin",
"Kim",
""
],
[
"Nam",
"Suekyeong",
""
],
[
"Ju",
"Janghoon",
""
],
[
"Oh",
"Tae-Hyun",
""
]
]
| new_dataset | 0.996048 |
2310.03221 | Yijia Xiao | Yijia Xiao, Dylan Steinecke, Alexander Russell Pelletier, Yushi Bai,
Peipei Ping, Wei Wang | Know2BIO: A Comprehensive Dual-View Benchmark for Evolving Biomedical
Knowledge Graphs | 26 pages, 2 figures, 14 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs (KGs) have emerged as a powerful framework for representing
and integrating complex biomedical information. However, assembling KGs from
diverse sources remains a significant challenge in several aspects, including
entity alignment, scalability, and the need for continuous updates to keep pace
with scientific advancements. Moreover, the representative power of KGs is
often limited by the scarcity of multi-modal data integration. To overcome
these challenges, we propose Know2BIO, a general-purpose heterogeneous KG
benchmark for the biomedical domain. Know2BIO integrates data from 30 diverse
sources, capturing intricate relationships across 11 biomedical categories. It
currently consists of ~219,000 nodes and ~6,200,000 edges. Know2BIO is capable
of user-directed automated updating to reflect the latest knowledge in
biomedical science. Furthermore, Know2BIO is accompanied by multi-modal data:
node features including text descriptions, protein and compound sequences and
structures, enabling the utilization of emerging natural language processing
methods and multi-modal data integration strategies. We evaluate KG
representation models on Know2BIO, demonstrating its effectiveness as a
benchmark for KG representation learning in the biomedical field. Data and
source code of Know2BIO are available at
https://github.com/Yijia-Xiao/Know2BIO/.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 00:34:56 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Xiao",
"Yijia",
""
],
[
"Steinecke",
"Dylan",
""
],
[
"Pelletier",
"Alexander Russell",
""
],
[
"Bai",
"Yushi",
""
],
[
"Ping",
"Peipei",
""
],
[
"Wang",
"Wei",
""
]
]
| new_dataset | 0.974864 |
2310.03239 | Aravind Sivaramakrishnan | Aravind Sivaramakrishnan, Noah R. Carver, Sumanth Tangirala, Kostas E.
Bekris | Roadmaps with Gaps over Controllers: Achieving Efficiency in Planning
under Dynamics | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper aims to improve the computational efficiency of motion planning
for mobile robots with non-trivial dynamics by taking advantage of learned
controllers. It adopts a decoupled strategy, where a system-specific controller
is first trained offline in an empty environment to deal with the system's
dynamics. For an environment, the proposed approach constructs offline a data
structure, a "Roadmap with Gaps," to approximately learn how to solve planning
queries in this environment using the learned controller. Its nodes correspond
to local regions and edges correspond to applications of the learned control
policy that approximately connect these regions. Gaps arise due to the
controller not perfectly connecting pairs of individual states along edges.
Online, given a query, a tree sampling-based motion planner uses the roadmap so
that the tree's expansion is informed towards the goal region. The tree
expansion selects local subgoals given a wavefront on the roadmap that guides
towards the goal. When the controller cannot reach a subgoal region, the
planner resorts to random exploration to maintain probabilistic completeness
and asymptotic optimality. The experimental evaluation shows that the approach
significantly improves the computational efficiency of motion planning on
various benchmarks, including physics-based vehicular models on uneven and
varying friction terrains as well as a quadrotor under air pressure effects.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 01:21:33 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Sivaramakrishnan",
"Aravind",
""
],
[
"Carver",
"Noah R.",
""
],
[
"Tangirala",
"Sumanth",
""
],
[
"Bekris",
"Kostas E.",
""
]
]
| new_dataset | 0.962451 |
2310.03285 | Ahmed Abusnaina | Ahmed Abusnaina, Yizhen Wang, Sunpreet Arora, Ke Wang, Mihai
Christodorescu, David Mohaisen | Burning the Adversarial Bridges: Robust Windows Malware Detection
Against Binary-level Mutations | 12 pages | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Toward robust malware detection, we explore the attack surface of existing
malware detection systems. We conduct root-cause analyses of the practical
binary-level black-box adversarial malware examples. Additionally, we uncover
the sensitivity of volatile features within the detection engines and exhibit
their exploitability. Highlighting volatile information channels within the
software, we introduce three software pre-processing steps to eliminate the
attack surface, namely, padding removal, software stripping, and inter-section
information resetting. Further, to counter the emerging section injection
attacks, we propose a graph-based section-dependent information extraction
scheme for software representation. The proposed scheme leverages aggregated
information within various sections in the software to enable robust malware
detection and mitigate adversarial settings. Our experimental results show that
traditional malware detection models are ineffective against adversarial
threats. However, the attack surface can be largely reduced by eliminating the
volatile information. Therefore, we propose simple-yet-effective methods to
mitigate the impacts of binary manipulation attacks. Overall, our graph-based
malware detection scheme can accurately detect malware with an area under the
curve score of 88.32% and a score of 88.19% under a combination of binary
manipulation attacks, exhibiting the efficiency of our proposed scheme.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 03:28:02 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Abusnaina",
"Ahmed",
""
],
[
"Wang",
"Yizhen",
""
],
[
"Arora",
"Sunpreet",
""
],
[
"Wang",
"Ke",
""
],
[
"Christodorescu",
"Mihai",
""
],
[
"Mohaisen",
"David",
""
]
]
| new_dataset | 0.998672 |
2310.03302 | Jian Vora | Qian Huang, Jian Vora, Percy Liang, Jure Leskovec | Benchmarking Large Language Models As AI Research Agents | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Scientific experimentation involves an iterative process of creating
hypotheses, designing experiments, running experiments, and analyzing the
results. Can we build AI research agents to perform these long-horizon tasks?
To take a step towards building and evaluating research agents on such
open-ended decision-making tasks, we focus on the problem of machine learning
engineering: given a task description and a dataset, build a high-performing
model. In this paper, we propose MLAgentBench, a suite of ML tasks for
benchmarking AI research agents. Agents can perform actions like
reading/writing files, executing code, and inspecting outputs. With these
actions, agents could run experiments, analyze the results, and modify the code
of entire machine learning pipelines, such as data processing, architecture,
training processes, etc. The benchmark then automatically evaluates the agent's
performance objectively over various metrics related to performance and
efficiency. We also design an LLM-based research agent to automatically perform
experimentation loops in such an environment. Empirically, we find that a
GPT-4-based research agent can feasibly build compelling ML models over many
tasks in MLAgentBench, displaying highly interpretable plans and actions.
However, the success rates vary considerably; they span from almost 90\% on
well-established older datasets to as low as 10\% on recent Kaggle Challenges
-- unavailable during the LLM model's pretraining -- and even 0\% on newer
research challenges like BabyLM. Finally, we identify several key challenges
for LLM-based research agents such as long-term planning and hallucination. Our
code is released at https://github.com/snap-stanford/MLAgentBench.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 04:06:12 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Huang",
"Qian",
""
],
[
"Vora",
"Jian",
""
],
[
"Liang",
"Percy",
""
],
[
"Leskovec",
"Jure",
""
]
]
| new_dataset | 0.996636 |
2310.03374 | Fabio Stroppa | Fabio Stroppa | Design Optimizer for Planar Soft-Growing Robot Manipulators | 50 pages, 15 figures | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Soft-growing robots are innovative devices that feature plant-inspired growth
to navigate environments. Thanks to their embodied intelligence of adapting to
their surroundings and the latest innovation in actuation and manufacturing, it
is possible to employ them for specific manipulation tasks. The applications of
these devices include exploration of delicate/dangerous environments,
manipulation of items, or assistance in domestic environments.
This work presents a novel approach for design optimization of soft-growing
robots, which will be used prior to manufacturing to suggest to engineers -- or
robot designer enthusiasts -- the optimal dimensions of the robot to be built
for solving a specific task. I modeled the design process as a multi-objective
optimization problem, in which I optimize the kinematic chain of a soft
manipulator to reach targets and avoid unnecessary overuse of material and
resources. The method exploits the advantages of population-based optimization
algorithms, in particular evolutionary algorithms, to transform the problem
from multi-objective into a single-objective thanks to an efficient
mathematical formulation, the novel rank-partitioning algorithm, and obstacle
avoidance integrated within the optimizer operators.
I tested the proposed method on different tasks to assess its optimality,
which showed significant performance in solving the problem. Finally,
comparative experiments showed that the proposed method works better than the
one existing in the literature in terms of precision, resource consumption, and
run time.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 08:23:17 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Stroppa",
"Fabio",
""
]
]
| new_dataset | 0.998724 |
2310.03380 | Xingdong Ren | Xingdong Ren, Tianxing Zhang, Hanzhou Wu, Xinpeng Zhang, Yinggui Wang,
Guangling Sun | StegGuard: Fingerprinting Self-supervised Pre-trained Encoders via
Secrets Embeder and Extractor | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose StegGuard, a novel fingerprinting mechanism to
verify the ownership of the suspect pre-trained encoder using steganography. A
critical perspective in StegGuard is that the unique characteristic of the
transformation from an image to an embedding, conducted by the pre-trained
encoder, can be equivalently exposed by how an embeder embeds secrets into images
and how an extractor extracts the secrets from the encoder's embeddings with a
tolerable error after the secrets are subjected to the encoder's
transformation. While each independent encoder has a distinct transformation,
the piracy encoder has a similar transformation to the victim's. Based on these,
we learn a pair of secrets embeder and extractor as the fingerprint for the
victim encoder. We introduce a frequency-domain channel attention embedding
block into the embeder to adaptively embed secrets into suitable frequency
bands. During verification, if the secrets embedded into the query images can
be extracted with an acceptable error from the suspect encoder's embeddings,
the suspect encoder is determined to be a piracy encoder, otherwise independent. Extensive
experiments demonstrate that depending on a very limited number of query
images, StegGuard can reliably identify across varied independent encoders, and
is robust against model stealing related attacks including model extraction,
fine-tuning, pruning, embedding noising and shuffle.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 08:30:42 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Ren",
"Xingdong",
""
],
[
"Zhang",
"Tianxing",
""
],
[
"Wu",
"Hanzhou",
""
],
[
"Zhang",
"Xinpeng",
""
],
[
"Wang",
"Yinggui",
""
],
[
"Sun",
"Guangling",
""
]
]
| new_dataset | 0.994238 |
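The StegGuard record above flags a suspect encoder as piracy when the embedded secrets can be recovered from its embeddings with an acceptable error. A minimal sketch of that decision rule for bit-string secrets; the error threshold is an illustrative assumption, not a value from the paper:

```python
def is_piracy(extracted_bits: list, embedded_bits: list,
              max_error_rate: float = 0.1) -> bool:
    """Flag the suspect encoder as piracy if the secret survives with tolerable error."""
    assert embedded_bits and len(extracted_bits) == len(embedded_bits)
    errors = sum(a != b for a, b in zip(extracted_bits, embedded_bits))
    return errors / len(embedded_bits) <= max_error_rate
```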
2310.03388 | Paolo Rabino | Paolo Rabino, Antonio Alliegro, Francesco Cappio Borlino, Tatiana
Tommasi | OpenPatch: a 3D patchwork for Out-Of-Distribution detection | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Moving deep learning models from the laboratory setting to the open world
entails preparing them to handle unforeseen conditions. In several applications
the occurrence of novel classes during deployment poses a significant threat,
thus it is crucial to effectively detect them. Ideally, this skill should be
used when needed without requiring any further computational training effort at
every new task. Out-of-distribution detection has attracted significant
attention in the last years, however the majority of the studies deal with 2D
images, ignoring the inherent 3D nature of the real world and often confusing
domain and semantic novelty. In this work, we focus on the latter,
considering the objects' geometric structure captured by 3D point clouds
regardless of the specific domain. We advance the field by introducing
OpenPatch that builds on a large pre-trained model and simply extracts from its
intermediate features a set of patch representations that describe each known
class. For any new sample, we obtain a novelty score by evaluating whether it
can be recomposed mainly by patches of a single known class or rather via the
contribution of multiple classes. We present an extensive experimental
evaluation of our approach for the task of semantic novelty detection on
real-world point cloud samples when the reference known data are synthetic. We
demonstrate that OpenPatch excels in both the full and few-shot known sample
scenarios, showcasing its robustness across varying pre-training objectives and
network backbones. The inherent training-free nature of our method allows for
its immediate application to a wide array of real-world tasks, offering a
compelling advantage over approaches that need expensive retraining efforts.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 08:49:51 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Rabino",
"Paolo",
""
],
[
"Alliegro",
"Antonio",
""
],
[
"Borlino",
"Francesco Cappio",
""
],
[
"Tommasi",
"Tatiana",
""
]
]
| new_dataset | 0.998729 |
2310.03394 | Khaled Wahba | Khaled Wahba, Joaquim Ortiz-Haro, Marc Toussaint and Wolfgang H\"onig | Kinodynamic Motion Planning for a Team of Multirotors Transporting a
Cable-Suspended Payload in Cluttered Environments | Submitted to ICRA, 2024 | null | null | null | cs.RO cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a motion planner for cable-driven payload transportation using
multiple unmanned aerial vehicles (UAVs) in an environment cluttered with
obstacles. Our planner is kinodynamic, i.e., it considers the full dynamics
model of the transporting system including actuation constraints. Due to the
high dimensionality of the planning problem, we use a hierarchical approach
where we first solve the geometric motion planning using a sampling-based
method with a novel sampler, followed by constrained trajectory optimization
that considers the full dynamics of the system. Both planning stages consider
inter-robot and robot/obstacle collisions. We demonstrate in a
software-in-the-loop simulation that there is a significant benefit in
kinodynamic motion planning for such payload transport systems with respect to
payload tracking error and energy consumption compared to the standard methods
of planning for the payload alone. Notably, we observe a significantly higher
success rate in scenarios where the team formation changes are needed to move
through tight spaces.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 09:02:22 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Wahba",
"Khaled",
""
],
[
"Ortiz-Haro",
"Joaquim",
""
],
[
"Toussaint",
"Marc",
""
],
[
"Hönig",
"Wolfgang",
""
]
]
| new_dataset | 0.951684 |
2310.03402 | Zhenyu Bu | Zhenyu Bu, Kai-Ni Wang, Fuxing Zhao, Shengxiao Li, Guang-Quan Zhou | A Complementary Global and Local Knowledge Network for Ultrasound
denoising with Fine-grained Refinement | Submitted to ICASSP 2024 | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ultrasound imaging serves as an effective and non-invasive diagnostic tool
commonly employed in clinical examinations. However, the presence of speckle
noise in ultrasound images invariably degrades image quality, impeding the
performance of subsequent tasks, such as segmentation and classification.
Existing methods for speckle noise reduction frequently induce excessive image
smoothing or fail to preserve detailed information adequately. In this paper,
we propose a complementary global and local knowledge network for ultrasound
denoising with fine-grained refinement. Initially, the proposed architecture
employs the L-CSwinTransformer as encoder to capture global information,
incorporating CNN as decoder to fuse local features. We expand the resolution
of the feature at different stages to extract more global information compared
to the original CSwinTransformer. Subsequently, we integrate a Fine-grained
Refinement Block (FRB) within the skip-connection stage to further augment
features. We validate our model on two public datasets, HC18 and BUSI.
Experimental results demonstrate that our model can achieve competitive
performance in both quantitative metrics and visual performance. Our code will
be available at https://github.com/AAlkaid/USDenoising.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 09:12:34 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Bu",
"Zhenyu",
""
],
[
"Wang",
"Kai-Ni",
""
],
[
"Zhao",
"Fuxing",
""
],
[
"Li",
"Shengxiao",
""
],
[
"Zhou",
"Guang-Quan",
""
]
]
| new_dataset | 0.986105 |
2310.03443 | Hung-Shin Lee | Li-Wei Chen, Kai-Chen Cheng, Hung-Shin Lee | The North System for Formosa Speech Recognition Challenge 2023 | null | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | This report provides a concise overview of the proposed North system, which
aims to achieve automatic word/syllable recognition for Taiwanese Hakka
(Sixian). The report outlines three key components of the system: the
acquisition, composition, and utilization of the training data; the
architecture of the model; and the hardware specifications and operational
statistics. The demonstration of the system can be found at
https://asrvm.iis.sinica.edu.tw/hakka_sixian.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 10:29:18 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Chen",
"Li-Wei",
""
],
[
"Cheng",
"Kai-Chen",
""
],
[
"Lee",
"Hung-Shin",
""
]
]
| new_dataset | 0.997734 |
2310.03478 | Boshi An | Boshi An, Yiran Geng, Kai Chen, Xiaoqi Li, Qi Dou, Hao Dong | RGBManip: Monocular Image-based Robotic Manipulation through Active
Object Pose Estimation | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Robotic manipulation requires accurate perception of the environment, which
poses a significant challenge due to its inherent complexity and constantly
changing nature. In this context, RGB image and point-cloud observations are
two commonly used modalities in visual-based robotic manipulation, but each of
these modalities have their own limitations. Commercial point-cloud
observations often suffer from issues like sparse sampling and noisy output due
to the limits of the emission-reception imaging principle. On the other hand,
RGB images, while rich in texture information, lack essential depth and 3D
information crucial for robotic manipulation. To mitigate these challenges, we
propose an image-only robotic manipulation framework that leverages an
eye-on-hand monocular camera installed on the robot's parallel gripper. By
moving with the robot gripper, this camera gains the ability to actively
perceive object from multiple perspectives during the manipulation process.
This enables the estimation of 6D object poses, which can be utilized for
manipulation. While obtaining images from more and diverse viewpoints
typically improves pose estimation, it also increases the manipulation time. To
address this trade-off, we employ a reinforcement learning policy to
synchronize the manipulation strategy with active perception, achieving a
balance between 6D pose accuracy and manipulation efficiency. Our experimental
results in both simulated and real-world environments showcase the
state-of-the-art effectiveness of our approach, which, to the best of our
knowledge, is the first to achieve robust real-world robotic manipulation
through active pose estimation. We believe that our method will inspire further
research on real-world-oriented robotic manipulation.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 11:46:09 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"An",
"Boshi",
""
],
[
"Geng",
"Yiran",
""
],
[
"Chen",
"Kai",
""
],
[
"Li",
"Xiaoqi",
""
],
[
"Dou",
"Qi",
""
],
[
"Dong",
"Hao",
""
]
]
| new_dataset | 0.999028 |
2310.03491 | Washington Cunha | Washington Cunha, Celso Fran\c{c}a, Leonardo Rocha, Marcos Andr\'e
Gon\c{c}alves | TPDR: A Novel Two-Step Transformer-based Product and Class Description
Match and Retrieval Method | 10 pages, 8 figures, 5 tables | null | null | null | cs.IR cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | There is a niche of companies responsible for intermediating the purchase of
large batches of varied products for other companies, for which the main
challenge is to perform product description standardization, i.e., matching an
item described by a client with a product described in a catalog. The problem
is complex since the client's product description may be: (1) potentially
noisy; (2) short and uninformative (e.g., missing information about model and
size); and (3) cross-language. In this paper, we formalize this problem as a
ranking task: given an initial client product specification (IS) as the query, return the
most appropriate standardized descriptions (SD) as the response. To address it, we
propose TPDR, a two-step Transformer-based Product and Class Description
Retrieval method that is able to explore the semantic correspondence between IS
and SD, by exploiting attention mechanisms and contrastive learning. First,
TPDR employs the transformers as two encoders sharing the embedding vector
space: one for encoding the IS and another for the SD, in which corresponding
pairs (IS, SD) must be close in the vector space. Closeness is further enforced
by a contrastive learning mechanism leveraging a specialized loss function.
TPDR also exploits a (second) re-ranking step based on syntactic features that
are very important for the exact matching (model, dimension) of certain
products that may have been neglected by the transformers. To evaluate our
proposal, we consider 11 datasets from a real company, covering different
application contexts. Our solution was able to retrieve the correct
standardized product before the 5th ranking position in 71% of the cases and
its correct category in the first position in 80% of the situations. Moreover,
the effectiveness gains over purely syntactic or semantic baselines reach up to
3.7 times, solving cases that none of the approaches in isolation can do by
themselves.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 12:02:51 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Cunha",
"Washington",
""
],
[
"França",
"Celso",
""
],
[
"Rocha",
"Leonardo",
""
],
[
"Gonçalves",
"Marcos André",
""
]
]
| new_dataset | 0.988432 |
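The TPDR record above trains two encoders sharing one embedding space so that matching (IS, SD) pairs lie close together, and retrieval then ranks standardized descriptions by similarity to the query embedding. A minimal cosine-similarity ranking sketch, assuming the encoders are already trained and their outputs are available as NumPy vectors:

```python
import numpy as np

def rank_descriptions(query_vec: np.ndarray, catalog_vecs: np.ndarray,
                      top_k: int = 5) -> np.ndarray:
    """Return indices of the top_k catalog descriptions by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:top_k]
```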
2310.03505 | Alexander Mock | Alexander Mock, Martin Magnusson, Joachim Hertzberg | RadaRays: Real-time Simulation of Rotating FMCW Radar for Mobile
Robotics via Hardware-accelerated Ray Tracing | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RadaRays allows for the accurate modeling and simulation of rotating FMCW
radar sensors in complex environments, including the simulation of reflection,
refraction, and scattering of radar waves. Our software is able to handle large
numbers of objects and materials, making it suitable for use in a variety of
mobile robotics applications. We demonstrate the effectiveness of RadaRays
through a series of experiments and show that it can more accurately reproduce
the behavior of FMCW radar sensors in a variety of environments, compared to
the ray casting-based lidar-like simulations that are commonly used in
simulators for autonomous driving such as CARLA. Our experiments additionally
serve as a valuable reference point for researchers to evaluate their own radar
simulations. By using RadaRays, developers can significantly reduce the time
and cost associated with prototyping and testing FMCW radar-based algorithms.
We also provide a Gazebo plugin that makes our work accessible to the mobile
robotics community.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 12:35:09 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Mock",
"Alexander",
""
],
[
"Magnusson",
"Martin",
""
],
[
"Hertzberg",
"Joachim",
""
]
]
| new_dataset | 0.998163 |
2310.03563 | \'Agoston Csehi | \'Agoston Istv\'an Csehi, Csaba M\'at\'e J\'ozsa | BID-NeRF: RGB-D image pose estimation with inverted Neural Radiance
Fields | Accepted to Nerf4ADR workshop of ICCV23 conference | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We aim to improve the Inverted Neural Radiance Fields (iNeRF) algorithm which
defines the image pose estimation problem as a NeRF based iterative linear
optimization. NeRFs are novel neural space representation models that can
synthesize photorealistic novel views of real-world scenes or objects. Our
contributions are as follows: we extend the localization optimization objective
with a depth-based loss function, we introduce a multi-image based loss
function where a sequence of images with known relative poses are used without
increasing the computational complexity, we omit hierarchical sampling during
volumetric rendering, meaning only the coarse model is used for pose
estimation, and we show that by extending the sampling interval, convergence can
be achieved even for higher initial pose estimate errors. With the proposed
modifications the convergence speed is significantly improved, and the basin of
convergence is substantially extended.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 14:27:06 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Csehi",
"Ágoston István",
""
],
[
"Józsa",
"Csaba Máté",
""
]
]
| new_dataset | 0.997065 |
2310.03574 | Anders Bj{\ae}rt S{\o}rensen | Anders Bj{\ae}rt S{\o}rensen | A note on a gap in the proof of the minimum distance for Projective
Reed-Muller Codes | null | null | null | null | cs.IT math.AG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The note clarifies a gap in the proof of the minimum distance for Projective
Reed-Muller Codes. The gap was identified by S. Ghorpade and R. Ludhani in a
recent article. Here the original thoughts are explained and the gap closed.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 14:46:22 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Sørensen",
"Anders Bjært",
""
]
]
| new_dataset | 0.950299 |
2310.03583 | Christopher Scherb | Christopher Scherb and Adrian Hadayah and Luc Bryan Heitz | CyMed: A Framework for Testing Cybersecurity of Connected Medical
Devices | null | null | null | null | cs.CR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connected Medical Devices (CMDs) have a large impact on patients as they
allow them to lead a more normal life. Any malfunction could not only remove
the health benefits the CMDs provide, they could also cause further harm to the
patient. Due to this, there are many safety regulations which must be adhered
to prior to a CMD entering the market. However, while many detailed safety
regulations exist, there are a fundamental lack of cybersecurity frameworks
applicable to CMDs. While there are recent regulations which aim to enforce
cybersecurity practices, they are vague and do not contain the concrete steps
necessary to implement cybersecurity. This paper aims to fill that gap by
describing a framework, CyMed, to be used by vendors and end-users, which
contains concrete measures to improve the resilience of CMDs against cyber
attack. The CyMed framework is subsequently evaluated based on practical tests
as well as expert interviews.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 15:05:16 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Scherb",
"Christopher",
""
],
[
"Hadayah",
"Adrian",
""
],
[
"Heitz",
"Luc Bryan",
""
]
]
| new_dataset | 0.999123 |
2310.03602 | Chuan Fang | Chuan Fang, Xiaotao Hu, Kunming Luo, Ping Tan | Ctrl-Room: Controllable Text-to-3D Room Meshes Generation with Layout
Constraints | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Text-driven 3D indoor scene generation could be useful for gaming, film
industry, and AR/VR applications. However, existing methods cannot faithfully
capture the room layout, nor do they allow flexible editing of individual
objects in the room. To address these problems, we present Ctrl-Room, which is
able to generate convincing 3D rooms with designer-style layouts and
high-fidelity textures from just a text prompt. Moreover, Ctrl-Room enables
versatile interactive editing operations such as resizing or moving individual
furniture items. Our key insight is to separate the modeling of layouts and
appearance. To this end, our proposed method consists of two
stages, a `Layout Generation Stage' and an `Appearance Generation Stage'. The
`Layout Generation Stage' trains a text-conditional diffusion model to learn
the layout distribution with our holistic scene code parameterization. Next,
the `Appearance Generation Stage' employs a fine-tuned ControlNet to produce a
vivid panoramic image of the room guided by the 3D scene layout and text
prompt. In this way, we achieve a high-quality 3D room with convincing layouts
and lively textures. Benefiting from the scene code parameterization, we can
easily edit the generated room model through our mask-guided editing module,
without expensive editing-specific training. Extensive experiments on the
Structured3D dataset demonstrate that our method outperforms existing methods
in producing more reasonable, view-consistent, and editable 3D rooms from
natural language prompts.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 15:29:52 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Fang",
"Chuan",
""
],
[
"Hu",
"Xiaotao",
""
],
[
"Luo",
"Kunming",
""
],
[
"Tan",
"Ping",
""
]
]
| new_dataset | 0.995126 |
2310.03617 | Yihong Tang | Yihong Tang, Weipeng Deng, Shuyu Lei, Yuebing Liang, Zhenliang Ma,
Zhan Zhao | RouteKG: A knowledge graph-based framework for route prediction on road
networks | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Short-term route prediction on road networks allows us to anticipate the
future trajectories of road users, enabling a plethora of intelligent
transportation applications such as dynamic traffic control or personalized
route recommendation. Despite the recent advances in this area, existing
methods focus primarily on learning sequential patterns, neglecting the
inherent spatial structure in road networks that can affect human routing
decisions. To fill the gap, this paper introduces RouteKG, a novel Knowledge
Graph-based framework for route prediction. Specifically, we construct a
Knowledge Graph on the road network, thereby learning and leveraging spatial
relations, especially moving directions, which are crucial for human
navigation. Moreover, an n-ary tree-based algorithm is introduced to
efficiently generate top-K routes in a batch mode, enhancing scalability and
computational efficiency. To further optimize the prediction performance, a
rank refinement module is incorporated to fine-tune the candidate route
rankings. The model performance is evaluated using two real-world vehicle
trajectory datasets from two Chinese cities, Chengdu and Shanghai, under
various practical scenarios. The results demonstrate a significant improvement
in accuracy over baseline methods, with an average increase of 6.2%, 7.8%, and
6.1% in top-1, 5, and 10 routes predictions, respectively. We further validate
our model through a case study that utilizes the pretrained model as a
simulator for real-time traffic flow estimation at the link level. The proposed
RouteKG promises wide-ranging applications in vehicle navigation, traffic
management, and other intelligent transportation tasks.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 10:40:35 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Tang",
"Yihong",
""
],
[
"Deng",
"Weipeng",
""
],
[
"Lei",
"Shuyu",
""
],
[
"Liang",
"Yuebing",
""
],
[
"Ma",
"Zhenliang",
""
],
[
"Zhao",
"Zhan",
""
]
]
| new_dataset | 0.986767 |
2310.03635 | Jiayuan Mao | Jiayuan Mao, Xuelin Yang, Xikun Zhang, Noah D. Goodman, Jiajun Wu | CLEVRER-Humans: Describing Physical and Causal Events the Human Way | NeurIPS 2022 (Dataset and Benchmark Track). First two authors
contributed equally. Project page:
https://sites.google.com/stanford.edu/clevrer-humans/home | null | null | null | cs.AI cs.CL cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building machines that can reason about physical events and their causal
relationships is crucial for flexible interaction with the physical world.
However, most existing physical and causal reasoning benchmarks are exclusively
based on synthetically generated events and synthetic natural language
descriptions of causal relationships. This design brings up two issues. First,
there is a lack of diversity in both event types and natural language
descriptions; second, causal relationships based on manually-defined heuristics
are different from human judgments. To address both shortcomings, we present
the CLEVRER-Humans benchmark, a video reasoning dataset for causal judgment of
physical events with human labels. We employ two techniques to improve data
collection efficiency: first, a novel iterative event cloze task to elicit a
new representation of events in videos, which we term Causal Event Graphs
(CEGs); second, a data augmentation technique based on neural language
generative models. We convert the collected CEGs into questions and answers to
be consistent with prior work. Finally, we study a collection of baseline
approaches for CLEVRER-Humans question-answering, highlighting the great
challenges set forth by our benchmark.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 16:09:48 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Mao",
"Jiayuan",
""
],
[
"Yang",
"Xuelin",
""
],
[
"Zhang",
"Xikun",
""
],
[
"Goodman",
"Noah D.",
""
],
[
"Wu",
"Jiajun",
""
]
]
| new_dataset | 0.999656 |
2310.03665 | Sergio Salinas-Fern\'andez S. Salinas-Fern\'andez | Sergio Salinas-Fern\'andez and Nancy Hitschfeld-Kahler | POLYLLA: Polygonal/Polyhedral meshing algorithm based on terminal-edge
regions and terminal-face regions | Technical report | null | null | null | cs.CG | http://creativecommons.org/publicdomain/zero/1.0/ | Polylla is a polygonal mesh algorithm that generates meshes with arbitrarily
shaped polygons using the concept of terminal-edge regions. Until now, Polylla
has been limited to 2D meshes, but in this work, we extend Polylla to 3D
volumetric meshes. We present two versions of Polylla 3D. The first version
generates terminal-edge regions, converts them into polyhedra, and repairs
polyhedra that are joined by only an edge. This version differs from the
original Polylla algorithm in that it does not have the same phases as the 2D
version. In the second version, we define two new concepts: longest-face
propagation path and terminal-face regions. We use these concepts to create an
almost direct extension of the 2D Polylla mesh with the same three phases:
label phase, traversal phase, and repair phase.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 16:40:38 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Salinas-Fernández",
"Sergio",
""
],
[
"Hitschfeld-Kahler",
"Nancy",
""
]
]
| new_dataset | 0.999001 |
2310.03676 | Ajay Suresha Sathya | Ajay Suresha Sathya, Wilm Decre, Jan Swevers | PV-OSIMr: A Lowest Order Complexity Algorithm for Computing the Delassus
Matrix | 8 pages, submitted for review | null | null | null | cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | We present PV-OSIMr, an efficient algorithm for computing the Delassus matrix
(also known as the inverse operational space inertia matrix) for a kinematic
tree, with the lowest order computational complexity known in literature.
PV-OSIMr is derived by optimizing the Popov-Vereshchagin (PV) solver
computations using the compositionality of the force and motion propagators. It
has a computational complexity of O(n + m^2 ) compared to O(n + m^2d) of the
original PV-OSIM algorithm and O(n+md+m^2 ) of the extended force propagator
algorithm (EFPA), where n is the number of joints, m is the number of
constraints and d is the depth of the kinematic tree. Since Delassus matrix
computation requires constructing an m x m sized matrix and must consider all
the n joints at least once, the asymptotic computational complexity of PV-OSIMr
is optimal. We further benchmark our algorithm and find it to be often more
efficient than the PV-OSIM and EFPA in practice.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 16:52:59 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Sathya",
"Ajay Suresha",
""
],
[
"Decre",
"Wilm",
""
],
[
"Swevers",
"Jan",
""
]
]
| new_dataset | 0.987291 |
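The PV-OSIMr record above concerns efficient computation of the Delassus matrix, i.e., the inverse operational space inertia matrix J M^{-1} J^T for a constraint Jacobian J (m x n) and joint-space inertia matrix M (n x n). A naive dense NumPy reference computation, not the paper's recursive O(n + m^2) algorithm:

```python
import numpy as np

def delassus_dense(J: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Naive Delassus matrix J @ inv(M) @ J.T, with M symmetric positive definite."""
    X = np.linalg.solve(M, J.T)  # solve M X = J.T instead of forming inv(M)
    return J @ X                 # shape (m, m)
```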
2310.03700 | Evgeny Stemasov | Evgeny Stemasov, Jessica Hohn, Maurice Cordts, Anja Schikorr, Enrico
Rukzio, Jan Gugenheimer | BrickStARt: Enabling In-situ Design and Tangible Exploration for
Personal Fabrication using Mixed Reality | 23 pages, 13 figures, to appear in: Proceedings of the ACM on
Human-Computer Interaction, Vol. 7 Number ISS (PACM ISS), November 5-8, 2023,
Pittsburgh, PA, USA | Proceedings of the ACM on Human-Computer Interaction, Vol. 7, No.
ISS (PACM ISS), 2023, Article 429 | 10.1145/3626465 | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | 3D printers enable end-users to design and fabricate unique physical
artifacts but maintain an increased entry barrier and friction. End users must
design tangible artifacts through intangible media away from the main problem
space (ex-situ) and transfer spatial requirements to an abstract software
environment. To allow users to evaluate dimensions, balance, or fit early and
in-situ, we developed BrickStARt, a design tool using tangible construction
blocks paired with a mixed-reality headset. Users assemble a physical block
model at the envisioned location of the fabricated artifact. Designs can be
tested tangibly, refined, and digitally post-processed, remaining continuously
in-situ. We implemented BrickStARt using a Magic Leap headset and present
walkthroughs, highlighting novel interactions for 3D design. In a user study
(n=16), first-time 3D modelers succeeded more often using BrickStARt than
Tinkercad. Our results suggest that BrickStARt provides an accessible and
explorative process while facilitating quick, tangible design iterations that
allow users to detect physics-related issues (e.g., clearance) early on.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 17:18:13 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Stemasov",
"Evgeny",
""
],
[
"Hohn",
"Jessica",
""
],
[
"Cordts",
"Maurice",
""
],
[
"Schikorr",
"Anja",
""
],
[
"Rukzio",
"Enrico",
""
],
[
"Gugenheimer",
"Jan",
""
]
]
| new_dataset | 0.996659 |
2310.03704 | Zhiwen Fan | Zhiwen Fan, Panwang Pan, Peihao Wang, Yifan Jiang, Hanwen Jiang, Dejia
Xu, Zehao Zhu, Dilin Wang, Zhangyang Wang | Drag View: Generalizable Novel View Synthesis with Unposed Imagery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce DragView, a novel and interactive framework for generating novel
views of unseen scenes. DragView initializes the new view from a single source
image, and the rendering is supported by a sparse set of unposed multi-view
images, all seamlessly executed within a single feed-forward pass. Our approach
begins with users dragging a source view through a local relative coordinate
system. Pixel-aligned features are obtained by projecting the sampled 3D points
along the target ray onto the source view. We then incorporate a view-dependent
modulation layer to effectively handle occlusion during the projection.
Additionally, we broaden the epipolar attention mechanism to encompass all
source pixels, facilitating the aggregation of initialized coordinate-aligned
point features from other unposed views. Finally, we employ another transformer
to decode ray features into final pixel intensities. Crucially, our framework
does not rely on either 2D prior models or the explicit estimation of camera
poses. During testing, DragView showcases the capability to generalize to new
scenes unseen during training, also utilizing only unposed support images,
enabling the generation of photo-realistic new views characterized by flexible
camera trajectories. In our experiments, we conduct a comprehensive comparison
of the performance of DragView with recent scene representation networks
operating under pose-free conditions, as well as with generalizable NeRFs
subject to noisy test camera poses. DragView consistently demonstrates its
superior performance in view synthesis quality, while also being more
user-friendly. Project page: https://zhiwenfan.github.io/DragView/.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 17:24:36 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Fan",
"Zhiwen",
""
],
[
"Pan",
"Panwang",
""
],
[
"Wang",
"Peihao",
""
],
[
"Jiang",
"Yifan",
""
],
[
"Jiang",
"Hanwen",
""
],
[
"Xu",
"Dejia",
""
],
[
"Zhu",
"Zehao",
""
],
[
"Wang",
"Dilin",
""
],
[
"Wang",
"Zhangyang",
""
]
]
| new_dataset | 0.993806 |
2310.03731 | Aojun Zhou | Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi,
Renrui Zhang, Linqi Song, Mingjie Zhan, Hongsheng Li | MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical
Reasoning | The state-of-the-art open-source language models for mathematical
reasoning | null | null | null | cs.CL cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently released GPT-4 Code Interpreter has demonstrated remarkable
proficiency in solving challenging math problems, primarily attributed to its
ability to seamlessly reason with natural language, generate code, execute
code, and continue reasoning based on the execution output. In this paper, we
present a method to fine-tune open-source language models, enabling them to use
code for modeling and deriving math equations and, consequently, enhancing
their mathematical reasoning abilities. We propose a method of generating novel
and high-quality datasets with math problems and their code-based solutions,
referred to as MathCodeInstruct. Each solution interleaves natural language,
code, and execution results. We also introduce a customized supervised
fine-tuning and inference approach. This approach yields the MathCoder models,
a family of models capable of generating code-based solutions for solving
challenging math problems. Impressively, the MathCoder models achieve
state-of-the-art scores among open-source LLMs on the MATH (45.2%) and GSM8K
(83.9%) datasets, substantially outperforming other open-source alternatives.
Notably, the MathCoder model not only surpasses ChatGPT-3.5 and PaLM-2 on GSM8K
and MATH but also outperforms GPT-4 on the competition-level MATH dataset. The
dataset and models will be released at https://github.com/mathllm/MathCoder.
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2023 17:52:09 GMT"
}
]
| 2023-10-06T00:00:00 | [
[
"Wang",
"Ke",
""
],
[
"Ren",
"Houxing",
""
],
[
"Zhou",
"Aojun",
""
],
[
"Lu",
"Zimu",
""
],
[
"Luo",
"Sichun",
""
],
[
"Shi",
"Weikang",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Song",
"Linqi",
""
],
[
"Zhan",
"Mingjie",
""
],
[
"Li",
"Hongsheng",
""
]
]
| new_dataset | 0.998443 |
2106.06082 | Bradley Hauer | Bradley Hauer, Grzegorz Kondrak | One Sense per Translation | To be published at IJCNLP-AACL 2023: The 13th International Joint
Conference on Natural Language Processing and the 3rd Conference of the
Asia-Pacific Chapter of the Association for Computational Linguistics | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Word sense disambiguation (WSD) is the task of determining the sense of a
word in context. Translations have been used in WSD as a source of knowledge,
and even as a means of delimiting word senses. In this paper, we define three
theoretical properties of the relationship between senses and translations, and
argue that they constitute necessary conditions for using translations as sense
inventories. The key property of One Sense per Translation (OSPT) provides a
foundation for a translation-based WSD method. The results of an intrinsic
evaluation experiment indicate that our method achieves a precision of
approximately 93% compared to manual corpus annotations. Our extrinsic
evaluation experiments demonstrate WSD improvements of up to 4.6% F1-score on
difficult WSD datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Jun 2021 23:24:26 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 17:54:58 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Hauer",
"Bradley",
""
],
[
"Kondrak",
"Grzegorz",
""
]
]
| new_dataset | 0.977606 |
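The record above builds a translation-based WSD method on the One Sense per Translation (OSPT) property. A minimal sketch of the lookup that OSPT licenses, with an illustrative toy inventory that is not taken from the paper's data:

```python
from typing import Optional

def sense_by_translation(word: str, translation: str, inventory: dict) -> Optional[str]:
    """Under OSPT, a (word, translation) pair identifies at most one sense."""
    return inventory.get((word, translation))

# Toy inventory for illustration only.
inventory = {("bank", "banque"): "financial-institution", ("bank", "rive"): "river-bank"}
print(sense_by_translation("bank", "rive", inventory))  # river-bank
```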
2206.08903 | Taylor Bobrow | Taylor L. Bobrow, Mayank Golhar, Rohan Vijayan, Venkata S. Akshintala,
Juan R. Garcia, and Nicholas J. Durr | Colonoscopy 3D Video Dataset with Paired Depth from 2D-3D Registration | null | null | 10.1016/j.media.2023.102956 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Screening colonoscopy is an important clinical application for several 3D
computer vision techniques, including depth estimation, surface reconstruction,
and missing region detection. However, the development, evaluation, and
comparison of these techniques in real colonoscopy videos remain largely
qualitative due to the difficulty of acquiring ground truth data. In this work,
we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high
definition clinical colonoscope and high-fidelity colon models for benchmarking
computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D
registration technique to register optical video sequences with ground truth
rendered views of a known 3D model. The different modalities are registered by
transforming optical images to depth maps with a Generative Adversarial Network
and aligning edge features with an evolutionary optimizer. This registration
method achieves an average translation error of 0.321 millimeters and an
average rotation error of 0.159 degrees in simulation experiments where
error-free ground truth is available. The method also leverages video
information, improving registration accuracy by 55.6% for translation and 60.4%
for rotation compared to single frame registration. 22 short video sequences
were registered to generate 10,015 total frames with paired ground truth depth,
surface normals, optical flow, occlusion, six degree-of-freedom pose, coverage
maps, and 3D models. The dataset also includes screening videos acquired by a
gastroenterologist with paired ground truth pose and 3D surface models. The
dataset and registration source code are available at durr.jhu.edu/C3VD.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2022 17:23:50 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2022 15:58:44 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 17:51:32 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Bobrow",
"Taylor L.",
""
],
[
"Golhar",
"Mayank",
""
],
[
"Vijayan",
"Rohan",
""
],
[
"Akshintala",
"Venkata S.",
""
],
[
"Garcia",
"Juan R.",
""
],
[
"Durr",
"Nicholas J.",
""
]
]
| new_dataset | 0.999849 |
2209.01774 | Boyi Liu | Boyi Liu | ElasticROS: An Elastically Collaborative Robot Operation System for Fog
and Cloud Robotics | null | null | null | null | cs.RO | http://creativecommons.org/publicdomain/zero/1.0/ | Robots are integrating more huge-size models to enrich functions and improve
accuracy, which leads to out-of-control computing pressure. And thus robots are
encountering bottlenecks in computing power and battery capacity. Fog or cloud
robotics is one of the most anticipated theories to address these issues.
Approaches of cloud robotics have developed from system-level to node-level.
However, the present node-level systems are not flexible enough to dynamically
adapt to changing conditions. To address this, we present ElasticROS, which
evolves the present node-level systems into an algorithm-level one. ElasticROS
is based on ROS and ROS2. For fog and cloud robotics, it is the first robot
operating system with algorithm-level collaborative computing. ElasticROS
develops elastic collaborative computing to achieve adaptability to dynamic
conditions. The collaborative computing algorithm is the core and challenge of
ElasticROS. We abstract the problem and then propose an algorithm named
ElasAction to address. It is a dynamic action decision algorithm based on
online learning, which determines how robots and servers cooperate. The
algorithm dynamically updates parameters to adapt to changes of conditions
where the robot is currently in. It achieves elastically distributing of
computing tasks to robots and servers according to configurations. In addition,
we prove that the regret upper bound of the ElasAction is sublinear, which
guarantees its convergence and thus enables ElasticROS to be stable in its
elasticity. Finally, we conducted experiments with ElasticROS on common tasks
of robotics, including SLAM, grasping and human-robot dialogue, and then
measured its performances in latency, CPU usage and power consumption. The
algorithm-level ElasticROS performs significantly better than the present
node-level system.
| [
{
"version": "v1",
"created": "Mon, 5 Sep 2022 05:54:35 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 07:31:41 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Liu",
"Boyi",
""
]
]
| new_dataset | 0.973745 |
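The online action-decision idea described in the ElasticROS abstract above (choosing at run time whether a task executes on the robot or on a server, with a regret guarantee) can be illustrated with a generic bandit-style learner. The sketch below is not the paper's ElasAction algorithm; the two actions, the UCB1 rule, and the synthetic latency costs are all assumptions made purely for illustration.

```python
import math
import random

class OffloadDecider:
    """Illustrative online choice between local and server execution.

    A generic UCB1-style bandit over observed costs (e.g., latency or
    energy); it only sketches the idea of learning an offloading policy.
    """

    def __init__(self, actions=("local", "server")):
        self.actions = list(actions)
        self.counts = {a: 0 for a in self.actions}
        self.mean_cost = {a: 0.0 for a in self.actions}
        self.t = 0

    def choose(self):
        self.t += 1
        # Try every action once before applying the UCB rule.
        for a in self.actions:
            if self.counts[a] == 0:
                return a
        # Lower cost is better, so subtract the exploration bonus.
        def score(a):
            bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[a])
            return self.mean_cost[a] - bonus
        return min(self.actions, key=score)

    def update(self, action, observed_cost):
        self.counts[action] += 1
        n = self.counts[action]
        self.mean_cost[action] += (observed_cost - self.mean_cost[action]) / n


if __name__ == "__main__":
    random.seed(0)
    decider = OffloadDecider()
    # Hypothetical cost model: the server is faster on average but noisier.
    cost = {"local": lambda: random.gauss(1.0, 0.05),
            "server": lambda: random.gauss(0.6, 0.3)}
    for _ in range(500):
        a = decider.choose()
        decider.update(a, cost[a]())
    print(decider.counts, decider.mean_cost)
```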
2211.14118 | Cl\'ement Hardy | Cl\'ement Hardy, Yvain Qu\'eau, David Tschumperl\'e | MS-PS: A Multi-Scale Network for Photometric Stereo With a New
Comprehensive Training Dataset | null | null | 10.24132/CSRN.3301.23 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The photometric stereo (PS) problem consists in reconstructing the 3D-surface
of an object, thanks to a set of photographs taken under different lighting
directions. In this paper, we propose a multi-scale architecture for PS which,
combined with a new dataset, yields state-of-the-art results. Our proposed
architecture is flexible: it permits considering a variable number of images as
well as variable image size without loss of performance. In addition, we define
a set of constraints to allow the generation of a relevant synthetic dataset to
train convolutional neural networks for the PS problem. Our proposed dataset is
much larger than pre-existing ones, and contains many objects with challenging
materials having anisotropic reflectance (e.g. metals, glass). We show on
publicly available benchmarks that the combination of both these contributions
drastically improves the accuracy of the estimated normal field, in comparison
with previous state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2022 14:01:54 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 09:29:07 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Hardy",
"Clément",
""
],
[
"Quéau",
"Yvain",
""
],
[
"Tschumperlé",
"David",
""
]
]
| new_dataset | 0.998913 |
2303.06419 | Vihari Piratla Dr | Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller | Use Perturbations when Learning from Explanations | NeurIPS 2023 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning from explanations (MLX) is an approach to learning that uses
human-provided explanations of relevant or irrelevant features for each input
to ensure that model predictions are right for the right reasons. Existing MLX
approaches rely on local model interpretation methods and require strong model
smoothing to align model and human explanations, leading to sub-optimal
performance. We recast MLX as a robustness problem, where human explanations
specify a lower dimensional manifold from which perturbations can be drawn, and
show both theoretically and empirically how this approach alleviates the need
for strong model smoothing. We consider various approaches to achieving
robustness, leading to improved performance over prior MLX methods. Finally, we
show how to combine robustness with an earlier MLX method, yielding
state-of-the-art results on both synthetic and real-world benchmarks.
| [
{
"version": "v1",
"created": "Sat, 11 Mar 2023 14:57:52 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 16:24:04 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Heo",
"Juyeon",
""
],
[
"Piratla",
"Vihari",
""
],
[
"Wicker",
"Matthew",
""
],
[
"Weller",
"Adrian",
""
]
]
| new_dataset | 0.978692 |
2303.11916 | Sanghyuk Chun | Geonmo Gu and Sanghyuk Chun and Wonjae Kim and HeeJae Jun and Yoohoon
Kang and Sangdoo Yun | CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion | First two authors contributed equally; 26 pages, 4.1MB | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel diffusion-based model, CompoDiff, for solving
Composed Image Retrieval (CIR) with latent diffusion and presents a newly
created dataset, named SynthTriplets18M, of 18 million reference images,
conditions, and corresponding target image triplets to train the model.
CompoDiff and SynthTriplets18M tackle the shortcomings of previous CIR
approaches, such as poor generalizability due to the small dataset scale and
the limited types of conditions. CompoDiff not only achieves a new zero-shot
state-of-the-art on four CIR benchmarks, including FashionIQ, CIRR, CIRCO, and
GeneCIS, but also enables a more versatile and controllable CIR by accepting
various conditions, such as negative text and image mask conditions, and the
controllability over the importance of multiple queries or the trade-off
between inference speed and performance, which are unavailable with existing
CIR methods. The code and dataset are available at
https://github.com/navervision/CompoDiff
| [
{
"version": "v1",
"created": "Tue, 21 Mar 2023 15:06:35 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 15:54:30 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Gu",
"Geonmo",
""
],
[
"Chun",
"Sanghyuk",
""
],
[
"Kim",
"Wonjae",
""
],
[
"Jun",
"HeeJae",
""
],
[
"Kang",
"Yoohoon",
""
],
[
"Yun",
"Sangdoo",
""
]
]
| new_dataset | 0.998983 |
2305.14779 | Nikita Srivatsan | Nikita Srivatsan, Sofia Samaniego, Omar Florez, Taylor
Berg-Kirkpatrick | Alt-Text with Context: Improving Accessibility for Images on Twitter | null | null | null | null | cs.CV cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this work we present an approach for generating alternative text (or
alt-text) descriptions for images shared on social media, specifically Twitter.
More than just a special case of image captioning, alt-text is both more
literally descriptive and context-specific. Also critically, images posted to
Twitter are often accompanied by user-written text that despite not necessarily
describing the image may provide useful context that if properly leveraged can
be informative. We address this task with a multimodal model that conditions on
both textual information from the associated social media post as well as
visual signal from the image, and demonstrate that the utility of these two
information sources stacks. We put forward a new dataset of 371k images paired
with alt-text and tweets scraped from Twitter and evaluate on it across a
variety of automated metrics as well as human evaluation. We show that our
approach of conditioning on both tweet text and visual information
significantly outperforms prior work, by more than 2x on BLEU@4.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 06:35:26 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 23:01:05 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Srivatsan",
"Nikita",
""
],
[
"Samaniego",
"Sofia",
""
],
[
"Florez",
"Omar",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
]
]
| new_dataset | 0.97619 |
2305.16311 | Omri Avrahami | Omri Avrahami, Kfir Aberman, Ohad Fried, Daniel Cohen-Or, Dani
Lischinski | Break-A-Scene: Extracting Multiple Concepts from a Single Image | SIGGRAPH Asia 2023. Project page:
https://omriavrahami.com/break-a-scene/ Video:
https://www.youtube.com/watch?v=-9EA-BhizgM | null | null | null | cs.CV cs.GR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Text-to-image model personalization aims to introduce a user-provided concept
to the model, allowing its synthesis in diverse contexts. However, current
methods primarily focus on the case of learning a single concept from multiple
images with variations in backgrounds and poses, and struggle when adapted to a
different scenario. In this work, we introduce the task of textual scene
decomposition: given a single image of a scene that may contain several
concepts, we aim to extract a distinct text token for each concept, enabling
fine-grained control over the generated scenes. To this end, we propose
augmenting the input image with masks that indicate the presence of target
concepts. These masks can be provided by the user or generated automatically by
a pre-trained segmentation model. We then present a novel two-phase
customization process that optimizes a set of dedicated textual embeddings
(handles), as well as the model weights, striking a delicate balance between
accurately capturing the concepts and avoiding overfitting. We employ a masked
diffusion loss to enable handles to generate their assigned concepts,
complemented by a novel loss on cross-attention maps to prevent entanglement.
We also introduce union-sampling, a training strategy aimed to improve the
ability of combining multiple concepts in generated images. We use several
automatic metrics to quantitatively compare our method against several
baselines, and further affirm the results using a user study. Finally, we
showcase several applications of our method. Project page is available at:
https://omriavrahami.com/break-a-scene/
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 17:59:04 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 07:38:36 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Avrahami",
"Omri",
""
],
[
"Aberman",
"Kfir",
""
],
[
"Fried",
"Ohad",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Lischinski",
"Dani",
""
]
]
| new_dataset | 0.98457 |
2306.03091 | Tianyang Liu | Tianyang Liu, Canwen Xu, Julian McAuley | RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems | null | null | null | null | cs.CL cs.AI cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have greatly advanced code auto-completion
systems, with a potential for substantial productivity enhancements for
developers. However, current benchmarks mainly focus on single-file tasks,
leaving an assessment gap for more complex, real-world, multi-file programming
scenarios. To fill this gap, we introduce RepoBench, a new benchmark
specifically designed for evaluating repository-level code auto-completion
systems. RepoBench supports both Python and Java and consists of three
interconnected evaluation tasks: RepoBench-R (Retrieval), RepoBench-C (Code
Completion), and RepoBench-P (Pipeline). Each task respectively measures the
system's ability to retrieve the most relevant code snippets from other files
as cross-file context, predict the next line of code with cross-file and
in-file context, and handle complex tasks that require a combination of both
retrieval and next-line prediction. RepoBench aims to facilitate a more
complete comparison of performance and to encourage continuous improvement in
auto-completion systems. RepoBench is publicly available at
https://github.com/Leolty/repobench.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 17:59:41 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 01:13:49 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Liu",
"Tianyang",
""
],
[
"Xu",
"Canwen",
""
],
[
"McAuley",
"Julian",
""
]
]
| new_dataset | 0.999498 |
2306.03872 | Zhan Ling | Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland
Memisevic and Hao Su | Deductive Verification of Chain-of-Thought Reasoning | Published at NeurIPS 2023 | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) significantly benefit from Chain-of-Thought
(CoT) prompting in performing various reasoning tasks. While CoT allows models
to produce more comprehensive reasoning processes, its emphasis on intermediate
reasoning steps can inadvertently introduce hallucinations and accumulated
errors, thereby limiting models' ability to solve complex reasoning tasks.
Inspired by how humans engage in careful and meticulous deductive logical
reasoning processes to solve tasks, we seek to enable language models to
perform explicit and rigorous deductive reasoning, and also ensure the
trustworthiness of their reasoning process through self-verification. However,
directly verifying the validity of an entire deductive reasoning process is
challenging, even with advanced models like ChatGPT. In light of this, we
propose to decompose a reasoning verification process into a series of
step-by-step subprocesses, each only receiving their necessary context and
premises. To facilitate this procedure, we propose Natural Program, a natural
language-based deductive reasoning format. Our approach enables models to
generate precise reasoning steps where subsequent steps are more rigorously
grounded on prior steps. It also empowers language models to carry out
reasoning self-verification in a step-by-step manner. By integrating this
verification process into each deductive reasoning stage, we significantly
enhance the rigor and trustfulness of generated reasoning steps. Along this
process, we also improve the answer correctness on complex reasoning tasks.
Code will be released at https://github.com/lz1oceani/verify_cot.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 17:18:56 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 00:37:34 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 19:48:22 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Ling",
"Zhan",
""
],
[
"Fang",
"Yunhao",
""
],
[
"Li",
"Xuanlin",
""
],
[
"Huang",
"Zhiao",
""
],
[
"Lee",
"Mingu",
""
],
[
"Memisevic",
"Roland",
""
],
[
"Su",
"Hao",
""
]
]
| new_dataset | 0.989599 |
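The step-by-step verification described in the abstract above (checking each reasoning step in isolation, given only its declared premises) can be sketched as a small decomposition loop. This is only a minimal illustration under stated assumptions, not the paper's Natural Program format; the `verify_step` callable and the toy arithmetic checker are stand-ins for an LLM- or rule-based verifier.

```python
from typing import Callable, List, Tuple

Step = Tuple[List[str], str]   # (premises used by this step, conclusion)

def verify_chain(steps: List[Step],
                 verify_step: Callable[[List[str], str], bool]) -> bool:
    """Decompose chain-of-thought verification into per-step checks.

    Each step is checked in isolation, given only the premises it declares,
    mirroring the idea of step-by-step deductive self-verification.
    """
    return all(verify_step(premises, conclusion) for premises, conclusion in steps)

if __name__ == "__main__":
    # Toy checker: a conclusion like "x=5" follows if it equals the sum of the
    # numbers named in its premises, e.g. ["2", "3"].
    def toy_checker(premises: List[str], conclusion: str) -> bool:
        try:
            return int(conclusion.split("=")[1]) == sum(int(p) for p in premises)
        except (ValueError, IndexError):
            return False

    chain = [(["2", "3"], "x=5"), (["5", "4"], "y=9")]
    print(verify_chain(chain, toy_checker))   # True
```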
2307.00589 | Qiao Jin | Qiao Jin, Won Kim, Qingyu Chen, Donald C. Comeau, Lana Yeganova, W.
John Wilbur, Zhiyong Lu | MedCPT: Contrastive Pre-trained Transformers with Large-scale PubMed
Search Logs for Zero-shot Biomedical Information Retrieval | The MedCPT code and API are available at
https://github.com/ncbi/MedCPT | null | null | null | cs.IR cs.AI cs.CL q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Information retrieval (IR) is essential in biomedical knowledge acquisition
and clinical decision support. While recent progress has shown that language
model encoders perform better semantic retrieval, training such models requires
abundant query-article annotations that are difficult to obtain in biomedicine.
As a result, most biomedical IR systems only conduct lexical matching. In
response, we introduce MedCPT, a first-of-its-kind Contrastively Pre-trained
Transformer model for zero-shot semantic IR in biomedicine. To train MedCPT, we
collected an unprecedented scale of 255 million user click logs from PubMed.
With such data, we use contrastive learning to train a pair of
closely-integrated retriever and re-ranker. Experimental results show that
MedCPT sets new state-of-the-art performance on six biomedical IR tasks,
outperforming various baselines including much larger models such as
GPT-3-sized cpt-text-XL. In addition, MedCPT also generates better biomedical
article and sentence representations for semantic evaluations. As such, MedCPT
can be readily applied to various real-world biomedical IR tasks.
| [
{
"version": "v1",
"created": "Sun, 2 Jul 2023 15:11:59 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 01:43:15 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Jin",
"Qiao",
""
],
[
"Kim",
"Won",
""
],
[
"Chen",
"Qingyu",
""
],
[
"Comeau",
"Donald C.",
""
],
[
"Yeganova",
"Lana",
""
],
[
"Wilbur",
"W. John",
""
],
[
"Lu",
"Zhiyong",
""
]
]
| new_dataset | 0.977628 |
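The retriever half of a pipeline like the one described above is typically trained with an in-batch contrastive objective over (query, clicked-article) pairs. The sketch below shows a generic InfoNCE-style loss of that kind; the batch size, temperature, and random stand-in embeddings are assumptions for illustration, and nothing here is claimed to match MedCPT's exact training recipe.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              doc_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss over a batch of (query, clicked-article) pairs.

    query_emb, doc_emb: [batch, dim] embeddings from the two encoders.
    Every other document in the batch serves as an in-batch negative.
    """
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.t() / temperature          # [batch, batch] similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-in embeddings; real encoders would be transformer models.
    q = torch.randn(8, 768)
    d = torch.randn(8, 768)
    print(in_batch_contrastive_loss(q, d).item())
```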
2307.10387 | Rui Wang | Rui Wang, Sophokles Ktistakis, Siwei Zhang, Mirko Meboldt, and Quentin
Lohmeyer | POV-Surgery: A Dataset for Egocentric Hand and Tool Pose Estimation
During Surgical Activities | null | "Medical Image Computing and Computer Assisted Intervention --
MICCAI 2023" | 10.1007/978-3-031-43996-4_42 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The surgical usage of Mixed Reality (MR) has received growing attention in
areas such as surgical navigation systems, skill assessment, and robot-assisted
surgeries. For such applications, pose estimation for hand and surgical
instruments from an egocentric perspective is a fundamental task and has been
studied extensively in the computer vision field in recent years. However, the
development of this field has been impeded by a lack of datasets, especially in
the surgical field, where bloody gloves and reflective metallic tools make it
hard to obtain 3D pose annotations for hands and objects using conventional
methods. To address this issue, we propose POV-Surgery, a large-scale,
synthetic, egocentric dataset focusing on pose estimation for hands with
different surgical gloves and three orthopedic surgical instruments, namely
scalpel, friem, and diskplacer. Our dataset consists of 53 sequences and 88,329
frames, featuring high-resolution RGB-D video streams with activity
annotations, accurate 3D and 2D annotations for hand-object pose, and 2D
hand-object segmentation masks. We fine-tune the current SOTA methods on
POV-Surgery and further show the generalizability when applying to real-life
cases with surgical gloves and tools by extensive evaluations. The code and the
dataset are publicly available at batfacewayne.github.io/POV_Surgery_io/.
| [
{
"version": "v1",
"created": "Wed, 19 Jul 2023 18:00:32 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Wang",
"Rui",
""
],
[
"Ktistakis",
"Sophokles",
""
],
[
"Zhang",
"Siwei",
""
],
[
"Meboldt",
"Mirko",
""
],
[
"Lohmeyer",
"Quentin",
""
]
]
| new_dataset | 0.999691 |
2307.10928 | Seonghyeon Ye | Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone
Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo | FLASK: Fine-grained Language Model Evaluation based on Alignment Skill
Sets | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Evaluation of Large Language Models (LLMs) is challenging because
instruction-following necessitates alignment with human values and the required
set of skills varies depending on the instruction. However, previous studies
have mainly focused on coarse-grained evaluation (i.e. overall preference-based
evaluation), which limits interpretability since it does not consider the
nature of user instructions that require instance-wise skill composition. In
this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on
Alignment Skill Sets), a fine-grained evaluation protocol for both human-based
and model-based evaluation which decomposes coarse-level scoring to a skill
set-level scoring for each instruction. We experimentally observe that the
fine-graininess of evaluation is crucial for attaining a holistic view of model
performance and increasing the reliability of the evaluation. Using FLASK, we
compare multiple open-source and proprietary LLMs and observe a high
correlation between model-based and human-based evaluations. We publicly
release the evaluation data and code implementation at
https://github.com/kaistAI/FLASK.
| [
{
"version": "v1",
"created": "Thu, 20 Jul 2023 14:56:35 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 04:11:16 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Ye",
"Seonghyeon",
""
],
[
"Kim",
"Doyoung",
""
],
[
"Kim",
"Sungdong",
""
],
[
"Hwang",
"Hyeonbin",
""
],
[
"Kim",
"Seungone",
""
],
[
"Jo",
"Yongrae",
""
],
[
"Thorne",
"James",
""
],
[
"Kim",
"Juho",
""
],
[
"Seo",
"Minjoon",
""
]
]
| new_dataset | 0.999722 |
2308.07327 | Juho Kim | Juho Kim | PokerKit: A Comprehensive Python Library for Fine-Grained Multi-Variant
Poker Game Simulations | 6 pages, 1 figure, submission to IEEE Transactions on Games | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | PokerKit is an open-source Python library designed to overcome the
restrictions of existing poker game simulation and hand evaluation tools, which
typically support only a handful of poker variants and lack flexibility in game
state control. In contrast, PokerKit significantly expands this scope by
supporting an extensive array of poker variants and it provides a flexible
architecture for users to define their custom games. This paper details the
design and implementation of PokerKit, including its intuitive programmatic
API, multi-variant game support, and a unified hand evaluation suite across
different hand types. The flexibility of PokerKit allows for applications in
diverse areas, such as poker AI development, tool creation, and online poker
casino implementation. PokerKit's reliability has been established through
static type checking, extensive doctests, and unit tests, achieving 99% code
coverage. The introduction of PokerKit represents a significant contribution to
the field of computer poker, fostering future research and advanced AI
development for a wide variety of poker games. The source code is available at
https://github.com/uoftcprg/pokerkit
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 13:54:48 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Sep 2023 22:20:32 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 23:42:04 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Kim",
"Juho",
""
]
]
| new_dataset | 0.99955 |
2309.03006 | Jens-Rene Giesen | Sven Smolka (1), Jens-Rene Giesen (1), Pascal Winkler (1), Oussama
Draissi (1), Lucas Davi (1), Ghassan Karame (2), Klaus Pohl (1) ((1)
University of Duisburg-Essen, (2) Ruhr University Bochum) | Fuzz on the Beach: Fuzzing Solana Smart Contracts | This paper will appear at ACM CCS 2023 in November 2023 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solana has quickly emerged as a popular platform for building decentralized
applications (DApps), such as marketplaces for non-fungible tokens (NFTs). A
key reason for its success are Solana's low transaction fees and high
performance, which is achieved in part due to its stateless programming model.
Although the literature features extensive tooling support for smart contract
security, current solutions are largely tailored for the Ethereum Virtual
Machine. Unfortunately, the very stateless nature of Solana's execution
environment introduces novel attack patterns specific to Solana requiring a
rethinking for building vulnerability analysis methods.
In this paper, we address this gap and propose FuzzDelSol, the first
binary-only coverage-guided fuzzing architecture for Solana smart contracts.
FuzzDelSol faithfully models runtime specifics such as smart contract
interactions. Moreover, since source code is not available for the large
majority of Solana contracts, FuzzDelSol operates on the contract's binary
code. Hence, due to the lack of semantic information, we carefully extracted
low-level program and state information to develop a diverse set of bug oracles
covering all major bug classes in Solana. Our extensive evaluation on 6049
smart contracts shows that FuzzDelSol's bug oracles find bugs with a high
precision and recall. To the best of our knowledge, this is the largest
evaluation of the security landscape on the Solana mainnet.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 13:54:07 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 09:42:17 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Smolka",
"Sven",
""
],
[
"Giesen",
"Jens-Rene",
""
],
[
"Winkler",
"Pascal",
""
],
[
"Draissi",
"Oussama",
""
],
[
"Davi",
"Lucas",
""
],
[
"Karame",
"Ghassan",
""
],
[
"Pohl",
"Klaus",
""
]
]
| new_dataset | 0.999425 |
2309.05445 | Andreas Herten | Andreas Herten | Many Cores, Many Models: GPU Programming Model vs. Vendor Compatibility
Overview | To be published in the proceedings of the P3HPC workshop, hosted at
SC23 (International Conference for High Performance Computing, Networking,
Storage, and Analysis) | null | null | null | cs.DC cs.PL | http://creativecommons.org/licenses/by/4.0/ | In recent history, GPUs became a key driver of compute performance in HPC.
With the installation of the Frontier supercomputer, they became the enablers
of the Exascale era; further largest-scale installations are in progress
(Aurora, El Capitan, JUPITER). But the early-day dominance by NVIDIA and their
CUDA programming model has changed: The current HPC GPU landscape features
three vendors (AMD, Intel, NVIDIA), each with native and derived programming
models. The choices are ample, but not all models are supported on all
platforms, especially if support for Fortran is needed; in addition, some
restrictions might apply. It is hard for scientific programmers to navigate
this abundance of choices and limits.
This paper gives a guide by matching the GPU platforms with supported
programming models, presented in a concise table and further elaborated in
detailed comments. An assessment is made regarding the level of support of a
model on a platform.
| [
{
"version": "v1",
"created": "Mon, 11 Sep 2023 13:32:32 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 07:13:51 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Oct 2023 14:08:31 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Herten",
"Andreas",
""
]
]
| new_dataset | 0.950266 |
2309.10253 | Jiahao Yu | Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing | GPTFUZZER: Red Teaming Large Language Models with Auto-Generated
Jailbreak Prompts | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have recently experienced tremendous popularity
and are widely used from casual conversations to AI-driven programming.
However, despite their considerable success, LLMs are not entirely reliable and
can give detailed guidance on how to conduct harmful or illegal activities.
While safety measures can reduce the risk of such outputs, adversarial
jailbreak attacks can still exploit LLMs to produce harmful content. These
jailbreak templates are typically manually crafted, making large-scale testing
challenging.
In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing
framework inspired by the AFL fuzzing framework. Instead of manual engineering,
GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs.
At its core, GPTFuzz starts with human-written templates as initial seeds, then
mutates them to produce new templates. We detail three key components of
GPTFuzz: a seed selection strategy for balancing efficiency and variability,
mutate operators for creating semantically equivalent or similar sentences, and
a judgment model to assess the success of a jailbreak attack.
We evaluate GPTFuzz against various commercial and open-source LLMs,
including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our
results indicate that GPTFuzz consistently produces jailbreak templates with a
high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz
achieves over 90% attack success rates against ChatGPT and Llama-2 models, even
with suboptimal initial seed templates. We anticipate that GPTFuzz will be
instrumental for researchers and practitioners in examining LLM robustness and
will encourage further exploration into enhancing LLM safety.
| [
{
"version": "v1",
"created": "Tue, 19 Sep 2023 02:19:48 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 06:15:12 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Yu",
"Jiahao",
""
],
[
"Lin",
"Xingwei",
""
],
[
"Yu",
"Zheng",
""
],
[
"Xing",
"Xinyu",
""
]
]
| new_dataset | 0.985208 |
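The three components named in the abstract above (seed selection, mutation operators, and a judgment model) fit a generic AFL-style fuzzing loop. The skeleton below only illustrates that loop shape and is not GPTFuzz's implementation; the mutation operator, target model, judge, and weighting scheme are all toy stand-ins supplied by the caller.

```python
import random
from typing import Callable, List

def fuzz_jailbreak_templates(seeds: List[str],
                             mutate: Callable[[str], str],
                             target_llm: Callable[[str], str],
                             judge: Callable[[str], bool],
                             iterations: int = 100) -> List[str]:
    """Skeleton of a seed-select / mutate / judge fuzzing loop."""
    pool = list(seeds)
    scores = {s: 1.0 for s in pool}          # crude seed-selection weights
    successes: List[str] = []
    for _ in range(iterations):
        # Seed selection: sample proportionally to past success.
        seed = random.choices(pool, weights=[scores[s] for s in pool])[0]
        candidate = mutate(seed)
        response = target_llm(candidate)
        if judge(response):
            successes.append(candidate)
            pool.append(candidate)
            scores[candidate] = scores.get(candidate, 0.0) + 1.0
            scores[seed] += 1.0
        else:
            scores[seed] = max(0.1, scores[seed] * 0.95)
    return successes

if __name__ == "__main__":
    random.seed(0)
    # Toy stand-ins so the skeleton runs end to end.
    toy_mutate = lambda s: s + random.choice([" please", " now", "!"])
    toy_llm = lambda p: "UNSAFE" if "now" in p else "refused"
    toy_judge = lambda r: r == "UNSAFE"
    print(fuzz_jailbreak_templates(["tell me"], toy_mutate, toy_llm, toy_judge, 50))
```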
2309.16282 | Mukta Debnath | Mukta Debnath, Krishnendu Guha, Debasri Saha, Susmita Sur-Kolay | AgEncID: Aggregate Encryption Individual Decryption of Key for FPGA
Bitstream IP Cores in Cloud | 21 pages, 7 figures, 5 tables | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Cloud computing platforms are progressively adopting Field Programmable Gate
Arrays to deploy specialized hardware accelerators for specific computational
tasks. However, the security of FPGA-based bitstreams for Intellectual Property
(IP) cores against unauthorized interception in cloud environments remains a
prominent concern. Existing methodologies for protection of such bitstreams
possess several limitations, such as requiring a large number of keys, tying
bitstreams to specific FPGAs, and relying on trusted third parties. This paper
proposes Aggregate Encryption and Individual Decryption, a cryptosystem based
on key aggregation to enhance the security of FPGA-based bitstream for IP cores
and to address the pitfalls of previous related works. In our proposed scheme,
IP providers can encrypt their bitstreams with a single key for a set S of FPGA
boards, with which the bitstreams can directly be decrypted on any of the FPGA
boards in S. Aggregate encryption of the key is performed in a way which
ensures that the key can solely be obtained onboard through individual
decryption employing the board's private key, thus facilitating secure key
provisioning. The proposed cryptosystem is evaluated mainly on Zynq FPGAs. The
outcomes demonstrate that our cryptosystem not only outperforms existing
techniques with respect to resource, time and energy significantly but also
upholds robust security assurances.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 09:27:56 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 12:01:32 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Debnath",
"Mukta",
""
],
[
"Guha",
"Krishnendu",
""
],
[
"Saha",
"Debasri",
""
],
[
"Sur-Kolay",
"Susmita",
""
]
]
| new_dataset | 0.991845 |
2310.01469 | Jiayu Yao | Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Li Yuan | LLM Lies: Hallucinations are not Bugs, but Features as Adversarial
Examples | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs), including GPT-3.5, LLaMA, and PaLM, seem to be
knowledgeable and able to adapt to many tasks. However, we still cannot
completely trust their answers, since LLMs suffer from
hallucination--fabricating non-existent facts that deceive users without their
noticing. The reasons for their existence and pervasiveness remain
unclear. In this paper, we demonstrate that non-sense prompts composed of
random tokens can also elicit the LLMs to respond with hallucinations. This
phenomenon forces us to revisit that hallucination may be another view of
adversarial examples, and it shares similar features with conventional
adversarial examples as the basic feature of LLMs. Therefore, we formalize an
automatic hallucination triggering method as the hallucination attack in an
adversarial way. Finally, we explore the basic features of attacked adversarial
prompts and propose a simple yet effective defense strategy. Our code is
released on GitHub.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:01:56 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 17:53:49 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Yao",
"Jia-Yu",
""
],
[
"Ning",
"Kun-Peng",
""
],
[
"Liu",
"Zhen-Hui",
""
],
[
"Ning",
"Mu-Nan",
""
],
[
"Yuan",
"Li",
""
]
]
| new_dataset | 0.995721 |
2310.01557 | Yue Wu | Yue Wu, Xuan Tang, Tom M. Mitchell, Yuanzhi Li | SmartPlay : A Benchmark for LLMs as Intelligent Agents | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent large language models (LLMs) have demonstrated great potential toward
intelligent agents and next-gen automation, but there currently lacks a
systematic benchmark for evaluating LLMs' abilities as agents. We introduce
SmartPlay: both a challenging benchmark and a methodology for evaluating LLMs
as agents. SmartPlay consists of 6 different games, including
Rock-Paper-Scissors, Tower of Hanoi, and Minecraft. Each game features a unique
setting, providing up to 20 evaluation settings and infinite environment
variations. Each game in SmartPlay uniquely challenges a subset of 9 important
capabilities of an intelligent LLM agent, including reasoning with object
dependencies, planning ahead, spatial reasoning, learning from history, and
understanding randomness. The distinction between the sets of capabilities each
game tests allows us to analyze each capability separately. SmartPlay serves not
only as a rigorous testing ground for evaluating the overall performance of LLM
agents but also as a road-map for identifying gaps in current methodologies. We
release our benchmark at github.com/microsoft/SmartPlay
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 18:52:11 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 04:10:15 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Wu",
"Yue",
""
],
[
"Tang",
"Xuan",
""
],
[
"Mitchell",
"Tom M.",
""
],
[
"Li",
"Yuanzhi",
""
]
]
| new_dataset | 0.999504 |
2310.01852 | Bin Zhu | Bin Zhu, Bin Lin, Munan Ning, Yang Yan, Jiaxi Cui, HongFa Wang, Yatian
Pang, Wenhao Jiang, Junwu Zhang, Zongwei Li, Wancai Zhang, Zhifeng Li, Wei
Liu, and Li Yuan | LanguageBind: Extending Video-Language Pretraining to N-modality by
Language-based Semantic Alignment | Under review as a conference paper at ICLR 2024 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The video-language (VL) pretraining has achieved remarkable improvement in
multiple downstream tasks. However, the current VL pretraining framework is
hard to extend to multiple modalities (N modalities, N>=3) beyond vision and
language. We thus propose LanguageBind, taking the language as the bind across
different modalities because the language modality is well-explored and
contains rich semantics. Specifically, we freeze the language encoder acquired
by VL pretraining, then train encoders for other modalities with contrastive
learning. As a result, all modalities are mapped to a shared feature space,
implementing multi-modal semantic alignment. While LanguageBind ensures that we
can extend VL modalities to N modalities, we also need a high-quality dataset
with alignment data pairs centered on language. We thus propose VIDAL-10M, a
dataset with Video, Infrared, Depth, Audio and their corresponding Language. In
VIDAL-10M, all videos are from short video platforms with
complete semantics rather than truncated segments from long videos, and all the
video, depth, infrared, and audio modalities are aligned to their textual
descriptions. After pretraining on VIDAL-10M, we outperform ImageBind by 1.2%
R@1 on the MSR-VTT dataset with only 15% of the parameters in the zero-shot
video-text retrieval, validating the high quality of our dataset. Beyond this,
our LanguageBind has achieved great improvement in the zero-shot video, audio,
depth, and infrared understanding tasks. For instance, on the LLVIP and NYU-D
datasets, LanguageBind outperforms ImageBind-huge with 23.8% and 11.1% top-1
accuracy. Code address: https://github.com/PKU-YuanGroup/LanguageBind.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 07:33:27 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 03:48:19 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Zhu",
"Bin",
""
],
[
"Lin",
"Bin",
""
],
[
"Ning",
"Munan",
""
],
[
"Yan",
"Yang",
""
],
[
"Cui",
"Jiaxi",
""
],
[
"Wang",
"HongFa",
""
],
[
"Pang",
"Yatian",
""
],
[
"Jiang",
"Wenhao",
""
],
[
"Zhang",
"Junwu",
""
],
[
"Li",
"Zongwei",
""
],
[
"Zhang",
"Wancai",
""
],
[
"Li",
"Zhifeng",
""
],
[
"Liu",
"Wei",
""
],
[
"Yuan",
"Li",
""
]
]
| new_dataset | 0.998871 |
2310.02282 | Sarah Almeida Carneiro | Sarah Almeida Carneiro (LIGM, IFPEN), Giovanni Chierchia (LIGM), Jean
Charl\'ety (IFPEN), Aur\'elie Chataignon (IFPEN), Laurent Najman (LIGM) | SWMLP: Shared Weight Multilayer Perceptron for Car Trajectory Speed
Prediction using Road Topographical Features | null | International Conference on Models and Technologies for
Intelligent Transportation Systems, Jun 2023, Nice, France. pp.1-6 | 10.1109/MT-ITS56129.2023.10241394 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although traffic data are among the most massively collected data, they are often only
available for specific regions. One concern is that, although there are studies
that give good results for these data, the data from these regions may not be
sufficiently representative to describe all the traffic patterns in the rest of
the world. In quest of addressing this concern, we propose a speed prediction
method that is independent of large historical speed data. To predict a
vehicle's speed, we use the trajectory road topographical features to fit a
Shared Weight Multilayer Perceptron learning model. Our results show
significant improvement, both qualitative and quantitative, over standard
regression analysis. Moreover, the proposed framework sheds new light on the
way to design new approaches for traffic analysis.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 12:39:33 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Carneiro",
"Sarah Almeida",
"",
"LIGM, IFPEN"
],
[
"Chierchia",
"Giovanni",
"",
"LIGM"
],
[
"Charléty",
"Jean",
"",
"IFPEN"
],
[
"Chataignon",
"Aurélie",
"",
"IFPEN"
],
[
"Najman",
"Laurent",
"",
"LIGM"
]
]
| new_dataset | 0.996781 |
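The core idea named in the SWMLP abstract above (applying one multilayer perceptron with shared weights to the topographical features of every road segment along a trajectory) can be sketched in a few lines. This is a minimal sketch under assumed feature names and layer sizes, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SharedWeightMLP(nn.Module):
    """Minimal sketch of a shared-weight MLP for per-segment speed prediction.

    The same MLP (shared weights) is applied to the topographical feature
    vector of every road segment along a trajectory.
    """

    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, n_segments, n_features]; the MLP is broadcast over segments.
        return self.mlp(x).squeeze(-1)        # [batch, n_segments] predicted speeds

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SharedWeightMLP()
    # Hypothetical features per segment: slope, curvature, speed limit, lane count.
    batch = torch.randn(2, 10, 4)
    print(model(batch).shape)   # torch.Size([2, 10])
```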
2310.02324 | Mohd Omama | Mohammad Omama, Pranav Inani, Pranjal Paul, Sarat Chandra
Yellapragada, Krishna Murthy Jatavallabhula, Sandeep Chinchali, and Madhava
Krishna | ALT-Pilot: Autonomous navigation with Language augmented Topometric maps | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present an autonomous navigation system that operates without assuming HD
LiDAR maps of the environment. Our system, ALT-Pilot, relies only on publicly
available road network information and a sparse (and noisy) set of crowdsourced
language landmarks. With the help of onboard sensors and a language-augmented
topometric map, ALT-Pilot autonomously pilots the vehicle to any destination on
the road network. We achieve this by leveraging vision-language models
pre-trained on web-scale data to identify potential landmarks in a scene,
incorporating vision-language features into the recursive Bayesian state
estimation stack to generate global (route) plans, and a reactive trajectory
planner and controller operating in the vehicle frame. We implement and
evaluate ALT-Pilot in simulation and on a real, full-scale autonomous vehicle
and report improvements over state-of-the-art topometric navigation systems by
a factor of 3x on localization accuracy and 5x on goal reachability
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 18:01:27 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Omama",
"Mohammad",
""
],
[
"Inani",
"Pranav",
""
],
[
"Paul",
"Pranjal",
""
],
[
"Yellapragada",
"Sarat Chandra",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Chinchali",
"Sandeep",
""
],
[
"Krishna",
"Madhava",
""
]
]
| new_dataset | 0.999359 |
2310.02344 | EPTCS | Christopher R. Anderson, Louise A. Dennis | Autonomous Systems' Safety Cases for use in UK Nuclear Environments | In Proceedings AREA 2023, arXiv:2310.00333 | EPTCS 391, 2023, pp. 83-88 | 10.4204/EPTCS.391.10 | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | An overview of the process to develop a safety case for an autonomous robot
deployment on a nuclear site in the UK is described and a safety case for a
hypothetical robot incorporating AI is presented. This forms a first step
towards a deployment, showing what is possible now and what may be possible
with development of tools. It forms the basis for further discussion between
nuclear site licensees, the Office for Nuclear Regulation (ONR), industry and
academia.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 18:24:19 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Anderson",
"Christopher R.",
""
],
[
"Dennis",
"Louise A.",
""
]
]
| new_dataset | 0.989336 |
2310.02356 | EPTCS | Caroline Bonhomme (Safran Electronics and Defense, ONERA), Jean-Louis
Dufour (Safran Electronics and Defense) | ORTAC+ : A User Friendly Domain Specific Language for Multi-Agent
Mission Planning | In Proceedings AREA 2023, arXiv:2310.00333 | EPTCS 391, 2023, pp. 127-133 | 10.4204/EPTCS.391.14 | null | cs.PL cs.MA | http://creativecommons.org/licenses/by/4.0/ | A tactical military unit is a complex system composed of many agents such as
infantry, robots, or drones. Given a mission, an automated planner can find an
optimal plan. Therefore, the mission itself must be modeled. The problem is
that languages like PDDL are too low-level to be usable by the end-user: an
officer in the field. We present ORTAC+, a language and a planning tool
designed for this end-user. Its main objective is to allow a natural modeling
of the mission, to minimize the risk of bad modeling, and thus obtain reliable
plans. The language offers high-level constructs specifically designed to
describe tactical missions, but at the same time has clear semantics allowing a
translation to PDDL, to take advantage of state-of-the-art planners.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 18:31:31 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Bonhomme",
"Caroline",
"",
"Safran Electronics and Defense, ONERA"
],
[
"Dufour",
"Jean-Louis",
"",
"Safran Electronics and Defense"
]
]
| new_dataset | 0.998754 |
2310.02393 | Margus Veanes | Margus Veanes and Thomas Ball and Gabriel Ebner and Olli Saarikivi | Symbolic Automata: $\omega$-Regularity Modulo Theories | null | null | null | null | cs.FL cs.DS | http://creativecommons.org/licenses/by/4.0/ | Symbolic automata are finite state automata that support potentially infinite
alphabets, such as the set of rational numbers, generally applied to regular
expressions/languages over finite words. In symbolic automata (or automata
modulo theories), an alphabet is represented by an effective Boolean algebra,
supported by a decision procedure for satisfiability. Regular languages over
infinite words (so called $\omega$-regular languages) have a rich history
paralleling that of regular languages over finite words, with well known
applications to model checking via B\"uchi automata and temporal logics.
We generalize symbolic automata to support $\omega$-regular languages via
symbolic transition terms and symbolic derivatives, bringing together a variety
of classic automata and logics in a unified framework that provides all the
necessary ingredients to support symbolic model checking modulo $A$.
In particular, we define: (1) alternating B\"uchi automata modulo $A$, $ABW_A$,
as well as (non-alternating) non-deterministic B\"uchi automata modulo $A$,
$NBW_A$; (2) an alternation elimination algorithm that incrementally constructs
an $NBW_A$ from an $ABW_A$, and can also be used for constructing the product
of two $NBW_A$'s; (3) a definition of linear temporal logic (LTL) modulo $A$
that generalizes Vardi's construction of alternating B\"uchi automata from LTL,
using (2) to go from LTL modulo $A$ to $NBW_A$ via $ABW_A$.
Finally, we present a combination of LTL modulo $A$ with extended regular
expressions modulo $A$ that generalizes the Property Specification Language
(PSL). Our combination allows regex complement, that is not supported in PSL
but can be supported naturally by using symbolic transition terms.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 19:27:03 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Veanes",
"Margus",
""
],
[
"Ball",
"Thomas",
""
],
[
"Ebner",
"Gabriel",
""
],
[
"Saarikivi",
"Olli",
""
]
]
| new_dataset | 0.999517 |
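The first sentence of the abstract above, on symbolic automata over infinite alphabets, can be made concrete with a toy acceptor whose transitions carry predicates instead of letters. The sketch below illustrates only that guard-based transition idea over finite words of rationals; it does not implement the paper's Büchi constructions or symbolic derivatives, and the example automaton is invented for illustration.

```python
from typing import Callable, Dict, List, Tuple
from fractions import Fraction

# Each transition carries a predicate over the (possibly infinite) alphabet
# rather than a concrete letter, which is the essence of a symbolic automaton.
Predicate = Callable[[Fraction], bool]
Transitions = Dict[str, List[Tuple[Predicate, str]]]

def run(transitions: Transitions, start: str, accepting: set, word) -> bool:
    """Run a deterministic-by-order symbolic automaton on a finite word."""
    state = start
    for symbol in word:
        for guard, target in transitions.get(state, []):
            if guard(symbol):
                state = target
                break
        else:
            return False          # no enabled transition: reject
    return state in accepting

if __name__ == "__main__":
    # Accepts words of positive rationals in which some symbol exceeds 1.
    transitions: Transitions = {
        "q0": [(lambda x: x > 1, "q1"), (lambda x: 0 < x <= 1, "q0")],
        "q1": [(lambda x: x > 0, "q1")],
    }
    word = [Fraction(1, 2), Fraction(3, 2), Fraction(7, 3)]
    print(run(transitions, "q0", {"q1"}, word))   # True
```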
2310.02399 | Ashutosh Srivastava | Ashutosh Srivastava, Qing Zhao, Yi Lu, Ping Wang, Qi Qu, Zhu Ji, Yee
Sin Chan, Shivendra S. Panwar | Can 5G NR Sidelink communications support wireless augmented reality? | 7 pages, 7 figures, accepted for publication in 2023 IEEE Global
Communications Conference: Mobile and Wireless Networks (Globecom 2023 MWN),
Kuala Lumpur, Malaysia, Dec. 2023 | null | null | null | cs.NI eess.SP | http://creativecommons.org/licenses/by/4.0/ | Smart glasses that support augmented reality (AR) have the potential to
become the consumer's primary medium of connecting to the future internet. For
the best quality of user experience, AR glasses must have a small form factor
and long battery life, while satisfying the data rate and latency requirements
of AR applications. To extend the AR glasses' battery life, the computation and
processing involved in AR may be offloaded to a companion device, such as a
smartphone, through a wireless connection. Sidelink (SL), i.e., the D2D
communication interface of 5G NR, is a potential candidate for this wireless
link. In this paper, we use system-level simulations to analyze the feasibility
of NR SL for supporting AR. Our simulator incorporates the PHY layer structure
and MAC layer resource scheduling of 3GPP SL, standard 3GPP channel models, and
MCS configurations. Our results suggest that the current SL standard
specifications are insufficient for high-end AR use cases with heavy
interaction but can support simpler previews and file transfers. We further
propose two enhancements to SL resource allocation, which have the potential to
offer significant performance improvements for AR applications.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 19:48:47 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Srivastava",
"Ashutosh",
""
],
[
"Zhao",
"Qing",
""
],
[
"Lu",
"Yi",
""
],
[
"Wang",
"Ping",
""
],
[
"Qu",
"Qi",
""
],
[
"Ji",
"Zhu",
""
],
[
"Chan",
"Yee Sin",
""
],
[
"Panwar",
"Shivendra S.",
""
]
]
| new_dataset | 0.986416 |
2310.02409 | Guanghui Qin | Guanghui Qin, Corby Rosset, Ethan C. Chau, Nikhil Rao, Benjamin Van
Durme | Nugget 2D: Dynamic Contextual Compression for Scaling Decoder-only
Language Models | Preprint. 15 pages and 7 figures | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Standard Transformer-based language models (LMs) scale poorly to long
contexts. We propose a solution based on dynamic contextual compression, which
extends the Nugget approach of Qin & Van Durme (2023) from BERT-like frameworks
to decoder-only LMs. Our method models history as compressed "nuggets" which
are trained to allow for reconstruction, and it can be initialized with
off-the-shelf models such as LLaMA. We demonstrate through experiments in
language modeling, question answering, and summarization that Nugget2D retains
capabilities in these tasks, while drastically reducing the overhead during
decoding in terms of time and space. For example, in the experiments of
autoencoding, Nugget2D can shrink context at a 20x compression ratio with a
BLEU score of 98% for reconstruction, achieving nearly lossless encoding.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 20:07:06 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Qin",
"Guanghui",
""
],
[
"Rosset",
"Corby",
""
],
[
"Chau",
"Ethan C.",
""
],
[
"Rao",
"Nikhil",
""
],
[
"Van Durme",
"Benjamin",
""
]
]
| new_dataset | 0.996483 |
2310.02424 | Amanda Swearngin | Maryam Taeb, Amanda Swearngin, Eldon School, Ruijia Cheng, Yue Jiang,
Jeffrey Nichols | AXNav: Replaying Accessibility Tests from Natural Language | 22 pages, 7 figures | null | null | null | cs.HC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developers and quality assurance testers often rely on manual testing to test
accessibility features throughout the product lifecycle. Unfortunately, manual
testing can be tedious, often has an overwhelming scope, and can be difficult
to schedule amongst other development milestones. Recently, Large Language
Models (LLMs) have been used for a variety of tasks including automation of
UIs; however, to our knowledge no one has yet explored their use in controlling
assistive technologies for the purposes of supporting accessibility testing. In
this paper, we explore the requirements of a natural language based
accessibility testing workflow, starting with a formative study. From this we
build a system that takes as input a manual accessibility test (e.g., ``Search
for a show in VoiceOver'') and uses an LLM combined with pixel-based UI
Understanding models to execute the test and produce a chaptered, navigable
video. In each video, to help QA testers we apply heuristics to detect and flag
accessibility issues (e.g., Text size not increasing with Large Text enabled,
VoiceOver navigation loops). We evaluate this system through a 10 participant
user study with accessibility QA professionals who indicated that the tool
would be very useful in their current work and performed tests similarly to how
they would manually test the features. The study also reveals insights for
future work on using LLMs for accessibility testing.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 20:37:58 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Taeb",
"Maryam",
""
],
[
"Swearngin",
"Amanda",
""
],
[
"School",
"Eldon",
""
],
[
"Cheng",
"Ruijia",
""
],
[
"Jiang",
"Yue",
""
],
[
"Nichols",
"Jeffrey",
""
]
]
| new_dataset | 0.972651 |
2310.02426 | Samyadeep Basu | Samyadeep Basu, Mehrdad Saberi, Shweta Bhardwaj, Atoosa Malemir
Chegini, Daniela Massiceti, Maziar Sanjabi, Shell Xu Hu, Soheil Feizi | EditVal: Benchmarking Diffusion Based Text-Guided Image Editing Methods | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | A plethora of text-guided image editing methods have recently been developed
by leveraging the impressive capabilities of large-scale diffusion-based
generative models such as Imagen and Stable Diffusion. A standardized
evaluation protocol, however, does not exist to compare methods across
different types of fine-grained edits. To address this gap, we introduce
EditVal, a standardized benchmark for quantitatively evaluating text-guided
image editing methods. EditVal consists of a curated dataset of images, a set
of editable attributes for each image drawn from 13 possible edit types, and an
automated evaluation pipeline that uses pre-trained vision-language models to
assess the fidelity of generated images for each edit type. We use EditVal to
benchmark 8 cutting-edge diffusion-based editing methods including SINE, Imagic
and Instruct-Pix2Pix. We complement this with a large-scale human study where
we show that EditVal's automated evaluation pipeline is strongly correlated
with human-preferences for the edit types we considered. From both the human
study and automated evaluation, we find that: (i) Instruct-Pix2Pix, Null-Text
and SINE are the top-performing methods averaged across different edit types,
however {\it only} Instruct-Pix2Pix and Null-Text are able to preserve original
image properties; (ii) Most of the editing methods fail at edits involving
spatial operations (e.g., changing the position of an object). (iii) There is
no `winner' method which ranks the best individually across a range of
different edit types. We hope that our benchmark can pave the way to developing
more reliable text-guided image editing tools in the future. We will publicly
release EditVal, and all associated code and human-study templates to support
these research directions in https://deep-ml-research.github.io/editval/.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 20:46:10 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Basu",
"Samyadeep",
""
],
[
"Saberi",
"Mehrdad",
""
],
[
"Bhardwaj",
"Shweta",
""
],
[
"Chegini",
"Atoosa Malemir",
""
],
[
"Massiceti",
"Daniela",
""
],
[
"Sanjabi",
"Maziar",
""
],
[
"Hu",
"Shell Xu",
""
],
[
"Feizi",
"Soheil",
""
]
]
| new_dataset | 0.999711 |
2310.02492 | Yan Luo | Yan Luo, Yu Tian, Min Shi, Tobias Elze, Mengyu Wang | Eye Fairness: A Large-Scale 3D Imaging Dataset for Equitable Eye
Diseases Screening and Fair Identity Scaling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fairness or equity in machine learning is profoundly important for societal
well-being, but limited public datasets hinder its progress, especially in the
area of medicine. It is undeniable that fairness in medicine is one of the most
important areas for fairness learning's applications. Currently, no large-scale
public medical datasets with 3D imaging data for fairness learning are
available, while 3D imaging data in modern clinics are standard tests for
disease diagnosis. In addition, existing medical fairness datasets are actually
repurposed datasets, and therefore they typically have limited demographic
identity attributes with at most three identity attributes of age, gender, and
race for fairness modeling. To address this gap, we introduce our Eye Fairness
dataset with 30,000 subjects (Harvard-EF) covering three major eye diseases
including age-related macular degeneration, diabetic retinopathy, and glaucoma
affecting 380 million patients globally. Our Harvard-EF dataset includes both
2D fundus photos and 3D optical coherence tomography scans with six demographic
identity attributes including age, gender, race, ethnicity, preferred language,
and marital status. We also propose a fair identity scaling (FIS) approach
combining group and individual scaling together to improve model fairness. Our
FIS approach is compared with various state-of-the-art fairness learning
methods with superior performance in the racial, gender, and ethnicity fairness
tasks with 2D and 3D imaging data, which demonstrate the utilities of our
Harvard-EF dataset for fairness learning. To facilitate fairness comparisons
between different models, we propose performance-scaled disparity measures,
which can be used to compare model fairness accounting for overall performance
levels. The dataset and code are publicly accessible via
\url{https://ophai.hms.harvard.edu/datasets/harvard-ef30k}.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 23:44:35 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Luo",
"Yan",
""
],
[
"Tian",
"Yu",
""
],
[
"Shi",
"Min",
""
],
[
"Elze",
"Tobias",
""
],
[
"Wang",
"Mengyu",
""
]
]
| new_dataset | 0.99955 |
2310.02522 | Fan Yang | Fan Yang and Tao Wang | SCB-Dataset3: A Benchmark for Detecting Student Classroom Behavior | arXiv admin note: text overlap with arXiv:2304.02488,
arXiv:2306.03318 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of deep learning methods to automatically detect students' classroom
behavior is a promising approach for analyzing their class performance and
improving teaching effectiveness. However, the lack of publicly available
datasets on student behavior poses a challenge for researchers in this field.
To address this issue, we propose the Student Classroom Behavior dataset
(SCB-dataset3), which represents real-life scenarios. Our dataset comprises
5686 images with 45578 labels, focusing on six behaviors: hand-raising,
reading, writing, using a phone, bowing the head, and leaning over the table.
We evaluated the dataset using the YOLOv5, YOLOv7, and YOLOv8 algorithms,
achieving a mean average precision (map) of up to 80.3$\%$. We believe that our
dataset can serve as a robust foundation for future research in student
behavior detection and contribute to advancements in this field. Our
SCB-dataset3 is available for download at:
https://github.com/Whiffe/SCB-dataset
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 01:43:46 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Yang",
"Fan",
""
],
[
"Wang",
"Tao",
""
]
]
| new_dataset | 0.999765 |
2310.02530 | Tianyu Chen | Tianyu Chen, Lin Li, Taotao Qian, Zeyu Wang, Guangtai Liang, Ding Li,
Qianxiang Wang, Tao Xie | Identifying Vulnerability Patches by Comprehending Code Commits with
Comprehensive Change Contexts | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To help application developers apply vulnerability patches timely, security
researchers maintain vulnerability databases such as National Vulnerability
Database (NVD). By directly monitoring NVD with the name of each used library,
application developers can be aware of vulnerabilities and their patches. Given
that the monitoring results of vulnerability patches are unreliable due to
patch incompleteness of NVD, existing approaches employ deep-learning (DL)
models to identify additional vulnerability patches by determining whether a
code commit fixes a vulnerability. However, these approaches suffer from low
accuracy due to not considering code commits' comprehensive contexts such as
control/data-flow contexts or method-invocation contexts. To improve accuracy,
we design CompVPD, the first approach to identify vulnerability patches by
fine-tuning a large language model (LLM) named StarCoder to comprehend code
commits with comprehensive contexts. Considering that including comprehensive
contexts needs to balance the context size and the training costs of LLM,
CompVPD includes our two novel algorithms to generate comprehensive contexts
within the given window size by removing irrelevant components (i.e., files,
methods, and statements) and adaptively expanding each context. We empirically
compare CompVPD with four state-of-the-art/practice (SOTA) approaches that
identify vulnerability patches. The results show that CompVPD improves the AUC
score by 11% and the F1 score by 30% when compared with the best scores of the
SOTA approaches. Additionally, CompVPD provides high value to security practice
by helping identify 20 vulnerability patches and 18 fixes of high-risk bugs
from 2,500 recent code commits of five highly popular open-source projects.
| [ { "version": "v1", "created": "Wed, 4 Oct 2023 02:08:18 GMT" } ] | 2023-10-05T00:00:00 | [ ["Chen", "Tianyu", ""], ["Li", "Lin", ""], ["Qian", "Taotao", ""], ["Wang", "Zeyu", ""], ["Liang", "Guangtai", ""], ["Li", "Ding", ""], ["Wang", "Qianxiang", ""], ["Xie", "Tao", ""] ] | new_dataset | 0.994057 |
2310.02532 | Tara Sadjadpour | Tara Sadjadpour, Rares Ambrus, Jeannette Bohg | ShaSTA-Fuse: Camera-LiDAR Sensor Fusion to Model Shape and
Spatio-Temporal Affinities for 3D Multi-Object Tracking | 8 pages, 1 figure | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 3D multi-object tracking (MOT) is essential for an autonomous mobile agent to
safely navigate a scene. In order to maximize the perception capabilities of
the autonomous agent, we aim to develop a 3D MOT framework that fuses camera
and LiDAR sensor information. Building on our prior LiDAR-only work, ShaSTA,
which models shape and spatio-temporal affinities for 3D MOT, we propose a
novel camera-LiDAR fusion approach for learning affinities. At its core, this
work proposes a fusion technique that generates a rich sensory signal
incorporating information about depth and distant objects to enhance affinity
estimation for improved data association, track lifecycle management,
false-positive elimination, false-negative propagation, and track confidence
score refinement. Our main contributions include a novel fusion approach for
combining camera and LiDAR sensory signals to learn affinities, and a
first-of-its-kind multimodal sequential track confidence refinement technique
that fuses 2D and 3D detections. Additionally, we perform an ablative analysis
on each fusion step to demonstrate the added benefits of incorporating the
camera sensor, particularly for small, distant objects that tend to suffer from
the depth-sensing limits and sparsity of LiDAR sensors. In sum, our technique
achieves state-of-the-art performance on the nuScenes benchmark amongst
multimodal 3D MOT algorithms using CenterPoint detections.
| [ { "version": "v1", "created": "Wed, 4 Oct 2023 02:17:59 GMT" } ] | 2023-10-05T00:00:00 | [ ["Sadjadpour", "Tara", ""], ["Ambrus", "Rares", ""], ["Bohg", "Jeannette", ""] ] | new_dataset | 0.996185 |
2310.02638 | Fazhi He | Zhihao Zong, Fazhi He, Rubin Fan, Yuxin Liu | P2CADNet: An End-to-End Reconstruction Network for Parametric 3D CAD
Model from Point Clouds | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer Aided Design (CAD), especially the feature-based parametric CAD,
plays an important role in modern industry and society. However, the
reconstruction of a featured CAD model is more challenging than the
reconstruction of other CAD models. To this end, this paper proposes an
end-to-end network to reconstruct a featured CAD model from a point cloud
(P2CADNet). Initially, the proposed P2CADNet architecture combines a point
cloud feature extractor, a CAD sequence reconstructor and a parameter
optimizer. Subsequently, in order to reconstruct the featured CAD model in an
autoregressive way, the CAD sequence reconstructor applies two transformer
decoders, one with a target mask and the other without. Finally, to predict
parameters more precisely, we design a parameter optimizer with a
cross-attention mechanism to further refine the CAD feature parameters. We
evaluate P2CADNet on the public dataset, and the experimental results show that
P2CADNet has excellent reconstruction quality and accuracy. To the best of our
knowledge, P2CADNet is the first end-to-end network to reconstruct a featured CAD
model from a point cloud, and can be regarded as a baseline for future work.
Therefore, we open the source code at https://github.com/Blice0415/P2CADNet.
| [ { "version": "v1", "created": "Wed, 4 Oct 2023 08:00:05 GMT" } ] | 2023-10-05T00:00:00 | [ ["Zong", "Zhihao", ""], ["He", "Fazhi", ""], ["Fan", "Rubin", ""], ["Liu", "Yuxin", ""] ] | new_dataset | 0.998885 |
2310.02713 | Jong Chul Ye | Gyutaek Oh, Baekgyu Choi, Inkyung Jung, and Jong Chul Ye | scHyena: Foundation Model for Full-Length Single-Cell RNA-Seq Analysis
in Brain | 21 pages, 16 figures | null | null | null | cs.LG cs.AI q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Single-cell RNA sequencing (scRNA-seq) has made significant strides in
unraveling the intricate cellular diversity within complex tissues. This is
particularly critical in the brain, which presents a greater diversity of cell
types than other tissues, for gaining a deeper understanding of brain function
within various cellular contexts. However, analyzing scRNA-seq data remains a
challenge due to inherent measurement noise stemming from dropout events and
the limited utilization of extensive gene expression information. In this work,
we introduce scHyena, a foundation model designed to address these challenges
and enhance the accuracy of scRNA-seq analysis in the brain. Specifically,
inspired by the recent Hyena operator, we design a novel Transformer
architecture called single-cell Hyena (scHyena) that is equipped with a linear
adaptor layer, positional encoding via gene embedding, and a
bidirectional Hyena operator. This enables us to process full-length
scRNA-seq data without losing any information from the raw data. In particular,
our model learns generalizable features of cells and genes through pre-training
scHyena using the full length of scRNA-seq data. We demonstrate the superior
performance of scHyena compared to other benchmark methods in downstream tasks,
including cell type classification and scRNA-seq imputation.
| [ { "version": "v1", "created": "Wed, 4 Oct 2023 10:30:08 GMT" } ] | 2023-10-05T00:00:00 | [ ["Oh", "Gyutaek", ""], ["Choi", "Baekgyu", ""], ["Jung", "Inkyung", ""], ["Ye", "Jong Chul", ""] ] | new_dataset | 0.9996 |
- Downloads last month: 7
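
As a usage note, below is a minimal sketch of loading rows like the ones previewed above and filtering them by prediction and probability with the Hugging Face `datasets` library. The repository id in the snippet is a placeholder (the exact dataset path is not stated here), the `train` split name is assumed, and the column names (`id`, `title`, `prediction`, `probability`) follow the schema shown in the preview.

```python
# Minimal sketch, not the official loader: the repository id and split name
# below are placeholders; adjust them to the actual dataset location.
from datasets import load_dataset

# Load the arXiv-metadata rows with the "prediction"/"probability" columns.
ds = load_dataset("your-username/arxiv-new-dataset-predictions", split="train")

# Keep only rows flagged as describing a new dataset with high confidence
# (0.99 is an example threshold, not a value prescribed by the dataset card).
high_conf = ds.filter(
    lambda row: row["prediction"] == "new_dataset" and row["probability"] >= 0.99
)

# Print a few of the matching papers.
for row in high_conf.select(range(min(5, len(high_conf)))):
    print(row["id"], "-", row["title"])
```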