id (string, 9-10) | submitter (string, 2-52, nullable) | authors (string, 4-6.51k) | title (string, 4-246) | comments (string, 1-523, nullable) | journal-ref (string, 4-345, nullable) | doi (string, 11-120, nullable) | report-no (string, 2-243, nullable) | categories (string, 5-98) | license (string, 9 classes) | abstract (string, 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
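A minimal sketch of reading records that follow the schema above, assuming the table is available as a JSON Lines export; the file name `arxiv_predictions.jsonl` and the export format are assumptions, not part of the dataset.

```python
import json

def load_records(path):
    """Read one JSON object per line; each object is assumed to carry the
    columns listed in the header (id, submitter, ..., prediction, probability)."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

if __name__ == "__main__":
    # Hypothetical file name; adjust to wherever this table was exported.
    records = load_records("arxiv_predictions.jsonl")
    for rec in records[:3]:
        print(rec["id"], rec["prediction"], round(rec["probability"], 3), rec["title"])
```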
2310.02894 | Lingru Zhou | Lingru Zhou, Yiqi Gao, Manqing Zhang, Peng Wu, Peng Wang, and Yanning
Zhang | Human-centric Behavior Description in Videos: New Benchmark and Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of video surveillance, describing the behavior of each
individual within the video is becoming increasingly essential, especially in
complex scenarios with multiple individuals present. This is because describing
each individual's behavior provides more detailed situational analysis,
enabling accurate assessment and response to potential risks, ensuring the
safety and harmony of public places. Currently, video-level captioning datasets
cannot provide fine-grained descriptions for each individual's specific
behavior. However, mere descriptions at the video level fail to provide an
in-depth interpretation of individual behaviors, making it challenging to
accurately determine the specific identity of each individual. To address this
challenge, we construct a human-centric video surveillance captioning dataset,
which provides detailed descriptions of the dynamic behaviors of 7,820
individuals. Specifically, we have labeled several aspects of each person, such
as location, clothing, and interactions with other elements in the scene, and
these people are distributed across 1,012 videos. Based on this dataset, we can
link individuals to their respective behaviors, allowing for further analysis
of each person's behavior in surveillance videos. Besides the dataset, we
propose a novel video captioning approach that can describe individual behavior
in detail on a person-level basis, achieving state-of-the-art results. To
facilitate further research in this field, we intend to release our dataset and
code.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 15:31:02 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Zhou",
"Lingru",
""
],
[
"Gao",
"Yiqi",
""
],
[
"Zhang",
"Manqing",
""
],
[
"Wu",
"Peng",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yanning",
""
]
]
| new_dataset | 0.999382 |
2310.02943 | Evelina Bakhturina | Aleksandr Meister, Matvei Novikov, Nikolay Karpov, Evelina Bakhturina,
Vitaly Lavrukhin, Boris Ginsburg | LibriSpeech-PC: Benchmark for Evaluation of Punctuation and
Capitalization Capabilities of end-to-end ASR Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Traditional automatic speech recognition (ASR) models output lower-cased
words without punctuation marks, which reduces readability and necessitates a
subsequent text processing model to convert ASR transcripts into a proper
format. Simultaneously, the development of end-to-end ASR models capable of
predicting punctuation and capitalization presents several challenges,
primarily due to limited data availability and shortcomings in the existing
evaluation methods, such as inadequate assessment of punctuation prediction. In
this paper, we introduce a LibriSpeech-PC benchmark designed to assess the
punctuation and capitalization prediction capabilities of end-to-end ASR
models. The benchmark includes a LibriSpeech-PC dataset with restored
punctuation and capitalization, a novel evaluation metric called Punctuation
Error Rate (PER) that focuses on punctuation marks, and initial baseline
models. All code, data, and models are publicly available.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 16:23:37 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Meister",
"Aleksandr",
""
],
[
"Novikov",
"Matvei",
""
],
[
"Karpov",
"Nikolay",
""
],
[
"Bakhturina",
"Evelina",
""
],
[
"Lavrukhin",
"Vitaly",
""
],
[
"Ginsburg",
"Boris",
""
]
]
| new_dataset | 0.992659 |
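The LibriSpeech-PC abstract above introduces a Punctuation Error Rate (PER) that focuses on punctuation marks but does not spell out its formula. The sketch below is one plausible reading, an edit-distance-based error rate computed over punctuation tokens only; the benchmark's exact definition may differ, and the punctuation set is an assumption.

```python
import re

# Assumed punctuation inventory; the benchmark may use a different set.
PUNCT = re.compile(r"[.,!?;:]")

def levenshtein(a, b):
    """Standard edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def punctuation_error_rate(reference, hypothesis):
    """Edit distance over punctuation tokens only, normalized by the reference count."""
    ref = PUNCT.findall(reference)
    hyp = PUNCT.findall(hypothesis)
    return levenshtein(ref, hyp) / max(len(ref), 1)

print(punctuation_error_rate("Hello, world. How are you?", "Hello world. How are you."))
```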
2310.02960 | Yang Cao | Yang Cao, Yihan Zeng, Hang Xu, Dan Xu | CoDA: Collaborative Novel Box Discovery and Cross-modal Alignment for
Open-vocabulary 3D Object Detection | Accepted by NeurIPS 2023. Project Page:
https://yangcaoai.github.io/publications/CoDA.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary 3D Object Detection (OV-3DDet) aims to detect objects from an
arbitrary list of categories within a 3D scene, which remains seldom explored
in the literature. There are primarily two fundamental problems in OV-3DDet,
i.e., localizing and classifying novel objects. This paper aims at addressing
the two problems simultaneously via a unified framework, under the condition of
limited base categories. To localize novel 3D objects, we propose an effective
3D Novel Object Discovery strategy, which utilizes both the 3D box geometry
priors and 2D semantic open-vocabulary priors to generate pseudo box labels of
the novel objects. To classify novel object boxes, we further develop a
cross-modal alignment module based on discovered novel boxes, to align feature
spaces between 3D point cloud and image/text modalities. Specifically, the
alignment process contains a class-agnostic and a class-discriminative
alignment, incorporating not only the base objects with annotations but also
the increasingly discovered novel objects, resulting in an iteratively enhanced
alignment. The novel box discovery and cross-modal alignment are jointly learned
to collaboratively benefit each other. The novel object discovery can directly
impact the cross-modal alignment, while a better feature alignment can, in
turn, boost the localization capability, leading to a unified OV-3DDet
framework, named CoDA, for simultaneous novel object localization and
classification. Extensive experiments on two challenging datasets (i.e.,
SUN-RGBD and ScanNet) demonstrate the effectiveness of our method and also show
a significant mAP improvement of 80% over the best-performing alternative
method. Codes and pre-trained models are released on the project page.
| [
{
"version": "v1",
"created": "Wed, 4 Oct 2023 16:50:51 GMT"
}
]
| 2023-10-05T00:00:00 | [
[
"Cao",
"Yang",
""
],
[
"Zeng",
"Yihan",
""
],
[
"Xu",
"Hang",
""
],
[
"Xu",
"Dan",
""
]
]
| new_dataset | 0.979362 |
1906.04082 | Maria Ponomareva | Ekaterina Chernyak and Maria Ponomareva and Kirill Milintsevich | Char-RNN for Word Stress Detection in East Slavic Languages | Proceedings of the Sixth Workshop on NLP for Similar Languages,
Varieties and Dialects at NAACL-2019 | 2019, In Proceedings of the Sixth Workshop on NLP for Similar
Languages, Varieties and Dialects, pages 35-41, Ann Arbor,
Michigan, Association for Computational Linguistics | 10.18653/v1/W19-1404 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore how well a sequence labeling approach, namely, recurrent neural
network, is suited for the task of resource-poor, POS-tagging-free word
stress detection in the Russian, Ukrainian, and Belarusian languages. We present new
datasets, annotated with word stress, for the three languages, compare
several RNN models trained on them, and explore possible applications
of transfer learning for the task. We show that it is possible to train a
model in a cross-lingual setting and that using additional languages improves
the quality of the results.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2019 15:53:20 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Chernyak",
"Ekaterina",
""
],
[
"Ponomareva",
"Maria",
""
],
[
"Milintsevich",
"Kirill",
""
]
]
| new_dataset | 0.998874 |
2003.04862 | Kanata Suzuki | Kanata Suzuki, Hiroki Mori, Tetsuya Ogata | Compensation for undefined behaviors during robot task execution by
switching controllers depending on embedded dynamics in RNN | To appear in IEEE Robotics and Automation Letters (RA-L) and IEEE
International Conference on Robotics and Automation (ICRA 2021) | null | 10.1109/LRA.2021.3063702 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic applications require both correct task performance and compensation
for undefined behaviors. Although deep learning is a promising approach to
perform complex tasks, the response to undefined behaviors that are not
reflected in the training dataset remains challenging. In a human-robot
collaborative task, the robot may adopt an unexpected posture due to collisions
and other unexpected events. Therefore, robots should be able to recover from
disturbances for completing the execution of the intended task. We propose a
compensation method for undefined behaviors by switching between two
controllers. Specifically, the proposed method switches between learning-based
and model-based controllers depending on the internal representation of a
recurrent neural network that learns task dynamics. We applied the proposed
method to a pick-and-place task and evaluated the compensation for undefined
behaviors. Experimental results from simulations and on a real robot
demonstrate the effectiveness and high performance of the proposed method.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2020 17:13:15 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Mar 2021 23:36:33 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Suzuki",
"Kanata",
""
],
[
"Mori",
"Hiroki",
""
],
[
"Ogata",
"Tetsuya",
""
]
]
| new_dataset | 0.998068 |
2010.02605 | Ekaterina Artemova | Taisia Glushkova and Alexey Machnev and Alena Fenogenova and Tatiana
Shavrina and Ekaterina Artemova and Dmitry I. Ignatov | DaNetQA: a yes/no Question Answering Dataset for the Russian Language | Analysis of Images, Social Networks and Texts - 9 th International
Conference, AIST 2020, Skolkovo, Russia, October 15-16, 2020, Revised
Selected Papers. Lecture Notes in Computer Science
(https://dblp.org/db/series/lncs/index.html), Springer 2020 | null | 10.1007/978-3-030-72610-2_4 | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DaNetQA, a new question-answering corpus, follows the (Clark et al., 2019)
design: it comprises natural yes/no questions. Each question is paired with a
paragraph from Wikipedia and an answer, derived from the paragraph. The task is
to take both the question and a paragraph as input and come up with a yes/no
answer, i.e. to produce a binary output. In this paper, we present a
reproducible approach to DaNetQA creation and investigate transfer learning
methods for task and language transferring. For task transferring we leverage
three similar sentence modelling tasks: 1) a corpus of paraphrases,
Paraphraser, 2) an NLI task, for which we use the Russian part of XNLI, 3)
another question answering task, SberQUAD. For language transferring we use
English to Russian translation together with multilingual language fine-tuning.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2020 10:30:48 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Oct 2020 10:36:06 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Glushkova",
"Taisia",
""
],
[
"Machnev",
"Alexey",
""
],
[
"Fenogenova",
"Alena",
""
],
[
"Shavrina",
"Tatiana",
""
],
[
"Artemova",
"Ekaterina",
""
],
[
"Ignatov",
"Dmitry I.",
""
]
]
| new_dataset | 0.999055 |
2010.15925 | Ekaterina Artemova | Tatiana Shavrina and Alena Fenogenova and Anton Emelyanov and Denis
Shevelev and Ekaterina Artemova and Valentin Malykh and Vladislav Mikhailov
and Maria Tikhonova and Andrey Chertok and Andrey Evlampiev | RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | to appear in EMNLP 2020 | null | 10.18653/v1/2020.emnlp-main.381 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce an advanced Russian general language
understanding evaluation benchmark -- RussianGLUE. Recent advances in the field
of universal language models and transformers require the development of a
methodology for their broad diagnostics and testing for general intellectual
skills - detection of natural language inference, commonsense reasoning,
ability to perform simple logical operations regardless of text subject or
lexicon. For the first time, a benchmark of nine tasks, collected and organized
analogically to the SuperGLUE methodology, was developed from scratch for the
Russian language. We provide baselines, human level evaluation, an open-source
framework for evaluating models
(https://github.com/RussianNLP/RussianSuperGLUE), and an overall leaderboard of
transformer models for the Russian language. Besides, we present the first
results of comparing multilingual models in the adapted diagnostic test set and
offer the first steps to further expanding or assessing state-of-the-art models
independently of language.
| [
{
"version": "v1",
"created": "Thu, 29 Oct 2020 20:31:39 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Nov 2020 11:02:10 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Shavrina",
"Tatiana",
""
],
[
"Fenogenova",
"Alena",
""
],
[
"Emelyanov",
"Anton",
""
],
[
"Shevelev",
"Denis",
""
],
[
"Artemova",
"Ekaterina",
""
],
[
"Malykh",
"Valentin",
""
],
[
"Mikhailov",
"Vladislav",
""
],
[
"Tikhonova",
"Maria",
""
],
[
"Chertok",
"Andrey",
""
],
[
"Evlampiev",
"Andrey",
""
]
]
| new_dataset | 0.99875 |
2111.06812 | Ali J. Ghandour | Hasan Nasrallah, Mustafa Shukor and Ali J. Ghandour | Sci-Net: Scale Invariant Model for Buildings Segmentation from Aerial
Imagery | null | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Building segmentation is a fundamental task in the field of Earth
observation and aerial imagery analysis. Most existing deep learning-based
methods in the literature can be applied to a fixed or narrow-range spatial
resolution imagery. In practical scenarios, users deal with a broad spectrum of
image resolutions. Thus, a given aerial image often needs to be re-sampled to
match the spatial resolution of the dataset used to train the deep learning
model, which results in a degradation in segmentation performance. To overcome
this challenge, we propose, in this manuscript, Scale-invariant Neural Network
(Sci-Net) architecture that segments buildings from wide-range spatial
resolution aerial images. Specifically, our approach leverages UNet
hierarchical representation and Dense Atrous Spatial Pyramid Pooling to extract
fine-grained multi-scale representations. Sci-Net significantly outperforms
state-of-the-art models on the Open Cities AI and the Multi-Scale Building
datasets with a steady improvement margin across different spatial resolutions.
| [
{
"version": "v1",
"created": "Fri, 12 Nov 2021 16:45:20 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Nov 2021 11:19:30 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Feb 2022 10:58:48 GMT"
},
{
"version": "v4",
"created": "Wed, 30 Nov 2022 05:23:52 GMT"
},
{
"version": "v5",
"created": "Wed, 1 Feb 2023 13:54:51 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Nasrallah",
"Hasan",
""
],
[
"Shukor",
"Mustafa",
""
],
[
"Ghandour",
"Ali J.",
""
]
]
| new_dataset | 0.969882 |
2205.11159 | Ekaterina Artemova | Ekaterina Artemova, Maxim Zmeev, Natalia Loukachevitch, Igor Rozhkov,
Tatiana Batura, Vladimir Ivanov, Elena Tutubalina | RuNNE-2022 Shared Task: Recognizing Nested Named Entities | To appear in Dialogue 2022 | null | 10.28995/2075-7182-2022-21-33-41 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The RuNNE Shared Task approaches the problem of nested named entity
recognition. The annotation schema is designed in such a way, that an entity
may partially overlap or even be nested into another entity. This way, the
named entity "The Yermolova Theatre" of type "organization" houses another
entity "Yermolova" of type "person". We adopt the Russian NEREL dataset for the
RuNNE Shared Task. NEREL comprises news texts written in the Russian language
and collected from the Wikinews portal. The annotation schema includes 29
entity types. The nestedness of named entities in NEREL reaches up to six
levels. The RuNNE Shared Task explores two setups. (i) In the general setup all
entities occur more or less with the same frequency. (ii) In the few-shot setup
the majority of entity types occur often in the training set. However, some of
the entity types have lower frequency and are thus challenging to recognize.
In the test set the frequency of all entity types is even.
This paper reports on the results of the RuNNE Shared Task. Overall the
shared task has received 156 submissions from nine teams. Half of the
submissions outperform a straightforward BERT-based baseline in both setups.
This paper overviews the shared task setup and discusses the submitted systems,
discovering meaningful insights for the problem of nested NER. The links to the
evaluation platform and the data from the shared task are available in our
github repository: https://github.com/dialogue-evaluation/RuNNE.
| [
{
"version": "v1",
"created": "Mon, 23 May 2022 09:50:42 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Artemova",
"Ekaterina",
""
],
[
"Zmeev",
"Maxim",
""
],
[
"Loukachevitch",
"Natalia",
""
],
[
"Rozhkov",
"Igor",
""
],
[
"Batura",
"Tatiana",
""
],
[
"Ivanov",
"Vladimir",
""
],
[
"Tutubalina",
"Elena",
""
]
]
| new_dataset | 0.975211 |
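To make the nesting described in the RuNNE abstract above concrete, here is a small illustration of nested entity spans; the sentence and character offsets are invented for the example and are not taken from NEREL.

```python
# "The Yermolova Theatre" (organization) contains "Yermolova" (person).
sentence = "The Yermolova Theatre opened a new season."
entities = [
    {"start": 0, "end": 21, "type": "ORGANIZATION", "text": sentence[0:21]},
    {"start": 4, "end": 13, "type": "PERSON",       "text": sentence[4:13]},
]

def is_nested(inner, outer):
    """A span is nested in another if it is fully contained inside it."""
    return outer["start"] <= inner["start"] and inner["end"] <= outer["end"]

print(is_nested(entities[1], entities[0]))  # True
```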
2210.00193 | Parker Riley | Parker Riley, Timothy Dozat, Jan A. Botha, Xavier Garcia, Dan
Garrette, Jason Riesa, Orhan Firat, Noah Constant | FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation | Published in TACL Vol. 11 (2023) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present FRMT, a new dataset and evaluation benchmark for Few-shot
Region-aware Machine Translation, a type of style-targeted translation. The
dataset consists of professional translations from English into two regional
variants each of Portuguese and Mandarin Chinese. Source documents are selected
to enable detailed analysis of phenomena of interest, including lexically
distinct terms and distractor terms. We explore automatic evaluation metrics
for FRMT and validate their correlation with expert human evaluation across
both region-matched and mismatched rating scenarios. Finally, we present a
number of baseline models for this task, and offer guidelines for how
researchers can train, evaluate, and compare their own models. Our dataset and
evaluation code are publicly available: https://bit.ly/frmt-task
| [
{
"version": "v1",
"created": "Sat, 1 Oct 2022 05:02:04 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 22:07:09 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 17:20:04 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Riley",
"Parker",
""
],
[
"Dozat",
"Timothy",
""
],
[
"Botha",
"Jan A.",
""
],
[
"Garcia",
"Xavier",
""
],
[
"Garrette",
"Dan",
""
],
[
"Riesa",
"Jason",
""
],
[
"Firat",
"Orhan",
""
],
[
"Constant",
"Noah",
""
]
]
| new_dataset | 0.999767 |
2306.10577 | Yongchan Kwon | Kevin Fu Jiang, Weixin Liang, James Zou, Yongchan Kwon | OpenDataVal: a Unified Benchmark for Data Valuation | 25 pages, NeurIPS 2023 Track on Datasets and Benchmarks | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Assessing the quality and impact of individual data points is critical for
improving model performance and mitigating undesirable biases within the
training dataset. Several data valuation algorithms have been proposed to
quantify data quality; however, there is no systematic and standardized
benchmarking system for data valuation. In this paper, we introduce
OpenDataVal, an easy-to-use and unified benchmark framework that empowers
researchers and practitioners to apply and compare various data valuation
algorithms. OpenDataVal provides an integrated environment that includes (i) a
diverse collection of image, natural language, and tabular datasets, (ii)
implementations of eleven different state-of-the-art data valuation algorithms,
and (iii) a prediction model API that can import any models in scikit-learn.
Furthermore, we propose four downstream machine learning tasks for evaluating
the quality of data values. We perform benchmarking analysis using OpenDataVal,
quantifying and comparing the efficacy of state-of-the-art data valuation
approaches. We find that no single algorithm performs uniformly best across all
tasks, and an appropriate algorithm should be employed for a user's downstream
task. OpenDataVal is publicly available at https://opendataval.github.io with
comprehensive documentation. Furthermore, we provide a leaderboard where
researchers can evaluate the effectiveness of their own data valuation
algorithms.
| [
{
"version": "v1",
"created": "Sun, 18 Jun 2023 14:38:29 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 19:27:55 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Jiang",
"Kevin Fu",
""
],
[
"Liang",
"Weixin",
""
],
[
"Zou",
"James",
""
],
[
"Kwon",
"Yongchan",
""
]
]
| new_dataset | 0.977648 |
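The OpenDataVal abstract above describes benchmarking data valuation algorithms with scikit-learn models. As a generic illustration of what a data value can be, the sketch below computes simple leave-one-out values; it does not use OpenDataVal's actual API, and the synthetic data and logistic-regression model are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Leave-one-out value of a training point: how much validation accuracy drops
# when that single point is removed from the training set.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy(X_train, y_train):
    return LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_val, y_val)

base = accuracy(X_tr, y_tr)
values = []
for i in range(len(X_tr)):
    mask = np.arange(len(X_tr)) != i
    values.append(base - accuracy(X_tr[mask], y_tr[mask]))

print("highest-value training point:", int(np.argmax(values)))
```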
2307.05923 | Kosuke Tatsumura | Kosuke Tatsumura, Ryo Hidaka, Jun Nakayama, Tomoya Kashimata, and
Masaya Yamasaki | Pairs-trading System using Quantum-inspired Combinatorial Optimization
Accelerator for Optimal Path Search in Market Graphs | 11 pages, 8 figures | IEEE Access 11, pp. 104406 - 104416 (2023) | 10.1109/ACCESS.2023.3316727 | null | cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairs-trading is a trading strategy that involves matching a long position
with a short position in two stocks aiming at market-neutral profits. While a
typical pairs-trading system monitors the prices of two statistically
correlated stocks for detecting a temporary divergence, monitoring and
analyzing the prices of more stocks would potentially lead to finding more
trading opportunities. Here we report a stock pairs-trading system that finds
trading opportunities for any two stocks in an $N$-stock universe using a
combinatorial optimization accelerator based on a quantum-inspired algorithm
called simulated bifurcation. The trading opportunities are detected through
solving an optimal path search problem in an $N$-node directed graph with edge
weights corresponding to the products of instantaneous price differences and
statistical correlation factors between two stocks. The accelerator is one of
Ising machines and operates consecutively to find multiple opportunities in a
market situation with avoiding duplicate detections by a tabu search technique.
It has been demonstrated in the Tokyo Stock Exchange that the FPGA
(field-programmable gate array)-based trading system has a sufficiently low
latency (33 $\mu$s for $N$=15 or 210 pairs) to execute the pairs-trading
strategy based on optimal path search in market graphs.
| [
{
"version": "v1",
"created": "Wed, 12 Jul 2023 05:41:39 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Tatsumura",
"Kosuke",
""
],
[
"Hidaka",
"Ryo",
""
],
[
"Nakayama",
"Jun",
""
],
[
"Kashimata",
"Tomoya",
""
],
[
"Yamasaki",
"Masaya",
""
]
]
| new_dataset | 0.969528 |
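The pairs-trading abstract above defines a market graph whose edge weights are products of instantaneous price differences and statistical correlation factors. The sketch below builds such a weight matrix for a toy universe; the sign convention and inputs are assumptions, and it does not reproduce the paper's simulated-bifurcation solver.

```python
import numpy as np

def build_edge_weights(prices, corr):
    """Toy market graph: edge weight w[i, j] is the product of the instantaneous
    price difference (prices[j] - prices[i]) and the correlation factor corr[i, j]."""
    n = len(prices)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i, j] = (prices[j] - prices[i]) * corr[i, j]
    return w

if __name__ == "__main__":
    prices = np.array([10.0, 10.4, 9.9])
    corr = np.array([[1.0, 0.8, 0.3],
                     [0.8, 1.0, 0.5],
                     [0.3, 0.5, 1.0]])
    print(build_edge_weights(prices, corr))
```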
2307.06777 | Saina Sunny | C. Aiswarya, Amaldev Manuel, Saina Sunny | Deciding Conjugacy of a Rational Relation | null | null | null | null | cs.FL | http://creativecommons.org/licenses/by/4.0/ | A rational relation is conjugate if every pair of words in the relation are
conjugates, i.e., cyclic shifts of each other. We show that checking whether a
rational relation is conjugate is decidable.
We assume that the rational relation is given as a rational expression over
pairs of words. Every rational expression is effectively equivalent to a sum of
sumfree expressions, possibly with an exponential size blow-up. Hence, the
general problem reduces to determining the conjugacy of sumfree rational
expressions. To solve this specific case, we give two generalisations of the
Lyndon-Sch\"utzenberger theorem from word combinatorics, which equates
conjugacy of a pair of words $(u,v)$ with the existence of a word $z$ (called a
witness) such that $uz=zv$. A set of conjugate pairs has a common witness if
there is a word that is a witness for every pair in the set. We show the
following.
1. If $G$ is an arbitrary set of conjugate pairs, then $G^*$ is conjugate if
and only if there is a common witness for $G$.
2. If $G_1^*, \ldots, G_k^*$, $k > 0$, are arbitrary sets of conjugate pairs and
$(a_0, b_0), \ldots, (a_k, b_k)$ are arbitrary pairs of words, then the set of
words \[G = (a_0, b_0) G_1^* (a_1, b_1) \cdots G_k^*(a_k,b_k)\] is conjugate if
and only if it has a common witness.
A consequence is that a set of pairs generated by a sumfree rational
expression is conjugate if and only if there is a word witnessing the conjugacy
of all the pairs. Moreover the witness is effectively computable leading to an
algorithm to decide the conjugacy.
| [
{
"version": "v1",
"created": "Thu, 13 Jul 2023 14:34:18 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 10:14:18 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Aiswarya",
"C.",
""
],
[
"Manuel",
"Amaldev",
""
],
[
"Sunny",
"Saina",
""
]
]
| new_dataset | 0.99935 |
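The conjugacy abstract above uses the classical fact that words u and v are conjugate exactly when some witness z satisfies uz = zv. The sketch below checks this for plain Python strings; it illustrates only the word-level fact, not the paper's decision procedure for rational relations.

```python
def conjugacy_witness(u: str, v: str):
    """Return a word z with u + z == z + v if u and v are conjugates
    (cyclic shifts of each other), otherwise None."""
    if len(u) != len(v):
        return None
    for i in range(len(u) + 1):
        x, y = u[:i], u[i:]
        if y + x == v:      # v is the cyclic shift of u at position i
            return x        # then u + x == x + v, so x is a witness
    return None

assert conjugacy_witness("abc", "bca") == "a"   # "abc"+"a" == "a"+"bca"
assert conjugacy_witness("abc", "acb") is None  # not cyclic shifts
```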
2307.06945 | Tao Ge | Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, Furu Wei | In-context Autoencoder for Context Compression in a Large Language Model | v2 (19 pages) with the code, data and model released | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the In-context Autoencoder (ICAE), leveraging the power of a large
language model (LLM) to compress a long context into short compact memory
slots that can be directly conditioned on by the LLM for various purposes. ICAE
is first pretrained using both autoencoding and language modeling objectives on
massive text data, enabling it to generate memory slots that accurately and
comprehensively represent the original context; Then, it is fine-tuned on
instruction data for producing desirable responses to various prompts.
Experiments demonstrate that our lightweight ICAE, introducing fewer than 1%
additional parameters, effectively achieves 4X context compression based on
Llama, offering advantages in both improved latency and GPU memory cost during
inference, and showing an interesting insight in memorization as well as
potential for scalability. These promising results imply a novel perspective on
the connection between working memory in cognitive science and representation
learning in LLMs, revealing ICAE's significant implications in addressing the
long context problem and suggesting further research in LLM context management.
Our data, code and model are released at https://github.com/getao/icae.
| [
{
"version": "v1",
"created": "Thu, 13 Jul 2023 17:59:21 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 22:38:42 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Ge",
"Tao",
""
],
[
"Hu",
"Jing",
""
],
[
"Wang",
"Lei",
""
],
[
"Wang",
"Xun",
""
],
[
"Chen",
"Si-Qing",
""
],
[
"Wei",
"Furu",
""
]
]
| new_dataset | 0.993628 |
2309.00381 | Nick Brown | Nick Brown, Maurice Jamieson, Joseph Lee, Paul Wang | Is RISC-V ready for HPC prime-time: Evaluating the 64-core Sophon SG2042
RISC-V CPU | Author accepted version of paper in ACM Workshops of The
International Conference on High Performance Computing, Network, Storage, and
Analysis (SC-W 2023) | null | 10.1145/3624062.3624234 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Sophon SG2042 is the world's first commodity 64-core RISC-V CPU for high
performance workloads, and an important question is whether the SG2042 has the
potential to encourage the HPC community to embrace RISC-V.
In this paper we undertake a performance exploration of the SG2042 against
existing RISC-V hardware and high performance x86 CPUs in use by modern
supercomputers. Leveraging the RAJAPerf benchmarking suite, we discover that on
average, the SG2042 delivers, per core, between five and ten times the
performance compared to the nearest widely available RISC-V hardware. We found
that, on average, the x86 high performance CPUs under test outperform the
SG2042 by between four and eight times for multi-threaded workloads, although
some individual kernels do perform faster on the SG2042. The result of this
work is a performance study that not only contrasts this new RISC-V CPU against
existing technologies, but furthermore shares performance best practice.
| [
{
"version": "v1",
"created": "Fri, 1 Sep 2023 10:35:32 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 08:52:10 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Brown",
"Nick",
""
],
[
"Jamieson",
"Maurice",
""
],
[
"Lee",
"Joseph",
""
],
[
"Wang",
"Paul",
""
]
]
| new_dataset | 0.997853 |
2309.05653 | Xiang Yue | Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu
Su, Wenhu Chen | MAmmoTH: Building Math Generalist Models through Hybrid Instruction
Tuning | Work in progress; Xiang Yue and Wenhu Chen contributed equally to
this paper | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce MAmmoTH, a series of open-source large language models (LLMs)
specifically tailored for general math problem-solving. The MAmmoTH models are
trained on MathInstruct, our meticulously curated instruction tuning dataset.
MathInstruct is compiled from 13 math datasets with intermediate rationales,
six of which have rationales newly curated by us. It presents a unique hybrid
of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and also
ensures extensive coverage of diverse fields in math. The hybrid of CoT and PoT
not only unleashes the potential of tool use but also allows different thought
processes for different math problems. As a result, the MAmmoTH series
substantially outperform existing open-source models on nine mathematical
reasoning datasets across all scales with an average accuracy gain between 16%
and 32%. Remarkably, our MAmmoTH-7B model reaches 33% on MATH (a
competition-level dataset), which exceeds the best open-source 7B model
(WizardMath) by 23%, and the MAmmoTH-34B model achieves 44% accuracy on MATH,
even surpassing GPT-4's CoT result. Our work underscores the importance of
diverse problem coverage and the use of hybrid rationales in developing
superior math generalist models.
| [
{
"version": "v1",
"created": "Mon, 11 Sep 2023 17:47:22 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 15:25:41 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 02:48:42 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Yue",
"Xiang",
""
],
[
"Qu",
"Xingwei",
""
],
[
"Zhang",
"Ge",
""
],
[
"Fu",
"Yao",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Sun",
"Huan",
""
],
[
"Su",
"Yu",
""
],
[
"Chen",
"Wenhu",
""
]
]
| new_dataset | 0.981202 |
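The MAmmoTH abstract above contrasts chain-of-thought (CoT) and program-of-thought (PoT) rationales. The snippet below is a small, editor-devised contrast on a toy problem; it is not taken from the MathInstruct dataset.

```python
# Problem: "A shirt costs $25 and is discounted by 20%. What is the sale price?"

# CoT (natural-language rationale, shown as a comment):
#   20% of 25 is 5, so the sale price is 25 - 5 = 20.

# PoT (executable rationale): the reasoning is written as a short program
# whose output is the answer.
price = 25
discount_rate = 0.20
sale_price = price * (1 - discount_rate)
print(sale_price)  # 20.0
```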
2309.11500 | Luoyi Sun | Luoyi Sun, Xuenan Xu, Mengyue Wu, Weidi Xie | A Large-scale Dataset for Audio-Language Representation Learning | null | null | null | null | cs.SD cs.CV cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The AI community has made significant strides in developing powerful
foundation models, driven by large-scale multimodal datasets. However, in the
audio representation learning community, the present audio-language datasets
suffer from limitations such as insufficient volume, simplistic content, and
arduous collection procedures. To tackle these challenges, we present an
innovative and automatic audio caption generation pipeline based on a series of
public tools or APIs, and construct a large-scale, high-quality, audio-language
dataset, named Auto-ACD, comprising over 1.9M audio-text pairs. To
demonstrate the effectiveness of the proposed dataset, we train popular models
on our dataset and show performance improvement on various downstream tasks,
namely, audio-language retrieval, audio captioning, and environment classification.
In addition, we establish a novel test set and provide a benchmark for
audio-text tasks. The proposed dataset will be released at
https://auto-acd.github.io/.
| [
{
"version": "v1",
"created": "Wed, 20 Sep 2023 17:59:32 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 15:25:03 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 11:37:40 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Sun",
"Luoyi",
""
],
[
"Xu",
"Xuenan",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Xie",
"Weidi",
""
]
]
| new_dataset | 0.999668 |
2309.13526 | Qiang Liu | Qiang Liu, Yongjie Xue, Yuru Zhang, Dawei Chen, Kyungtae Han | AdaMap: High-Scalable Real-Time Cooperative Perception at the Edge | Accepted by IEEE/ACM SEC 2023 | null | null | null | cs.RO cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative perception is the key approach to augment the perception of
connected and automated vehicles (CAVs) toward safe autonomous driving.
However, it is challenging to achieve real-time perception sharing for hundreds
of CAVs in large-scale deployment scenarios. In this paper, we propose AdaMap,
a new high-scalable real-time cooperative perception system, which achieves
assured percentile end-to-end latency under time-varying network dynamics. To
achieve AdaMap, we design a tightly coupled data plane and control plane. In
the data plane, we design a new hybrid localization module to dynamically
switch between object detection and tracking, and a novel point cloud
representation module to adaptively compress and reconstruct the point cloud of
detected objects. In the control plane, we design a new graph-based object
selection method to un-select excessive multi-viewed point clouds of objects,
and a novel approximated gradient descent algorithm to optimize the
representation of point clouds. We implement AdaMap on an emulation platform,
including realistic vehicle and server computation and a simulated 5G network,
under a 150-CAV trace collected from the CARLA simulator. The evaluation
results show that AdaMap reduces the average transmission data size by up to 49x at
the cost of 0.37 reconstruction loss, as compared to state-of-the-art
solutions, which verifies its high scalability, adaptability, and computation
efficiency.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2023 02:11:45 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 03:04:42 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Liu",
"Qiang",
""
],
[
"Xue",
"Yongjie",
""
],
[
"Zhang",
"Yuru",
""
],
[
"Chen",
"Dawei",
""
],
[
"Han",
"Kyungtae",
""
]
]
| new_dataset | 0.998295 |
2309.15183 | Budmonde Duinkharjav | Budmonde Duinkharjav, Benjamin Liang, Anjul Patney, Rachel Brown, Qi
Sun | The Shortest Route Is Not Always the Fastest: Probability-Modeled
Stereoscopic Eye Movement Completion Time in VR | null | null | 10.1145/3618334 | null | cs.GR cs.HC | http://creativecommons.org/licenses/by/4.0/ | Speed and consistency of target-shifting play a crucial role in human ability
to perform complex tasks. Shifting our gaze between objects of interest quickly
and consistently requires changes both in depth and direction. Gaze changes in
depth are driven by slow, inconsistent vergence movements which rotate the eyes
in opposite directions, while changes in direction are driven by ballistic,
consistent movements called saccades, which rotate the eyes in the same
direction. In the natural world, most of our eye movements are a combination of
both types. While scientific consensus on the nature of saccades exists,
vergence and combined movements remain less understood and agreed upon.
We eschew the lack of scientific consensus in favor of proposing an
operationalized computational model which predicts the speed of any type of
gaze movement during target-shifting in 3D. To this end, we conduct a
psychophysical study in a stereo VR environment to collect more than 12,000
gaze movement trials, analyze the temporal distribution of the observed gaze
movements, and fit a probabilistic model to the data. We perform a series of
objective measurements and user studies to validate the model. The results
demonstrate its predictive accuracy, generalization, as well as applications
for optimizing visual performance by altering content placement. Lastly, we
leverage the model to measure differences in human target-changing time
relative to the natural world, as well as suggest scene-aware projection depth.
By incorporating the complexities and randomness of human oculomotor control,
we hope this research will support new behavior-aware metrics for VR/AR display
design, interface layout, and gaze-contingent rendering.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 18:40:17 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 15:35:30 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Duinkharjav",
"Budmonde",
""
],
[
"Liang",
"Benjamin",
""
],
[
"Patney",
"Anjul",
""
],
[
"Brown",
"Rachel",
""
],
[
"Sun",
"Qi",
""
]
]
| new_dataset | 0.972578 |
2309.16499 | Danfeng Hong | Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li,
Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu | Cross-City Matters: A Multimodal Remote Sensing Benchmark Dataset for
Cross-City Semantic Segmentation using High-Resolution Domain Adaptation
Networks | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) approaches nowadays have gained remarkable
success in single-modality-dominated remote sensing (RS) applications,
especially with an emphasis on individual urban environments (e.g., single
cities or regions). Yet these AI models tend to meet the performance bottleneck
in the case studies across cities or regions, due to the lack of diverse RS
information and cutting-edge solutions with high generalization ability. To
this end, we build a new set of multimodal remote sensing benchmark datasets
(including hyperspectral, multispectral, and SAR) for the study of the
cross-city semantic segmentation task (called C2Seg dataset), which consists of
two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in
China). Beyond the single city, we propose a high-resolution domain adaptation
network, HighDAN for short, to promote the AI model's generalization ability
across multi-city environments. HighDAN is capable of not only retaining the
spatial topological structure of the studied urban scene in a parallel high-to-low
resolution fusion fashion but also closing the gap arising from the enormous
differences of RS image representations between different cities by means of
adversarial learning. In addition, the Dice loss is considered in HighDAN to
alleviate the class imbalance issue caused by factors across cities. Extensive
experiments conducted on the C2Seg dataset show the superiority of our HighDAN
in terms of segmentation performance and generalization ability, compared to
state-of-the-art competitors. The C2Seg dataset and the semantic segmentation
toolbox (involving the proposed HighDAN) will be available publicly at
https://github.com/danfenghong.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 23:55:39 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 08:49:58 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Hong",
"Danfeng",
""
],
[
"Zhang",
"Bing",
""
],
[
"Li",
"Hao",
""
],
[
"Li",
"Yuxuan",
""
],
[
"Yao",
"Jing",
""
],
[
"Li",
"Chenyu",
""
],
[
"Werner",
"Martin",
""
],
[
"Chanussot",
"Jocelyn",
""
],
[
"Zipf",
"Alexander",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
]
| new_dataset | 0.999724 |
2310.00710 | Ying Zhang | Ying Zhang, Wenjia Song, Zhengjie Ji, Danfeng (Daphne) Yao, Na Meng | How well does LLM generate security tests? | null | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by-sa/4.0/ | Developers often build software on top of third-party libraries (Libs) to
improve programmer productivity and software quality. The libraries may contain
vulnerabilities exploitable by hackers to attack the applications (Apps) built
on top of them. People refer to such attacks as supply chain attacks, the
documented number of which has increased 742% in 2022. People created tools to
mitigate such attacks, by scanning the library dependencies of Apps,
identifying the usage of vulnerable library versions, and suggesting secure
alternatives to vulnerable dependencies. However, recent studies show that many
developers do not trust the reports by these tools; they ask for code or
evidence to demonstrate how library vulnerabilities lead to security exploits,
in order to assess vulnerability severity and modification necessity.
Unfortunately, manually crafting demos of application-specific attacks is
challenging and time-consuming, and there is insufficient tool support to
automate that procedure.
In this study, we used ChatGPT-4.0 to generate security tests, and to
demonstrate how vulnerable library dependencies facilitate the supply chain
attacks to given Apps. We explored various prompt styles/templates, and found
that ChatGPT-4.0 generated tests for all 55 Apps, demonstrating 24 attacks
successfully. It outperformed two state-of-the-art security test generators --
TRANSFER and SIEGE -- by generating many more tests and achieving more
exploits. ChatGPT-4.0 worked better when prompts described the
vulnerabilities, possible exploits, and code context in more detail. Our research will shed
light on new research in security test generation. The generated tests will
help developers create secure by design and secure by default software.
| [
{
"version": "v1",
"created": "Sun, 1 Oct 2023 16:00:58 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 03:29:12 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Zhang",
"Ying",
"",
"Daphne"
],
[
"Song",
"Wenjia",
"",
"Daphne"
],
[
"Ji",
"Zhengjie",
"",
"Daphne"
],
[
"Danfeng",
"",
"",
"Daphne"
],
[
"Yao",
"",
""
],
[
"Meng",
"Na",
""
]
]
| new_dataset | 0.965953 |
2310.00835 | Yuqing Wang | Yuqing Wang, Yun Zhao | TRAM: Benchmarking Temporal Reasoning for Large Language Models | 21 pages, in submission | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reasoning about time is essential for understanding the nuances of events
described in natural language. Previous research on this topic has been limited
in scope, characterized by a lack of standardized benchmarks that would allow
for consistent evaluations across different studies. In this paper, we
introduce TRAM, a temporal reasoning benchmark composed of ten datasets,
encompassing various temporal aspects of events such as order, arithmetic,
frequency, and duration, designed to facilitate a comprehensive evaluation of
the temporal reasoning capabilities of large language models (LLMs). We conduct
an extensive evaluation using popular LLMs, such as GPT-4 and Llama2, in both
zero-shot and few-shot learning scenarios. Additionally, we employ BERT-based
models to establish the baseline evaluations. Our findings indicate that these
models still trail human performance in temporal reasoning tasks. It is our
aspiration that TRAM will spur further progress in enhancing the temporal
reasoning abilities of LLMs.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 00:59:07 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 13:54:02 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Wang",
"Yuqing",
""
],
[
"Zhao",
"Yun",
""
]
]
| new_dataset | 0.999541 |
2310.01206 | Atsuki Yamaguchi | Atsuki Yamaguchi, Terufumi Morishita | appjsonify: An Academic Paper PDF-to-JSON Conversion Toolkit | Preprint. PyPI: https://pypi.org/project/appjsonify/ GitHub:
https://pypi.org/project/appjsonify/. Fixed Figure 1 containing paper PDF
examples | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present appjsonify, a Python-based PDF-to-JSON conversion toolkit for
academic papers. It parses a PDF file using several visual-based document
layout analysis models and rule-based text processing approaches. appjsonify is
a flexible tool that allows users to easily configure the processing pipeline
to handle a specific format of a paper they wish to process. We are publicly
releasing appjsonify as an easy-to-install toolkit available via PyPI and
GitHub.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 13:48:16 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Oct 2023 13:19:40 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Yamaguchi",
"Atsuki",
""
],
[
"Morishita",
"Terufumi",
""
]
]
| new_dataset | 0.963872 |
2310.01418 | Dean Ninalga | Dean Ninalga | Cordyceps@LT-EDI: Depression Detection with Reddit and Self-training | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Depression is debilitating, and not uncommon. Indeed, studies of excessive
social media users show correlations with depression, ADHD, and other mental
health concerns. Given that there is a large number of people with excessive
social media usage, there is a significant population of potentially
undiagnosed users and posts that they create. In this paper, we propose a
depression severity detection system using a semi-supervised learning technique
to predict if a post is from a user who is experiencing severe, moderate, or
low (non-diagnostic) levels of depression. Namely, we use a trained model to
classify a large number of unlabelled social media posts from Reddit, then use
these generated labels to train a more powerful classifier. We demonstrate our
framework on Detecting Signs of Depression from Social Media Text -
LT-EDI@RANLP 2023 shared task, where our framework ranks 3rd overall.
| [
{
"version": "v1",
"created": "Sun, 24 Sep 2023 01:14:49 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Ninalga",
"Dean",
""
]
]
| new_dataset | 0.999529 |
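The depression-detection abstract above describes a semi-supervised pipeline: classify unlabelled Reddit posts with a trained model, then reuse the generated labels to train a stronger classifier. The sketch below is a generic self-training loop with TF-IDF features and logistic regression; these modelling choices are assumptions for illustration, and the shared-task system itself uses transformer models.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labelled_texts, labels, unlabelled_texts, threshold=0.9):
    """Pseudo-label unlabelled posts with a seed model, keep only confident
    predictions, and retrain on the union of gold and pseudo labels."""
    vec = TfidfVectorizer(max_features=20000)
    X = vec.fit_transform(list(labelled_texts) + list(unlabelled_texts))
    X_lab, X_unlab = X[:len(labelled_texts)], X[len(labelled_texts):]

    seed = LogisticRegression(max_iter=1000).fit(X_lab, labels)
    probs = seed.predict_proba(X_unlab)
    confident = probs.max(axis=1) >= threshold
    pseudo_labels = seed.classes_[probs.argmax(axis=1)[confident]]

    X_aug = vstack([X_lab, X_unlab[confident]])
    y_aug = np.concatenate([np.asarray(labels), pseudo_labels])
    return vec, LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```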
2310.01429 | Eren Unlu Ph. D. | Eren Unlu | Chatmap : Large Language Model Interaction with Cartographic Data | 9 pages, 4 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The swift advancement and widespread availability of foundational Large
Language Models (LLMs), complemented by robust fine-tuning methodologies, have
catalyzed their adaptation for innovative and industrious applications.
Enabling LLMs to recognize and interpret geospatial data, while offering a
linguistic access to vast cartographic datasets, is of significant importance.
OpenStreetMap (OSM) is the most ambitious open-source global initiative
offering detailed urban and rural geographic data, curated by a community of
over 10 million contributors, which constitutes a great potential for LLM
applications. In this study, we demonstrate the proof of concept and details of
the process of fine-tuning a relatively small-scale (1B parameters) LLM with a
relatively small artificial dataset curated by a more capable teacher model, in
order to provide a linguistic interface to the OSM data of an arbitrary urban
region. Through this interface, users can inquire about a location's
attributes, covering a wide spectrum of concepts, such as its touristic appeal
or the potential profitability of various businesses in that vicinity. The
study aims to provide an initial guideline for such generative artificial
intelligence (AI) adaptations and demonstrate early signs of useful emerging
abilities in this context even in minimal computational settings. The
embeddings of artificially curated prompts including OSM data are also
investigated in detail, which might be instrumental for potential geospatially
aware urban Retrieval Augmented Generation (RAG) applications.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 15:32:36 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Unlu",
"Eren",
""
]
]
| new_dataset | 0.998253 |
2310.01430 | Swapnil Bhosale | Swapnil Bhosale, Abhra Chaudhuri, Alex Lee Robert Williams, Divyank
Tiwari, Anjan Dutta, Xiatian Zhu, Pushpak Bhattacharyya, Diptesh Kanojia | Sarcasm in Sight and Sound: Benchmarking and Expansion to Improve
Multimodal Sarcasm Detection | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The introduction of the MUStARD dataset, and its emotion recognition
extension MUStARD++, have identified sarcasm to be a multi-modal phenomenon --
expressed not only in natural language text, but also through manners of speech
(like tonality and intonation) and visual cues (facial expression). With this
work, we aim to perform a rigorous benchmarking of the MUStARD++ dataset by
considering state-of-the-art language, speech, and visual encoders, for fully
utilizing the totality of the multi-modal richness that it has to offer,
achieving a 2\% improvement in macro-F1 over the existing benchmark.
Additionally, to cure the imbalance in the `sarcasm type' category in
MUStARD++, we propose an extension, which we call \emph{MUStARD++ Balanced},
benchmarking the same with instances from the extension split across both train
and test sets, achieving a further 2.4\% macro-F1 boost. The new clips were
taken from a novel source -- the TV show, House MD, which adds to the diversity
of the dataset, and were manually annotated by multiple annotators with
substantial inter-annotator agreement in terms of Cohen's kappa and
Krippendorf's alpha. Our code, extended data, and SOTA benchmark models are
made public.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 07:00:41 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Bhosale",
"Swapnil",
""
],
[
"Chaudhuri",
"Abhra",
""
],
[
"Williams",
"Alex Lee Robert",
""
],
[
"Tiwari",
"Divyank",
""
],
[
"Dutta",
"Anjan",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Bhattacharyya",
"Pushpak",
""
],
[
"Kanojia",
"Diptesh",
""
]
]
| new_dataset | 0.9998 |
2310.01471 | Joan Espasa Arxer | Miquel Bofill, Cristina Borralleras, Joan Espasa, Gerard Mart\'in,
Gustavo Patow, Mateu Villaret | A Good Snowman is Hard to Plan | arXiv admin note: text overlap with arXiv:2310.01378 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work we face a challenging puzzle video game: A Good Snowman is Hard
to Build. The objective of the game is to build snowmen by moving and stacking
snowballs on a discrete grid. For the sake of player engagement with the game,
it is interesting to avoid that a player finds a much easier solution than the
one the designer expected. Therefore, having tools that are able to certify the
optimality of solutions is crucial.
Although the game can be stated as a planning problem and can be naturally
modelled in PDDL, we show that a direct translation to SAT clearly outperforms
off-the-shelf state-of-the-art planners. As we show, this is mainly due to the
fact that reachability properties can be easily modelled in SAT, allowing for
shorter plans, whereas using axioms to express a reachability derived predicate
in PDDL does not result in any significant reduction of solving time with the
considered planners. We deal with a set of 51 levels, both original and
crafted, solving 43 of them, with 8 challenging instances still remaining
unsolved.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 17:50:31 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Bofill",
"Miquel",
""
],
[
"Borralleras",
"Cristina",
""
],
[
"Espasa",
"Joan",
""
],
[
"Martín",
"Gerard",
""
],
[
"Patow",
"Gustavo",
""
],
[
"Villaret",
"Mateu",
""
]
]
| new_dataset | 0.984447 |
2310.01526 | Michael Unterkalmsteiner | Deepika Badampudi, Ricardo Britto, Michael Unterkalmsteiner | Modern code reviews -- Preliminary results of a systematic mapping study | EASE 2019: 340-345 | null | 10.1145/3319008.3319354 | null | cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reviewing source code is a common practice in a modern and collaborative
coding environment. In the past few years, the research on modern code reviews
has gained interest among practitioners and researchers. The objective of our
investigation is to observe the evolution of research related to modern code
reviews, identify research gaps and serve as a basis for future research. We
use a systematic mapping approach to identify and classify 177 research papers.
As a preliminary result of our investigation, we present in this paper a
classification scheme of the main contributions of modern code review research
between 2005 and 2018.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 18:15:26 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Badampudi",
"Deepika",
""
],
[
"Britto",
"Ricardo",
""
],
[
"Unterkalmsteiner",
"Michael",
""
]
]
| new_dataset | 0.984687 |
2310.01732 | Guanghui Qin | Guanghui Qin, Benjamin Van Durme | Nugget: Neural Agglomerative Embeddings of Text | Appeared at ICML 2023 | ICML 2023 | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Embedding text sequences is a widespread requirement in modern language
understanding. Existing approaches focus largely on constant-size
representations. This is problematic, as the amount of information contained in
text often varies with the length of the input. We propose a solution called
Nugget, which encodes language into a representation based on a dynamically
selected subset of input tokens. These nuggets are learned through tasks like
autoencoding and machine translation, and intuitively segment language into
meaningful units. We demonstrate Nugget outperforms related approaches in tasks
involving semantic comparison. Finally, we illustrate these compact units allow
for expanding the contextual window of a language model (LM), suggesting new
future LMs that can condition on significantly larger amounts of content.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 01:47:49 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Qin",
"Guanghui",
""
],
[
"Van Durme",
"Benjamin",
""
]
]
| new_dataset | 0.955961 |
2310.01742 | Guangji Chen | Guangji Chen, Qingqing Wu, Wen Chen, Yanzhao Hou, Mengnan Jian,
Shunqing Zhang, Jun Li, Lajos Hanzo | Intelligent Reflecting Surface Aided MIMO Networks: Distributed or
Centralized Architecture? | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the capacity of a broadcast channel with a multi-antenna base
station (BS) sending independent messages to multiple users, aided by IRSs with
N elements. In particular, both the distributed and centralized IRS deployment
architectures are considered. Regarding the distributed IRS, the N IRS elements
form multiple IRSs and each of them is installed near a user cluster; while for
the centralized IRS, all IRS elements are located in the vicinity of the BS. To
draw essential insights, we first derive the maximum capacity achieved by the
distributed IRS and centralized IRS, respectively, under the assumption of
line-of-sight propagation and homogeneous channel setups. By capturing the
fundamental tradeoff between the spatial multiplexing gain and passive
beamforming gain, we rigorously prove that the capacity of the distributed IRS
is higher than that of the centralized IRS provided that the total number of
IRS elements is above a threshold. Motivated by the superiority of the
distributed IRS, we then focus on the transmission and element allocation
design under the distributed IRS. By exploiting the user channel correlation of
intra-clusters and inter-clusters, an efficient hybrid multiple access scheme
relying on both spatial and time domains is proposed to fully exploit both the
passive beamforming gain and spatial DoF. Moreover, the IRS element allocation
problem is investigated for the objectives of sum-rate maximization and minimum
user rate maximization, respectively. Finally, extensive numerical results are
provided to validate our theoretical finding and also to unveil the
effectiveness of the distributed IRS for improving the system capacity under
various system setups.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 02:09:09 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Chen",
"Guangji",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Chen",
"Wen",
""
],
[
"Hou",
"Yanzhao",
""
],
[
"Jian",
"Mengnan",
""
],
[
"Zhang",
"Shunqing",
""
],
[
"Li",
"Jun",
""
],
[
"Hanzo",
"Lajos",
""
]
]
| new_dataset | 0.996263 |
2310.01753 | Yuxiao Cheng | Yuxiao Cheng, Ziqian Wang, Tingxiong Xiao, Qin Zhong, Jinli Suo,
Kunlun He | CausalTime: Realistically Generated Time-series for Benchmarking of
Causal Discovery | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-series causal discovery (TSCD) is a fundamental problem of machine
learning. However, existing synthetic datasets cannot properly evaluate or
predict the algorithms' performance on real data. This study introduces the
CausalTime pipeline to generate time-series that highly resemble the real data
and with ground truth causal graphs for quantitative performance evaluation.
The pipeline starts from real observations in a specific scenario and produces
a matching benchmark dataset. Firstly, we harness deep neural networks along
with normalizing flow to accurately capture realistic dynamics. Secondly, we
extract hypothesized causal graphs by performing importance analysis on the
neural network or leveraging prior knowledge. Thirdly, we derive the ground
truth causal graphs by splitting the causal model into causal term, residual
term, and noise term. Lastly, using the fitted network and the derived causal
graph, we generate corresponding versatile time-series suitable for algorithm
assessment. In the experiments, we validate the fidelity of the generated data
through qualitative and quantitative experiments, followed by a benchmarking of
existing TSCD algorithms using these generated datasets. CausalTime offers a
feasible solution to evaluating TSCD algorithms in real applications and can be
generalized to a wide range of fields. For easy use of the proposed approach,
we also provide a user-friendly website, hosted on www.causaltime.cc.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 02:29:19 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Cheng",
"Yuxiao",
""
],
[
"Wang",
"Ziqian",
""
],
[
"Xiao",
"Tingxiong",
""
],
[
"Zhong",
"Qin",
""
],
[
"Suo",
"Jinli",
""
],
[
"He",
"Kunlun",
""
]
]
| new_dataset | 0.972334 |
2310.01818 | Xilie Xu | Xilie Xu, Jingfeng Zhang, Mohan Kankanhalli | AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust Fine-Tuning (RFT) is a low-cost strategy to obtain adversarial
robustness in downstream applications, without requiring extensive computational resources or collecting large amounts of data. This paper uncovers an
issue with the existing RFT, where optimizing both adversarial and natural
objectives through the feature extractor (FE) yields significantly divergent
gradient directions. This divergence introduces instability in the optimization
process, thereby hindering the attainment of adversarial robustness and
rendering RFT highly sensitive to hyperparameters. To mitigate this issue, we
propose a low-rank (LoRa) branch that disentangles RFT into two distinct
components: optimizing natural objectives via the LoRa branch and adversarial
objectives via the FE. Besides, we introduce heuristic strategies for
automating the scheduling of the learning rate and the scalars of loss terms.
Extensive empirical evaluations demonstrate that our proposed automated RFT
disentangled via the LoRa branch (AutoLoRa) achieves new state-of-the-art
results across a range of downstream tasks. AutoLoRa holds significant
practical utility, as it automatically converts a pre-trained FE into an
adversarially robust model for downstream tasks without the need for searching
hyperparameters.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 06:16:03 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Xu",
"Xilie",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Kankanhalli",
"Mohan",
""
]
]
| new_dataset | 0.982491 |
2310.01821 | Takuhiro Kaneko | Takuhiro Kaneko | MIMO-NeRF: Fast Neural Rendering with Multi-input Multi-output Neural
Radiance Fields | Accepted to ICCV 2023. Project page:
https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/mimo-nerf/ | null | null | null | cs.CV cs.AI cs.GR cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural radiance fields (NeRFs) have shown impressive results for novel view
synthesis. However, they depend on the repetitive use of a single-input
single-output multilayer perceptron (SISO MLP) that maps 3D coordinates and
view direction to the color and volume density in a sample-wise manner, which
slows the rendering. We propose a multi-input multi-output NeRF (MIMO-NeRF)
that reduces the number of MLP runs by replacing the SISO MLP with a MIMO
MLP and conducting mappings in a group-wise manner. One notable challenge with
this approach is that the color and volume density of each point can differ
according to a choice of input coordinates in a group, which can lead to some
notable ambiguity. We also propose a self-supervised learning method that
regularizes the MIMO MLP with multiple fast reformulated MLPs to alleviate this
ambiguity without using pretrained models. The results of a comprehensive
experimental evaluation including comparative and ablation studies are
presented to show that MIMO-NeRF obtains a good trade-off between speed and
quality with a reasonable training time. We then demonstrate that MIMO-NeRF is
compatible with and complementary to previous advancements in NeRFs by applying
it to two representative fast NeRFs, i.e., a NeRF with sample reduction
(DONeRF) and a NeRF with alternative representations (TensoRF).
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 06:33:05 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Kaneko",
"Takuhiro",
""
]
]
| new_dataset | 0.996278 |
2310.01824 | Jiaheng Hu | Emily Jin, Jiaheng Hu, Zhuoyi Huang, Ruohan Zhang, Jiajun Wu, Li
Fei-Fei, Roberto Mart\'in-Mart\'in | Mini-BEHAVIOR: A Procedurally Generated Benchmark for Long-horizon
Decision-Making in Embodied AI | null | null | null | null | cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Mini-BEHAVIOR, a novel benchmark for embodied AI that challenges
agents to use reasoning and decision-making skills to solve complex activities
that resemble everyday human challenges. The Mini-BEHAVIOR environment is a
fast, realistic Gridworld environment that offers the benefits of rapid
prototyping and ease of use while preserving a symbolic level of physical
realism and complexity found in complex embodied AI benchmarks. We introduce
key features such as procedural generation to enable the creation of countless task variations and to support open-ended learning. Mini-BEHAVIOR provides
implementations of various household tasks from the original BEHAVIOR
benchmark, along with starter code for data collection and reinforcement
learning agent training. In essence, Mini-BEHAVIOR offers a fast, open-ended
benchmark for evaluating decision-making and planning solutions in embodied AI.
It serves as a user-friendly entry point for research, simplifying the evaluation and development of solutions while advancing the field of embodied AI. Code is publicly
available at https://github.com/StanfordVL/mini_behavior.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 06:41:18 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Jin",
"Emily",
""
],
[
"Hu",
"Jiaheng",
""
],
[
"Huang",
"Zhuoyi",
""
],
[
"Zhang",
"Ruohan",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Martín-Martín",
"Roberto",
""
]
]
| new_dataset | 0.991064 |
2310.01835 | Paul Sumedrea Sumedrea | Dragos Georgian Corlatescu, Alexandru Dinu, Mihaela Gaman, Paul
Sumedrea | EMBERSim: A Large-Scale Databank for Boosting Similarity Search in
Malware Analysis | Accepted at the 37th Conference on Neural Information Processing
Systems (NeurIPS 2023) Track on Datasets and Benchmarks | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years there has been a shift from heuristics-based malware
detection towards machine learning, which proves to be more robust in the
current heavily adversarial threat landscape. While we acknowledge machine
learning to be better equipped to mine for patterns in the increasingly high
amounts of similar-looking files, we also note a remarkable scarcity of the
data available for similarity-targeted research. Moreover, we observe that the
focus in the few related works falls on quantifying similarity in malware,
often overlooking the clean data. This one-sided quantification is especially
dangerous in the context of detection bypass. We propose to address the
deficiencies in the space of similarity research on binary files, starting from
EMBER - one of the largest malware classification data sets. We enhance EMBER
with similarity information as well as malware class tags, to enable further
research in the similarity space. Our contribution is threefold: (1) we publish
EMBERSim, an augmented version of EMBER, that includes similarity-informed
tags; (2) we enrich EMBERSim with automatically determined malware class tags
using the open-source tool AVClass on VirusTotal data and (3) we describe and
share the implementation for our class scoring technique and leaf similarity
method.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 06:58:45 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Corlatescu",
"Dragos Georgian",
""
],
[
"Dinu",
"Alexandru",
""
],
[
"Gaman",
"Mihaela",
""
],
[
"Sumedrea",
"Paul",
""
]
]
| new_dataset | 0.994718 |
2310.01904 | Yoav Arad | Yoav Arad, Michael Werman | Beyond the Benchmark: Detecting Diverse Anomalies in Videos | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video Anomaly Detection (VAD) plays a crucial role in modern surveillance
systems, aiming to identify various anomalies in real-world situations.
However, current benchmark datasets predominantly emphasize simple,
single-frame anomalies such as novel object detection. This narrow focus
restricts the advancement of VAD models. In this research, we advocate for an
expansion of VAD investigations to encompass intricate anomalies that extend
beyond conventional benchmark boundaries. To facilitate this, we introduce two
datasets, HMDB-AD and HMDB-Violence, to challenge models with diverse
action-based anomalies. These datasets are derived from the HMDB51 action
recognition dataset. We further present Multi-Frame Anomaly Detection (MFAD), a
novel method built upon the AI-VAD framework. AI-VAD utilizes single-frame
features such as pose estimation and deep image encoding, and two-frame
features such as object velocity. It then applies a density estimation algorithm to compute anomaly scores. To address complex multi-frame anomalies, we add deep video encoding features that capture long-range temporal dependencies, and logistic regression to enhance the final score calculation. Experimental results confirm our assumptions, highlighting existing models' limitations with new anomaly types. MFAD excels in both simple and complex
anomaly detection scenarios.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 09:22:06 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Arad",
"Yoav",
""
],
[
"Werman",
"Michael",
""
]
]
| new_dataset | 0.997651 |
2310.01914 | Nick Brown | Gabriel Rodriguez-Canal, Nick Brown, Maurice Jamieson, Emilien Bauer,
Anton Lydike, Tobias Grosser | Stencil-HMLS: A multi-layered approach to the automatic optimisation of
stencil codes on FPGA | Author accepted version which appears in ACM Workshops of The
International Conference on High Performance Computing, Network, Storage, and
Analysis (SC-W 2023) | null | 10.1145/3624062.362454 | null | cs.DC cs.PF cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenges associated with effectively programming FPGAs have been a
major blocker in popularising reconfigurable architectures for HPC workloads.
However new compiler technologies, such as MLIR, are providing new capabilities
which potentially deliver the ability to extract domain specific information
and drive automatic structuring of codes for FPGAs.
In this paper we explore domain specific optimisations for stencils, a
fundamental access pattern in scientific computing, to obtain high performance
on FPGAs via automated code structuring. We propose Stencil-HMLS, a
multi-layered approach to automatic optimisation of stencil codes and introduce
the HLS dialect, which brings FPGA programming into the MLIR ecosystem. Using
the PSyclone Fortran DSL, we demonstrate an improvement of 14-100$\times$ with
respect to the next best performant state-of-the-art tool. Furthermore, our
approach is 14 to 92 times more energy efficient than the next most energy
efficient approach.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 09:43:22 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Rodriguez-Canal",
"Gabriel",
""
],
[
"Brown",
"Nick",
""
],
[
"Jamieson",
"Maurice",
""
],
[
"Bauer",
"Emilien",
""
],
[
"Lydike",
"Anton",
""
],
[
"Grosser",
"Tobias",
""
]
]
| new_dataset | 0.979847 |
2310.01931 | Ziqiang Zheng | Liang Haixin, Zheng Ziqiang, Ma Zeyu, Sai-Kit Yeung | MarineDet: Towards Open-Marine Object Detection | 8 pages, 5 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Marine object detection has gained prominence in marine research, driven by
the pressing need to unravel oceanic mysteries and enhance our understanding of
invaluable marine ecosystems. There is a profound requirement to efficiently
and accurately identify and localize diverse and unseen marine entities within
underwater imagery. Open-marine object detection (OMOD for short) requires detecting diverse and unseen marine objects, performing categorization and localization simultaneously. To achieve OMOD, we present
\textbf{MarineDet}. We formulate a joint visual-text semantic space through
pre-training and then perform marine-specific training to achieve
in-air-to-marine knowledge transfer. Considering there is no specific dataset
designed for OMOD, we construct a \textbf{MarineDet dataset} consisting of 821
marine-relative object categories to promote and measure OMOD performance. The
experimental results demonstrate the superior performance of MarineDet over
existing generalist and specialist object detection algorithms. To the best of
our knowledge, we are the first to present OMOD, which holds a more valuable
and practical setting for marine ecosystem monitoring and management. Our
research not only pushes the boundaries of marine understanding but also offers
a standard pipeline for OMOD.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 10:13:42 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Haixin",
"Liang",
""
],
[
"Ziqiang",
"Zheng",
""
],
[
"Zeyu",
"Ma",
""
],
[
"Yeung",
"Sai-Kit",
""
]
]
| new_dataset | 0.964696 |
2310.01941 | Catalin Dima | Eugene Asarin and Aldric Degorre and Catalin Dima and Bernardo Jacobo
Inclan | Bandwidth of Timed Automata: 3 Classes | null | null | 10.4230/LIPIcs.FSTTCS.2023 | null | cs.FL cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Timed languages contain sequences of discrete events ("letters'') separated
by real-valued delays, they can be recognized by timed automata, and represent
behaviors of various real-time systems. The notion of bandwidth of a timed
language defined in a previous paper characterizes the amount of information
per time unit, encoded in words of the language observed with some precision
{\epsilon}.
In this paper, we identify three classes of timed automata according to the
asymptotics of the bandwidth of their languages with respect to this precision
{\epsilon}: automata are either meager, with an O(1) bandwidth, normal, with a
{\Theta}(log (1/{\epsilon})) bandwidth, or obese, with {\Theta}(1/{\epsilon})
bandwidth. We define two structural criteria and prove that they partition
timed automata into these three classes of bandwidth, implying that there are
no intermediate asymptotic classes. The classification problem of a timed
automaton is PSPACE-complete.
Both criteria are formulated using morphisms from paths of the timed
automaton to some finite monoids extending Puri's orbit graphs; the proofs are
based on Simon's factorization forest theorem.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 10:37:59 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Asarin",
"Eugene",
""
],
[
"Degorre",
"Aldric",
""
],
[
"Dima",
"Catalin",
""
],
[
"Inclan",
"Bernardo Jacobo",
""
]
]
| new_dataset | 0.998245 |
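To make the three-way classification above easier to scan, the asymptotic regimes described in the abstract can be restated in standard notation (a restatement, not a new result; the symbol $BW$ is introduced here only for readability):
\[
\text{meager: } BW(\epsilon)=O(1), \qquad
\text{normal: } BW(\epsilon)=\Theta\!\left(\log\tfrac{1}{\epsilon}\right), \qquad
\text{obese: } BW(\epsilon)=\Theta\!\left(\tfrac{1}{\epsilon}\right),
\]
where $BW(\epsilon)$ denotes the bandwidth of the timed language observed with precision $\epsilon$.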
2310.01943 | Joseph Birkner M.Sc. | Joseph Birkner, Andreas Dolp, Negin Karimi, Nikita Basargin, Alona
Kharchenko and Rafael Hostettler | Ravestate: Distributed Composition of a Causal-Specificity-Guided
Interaction Policy | null | null | null | null | cs.RO cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | In human-robot interaction policy design, a rule-based method is efficient,
explainable, expressive and intuitive. In this paper, we present the
Signal-Rule-Slot framework, which refines prior work on rule-based symbol
system design and introduces a new, Bayesian notion of interaction rule utility
called Causal Pathway Self-information. We offer a rigorous theoretical
foundation as well as a rich open-source reference implementation Ravestate,
with which we conduct user studies in text-, speech-, and vision-based
scenarios. The experiments show robust contextual behaviour of our
probabilistically informed rule-based system, paving the way for more effective
human-machine interaction.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 10:38:53 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Birkner",
"Joseph",
""
],
[
"Dolp",
"Andreas",
""
],
[
"Karimi",
"Negin",
""
],
[
"Basargin",
"Nikita",
""
],
[
"Kharchenko",
"Alona",
""
],
[
"Hostettler",
"Rafael",
""
]
]
| new_dataset | 0.965049 |
2310.01946 | Ziqiang Zheng | Zheng Ziqiang, Xie Yaofeng, Liang Haixin, Yu Zhibin, Sai-Kit Yeung | CoralVOS: Dataset and Benchmark for Coral Video Segmentation | 8 pages, 9 figures, dense coral video segmentation dataset and
benchmark | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Coral reefs formulate the most valuable and productive marine ecosystems,
providing habitat for many marine species. Coral reef surveying and analysis
are currently confined to coral experts who invest substantial effort in
generating comprehensive and dependable reports (\emph{e.g.}, coral coverage,
population, spatial distribution, \textit{etc}), from the collected survey
data. However, performing dense coral analysis based on manual efforts is significantly time-consuming, so existing coral analysis algorithms compromise and opt for down-sampling, conducting only sparse point-based coral analysis within selected frames. Such down-sampling will \textbf{inevitably} introduce estimation bias or even lead to wrong results. To address this issue, we propose to perform \textbf{dense coral video
segmentation}, with no down-sampling involved. Through video object
segmentation, we could generate more \textit{reliable} and \textit{in-depth}
coral analysis than the existing coral reef analysis algorithms. To boost such
dense coral analysis, we propose a large-scale coral video segmentation
dataset: \textbf{CoralVOS} as demonstrated in Fig. 1. To the best of our
knowledge, our CoralVOS is the first dataset and benchmark supporting dense
coral video segmentation. We perform experiments on our CoralVOS dataset,
including 6 recent state-of-the-art video object segmentation (VOS) algorithms.
We fine-tuned these VOS algorithms on our CoralVOS dataset and achieved
observable performance improvement. The results show that there is still great potential for further improving segmentation accuracy. The dataset and
trained models will be released with the acceptance of this work to foster the
coral reef research community.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 10:45:37 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Ziqiang",
"Zheng",
""
],
[
"Yaofeng",
"Xie",
""
],
[
"Haixin",
"Liang",
""
],
[
"Zhibin",
"Yu",
""
],
[
"Yeung",
"Sai-Kit",
""
]
]
| new_dataset | 0.999725 |
2310.01957 | Long Chen | Long Chen, Oleg Sinavski, Jan H\"unermann, Alice Karnsund, Andrew
James Willmott, Danny Birch, Daniel Maund, Jamie Shotton | Driving with LLMs: Fusing Object-Level Vector Modality for Explainable
Autonomous Driving | null | null | null | null | cs.RO cs.AI cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have shown promise in the autonomous driving
sector, particularly in generalization and interpretability. We introduce a
unique object-level multimodal LLM architecture that merges vectorized numeric
modalities with a pre-trained LLM to improve context understanding in driving
situations. We also present a new dataset of 160k QA pairs derived from 10k
driving scenarios, paired with high-quality control commands collected with an RL agent and question-answer pairs generated by a teacher LLM (GPT-3.5). A distinct
pretraining strategy is devised to align numeric vector modalities with static
LLM representations using vector captioning language data. We also introduce an
evaluation metric for Driving QA and demonstrate our LLM-driver's proficiency
in interpreting driving scenarios, answering questions, and decision-making.
Our findings highlight the potential of LLM-based driving action generation in
comparison to traditional behavioral cloning. We make our benchmark, datasets,
and model available for further exploration.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 11:05:14 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Chen",
"Long",
""
],
[
"Sinavski",
"Oleg",
""
],
[
"Hünermann",
"Jan",
""
],
[
"Karnsund",
"Alice",
""
],
[
"Willmott",
"Andrew James",
""
],
[
"Birch",
"Danny",
""
],
[
"Maund",
"Daniel",
""
],
[
"Shotton",
"Jamie",
""
]
]
| new_dataset | 0.999586 |
2310.01967 | Muhammad Farhan Ahmed | Matteo Maragliano, Muhammad Farhan Ahmed, Carmine Tommaso Recchiuto,
Antonio Sgorbissa, Vincent Fremont | Collaborative Active SLAM: Synchronous and Asynchronous Coordination
Among Agents | 7 pages, 8 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | In the realm of autonomous robotics, a critical challenge lies in developing
robust solutions for Active Collaborative SLAM, wherein multiple robots must
collaboratively explore and map an unknown environment while intelligently
coordinating their movements and sensor data acquisitions. To this aim, we
present two approaches for coordinating a system consisting of multiple robots
to perform Active Collaborative SLAM (AC-SLAM) for environmental exploration.
Our two coordination approaches, synchronous and asynchronous implement a
methodology to prioritize robot goal assignments by the central server. We also
present a method to efficiently spread the robots for maximum exploration while
keeping SLAM uncertainty low. Both coordination approaches were evaluated
through simulation on publicly available datasets, obtaining promising results.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 11:21:19 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Maragliano",
"Matteo",
""
],
[
"Ahmed",
"Muhammad Farhan",
""
],
[
"Recchiuto",
"Carmine Tommaso",
""
],
[
"Sgorbissa",
"Antonio",
""
],
[
"Fremont",
"Vincent",
""
]
]
| new_dataset | 0.958444 |
2310.01968 | Aditi Agarwal | Aditi Agarwal, Anupam Saxena and Prabhat Kumar | PyHexTop: a compact Python code for topology optimization using
hexagonal elements | Accepted in NCMDAO 2023 conference | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Python serves as an open-source and cost-effective alternative to the MATLAB
programming language. This paper introduces a concise topology optimization
Python code, named ``PyHexTop," primarily intended for educational purposes.
The code employs hexagonal elements to parameterize design domains, as such elements naturally provide checkerboard-free optimized designs. PyHexTop is developed based on the ``HoneyTop90" MATLAB code~\cite{kumar2023honeytop90} and uses the NumPy and SciPy libraries. The code is straightforward and easily comprehensible, providing a helpful tool for people new to the topology optimization field to learn and explore. PyHexTop is specifically tailored to address
compliance minimization with specified volume constraints. The paper provides a
detailed explanation of the code for solving the MBB design and extensions to
solve problems with varying boundary and force conditions. The code is publicly
shared at \url{https://github.com/PrabhatIn/PyHexTop}.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 11:21:34 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Agarwal",
"Aditi",
""
],
[
"Saxena",
"Anupam",
""
],
[
"Kumar",
"Prabhat",
""
]
]
| new_dataset | 0.999825 |
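For readers new to the area, the compliance-minimization problem with a volume constraint that PyHexTop targets is conventionally posed as follows (the standard density-based formulation, given here as background rather than quoted from the paper):
\[
\min_{\boldsymbol{\rho}} \; c(\boldsymbol{\rho}) = \mathbf{u}^{\top}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{u}
\quad \text{s.t.} \quad \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u}=\mathbf{f}, \qquad
\sum_{e}\rho_{e}\,v_{e} \le V^{*}, \qquad 0 < \rho_{\min} \le \rho_{e} \le 1,
\]
where $\rho_{e}$ are element densities (here attached to hexagonal elements), $\mathbf{K}$ is the global stiffness matrix, $\mathbf{u}$ and $\mathbf{f}$ are the displacement and load vectors, $v_{e}$ are element volumes, and $V^{*}$ is the allowed material volume.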
2310.01986 | Weiliang Xu | Weiliang Xu, Guoyuan Zhou, Yuanzhi Zhou, Zhibin Zou, Jiali Wang,
Wenfeng Wu, Xinming Li | A Vision-Based Tactile Sensing System for Multimodal Contact Information
Perception via Neural Network | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In general, robotic dexterous hands are equipped with various sensors for
acquiring multimodal contact information such as position, force, and pose of
the grasped object. This multi-sensor-based design adds complexity to the
robotic system. In contrast, vision-based tactile sensors employ specialized
optical designs to enable the extraction of tactile information across
different modalities within a single system. Nonetheless, the decoupling design
for different modalities in common systems is often independent. Therefore, as
the dimensionality of tactile modalities increases, it poses more complex
challenges in data processing and decoupling, thereby limiting its application
to some extent. Here, we developed a multimodal sensing system based on a
vision-based tactile sensor, which utilizes visual representations of tactile
information to perceive the multimodal contact information of the grasped
object. The visual representations contain extensive content that can be
decoupled by a deep neural network to obtain multimodal contact information
such as classification, position, posture, and force of the grasped object. The
results show that the tactile sensing system can perceive multimodal tactile
information using only one single sensor and without different data decoupling
designs for different modal tactile information, which reduces the complexity
of the tactile system and demonstrates the potential for multimodal tactile
integration in various fields such as biomedicine, biology, and robotics.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 11:58:14 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Xu",
"Weiliang",
""
],
[
"Zhou",
"Guoyuan",
""
],
[
"Zhou",
"Yuanzhi",
""
],
[
"Zou",
"Zhibin",
""
],
[
"Wang",
"Jiali",
""
],
[
"Wu",
"Wenfeng",
""
],
[
"Li",
"Xinming",
""
]
]
| new_dataset | 0.998764 |
2310.02003 | Samuel Holt | Samuel Holt, Max Ruiz Luyten, Mihaela van der Schaar | L2MAC: Large Language Model Automatic Computer for Unbounded Code
Generation | Copyright 2023 by the author(s) | null | null | null | cs.SE cs.AI cs.LG cs.PL | http://creativecommons.org/licenses/by/4.0/ | Transformer-based large language models (LLMs) are constrained by the fixed
context window of the underlying transformer architecture, hindering their
ability to produce long and logically consistent code. Memory-augmented LLMs
are a promising solution, but current approaches cannot handle long code
generation tasks since they (1) only focus on reading memory and reduce its
evolution to the concatenation of new memories or (2) use very specialized
memories that cannot adapt to other domains. This paper presents L2MAC, the
first practical LLM-based stored-program automatic computer for long and
consistent code generation. Its memory has two components: the instruction
registry, which is populated with a prompt program to solve the user-given
task, and a file store, which will contain the final and intermediate outputs.
Each instruction is executed by a separate LLM instance, whose context is
managed by a control unit capable of precise memory reading and writing to
ensure effective interaction with the file store. These components enable L2MAC
to generate virtually unbounded code structures, bypassing the constraints of
the finite context window while producing code that fulfills complex
user-specified requirements. We empirically show that L2MAC succeeds in
generating large code bases for system design tasks where other coding methods
fall short in implementing user requirements and provide insight into the
reasons for this performance gap.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 16:55:19 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Holt",
"Samuel",
""
],
[
"Luyten",
"Max Ruiz",
""
],
[
"van der Schaar",
"Mihaela",
""
]
]
| new_dataset | 0.995007 |
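As a purely hypothetical sketch of the stored-program idea described in the abstract above (an instruction registry driving successive LLM calls whose outputs accumulate in a file store), the control loop could be mocked in Python as below; fake_llm, the file names, and the instructions are invented stand-ins, not the L2MAC implementation or API.

def fake_llm(instruction, context):
    # Stand-in for an LLM call; returns a placeholder artifact for the instruction.
    return "# output for: " + instruction + "\n"

def run_program(instructions):
    file_store = {}                                   # final and intermediate outputs
    for i, instruction in enumerate(instructions):    # instruction registry, executed in order
        context = "\n".join(file_store.values())      # control unit reads the memory so far
        file_store["step_%d.py" % i] = fake_llm(instruction, context)  # and writes it back
    return file_store

print(run_program(["design data model", "implement API", "write tests"]))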
2310.02045 | Michael Rogenmoser | Michael Rogenmoser, Luca Benini | Trikarenos: A Fault-Tolerant RISC-V-based Microcontroller for CubeSats
in 28nm | 4 pages, 4 figures, accepted by IEEE International Conference on
Electronics Circuits and Systems (ICECS) 2023 | null | null | null | cs.AR | http://creativecommons.org/licenses/by-sa/4.0/ | One of the key challenges when operating microcontrollers in harsh
environments such as space is radiation-induced Single Event Upsets (SEUs),
which can lead to errors in computation. Common countermeasures rely on
proprietary radiation-hardened technologies, low density technologies, or
extensive replication, leading to high costs and low performance and
efficiency. To combat this, we present Trikarenos, a fault-tolerant 32-bit
RISC-V microcontroller SoC in an advanced TSMC 28nm technology. Trikarenos
alleviates the replication cost by employing a configurable triple-core
lockstep configuration, allowing three Ibex cores to execute applications
reliably, operating on ECC-protected memory. If reliability is not needed for a
given application, the cores can operate independently in parallel for higher
performance and efficiency. Trikarenos consumes 15.7mW at 250MHz executing a
fault-tolerant matrix-matrix multiplication, a 21.5x efficiency gain over
state-of-the-art, and performance is increased by 2.96x when reliability is not
needed for processing, with a 2.36x increase in energy efficiency.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 13:38:50 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Rogenmoser",
"Michael",
""
],
[
"Benini",
"Luca",
""
]
]
| new_dataset | 0.998597 |
2310.02118 | Diogo M. Silva | Rafael Ferreira, Diogo Tavares, Diogo Silva, Rodrigo Val\'erio, Jo\~ao
Bordalo, In\^es Sim\~oes, Vasco Ramos, David Semedo, Jo\~ao Magalh\~aes | TWIZ: The Wizard of Multimodal Conversational-Stimulus | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report, we describe the vision, challenges, and scientific
contributions of the Task Wizard team, TWIZ, in the Alexa Prize TaskBot
Challenge 2022. Our vision is to build the TWIZ bot as a helpful, multimodal,
knowledgeable, and engaging assistant that can guide users towards the
successful completion of complex manual tasks. To achieve this, we focus our
efforts on three main research questions: (1) Humanly-Shaped Conversations, by
providing information in a knowledgeable way; (2) Multimodal Stimulus, making
use of various modalities including voice, images, and videos; and (3)
Zero-shot Conversational Flows, to improve the robustness of the interaction to
unseen scenarios. TWIZ is an assistant capable of supporting a wide range of
tasks, with several innovative features such as creative cooking, video
navigation through voice, and the robust TWIZ-LLM, a Large Language Model
trained for dialoguing about complex manual tasks. Given ratings and feedback
provided by users, we observed that TWIZ bot is an effective and robust system,
capable of guiding users through tasks while providing several multimodal
stimuli.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 14:59:35 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Ferreira",
"Rafael",
""
],
[
"Tavares",
"Diogo",
""
],
[
"Silva",
"Diogo",
""
],
[
"Valério",
"Rodrigo",
""
],
[
"Bordalo",
"João",
""
],
[
"Simões",
"Inês",
""
],
[
"Ramos",
"Vasco",
""
],
[
"Semedo",
"David",
""
],
[
"Magalhães",
"João",
""
]
]
| new_dataset | 0.999387 |
2310.02143 | Ngoc Luyen Le | Ngoc Luyen Le, Jinfeng Zhong, Elsa Negre, Marie-H\'el\`ene Abel | CORec-Cri: How collaborative and social technologies can help to
contextualize crises? | null | null | null | null | cs.CY cs.IR | http://creativecommons.org/licenses/by/4.0/ | Crisis situations can present complex and multifaceted challenges, often
requiring the involvement of multiple organizations and stakeholders with
varying areas of expertise, responsibilities, and resources. Acquiring accurate
and timely information about impacted areas is crucial to effectively respond
to these crises. In this paper, we investigate how collaborative and social
technologies help to contextualize crises, including identifying impacted areas
and real-time needs. To this end, we define CORec-Cri (Contextualized
Ontology-based Recommender system for crisis management) based on existing
work. Our motivation for this approach is two-fold: first, effective
collaboration among stakeholders is essential for efficient and coordinated
crisis response; second, social computing facilitates interaction, information
flow, and collaboration among stakeholders. We detail the key components of our
system design, highlighting its potential to support decision-making, resource
allocation, and communication among stakeholders. Finally, we provide examples
of how our system can be applied to contextualize crises to improve crisis
management.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 15:29:37 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Le",
"Ngoc Luyen",
""
],
[
"Zhong",
"Jinfeng",
""
],
[
"Negre",
"Elsa",
""
],
[
"Abel",
"Marie-Hélène",
""
]
]
| new_dataset | 0.99627 |
2310.02162 | Derek Cheng | Derek Cheng, Fernando Cladera Ojeda, Ankit Prabhu, Xu Liu, Alan Zhu,
Patrick Corey Green, Reza Ehsani, Pratik Chaudhari, Vijay Kumar | TreeScope: An Agricultural Robotics Dataset for LiDAR-Based Mapping of
Trees in Forests and Orchards | Submitted to 2024 IEEE International Conference on Robotics and
Automation (ICRA 2024) for review | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Data collection for forestry, timber, and agriculture currently relies on
manual techniques which are labor-intensive and time-consuming. We seek to
demonstrate that robotics offers improvements over these techniques and
accelerate agricultural research, beginning with semantic segmentation and
diameter estimation of trees in forests and orchards. We present TreeScope
v1.0, the first robotics dataset for precision agriculture and forestry
addressing the counting and mapping of trees in forestry and orchards.
TreeScope provides LiDAR data from agricultural environments collected with
robotics platforms, such as UAV and mobile robot platforms carried by vehicles
and human operators. In the first release of this dataset, we provide
ground-truth data with over 1,800 manually annotated semantic labels for tree
stems and field-measured tree diameters. We share benchmark scripts for these
tasks that researchers may use to evaluate the accuracy of their algorithms.
Finally, we run our open-source diameter estimation and off-the-shelf semantic
segmentation algorithms and share our baseline results.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 15:49:03 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Cheng",
"Derek",
""
],
[
"Ojeda",
"Fernando Cladera",
""
],
[
"Prabhu",
"Ankit",
""
],
[
"Liu",
"Xu",
""
],
[
"Zhu",
"Alan",
""
],
[
"Green",
"Patrick Corey",
""
],
[
"Ehsani",
"Reza",
""
],
[
"Chaudhari",
"Pratik",
""
],
[
"Kumar",
"Vijay",
""
]
]
| new_dataset | 0.999867 |
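To give a concrete flavour of the diameter-estimation task mentioned above, one common generic approach is to fit a circle to a horizontal slice of stem points and report its diameter; the least-squares (Kasa) fit below is only an illustrative baseline sketch, not the TreeScope algorithm.

import numpy as np

def fit_circle_diameter(xy):
    # Algebraic (Kasa) least-squares circle fit on 2D points; returns the diameter.
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return 2.0 * radius

# Synthetic check: noisy points sampled on a 0.30 m diameter circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
pts += rng.normal(0.0, 0.002, pts.shape)
print(fit_circle_diameter(pts))  # approximately 0.30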
2310.02170 | Zijun Liu | Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, Diyi Yang | Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with
Agent Team Optimization | Preprint, under review. 21 pages | null | null | null | cs.CL cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language model (LLM) agents have been shown effective on a wide range
of tasks, and by ensembling multiple LLM agents, their performances could be
further improved. Existing approaches employ a fixed set of agents to interact
with each other in a static architecture, which limits their generalizability
to various tasks and requires strong human prior in designing these agents. In
this work, we propose to construct a strategic team of agents communicating in
a dynamic interaction architecture based on the task query. Specifically, we
build a framework named Dynamic LLM-Agent Network ($\textbf{DyLAN}$) for
LLM-agent collaboration on complicated tasks like reasoning and code
generation. DyLAN enables agents to interact for multiple rounds in a dynamic
architecture with inference-time agent selection and an early-stopping
mechanism to improve performance and efficiency. We further design an automatic
agent team optimization algorithm based on an unsupervised metric termed
$\textit{Agent Importance Score}$, enabling the selection of best agents based
on the contribution each agent makes. Empirically, we demonstrate that DyLAN
performs well in both reasoning and code generation tasks with reasonable
computational cost. DyLAN achieves 13.0% and 13.3% improvement on MATH and
HumanEval, respectively, compared to a single execution on GPT-35-turbo. On
specific subjects of MMLU, agent team optimization in DyLAN increases accuracy
by up to 25.0%.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 16:05:48 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Liu",
"Zijun",
""
],
[
"Zhang",
"Yanzhe",
""
],
[
"Li",
"Peng",
""
],
[
"Liu",
"Yang",
""
],
[
"Yang",
"Diyi",
""
]
]
| new_dataset | 0.997891 |
2310.02192 | Guillaume Cabanac | Lonni Besan\c{c}on and Guillaume Cabanac and Cyril Labb\'e and
Alexander Magazinov | Sneaked references: Cooked reference metadata inflate citation counts | null | null | null | null | cs.DL | http://creativecommons.org/licenses/by/4.0/ | We report evidence of an undocumented method to manipulate citation counts
involving 'sneaked' references. Sneaked references are registered as metadata
for scientific articles in which they do not appear. This manipulation exploits
trusted relationships between various actors: publishers, the Crossref metadata
registration agency, digital libraries, and bibliometric platforms. By
collecting metadata from various sources, we show that extra undue references
are actually sneaked in at Digital Object Identifier (DOI) registration time,
resulting in artificially inflated citation counts. As a case study, focusing
on three journals from a given publisher, we identified at least 9% sneaked
references (5,978/65,836) mainly benefiting two authors. Despite not existing
in the articles, these sneaked references exist in metadata registries and
inappropriately propagate to bibliometric dashboards. Furthermore, we
discovered 'lost' references: the studied bibliometric platform failed to index
at least 56% (36,939/65,836) of the references listed in the HTML version of
the publications. The extent of the sneaked and lost references in the global
literature remains unknown and requires further investigations. Bibliometric
platforms producing citation counts should identify, quantify, and correct
these flaws to provide accurate data to their patrons and prevent further
citation gaming.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 16:37:36 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Besançon",
"Lonni",
""
],
[
"Cabanac",
"Guillaume",
""
],
[
"Labbé",
"Cyril",
""
],
[
"Magazinov",
"Alexander",
""
]
]
| new_dataset | 0.981475 |
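The two percentages quoted above follow directly from the raw counts given in the abstract:
\[
\frac{5{,}978}{65{,}836} \approx 0.091 \;(\text{about } 9\%), \qquad
\frac{36{,}939}{65{,}836} \approx 0.561 \;(\text{about } 56\%).
\]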
2310.02240 | Aminata Diouf Mrs | Aminata Diouf, Bruno Belzile, Maarouf Saad, David St-Onge | Spherical Rolling Robots Design, Modeling, and Control: A Systematic
Literature Review | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Spherical robots have garnered increasing interest for their applications in
exploration, tunnel inspection, and extraterrestrial missions. Diverse designs
have emerged, including barycentric configurations, pendulum-based mechanisms,
etc. In addition, a wide spectrum of control strategies has been proposed,
ranging from traditional PID approaches to cutting-edge neural networks. Our
systematic review aims to comprehensively identify and categorize locomotion
systems and control schemes employed by spherical robots, spanning the years
1996 to 2023. A meticulous search across five databases yielded a dataset of
3189 records. As a result of our exhaustive analysis, we identified a
collection of novel designs and control strategies. Leveraging the insights
garnered, we provide valuable recommendations for optimizing the design and
control aspects of spherical robots, supporting both novel design endeavors and
the advancement of field deployments. Furthermore, we illuminate key research
directions that hold the potential to unlock the full capabilities of spherical
robots.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 17:49:21 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Diouf",
"Aminata",
""
],
[
"Belzile",
"Bruno",
""
],
[
"Saad",
"Maarouf",
""
],
[
"St-Onge",
"David",
""
]
]
| new_dataset | 0.9966 |
2310.02251 | Vikrant Dewangan | Vikrant Dewangan, Tushar Choudhary, Shivam Chandhok, Shubham
Priyadarshan, Anushka Jain, Arun K. Singh, Siddharth Srivastava, Krishna
Murthy Jatavallabhula, K. Madhava Krishna | Talk2BEV: Language-enhanced Bird's-eye View Maps for Autonomous Driving | Submitted to ICRA 2024. Project page at
https://llmbev.github.io/talk2bev/ | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Talk2BEV is a large vision-language model (LVLM) interface for bird's-eye
view (BEV) maps in autonomous driving contexts. While existing perception
systems for autonomous driving scenarios have largely focused on a pre-defined
(closed) set of object categories and driving scenarios, Talk2BEV blends recent
advances in general-purpose language and vision models with BEV-structured map
representations, eliminating the need for task-specific models. This enables a
single system to cater to a variety of autonomous driving tasks encompassing
visual and spatial reasoning, predicting the intents of traffic actors, and
decision-making based on visual cues. We extensively evaluate Talk2BEV on a
large number of scene understanding tasks that rely on both the ability to
interpret free-form natural language queries, and in grounding these queries to
the visual context embedded into the language-enhanced BEV map. To enable
further research in LVLMs for autonomous driving scenarios, we develop and
release Talk2BEV-Bench, a benchmark encompassing 1000 human-annotated BEV
scenarios, with more than 20,000 questions and ground-truth responses from the
NuScenes dataset.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 17:53:51 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Dewangan",
"Vikrant",
""
],
[
"Choudhary",
"Tushar",
""
],
[
"Chandhok",
"Shivam",
""
],
[
"Priyadarshan",
"Shubham",
""
],
[
"Jain",
"Anushka",
""
],
[
"Singh",
"Arun K.",
""
],
[
"Srivastava",
"Siddharth",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Krishna",
"K. Madhava",
""
]
]
| new_dataset | 0.99834 |
2310.02255 | Pan Lu | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh
Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | MathVista: Evaluating Mathematical Reasoning of Foundation Models in
Visual Contexts | 51 pages, 56 figures. Work in progress | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Although Large Language Models (LLMs) and Large Multimodal Models (LMMs)
exhibit impressive skills in various domains, their ability for mathematical
reasoning within visual contexts has not been formally examined. Equipping LLMs
and LMMs with this capability is vital for general-purpose AI assistants and
showcases promising potential in education, data analysis, and scientific
discovery. To bridge this gap, we present MathVista, a benchmark designed to
amalgamate challenges from diverse mathematical and visual tasks. We first
taxonomize the key task types, reasoning skills, and visual contexts from the
literature to guide our selection from 28 existing math-focused and visual
question answering datasets. Then, we construct three new datasets, IQTest,
FunctionQA, and PaperQA, to accommodate for missing types of visual contexts.
The problems featured often require deep visual understanding beyond OCR or
image captioning, and compositional reasoning with rich domain-specific tools,
thus posing a notable challenge to existing models. We conduct a comprehensive
evaluation of 11 prominent open-source and proprietary foundation models (LLMs,
LLMs augmented with tools, and LMMs), and early experiments with GPT-4V. The
best-performing model, Multimodal Bard, achieves only 58% of human performance
(34.8% vs 60.3%), indicating ample room for further improvement. Given this
significant gap, MathVista fuels future research in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. Preliminary tests show that MathVista also
presents challenges to GPT-4V, underscoring the benchmark's importance. The
project is available at https://mathvista.github.io/.
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 17:57:24 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Lu",
"Pan",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Xia",
"Tony",
""
],
[
"Liu",
"Jiacheng",
""
],
[
"Li",
"Chunyuan",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Cheng",
"Hao",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Galley",
"Michel",
""
],
[
"Gao",
"Jianfeng",
""
]
]
| new_dataset | 0.999563 |
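The "58% of human performance" figure above is simply the ratio of the two accuracies reported in the abstract:
\[
\frac{34.8\%}{60.3\%} \approx 0.577 \approx 58\%.
\]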
2310.02260 | Jean Lahoud | Yahia Dalbah, Jean Lahoud, Hisham Cholakkal | TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View
Radar Semantic Segmentation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Scene understanding plays an essential role in enabling autonomous driving
and maintaining high standards of performance and safety. To address this task,
cameras and laser scanners (LiDARs) have been the most commonly used sensors,
with radars being less popular. Despite that, radars remain low-cost,
information-dense, and fast-sensing techniques that are resistant to adverse
weather conditions. While multiple works have been previously presented for
radar-based scene semantic segmentation, the nature of the radar data still
poses a challenge due to the inherent noise and sparsity, as well as the
disproportionate foreground and background. In this work, we propose a novel
approach to the semantic segmentation of radar scenes using a multi-input
fusion of radar data through a novel architecture and loss functions that are
tailored to tackle the drawbacks of radar perception. Our novel architecture
includes an efficient attention block that adaptively captures important
feature information. Our method, TransRadar, outperforms state-of-the-art
methods on the CARRADA and RADIal datasets while having smaller model sizes.
https://github.com/YahiDar/TransRadar
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 17:59:05 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Dalbah",
"Yahia",
""
],
[
"Lahoud",
"Jean",
""
],
[
"Cholakkal",
"Hisham",
""
]
]
| new_dataset | 0.977684 |
2310.02262 | Mingyu Ding | Tong Zhao, Chenfeng Xu, Mingyu Ding, Masayoshi Tomizuka, Wei Zhan,
Yintao Wei | RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and
Comfortable Autonomous Driving | null | null | null | null | cs.CV cs.GR cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the growing demands for safety and comfort in
intelligent robot systems, particularly autonomous vehicles, where road
conditions play a pivotal role in overall driving performance. For example,
reconstructing road surfaces helps to enhance the analysis and prediction of
vehicle responses for motion planning and control systems. We introduce the
Road Surface Reconstruction Dataset (RSRD), a real-world, high-resolution, and
high-precision dataset collected with a specialized platform in diverse driving
conditions. It covers common road types containing approximately 16,000 pairs
of stereo images, original point clouds, and ground-truth depth/disparity maps,
with accurate post-processing pipelines to ensure its quality. Based on RSRD,
we further build a comprehensive benchmark for recovering road profiles through
depth estimation and stereo matching. Preliminary evaluations with various
state-of-the-art methods reveal the effectiveness of our dataset and the
challenge of the task, underscoring substantial opportunities of RSRD as a
valuable resource for advancing techniques, e.g., multi-view stereo towards
safe autonomous driving. The dataset and demo videos are available at
https://thu-rsxd.com/rsrd/
| [
{
"version": "v1",
"created": "Tue, 3 Oct 2023 17:59:32 GMT"
}
]
| 2023-10-04T00:00:00 | [
[
"Zhao",
"Tong",
""
],
[
"Xu",
"Chenfeng",
""
],
[
"Ding",
"Mingyu",
""
],
[
"Tomizuka",
"Masayoshi",
""
],
[
"Zhan",
"Wei",
""
],
[
"Wei",
"Yintao",
""
]
]
| new_dataset | 0.999901 |
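As general background for the paired depth/disparity ground truth mentioned above (a textbook relation, not a formula taken from the paper), rectified stereo relates the two quantities by
\[
Z = \frac{f\,B}{d},
\]
where $Z$ is depth, $f$ the focal length, $B$ the stereo baseline, and $d$ the disparity.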
2004.08672 | Shiqi Zhang | Shiqi Zhang, Piyush Khandelwal, Peter Stone | iCORPP: Interleaved Commonsense Reasoning and Probabilistic Planning on
Robots | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot sequential decision-making in the real world is a challenge because it
requires the robots to simultaneously reason about the current world state and
dynamics, while planning actions to accomplish complex tasks. On the one hand,
declarative languages and reasoning algorithms well support representing and
reasoning with commonsense knowledge. But these algorithms are not good at
planning actions toward maximizing cumulative reward over a long, unspecified
horizon. On the other hand, probabilistic planning frameworks, such as Markov
decision processes (MDPs) and partially observable MDPs (POMDPs), well support
planning to achieve long-term goals under uncertainty. But they are
ill-equipped to represent or reason about knowledge that is not directly
related to actions.
In this article, we present a novel algorithm, called iCORPP, to
simultaneously estimate the current world state, reason about world dynamics,
and construct task-oriented controllers. In this process, robot decision-making
problems are decomposed into two interdependent (smaller) subproblems that
focus on reasoning to "understand the world" and planning to "achieve the goal"
respectively. Contextual knowledge is represented in the reasoning component,
which makes the planning component epistemic and enables active information
gathering. The developed algorithm has been implemented and evaluated both in
simulation and on real robots using everyday service tasks, such as indoor
navigation, dialog management, and object delivery. Results show significant
improvements in scalability, efficiency, and adaptiveness, compared to
competitive baselines including handcrafted action policies.
| [
{
"version": "v1",
"created": "Sat, 18 Apr 2020 17:46:59 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 00:56:27 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhang",
"Shiqi",
""
],
[
"Khandelwal",
"Piyush",
""
],
[
"Stone",
"Peter",
""
]
]
| new_dataset | 0.996347 |
2105.14362 | Alberto Garcia-Robledo Ph.D. | Alberto Garcia-Robledo and Mahboobeh Zangiabady | Dash Sylvereye: A WebGL-powered Library for Dashboard-driven
Visualization of Large Street Networks | Re-submitted to IEEE Access on Aug. 11, 2023. The interpretation of
the results in Section V has been corrected, as a more in-depth analysis
unveiled that the prior results are attributed to the software (CPU)
acceleration capabilities of Dash Sylvereye. Additionally, the manuscript now
features a performance comparison with Kepler.gl and city-roads | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | State-of-the-art open network visualization tools like Gephi, KeyLines, and
Cytoscape are not suitable for studying street networks with thousands of roads
since they do not support simultaneously polylines for edges, navigable maps,
GPU-accelerated rendering, interactivity, and the means for visualizing
multivariate data. To fill this gap, the present paper presents Dash Sylvereye:
a new Python library to produce interactive visualizations of primal street
networks on top of tiled web maps. Thanks to its integration with the Dash
framework, Dash Sylvereye can be used to develop web dashboards around temporal
and multivariate street data by coordinating the various elements of a Dash
Sylvereye visualization with other plotting and UI components provided by the
Dash framework. Additionally, Dash Sylvereye provides convenient functions to
easily import OpenStreetMap street topologies obtained with the OSMnx library.
Moreover, Dash Sylvereye uses WebGL for GPU-accelerated rendering when
redrawing the road network. We conduct experiments to assess the performance of
Dash Sylvereye on a commodity computer when exploiting software acceleration in
terms of frames per second, CPU time, and frame duration. We show that Dash
Sylvereye can offer fast panning speeds, close to 60 FPS, and CPU times below
20 ms, for street networks with thousands of edges, and above 24 FPS, and CPU
times below 40 ms, for networks with dozens of thousands of edges.
Additionally, we conduct a performance comparison against two state-of-the-art
street visualization tools. We found Dash Sylvereye to be competitive when
compared to the state-of-the-art visualization libraries Kepler.gl and
city-roads. Finally, we describe a web dashboard application that exploits Dash
Sylvereye for the analysis of a SUMO vehicle traffic simulation.
| [
{
"version": "v1",
"created": "Sat, 29 May 2021 19:39:18 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 12:31:21 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Garcia-Robledo",
"Alberto",
""
],
[
"Zangiabady",
"Mahboobeh",
""
]
]
| new_dataset | 0.999418 |
2112.02240 | Congying Xu | Congying Xu, Bihuan Chen, Chenhao Lu, Kaifeng Huang, Xin Peng, Yang
Liu | Tracking Patches for Open Source Software Vulnerabilities | Accepted to the 30th ACM Joint European Software Engineering
Conference and Symposium on the Foundations of Software Engineering
(ESEC/FSE) | null | null | null | cs.SE cs.CR | http://creativecommons.org/licenses/by/4.0/ | Open source software (OSS) vulnerabilities threaten the security of software
systems that use OSS. Vulnerability databases provide valuable information
(e.g., vulnerable version and patch) to mitigate OSS vulnerabilities. There
arises a growing concern about the information quality of vulnerability
databases. However, it is unclear what the quality of patches in existing
vulnerability databases is; and existing manual or heuristic-based approaches
for patch tracking are either too expensive or too specific to apply to all OSS
vulnerabilities.
| [
{
"version": "v1",
"created": "Sat, 4 Dec 2021 04:39:24 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 13:13:27 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Xu",
"Congying",
""
],
[
"Chen",
"Bihuan",
""
],
[
"Lu",
"Chenhao",
""
],
[
"Huang",
"Kaifeng",
""
],
[
"Peng",
"Xin",
""
],
[
"Liu",
"Yang",
""
]
]
| new_dataset | 0.993211 |
2206.14560 | Giuseppe D'Alconzo | Giuseppe D'Alconzo | A note on a Code-Based Signature Scheme | 8 pages | null | 10.1142/S0129054123500132 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we exploit a serious security flaw in a code-based signature
scheme from a 2019 work by Liu, Yang, Han and Wang. They adapt the McEliece
cryptosystem to obtain a new scheme and, on top of this, they design an
efficient digital signature. We show that the new encryption scheme based on
McEliece, even if it has longer public keys, is not more secure than the
standard one. Moreover, the choice of parameters for the signature leads to a
significant performance improvement, but it introduces a vulnerability in the
protocol.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2022 12:13:10 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"D'Alconzo",
"Giuseppe",
""
]
]
| new_dataset | 0.959954 |
2206.14867 | Zechen Xiong | Zechen Xiong, Liqi Chen, Wenxiong Hao, Pengfei Yang, Xi Chen | Pre-stressed Bi-stable Hair Clip Mechanism for Faster Swimming Robots | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural instability is a hazard that leads to catastrophic failure and is
generally avoided through special designs. A trend, however, has emerged over
the past decades pointing to the harnessing of mechanisms with instability.
Inspired by the snapping of a hair clip, we are finessing the unique
characteristics of the lateral-torsional buckling of beams and the snap-through
of pre-buckled dome-like thin-wall structures in a new field: the in-plane
prestressed mechanism. Analyses reveal how the 2D-3D assembly of an in-plane
prestressed actuator (IPA) is achieved and how the post-buckling energy
landscape is pictured. Combining them with soft robotics, we show that the
inclusion of a bistable IPA can enormously enhance the performance of an
underwater fish robot as well as inspire a finger-like soft gripper.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2022 19:14:58 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 20:39:50 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Aug 2023 21:45:05 GMT"
},
{
"version": "v4",
"created": "Sun, 1 Oct 2023 10:18:12 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Xiong",
"Zechen",
""
],
[
"Chen",
"Liqi",
""
],
[
"Hao",
"Wenxiong",
""
],
[
"Yang",
"Pengfei",
""
],
[
"Chen",
"Xi",
""
]
]
| new_dataset | 0.98499 |
2207.11530 | Qijie Song | Tieming Chen, Qijie Song, Xuebo Qiu, Tiantian Zhu, Zhiling Zhu, Mingqi
Lv | Kellect: a Kernel-Based Efficient and Lossless Event Log Collector for
Windows Security | 20 pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, APT attacks have frequently happened, which are increasingly
complicated and more challenging for traditional security detection models.
System logs are vital for cyber security analysis, mainly because of their
ability to effectively reconstruct system behavior. However, existing log
collection tools built on ETW for Windows suffer from shortcomings, including
data loss, high overhead, and weak real-time performance. Therefore, it is still
very difficult to apply ETW-based Windows tools to analyze APT attack scenarios.
To address these challenges, this paper proposes an efficient and lossless
kernel log collector called Kellect, which has been open-sourced at
www.kellect.org. It incurs only 2%-3% extra CPU usage and about 40MB of memory
consumption by dynamically optimizing the number of caches and processing
threads through a multi-level cache solution. By replacing the TDH library with
a sliding pointer, Kellect enhances analysis performance, achieving at least 9
times the efficiency of existing tools. Furthermore, Kellect improves
compatibility with different OS versions. Additionally, Kellect enhances the
understanding of log semantics by maintaining event mappings and application
callstacks, which provide more comprehensive characteristics for security
behavior analysis.
Extensive experiments demonstrate Kellect's capability to achieve
non-destructive, real-time, and full collection of kernel log data generated
from events, with an overall efficiency 9 times greater than that of existing
tools. As a compelling illustration of how Kellect supports APT analysis, full
data logs have been collected as a dataset, Kellect4APT, generated by
implementing TTPs from the latest ATT&CK. To our knowledge, it is the first open
benchmark dataset representing ATT&CK technique-specific behaviors, and it is
expected to enable more extensive research on APT.
| [
{
"version": "v1",
"created": "Sat, 23 Jul 2022 14:38:43 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 19:03:41 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Chen",
"Tieming",
""
],
[
"Song",
"Qijie",
""
],
[
"Qiu",
"Xuebo",
""
],
[
"Zhu",
"Tiantian",
""
],
[
"Zhu",
"Zhiling",
""
],
[
"Lv",
"Mingqi",
""
]
]
| new_dataset | 0.992673 |
2209.13091 | Advaith Venkatramanan Sethuraman | Advaith Venkatramanan Sethuraman, Manikandasriram Srinivasan
Ramanagopal and Katherine A. Skinner | WaterNeRF: Neural Radiance Fields for Underwater Scenes | null | null | null | null | cs.RO cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Underwater imaging is a critical task performed by marine robots for a wide
range of applications including aquaculture, marine infrastructure inspection,
and environmental monitoring. However, water column effects, such as
attenuation and backscattering, drastically change the color and quality of
imagery captured underwater. Due to varying water conditions and
range-dependency of these effects, restoring underwater imagery is a
challenging problem. This impacts downstream perception tasks including depth
estimation and 3D reconstruction. In this paper, we advance state-of-the-art in
neural radiance fields (NeRFs) to enable physics-informed dense depth
estimation and color correction. Our proposed method, WaterNeRF, estimates
parameters of a physics-based model for underwater image formation, leading to
a hybrid data-driven and model-based solution. After determining the scene
structure and radiance field, we can produce novel views of degraded as well as
corrected underwater images, along with dense depth of the scene. We evaluate
the proposed method qualitatively and quantitatively on a real underwater
dataset.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2022 00:53:26 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 18:12:18 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Sethuraman",
"Advaith Venkatramanan",
""
],
[
"Ramanagopal",
"Manikandasriram Srinivasan",
""
],
[
"Skinner",
"Katherine A.",
""
]
]
| new_dataset | 0.969746 |
2209.15140 | Xihang Yu | Xihang Yu, Sangli Teng, Theodor Chakhachiro, Wenzhe Tong, Tingjun Li,
Tzu-Yuan Lin, Sarah Koehler, Manuel Ahumada, Jeffrey M. Walls, Maani Ghaffari | Fully Proprioceptive Slip-Velocity-Aware State Estimation for Mobile
Robots via Invariant Kalman Filtering and Disturbance Observer | The work will be presented in IROS2023. github repository at
https://github.com/UMich-CURLY/slip_detection_DOB. arXiv admin note: text
overlap with arXiv:1805.10410 by other authors | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper develops a novel slip estimator using the invariant observer
design theory and Disturbance Observer (DOB). The proposed state estimator for
mobile robots is fully proprioceptive and combines data from an inertial
measurement unit and body velocity within a Right Invariant Extended Kalman
Filter (RI-EKF). By embedding the slip velocity into $\mathrm{SE}_3(3)$ matrix
Lie group, the developed DOB-based RI-EKF provides real-time velocity and slip
velocity estimates on different terrains. Experimental results using a Husky
wheeled robot confirm the mathematical derivations and effectiveness of the
proposed method in estimating the observable state variables. Open-source
software is available for download and reproducing the presented results.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2022 23:59:42 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 02:13:00 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Yu",
"Xihang",
""
],
[
"Teng",
"Sangli",
""
],
[
"Chakhachiro",
"Theodor",
""
],
[
"Tong",
"Wenzhe",
""
],
[
"Li",
"Tingjun",
""
],
[
"Lin",
"Tzu-Yuan",
""
],
[
"Koehler",
"Sarah",
""
],
[
"Ahumada",
"Manuel",
""
],
[
"Walls",
"Jeffrey M.",
""
],
[
"Ghaffari",
"Maani",
""
]
]
| new_dataset | 0.979683 |
2209.15179 | Hui Wei | Hui Wei, Hao Tang, Xuemei Jia, Zhixiang Wang, Hanxun Yu, Zhubo Li,
Shin'ichi Satoh, Luc Van Gool, Zheng Wang | Physical Adversarial Attack meets Computer Vision: A Decade Survey | 19 pages. Under Review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the impressive achievements of Deep Neural Networks (DNNs) in
computer vision, their vulnerability to adversarial attacks remains a critical
concern. Extensive research has demonstrated that incorporating sophisticated
perturbations into input images can lead to a catastrophic degradation in DNNs'
performance. This perplexing phenomenon not only exists in the digital space
but also in the physical world. Consequently, it becomes imperative to evaluate
the security of DNNs-based systems to ensure their safe deployment in
real-world scenarios, particularly in security-sensitive applications. To
facilitate a profound understanding of this topic, this paper presents a
comprehensive overview of physical adversarial attacks. Firstly, we distill
four general steps for launching physical adversarial attacks. Building upon
this foundation, we uncover the pervasive role of artifacts carrying
adversarial perturbations in the physical world. These artifacts influence each
step. To denote them, we introduce a new term: adversarial medium. Then, we
take the first step to systematically evaluate the performance of physical
adversarial attacks, taking the adversarial medium as a first attempt. Our
proposed evaluation metric, hiPAA, comprises six perspectives: Effectiveness,
Stealthiness, Robustness, Practicability, Aesthetics, and Economics. We also
provide comparative results across task categories, together with insightful
observations and suggestions for future research directions.
| [
{
"version": "v1",
"created": "Fri, 30 Sep 2022 01:59:53 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 13:44:24 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Oct 2023 05:06:56 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Wei",
"Hui",
""
],
[
"Tang",
"Hao",
""
],
[
"Jia",
"Xuemei",
""
],
[
"Wang",
"Zhixiang",
""
],
[
"Yu",
"Hanxun",
""
],
[
"Li",
"Zhubo",
""
],
[
"Satoh",
"Shin'ichi",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Wang",
"Zheng",
""
]
]
| new_dataset | 0.986738 |
2211.05407 | Nghia Hieu Nguyen | Nghia Hieu Nguyen, Duong T.D. Vo, Kiet Van Nguyen | UIT-HWDB: Using Transferring Method to Construct A Novel Benchmark for
Evaluating Unconstrained Handwriting Image Recognition in Vietnamese | Accepted for publishing at the 16th International Conference on
Computing and Communication Technologies (RIVF) | null | 10.1109/RIVF55975.2022.10013898 | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recognizing handwriting images is challenging due to the vast variation in
writing style across many people and distinct linguistic aspects of writing
languages. In Vietnamese, besides the modern Latin characters, there are accent
and letter marks together with characters that easily confuse state-of-the-art
handwriting recognition methods. Moreover, as a low-resource language,
Vietnamese has few datasets for research on handwriting recognition, which
creates a barrier for researchers approaching this problem. Recent works
evaluated offline handwriting recognition methods in Vietnamese using images
from an online handwriting dataset constructed by connecting pen stroke
coordinates without further processing. This approach obviously cannot measure
the ability of recognition methods effectively, as it is trivial and may lack
features that are essential in offline handwriting images. Therefore, in this
paper, we propose the Transferring method to construct a handwriting image
dataset that incorporates the crucial natural attributes required for offline
handwriting images. Using our method, we provide the first high-quality
synthetic dataset, which is complex and natural, for efficiently evaluating
handwriting recognition methods. In addition, we conduct experiments with
various state-of-the-art methods to identify the challenges that must be
addressed to solve handwriting recognition in Vietnamese.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2022 08:23:54 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Nguyen",
"Nghia Hieu",
""
],
[
"Vo",
"Duong T. D.",
""
],
[
"Van Nguyen",
"Kiet",
""
]
]
| new_dataset | 0.999841 |
2211.08229 | Jinghuai Zhang | Jinghuai Zhang and Hongbin Liu and Jinyuan Jia and Neil Zhenqiang Gong | CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive
Learning | null | null | null | null | cs.CR cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive learning (CL) pre-trains general-purpose encoders using an
unlabeled pre-training dataset, which consists of images or image-text pairs.
CL is vulnerable to data poisoning based backdoor attacks (DPBAs), in which an
attacker injects poisoned inputs into the pre-training dataset so the encoder
is backdoored. However, existing DPBAs achieve limited effectiveness. In this
work, we take the first step to analyze the limitations of existing attacks and
propose new DPBAs called CorruptEncoder to CL. CorruptEncoder uses a
theory-guided method to create optimal poisoned inputs to maximize attack
effectiveness. Our experiments show that CorruptEncoder substantially
outperforms existing DPBAs. In particular, CorruptEncoder is the first DPBA
that achieves more than 90% attack success rates with only a few (3) reference
images and a small poisoning ratio (0.5%). Moreover, we also propose a defense,
called localized cropping, to defend against DPBAs. Our results show that our
defense can reduce the effectiveness of DPBAs, but it sacrifices the utility of
the encoder, highlighting the need for new defenses.
| [
{
"version": "v1",
"created": "Tue, 15 Nov 2022 15:48:28 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 03:29:42 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Mar 2023 02:16:37 GMT"
},
{
"version": "v4",
"created": "Fri, 29 Sep 2023 23:41:24 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Zhang",
"Jinghuai",
""
],
[
"Liu",
"Hongbin",
""
],
[
"Jia",
"Jinyuan",
""
],
[
"Gong",
"Neil Zhenqiang",
""
]
]
| new_dataset | 0.998462 |
2301.00190 | Abubakar Siddique | Abubakar Siddique and Henry Medeiros | Tracking Passengers and Baggage Items using Multiple Overhead Cameras at
Security Checkpoints | Need to replace already published arxiv version of this work. This
work will be the latest version of the previously published arXiv:2007.07924 | IEEE Transactions on Systems, Man, and Cybernetics: Systems, Early
Access, 14 December 2022 | 10.1109/TSMC.2022.3225252 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel framework to track multiple objects in overhead camera
videos for airport checkpoint security scenarios where targets correspond to
passengers and their baggage items. We propose a Self-Supervised Learning (SSL)
technique to provide the model information about instance segmentation
uncertainty from overhead images. Our SSL approach improves object detection by
employing a test-time data augmentation and a regression-based,
rotation-invariant pseudo-label refinement technique. Our pseudo-label
generation method provides multiple geometrically-transformed images as inputs
to a Convolutional Neural Network (CNN), regresses the augmented detections
generated by the network to reduce localization errors, and then clusters them
using the mean-shift algorithm. The self-supervised detector model is used in a
single-camera tracking algorithm to generate temporal identifiers for the
targets. Our method also incorporates a multi-view trajectory association
mechanism to maintain consistent temporal identifiers as passengers travel
across camera views. An evaluation of detection, tracking, and association
performances on videos obtained from multiple overhead cameras in a realistic
airport checkpoint environment demonstrates the effectiveness of the proposed
approach. Our results show that self-supervision improves object detection
accuracy by up to $42\%$ without increasing the inference time of the model.
Our multi-camera association method achieves up to $89\%$ multi-object tracking
accuracy with an average computation time of less than $15$ ms.
| [
{
"version": "v1",
"created": "Sat, 31 Dec 2022 12:57:09 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 08:31:19 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Siddique",
"Abubakar",
""
],
[
"Medeiros",
"Henry",
""
]
]
| new_dataset | 0.993732 |
2301.02615 | Tzvi Lederer | Tzvi Lederer, Gallil Maimon and Lior Rokach | Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack | null | null | null | null | cs.CR cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backdoor poisoning attacks pose a well-known risk to neural networks.
However, most studies have focused on lenient threat models. We introduce
Silent Killer, a novel attack that operates in clean-label, black-box settings,
uses a stealthy poison and trigger and outperforms existing methods. We
investigate the use of universal adversarial perturbations as triggers in
clean-label attacks, following the success of such approaches under
poison-label settings. We analyze the success of a naive adaptation and find
that gradient alignment for crafting the poison is required to ensure high
success rates. We conduct thorough experiments on MNIST, CIFAR10, and a reduced
version of ImageNet and achieve state-of-the-art results.
| [
{
"version": "v1",
"created": "Thu, 5 Jan 2023 15:11:05 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 16:32:23 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Lederer",
"Tzvi",
""
],
[
"Maimon",
"Gallil",
""
],
[
"Rokach",
"Lior",
""
]
]
| new_dataset | 0.992887 |
2301.03213 | Hao Tang | Hao Tang, Kevin Liang, Matt Feiszli, Weiyao Wang | EgoTracks: A Long-term Egocentric Visual Object Tracking Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual object tracking is a key component to many egocentric vision problems.
However, the full spectrum of challenges of egocentric tracking faced by an
embodied AI is underrepresented in many existing datasets; these tend to focus
on relatively short, third-person videos. Egocentric video has several
distinguishing characteristics from those commonly found in past datasets:
frequent large camera motions and hand interactions with objects commonly lead
to occlusions or objects exiting the frame, and object appearance can change
rapidly due to widely different points of view, scale, or object states.
Embodied tracking is also naturally long-term, and being able to consistently
(re-)associate objects to their appearances and disappearances over as long as
a lifetime is critical. Previous datasets under-emphasize this re-detection
problem, and their "framed" nature has led to adoption of various
spatiotemporal priors that we find do not necessarily generalize to egocentric
video. We thus introduce EgoTracks, a new dataset for long-term egocentric
visual object tracking. Sourced from the Ego4D dataset, this new dataset
presents a significant challenge to recent state-of-the-art single-object
tracking models, which we find score poorly on traditional tracking metrics for
our new dataset, compared to popular benchmarks. We further show improvements
that can be made to a STARK tracker to significantly increase its performance
on egocentric data, resulting in a baseline model we call EgoSTARK. We publicly
release our annotations and benchmark, hoping our dataset leads to further
advancements in tracking.
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2023 09:10:35 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jan 2023 01:30:59 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Mar 2023 02:28:01 GMT"
},
{
"version": "v4",
"created": "Tue, 14 Mar 2023 18:48:15 GMT"
},
{
"version": "v5",
"created": "Sun, 1 Oct 2023 22:54:53 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Tang",
"Hao",
""
],
[
"Liang",
"Kevin",
""
],
[
"Feiszli",
"Matt",
""
],
[
"Wang",
"Weiyao",
""
]
]
| new_dataset | 0.998898 |
2301.03417 | Lucas Picasarri-Arrieta | Nicolas Bousquet (1), Fr\'ed\'eric Havet (2), Nicolas Nisse (2), Lucas
Picasarri-Arrieta (2), Amadeus Reinald (2 and 3) ((1) LIRIS, CNRS,
Universit\'e Claude Bernard Lyon 1, Lyon, France, (2) CNRS, Universit\'e
C\^ote d'Azur, I3S, Inria, Sophia-Antipolis, France, (3) LIRMM, CNRS,
Universit\'e de Montpellier, Montpellier, France) | Digraph redicolouring | 28 pages, 6 figures | null | null | null | cs.DM math.CO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Given two $k$-dicolourings of a digraph $D$, we prove that it is
PSPACE-complete to decide whether we can transform one into the other by
recolouring one vertex at each step while maintaining a dicolouring at any step
even for $k=2$ and for digraphs with maximum degree $5$ or oriented planar
graphs with maximum degree $6$. A digraph is said to be $k$-mixing if there
exists a transformation between any pair of $k$-colourings. We show that every
digraph $D$ is $k$-mixing for all $k\geq \delta^*_{\min}(D)+2$, generalizing a
result due to Dyer et al. We also prove that every oriented graph $\vec{G}$ is
$k$-mixing for all $k\geq \delta^*_{\max}(\vec{G}) +1$ and for all $k\geq
\delta^*_{\rm avg}(\vec{G})+1$. We conjecture that, for every digraph $D$, the
dicolouring graph of $D$ on $k\geq \delta_{\min}^*(D)+2$ colours has diameter
at most $O(|V(D)|^2)$ and give some evidences. We first prove that the
dicolouring graph of any digraph $D$ on $k\geq 2\delta_{\min}^*(D) + 2$ colours
has linear diameter, extending a result from Bousquet and Perarnau. We also
prove that the conjecture is true when $k\geq
\frac{3}{2}(\delta_{\min}^*(D)+1)$. Restricted to the special case of oriented
graphs, we prove that the dicolouring graph of any subcubic oriented graph on
$k\geq 2$ colours is connected and has diameter at most $2n$. We conjecture
that every non $2$-mixing oriented graph has maximum average degree at least
$4$, and we provide some support for this conjecture by proving it on the
special case of $2$-freezable oriented graphs. More generally, we show that
every $k$-freezable oriented graph on $n$ vertices must contain at least $kn +
k(k-2)$ arcs, and we give a family of $k$-freezable oriented graphs that reach
this bound. In the general case, we prove as a partial result that every non
$2$-mixing oriented graph has maximum average degree at least $\frac{7}{2}$.
| [
{
"version": "v1",
"created": "Mon, 9 Jan 2023 15:13:03 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 14:25:20 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Bousquet",
"Nicolas",
"",
"2 and 3"
],
[
"Havet",
"Frédéric",
"",
"2 and 3"
],
[
"Nisse",
"Nicolas",
"",
"2 and 3"
],
[
"Picasarri-Arrieta",
"Lucas",
"",
"2 and 3"
],
[
"Reinald",
"Amadeus",
"",
"2 and 3"
]
]
| new_dataset | 0.976323 |
2302.11752 | Nghia Hieu Nguyen | Ngan Luu-Thuy Nguyen, Nghia Hieu Nguyen, Duong T.D Vo, Khanh Quoc
Tran, Kiet Van Nguyen | VLSP2022-EVJVQA Challenge: Multilingual Visual Question Answering | VLSP2022 EVJVQA challenge | null | 10.15625/1813-9663/18157 | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Visual Question Answering (VQA) is a challenging task of natural language
processing (NLP) and computer vision (CV), attracting significant attention
from researchers. English is a resource-rich language that has witnessed
various developments in datasets and models for visual question answering.
Visual question answering in other languages also requires the development of
dedicated resources and models. In addition, there is no multilingual dataset
targeting the visual content of a particular country with its own objects and
cultural characteristics. To address this weakness, we provide the research community
with a benchmark dataset named EVJVQA, including 33,000+ pairs of
question-answer over three languages: Vietnamese, English, and Japanese, on
approximately 5,000 images taken from Vietnam for evaluating multilingual VQA
systems or models. EVJVQA is used as a benchmark dataset for the challenge of
multilingual visual question answering at the 9th Workshop on Vietnamese
Language and Speech Processing (VLSP 2022). This task attracted 62 participant
teams from various universities and organizations. In this article, we present
details of the organization of the challenge, an overview of the methods
employed by shared-task participants, and the results. The highest performances
are 0.4392 in F1-score and 0.4009 in BLEU on the private test set. The
multilingual QA systems proposed by the top 2 teams use ViT as the pre-trained
vision model and mT5, a powerful transformer-based pre-trained language model,
as the language model. EVJVQA is a challenging
dataset that motivates NLP and CV researchers to further explore the
multilingual models or systems for visual question answering systems. We
released the challenge on the Codalab evaluation system for further research.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 02:38:39 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 02:02:07 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Feb 2023 01:25:52 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Apr 2023 00:44:29 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Nguyen",
"Ngan Luu-Thuy",
""
],
[
"Nguyen",
"Nghia Hieu",
""
],
[
"Vo",
"Duong T. D",
""
],
[
"Tran",
"Khanh Quoc",
""
],
[
"Van Nguyen",
"Kiet",
""
]
]
| new_dataset | 0.99973 |
2302.12533 | Nguyen Duc Thuan | Nguyen Duc Thuan and Hoang Si Hong | HUST bearing: a practical dataset for ball bearing fault diagnosis | We are considering some issues in the paper | null | 10.1186/s13104-023-06400-4 | null | cs.LG cs.AI eess.SP | http://creativecommons.org/publicdomain/zero/1.0/ | In this work, we introduce a practical dataset named HUST bearing, that
provides a large set of vibration data on different ball bearings. This dataset
contains 90 raw vibration data of 6 types of defects (inner crack, outer crack,
ball crack, and their 2-combinations) on 5 types of bearing at 3 working
conditions at a sample rate of 51,200 samples per second. We performed envelope
analysis and order tracking analysis on the introduced dataset to allow an
initial evaluation of the data. A number of classical machine learning
classification methods are used to identify bearing faults in the dataset using
features in different domains. Typical advanced unsupervised transfer learning
algorithms are also applied to observe the transferability of knowledge among
parts of the dataset. The examined methods achieve varying accuracies of up to
100% on the classification task and 60-80% on the unsupervised transfer
learning task.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2023 09:38:41 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 07:38:33 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Thuan",
"Nguyen Duc",
""
],
[
"Hong",
"Hoang Si",
""
]
]
| new_dataset | 0.999852 |
2302.13293 | Nguyen Duc Thuan | Nguyen Duc Thuan, Le Hai Anh and Hoang Si Hong | PDIWS: Thermal Imaging Dataset for Person Detection in Intrusion Warning
Systems | We are considering some issues in the paper | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | In this paper, we present a synthetic thermal imaging dataset for Person
Detection in Intrusion Warning Systems (PDIWS). The dataset consists of a
training set with 2000 images and a test set with 500 images. Each image is
synthesized by compounding a subject (intruder) with a background using the
modified Poisson image editing method. There are a total of 50 different
backgrounds and nearly 1000 subjects divided into five classes according to
five human poses: creeping, crawling, stooping, climbing and other. The
presence of the intruder will be confirmed if the first four poses are
detected. Advanced object detection algorithms have been implemented with this
dataset and give relatively satisfactory results, with the highest mAP values
of 95.5% and 90.9% for IoU of 0.5 and 0.75 respectively. The dataset is freely
published online for research purposes at
https://github.com/thuan-researcher/Intruder-Thermal-Dataset.
| [
{
"version": "v1",
"created": "Sun, 26 Feb 2023 11:02:34 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 07:37:56 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Thuan",
"Nguyen Duc",
""
],
[
"Anh",
"Le Hai",
""
],
[
"Hong",
"Hoang Si",
""
]
]
| new_dataset | 0.999898 |
2303.06511 | David Yoon | David J. Yoon, Keenan Burnett, Johann Laconte, Yi Chen, Heethesh
Vhavle, Soeren Kammel, James Reuther, Timothy D. Barfoot | Need for Speed: Fast Correspondence-Free Lidar-Inertial Odometry Using
Doppler Velocity | Accepted and presented at IROS 2023 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a fast, lightweight odometry method that uses the
Doppler velocity measurements from a Frequency-Modulated Continuous-Wave (FMCW)
lidar without data association. FMCW lidar is a recently emerging technology
that enables per-return relative radial velocity measurements via the Doppler
effect. Since the Doppler measurement model is linear with respect to the
6-degrees-of-freedom (DOF) vehicle velocity, we can formulate a linear
continuous-time estimation problem for the velocity and numerically integrate
for the 6-DOF pose estimate afterward. The caveat is that angular velocity is
not observable with a single FMCW lidar. We address this limitation by also
incorporating the angular velocity measurements from a gyroscope. This results
in an extremely efficient odometry method that processes lidar frames at an
average wall-clock time of 5.64ms on a single thread, well below the 10Hz
operating rate of the lidar we tested. We show experimental results on
real-world driving sequences and compare against state-of-the-art Iterative
Closest Point (ICP)-based odometry methods, presenting a compelling trade-off
between accuracy and computation. We also present an algebraic observability
study, where we demonstrate in theory that the Doppler measurements from
multiple FMCW lidars are capable of observing all 6 degrees of freedom
(translational and angular velocity).
| [
{
"version": "v1",
"created": "Sat, 11 Mar 2023 22:35:43 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 23:35:05 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Yoon",
"David J.",
""
],
[
"Burnett",
"Keenan",
""
],
[
"Laconte",
"Johann",
""
],
[
"Chen",
"Yi",
""
],
[
"Vhavle",
"Heethesh",
""
],
[
"Kammel",
"Soeren",
""
],
[
"Reuther",
"James",
""
],
[
"Barfoot",
"Timothy D.",
""
]
]
| new_dataset | 0.993816 |
2303.09892 | Parth Patwa | Shreyash Mishra, S Suryavardan, Parth Patwa, Megha Chakraborty, Anku
Rani, Aishwarya Reganti, Aman Chadha, Amitava Das, Amit Sheth, Manoj
Chinnakotla, Asif Ekbal and Srijan Kumar | Memotion 3: Dataset on Sentiment and Emotion Analysis of Codemixed
Hindi-English Memes | Defactify2 @AAAI | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Memes are the new-age conveyance mechanism for humor on social media sites.
Memes often include an image and some text. Memes can be used to promote
disinformation or hatred, thus it is crucial to investigate them in detail. We
introduce Memotion 3, a new dataset with 10,000 annotated memes. Unlike other
prevalent datasets in the domain, including prior iterations of Memotion,
Memotion 3 introduces Hindi-English Codemixed memes, while prior works in the
area were limited to English memes only. We describe the Memotion task, the
data collection and the dataset creation methodologies. We also provide a
baseline for the task. The baseline code and dataset will be made available at
https://github.com/Shreyashm16/Memotion-3.0
| [
{
"version": "v1",
"created": "Fri, 17 Mar 2023 11:13:30 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 03:52:05 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Oct 2023 14:28:03 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Mishra",
"Shreyash",
""
],
[
"Suryavardan",
"S",
""
],
[
"Patwa",
"Parth",
""
],
[
"Chakraborty",
"Megha",
""
],
[
"Rani",
"Anku",
""
],
[
"Reganti",
"Aishwarya",
""
],
[
"Chadha",
"Aman",
""
],
[
"Das",
"Amitava",
""
],
[
"Sheth",
"Amit",
""
],
[
"Chinnakotla",
"Manoj",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Kumar",
"Srijan",
""
]
]
| new_dataset | 0.999567 |
2303.15553 | Yiqing Shen | Yiqing Shen, Pengfei Guo, Jingpu Wu, Qianqi Huang, Nhat Le, Jinyuan
Zhou, Shanshan Jiang, Mathias Unberath | MoViT: Memorizing Vision Transformers for Medical Image Analysis | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The synergy of long-range dependencies from transformers and local
representations of image content from convolutional neural networks (CNNs) has
led to advanced architectures and increased performance for various medical
image analysis tasks due to their complementary benefits. However, compared
with CNNs, transformers require considerably more training data, due to a
larger number of parameters and an absence of inductive bias. The need for
increasingly large datasets continues to be problematic, particularly in the
context of medical imaging, where both annotation efforts and data protection
result in limited data availability. In this work, inspired by the human
decision-making process of correlating new evidence with previously memorized
experience, we propose a Memorizing Vision Transformer (MoViT) to alleviate the
need for large-scale datasets to successfully train and deploy
transformer-based architectures. MoViT leverages an external memory structure
to cache history attention snapshots during the training stage. To prevent
overfitting, we incorporate an innovative memory update scheme, attention
temporal moving average, to update the stored external memories with the
historical moving average. For inference speedup, we design a prototypical
attention learning method to distill the external memory into smaller
representative subsets. We evaluate our method on a public histology image
dataset and an in-house MRI dataset, demonstrating that MoViT applied to varied
medical image analysis tasks, can outperform vanilla transformer models across
varied data regimes, especially in cases where only a small amount of annotated
data is available. More importantly, MoViT can reach a competitive performance
of ViT with only 3.0% of the training data.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2023 19:12:02 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 07:06:55 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 20:14:37 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Shen",
"Yiqing",
""
],
[
"Guo",
"Pengfei",
""
],
[
"Wu",
"Jingpu",
""
],
[
"Huang",
"Qianqi",
""
],
[
"Le",
"Nhat",
""
],
[
"Zhou",
"Jinyuan",
""
],
[
"Jiang",
"Shanshan",
""
],
[
"Unberath",
"Mathias",
""
]
]
| new_dataset | 0.999698 |
2303.17550 | Chenpeng Du | Chenpeng Du, Qi Chen, Tianyu He, Xu Tan, Xie Chen, Kai Yu, Sheng Zhao,
Jiang Bian | DAE-Talker: High Fidelity Speech-Driven Talking Face Generation with
Diffusion Autoencoder | Accepted to ACM Multimedia 2023 | null | 10.1145/3581783.3613753 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While recent research has made significant progress in speech-driven talking
face generation, the quality of the generated video still lags behind that of
real recordings. One reason for this is the use of handcrafted intermediate
representations like facial landmarks and 3DMM coefficients, which are designed
based on human knowledge and are insufficient to precisely describe facial
movements. Additionally, these methods require an external pretrained model for
extracting these representations, whose performance sets an upper bound on
talking face generation. To address these limitations, we propose a novel
method called DAE-Talker that leverages data-driven latent representations
obtained from a diffusion autoencoder (DAE). DAE contains an image encoder that
encodes an image into a latent vector and a DDIM image decoder that
reconstructs the image from it. We train our DAE on talking face video frames
and then extract their latent representations as the training target for a
Conformer-based speech2latent model. This allows DAE-Talker to synthesize full
video frames and produce natural head movements that align with the content of
speech, rather than relying on a predetermined head pose from a template video.
We also introduce pose modelling in speech2latent for pose controllability.
Additionally, we propose a novel method for generating continuous video frames
with the DDIM image decoder trained on individual frames, eliminating the need
for modelling the joint distribution of consecutive frames directly. Our
experiments show that DAE-Talker outperforms existing popular methods in
lip-sync, video fidelity, and pose naturalness. We also conduct ablation
studies to analyze the effectiveness of the proposed techniques and demonstrate
the pose controllability of DAE-Talker.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:18:31 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 19:58:37 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Aug 2023 17:26:48 GMT"
},
{
"version": "v4",
"created": "Sun, 1 Oct 2023 11:20:26 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Du",
"Chenpeng",
""
],
[
"Chen",
"Qi",
""
],
[
"He",
"Tianyu",
""
],
[
"Tan",
"Xu",
""
],
[
"Chen",
"Xie",
""
],
[
"Yu",
"Kai",
""
],
[
"Zhao",
"Sheng",
""
],
[
"Bian",
"Jiang",
""
]
]
| new_dataset | 0.995467 |
2304.02419 | Kehong Gong | Kehong Gong, Dongze Lian, Heng Chang, Chuan Guo, Zihang Jiang, Xinxin
Zuo, Michael Bi Mi, Xinchao Wang | TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration | Accepted by ICCV2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel task for generating 3D dance movements that simultaneously
incorporate both text and music modalities. Unlike existing works that generate
dance movements using a single modality such as music, our goal is to produce
richer dance movements guided by the instructive information provided by the
text. However, the lack of paired motion data with both music and text
modalities limits the ability to generate dance movements that integrate both.
To alleviate this challenge, we propose to utilize a 3D human motion VQ-VAE to
project the motions of the two datasets into a latent space consisting of
quantized vectors, which effectively mix the motion tokens from the two
datasets with different distributions for training. Additionally, we propose a
cross-modal transformer to integrate text instructions into motion generation
architecture for generating 3D dance movements without degrading the
performance of music-conditioned dance generation. To better evaluate the
quality of the generated motion, we introduce two novel metrics, namely Motion
Prediction Distance (MPD) and Freezing Score (FS), to measure the coherence and
freezing percentage of the generated motion. Extensive experiments show that
our approach can generate realistic and coherent dance movements conditioned on
both text and music while maintaining comparable performance with the two
single modalities. Code is available at https://garfield-kh.github.io/TM2D/.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2023 12:58:33 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 15:23:02 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Gong",
"Kehong",
""
],
[
"Lian",
"Dongze",
""
],
[
"Chang",
"Heng",
""
],
[
"Guo",
"Chuan",
""
],
[
"Jiang",
"Zihang",
""
],
[
"Zuo",
"Xinxin",
""
],
[
"Mi",
"Michael Bi",
""
],
[
"Wang",
"Xinchao",
""
]
]
| new_dataset | 0.999174 |
2304.03897 | Parth Patwa | S Suryavardan, Shreyash Mishra, Parth Patwa, Megha Chakraborty, Anku
Rani, Aishwarya Reganti, Aman Chadha, Amitava Das, Amit Sheth, Manoj
Chinnakotla, Asif Ekbal, Srijan Kumar | Factify 2: A Multimodal Fake News and Satire News Dataset | Defactify2 @AAAI2023 | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | The internet gives the world an open platform to express their views and
share their stories. While this is very valuable, it makes fake news one of our
society's most pressing problems. The manual fact checking process is time
consuming, which makes it challenging to disprove misleading assertions before
they cause significant harm. This is the driving interest in automatic fact or
claim verification. Some of the existing datasets aim to support the development
of automated fact-checking techniques; however, most of them are text based.
Multi-modal fact verification has received relatively scant attention. In this
paper, we provide a multi-modal fact-checking dataset called FACTIFY 2,
improving Factify 1 by using new data sources and adding satire articles.
Factify 2 has 50,000 new data instances. Similar to FACTIFY 1.0, we have three
broad categories - support, no-evidence, and refute, with sub-categories based
on the entailment of visual and textual data. We also provide a BERT and Vision
Transformer based baseline, which achieves 65% F1 score on the test set. The
baseline codes and the dataset will be made available at
https://github.com/surya1701/Factify-2.0.
| [
{
"version": "v1",
"created": "Sat, 8 Apr 2023 03:14:19 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 14:48:45 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Suryavardan",
"S",
""
],
[
"Mishra",
"Shreyash",
""
],
[
"Patwa",
"Parth",
""
],
[
"Chakraborty",
"Megha",
""
],
[
"Rani",
"Anku",
""
],
[
"Reganti",
"Aishwarya",
""
],
[
"Chadha",
"Aman",
""
],
[
"Das",
"Amitava",
""
],
[
"Sheth",
"Amit",
""
],
[
"Chinnakotla",
"Manoj",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Kumar",
"Srijan",
""
]
]
| new_dataset | 0.999587 |
2304.04429 | Hongming Shan | Tao Chen, Chenhui Wang, Hongming Shan | BerDiff: Conditional Bernoulli Diffusion Model for Medical Image
Segmentation | 14 pages, 7 figures | MICCAI 2023 | 10.1007/978-3-031-43901-8_47 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Medical image segmentation is a challenging task with inherent ambiguity and
high uncertainty, attributed to factors such as unclear tumor boundaries and
multiple plausible annotations. The accuracy and diversity of segmentation
masks are both crucial for providing valuable references to radiologists in
clinical practice. While existing diffusion models have shown strong capacities
in various visual generation tasks, it is still challenging to deal with
discrete masks in segmentation. To achieve accurate and diverse medical image
segmentation masks, we propose a novel conditional Bernoulli Diffusion model
for medical image segmentation (BerDiff). Instead of using the Gaussian noise,
we first propose to use the Bernoulli noise as the diffusion kernel to enhance
the capacity of the diffusion model for binary segmentation tasks, resulting in
more accurate segmentation masks. Second, by leveraging the stochastic nature
of the diffusion model, our BerDiff randomly samples the initial Bernoulli
noise and intermediate latent variables multiple times to produce a range of
diverse segmentation masks, which can highlight salient regions of interest
that can serve as valuable references for radiologists. In addition, our
BerDiff can efficiently sample sub-sequences from the overall trajectory of the
reverse diffusion, thereby speeding up the segmentation process. Extensive
experimental results on two medical image segmentation datasets with different
modalities demonstrate that our BerDiff outperforms other recently published
state-of-the-art methods. Our results suggest diffusion models could serve as a
strong backbone for medical image segmentation.
| [
{
"version": "v1",
"created": "Mon, 10 Apr 2023 07:21:38 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Chen",
"Tao",
""
],
[
"Wang",
"Chenhui",
""
],
[
"Shan",
"Hongming",
""
]
]
| new_dataset | 0.99258 |
2304.12317 | Chonghyuk Song | Chonghyuk Song, Gengshan Yang, Kangle Deng, Jun-Yan Zhu, Deva Ramanan | Total-Recon: Deformable Scene Reconstruction for Embodied View Synthesis | ICCV 2023 camera-ready version. Project page with code, models, and
data: https://andrewsonga.github.io/totalrecon | null | null | null | cs.CV cs.GR cs.LG | http://creativecommons.org/licenses/by/4.0/ | We explore the task of embodied view synthesis from monocular videos of
deformable scenes. Given a minute-long RGBD video of people interacting with
their pets, we render the scene from novel camera trajectories derived from the
in-scene motion of actors: (1) egocentric cameras that simulate the point of
view of a target actor and (2) 3rd-person cameras that follow the actor.
Building such a system requires reconstructing the root-body and articulated
motion of every actor, as well as a scene representation that supports
free-viewpoint synthesis. Longer videos are more likely to capture the scene
from diverse viewpoints (which helps reconstruction) but are also more likely
to contain larger motions (which complicates reconstruction). To address these
challenges, we present Total-Recon, the first method to photorealistically
reconstruct deformable scenes from long monocular RGBD videos. Crucially, to
scale to long videos, our method hierarchically decomposes the scene into the
background and objects, whose motion is decomposed into carefully initialized
root-body motion and local articulations. To quantify such "in-the-wild"
reconstruction and view synthesis, we collect ground-truth data from a
specialized stereo RGBD capture rig for 11 challenging videos, significantly
outperforming prior methods. Our code, model, and data can be found at
https://andrewsonga.github.io/totalrecon .
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2023 17:59:52 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 13:07:37 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Song",
"Chonghyuk",
""
],
[
"Yang",
"Gengshan",
""
],
[
"Deng",
"Kangle",
""
],
[
"Zhu",
"Jun-Yan",
""
],
[
"Ramanan",
"Deva",
""
]
]
| new_dataset | 0.998148 |
2304.14065 | Gabriel Tseng | Gabriel Tseng, Ruben Cartuyvels, Ivan Zvonkov, Mirali Purohit, David
Rolnick, Hannah Kerner | Lightweight, Pre-trained Transformers for Remote Sensing Timeseries | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning models for parsing remote sensing data have a wide range of
societally relevant applications, but labels used to train these models can be
difficult or impossible to acquire. This challenge has spurred research into
self-supervised learning for remote sensing data aiming to unlock the use of
machine learning in geographies or application domains where labelled datasets
are small. Current self-supervised learning approaches for remote sensing data
draw significant inspiration from techniques applied to natural images.
However, remote sensing data has important differences from natural images --
for example, the temporal dimension is critical for many tasks and data is
collected from many complementary sensors. We show we can create significantly
smaller performant models by designing architectures and self-supervised
training techniques specifically for remote sensing data. We introduce the
Pretrained Remote Sensing Transformer (Presto), a transformer-based model
pre-trained on remote sensing pixel-timeseries data. Presto excels at a wide
variety of globally distributed remote sensing tasks and performs competitively
with much larger models while requiring far less compute. Presto can be used
for transfer learning or as a feature extractor for simple models, enabling
efficient deployment at scale.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2023 09:52:35 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 12:21:32 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Sep 2023 13:47:02 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Tseng",
"Gabriel",
""
],
[
"Cartuyvels",
"Ruben",
""
],
[
"Zvonkov",
"Ivan",
""
],
[
"Purohit",
"Mirali",
""
],
[
"Rolnick",
"David",
""
],
[
"Kerner",
"Hannah",
""
]
]
| new_dataset | 0.982029 |
2305.03815 | Ekta Samani | Ekta U. Samani, Ashis G. Banerjee | Persistent Homology Meets Object Unity: Object Recognition in Clutter | Conditionally accepted for publication in the IEEE Transactions on
Robotics | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognition of occluded objects in unseen and unstructured indoor
environments is a challenging problem for mobile robots. To address this
challenge, we propose a new descriptor, TOPS, for point clouds generated from
depth images and an accompanying recognition framework, THOR, inspired by human
reasoning. The descriptor employs a novel slicing-based approach to compute
topological features from filtrations of simplicial complexes using persistent
homology, and facilitates reasoning-based recognition using object unity. Apart
from a benchmark dataset, we report performance on a new dataset, the UW Indoor
Scenes (UW-IS) Occluded dataset, curated using commodity hardware to reflect
real-world scenarios with different environmental conditions and degrees of
object occlusion. THOR outperforms state-of-the-art methods on both the
datasets and achieves substantially higher recognition accuracy for all the
scenarios of the UW-IS Occluded dataset. Therefore, THOR is a promising step
toward robust recognition in low-cost robots, meant for everyday use in indoor
settings.
| [
{
"version": "v1",
"created": "Fri, 5 May 2023 19:42:39 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 03:13:26 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Samani",
"Ekta U.",
""
],
[
"Banerjee",
"Ashis G.",
""
]
]
| new_dataset | 0.999767 |
2305.04183 | Nghia Hieu Nguyen | Nghia Hieu Nguyen, Duong T.D. Vo, Kiet Van Nguyen, Ngan Luu-Thuy
Nguyen | OpenViVQA: Task, Dataset, and Multimodal Fusion Models for Visual
Question Answering in Vietnamese | submitted to Elsevier | null | 10.1016/j.inffus.2023.101868 | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, visual question answering (VQA) has attracted attention from
the research community because of its highly promising applications (such as
virtual assistance in intelligent cars, assistive devices for blind people, or
information retrieval from document images using natural language as queries)
and its challenging nature. The VQA task requires methods that have the ability to fuse the
information from questions and images to produce appropriate answers. Neural
visual question answering models have achieved tremendous growth on large-scale
datasets which are mostly for resource-rich languages such as English. However,
available datasets narrow the VQA task to an answer selection task or an answer
classification task. We argue that this form of VQA is far from human ability
and eliminates the challenge of the answering aspect in the VQA task by just
selecting answers rather than generating them. In this paper, we introduce the
OpenViVQA (Open-domain Vietnamese Visual Question Answering) dataset, the first
large-scale dataset for VQA with open-ended answers in Vietnamese, which
consists of 11,000+ images associated with 37,000+ question-answer pairs (QAs).
Moreover, we propose FST, QuMLAG, and MLPAG, which fuse information from images
and answers and then use these fused features to construct answers iteratively,
as humans do. Our proposed methods achieve results that are competitive with
SOTA models such as SAAA, MCAN, LORA, and M4C. The dataset is available to
encourage the research community to develop more generalized algorithms
including transformers for low-resource languages such as Vietnamese.
| [
{
"version": "v1",
"created": "Sun, 7 May 2023 03:59:31 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Nguyen",
"Nghia Hieu",
""
],
[
"Vo",
"Duong T. D.",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
]
| new_dataset | 0.999779 |
2305.13495 | Pha Nguyen | Pha Nguyen, Kha Gia Quach, Kris Kitani, Khoa Luu | Type-to-Track: Retrieve Any Object via Prompt-based Tracking | Accepted at NeurIPS 2023. Project page:
https://uark-cviu.github.io/Type-to-Track/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the recent trends in vision problems is to use natural language
captions to describe the objects of interest. This approach can overcome some
limitations of traditional methods that rely on bounding boxes or category
annotations. This paper introduces a novel paradigm for Multiple Object
Tracking called Type-to-Track, which allows users to track objects in videos by
typing natural language descriptions. We present a new dataset for that
Grounded Multiple Object Tracking task, called GroOT, that contains videos with
various types of objects and their corresponding textual captions describing
their appearance and action in detail. Additionally, we introduce two new
evaluation protocols and formulate evaluation metrics specifically for this
task. We develop a new efficient method that models a transformer-based
eMbed-ENcoDE-extRact framework (MENDER) using the third-order tensor
decomposition. The experiments in five scenarios show that our MENDER approach
outperforms another two-stage design in terms of accuracy and efficiency, with
up to 14.7% higher accuracy and a 4$\times$ faster speed.
| [
{
"version": "v1",
"created": "Mon, 22 May 2023 21:25:27 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 16:49:32 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Sep 2023 18:58:41 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Nguyen",
"Pha",
""
],
[
"Quach",
"Kha Gia",
""
],
[
"Kitani",
"Kris",
""
],
[
"Luu",
"Khoa",
""
]
]
| new_dataset | 0.999518 |
2305.16309 | Murtaza Dalal | Murtaza Dalal, Ajay Mandlekar, Caelan Garrett, Ankur Handa, Ruslan
Salakhutdinov, Dieter Fox | Imitating Task and Motion Planning with Visuomotor Transformers | Conference on Robot Learning (CoRL) 2023. 8 pages, 5 figures, 2
tables; 11 pages appendix (10 additional figures) | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Imitation learning is a powerful tool for training robot manipulation
policies, allowing them to learn from expert demonstrations without manual
programming or trial-and-error. However, common methods of data collection,
such as human supervision, scale poorly, as they are time-consuming and
labor-intensive. In contrast, Task and Motion Planning (TAMP) can autonomously
generate large-scale datasets of diverse demonstrations. In this work, we show
that the combination of large-scale datasets generated by TAMP supervisors and
flexible Transformer models to fit them is a powerful paradigm for robot
manipulation. To that end, we present a novel imitation learning system called
OPTIMUS that trains large-scale visuomotor Transformer policies by imitating a
TAMP agent. OPTIMUS introduces a pipeline for generating TAMP data that is
specifically curated for imitation learning and can be used to train performant
transformer-based policies. In this paper, we present a thorough study of the
design decisions required to imitate TAMP and demonstrate that OPTIMUS can
solve a wide variety of challenging vision-based manipulation tasks with over
70 different objects, ranging from long-horizon pick-and-place tasks, to shelf
and articulated object manipulation, achieving 70 to 80% success rates. Video
results and code at https://mihdalal.github.io/optimus/
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 17:58:14 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 22:27:49 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Dalal",
"Murtaza",
""
],
[
"Mandlekar",
"Ajay",
""
],
[
"Garrett",
"Caelan",
""
],
[
"Handa",
"Ankur",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Fox",
"Dieter",
""
]
]
| new_dataset | 0.997929 |
2305.17343 | Yung Hsuan Lai | Yung-Hsuan Lai, Yen-Chun Chen, Yu-Chiang Frank Wang | Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event
Parser | NeurIPS 2023 | null | null | null | cs.CV cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-visual learning has been a major pillar of multi-modal machine
learning, where the community mostly focused on its modality-aligned setting,
i.e., the audio and visual modality are both assumed to signal the prediction
target. With the Look, Listen, and Parse dataset (LLP), we investigate the
under-explored unaligned setting, where the goal is to recognize audio and
visual events in a video with only weak labels observed. Such weak video-level
labels only tell what events happen without indicating the modality in which
they are perceived (audio, visual, or both). To enhance learning in this challenging
setting, we incorporate large-scale contrastively pre-trained models as the
modality teachers. A simple, effective, and generic method, termed Visual-Audio
Label Elaboration (VALOR), is innovated to harvest modality labels for the
training events. Empirical studies show that the harvested labels significantly
improve an attentional baseline by 8.0 in average F-score (Type@AV).
Surprisingly, we found that modality-independent teachers outperform their
modality-fused counterparts since they are noise-proof from the other
potentially unaligned modality. Moreover, our best model achieves the new
state-of-the-art on all metrics of LLP by a substantial margin (+5.4 F-score
for Type@AV). VALOR is further generalized to Audio-Visual Event Localization
and achieves the new state-of-the-art as well. Code is available at:
https://github.com/Franklin905/VALOR.
| [
{
"version": "v1",
"created": "Sat, 27 May 2023 02:57:39 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 08:34:54 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Lai",
"Yung-Hsuan",
""
],
[
"Chen",
"Yen-Chun",
""
],
[
"Wang",
"Yu-Chiang Frank",
""
]
]
| new_dataset | 0.999678 |
2306.04344 | Jiaming Liu | Jiaming Liu, Senqiao Yang, Peidong Jia, Renrui Zhang, Ming Lu, Yandong
Guo, Wei Xue, Shanghang Zhang | ViDA: Homeostatic Visual Domain Adapter for Continual Test Time
Adaptation | Neurips2023 final Rating: Weak Accept; Weak Accept; Borderline
accept; Borderline accept | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since real-world machine systems are running in non-stationary environments,
Continual Test-Time Adaptation (CTTA) task is proposed to adapt the pre-trained
model to continually changing target domains. Recently, existing methods mainly
focus on model-based adaptation, which aims to leverage a self-training manner
to extract the target domain knowledge. However, pseudo labels can be noisy and
the updated model parameters are unreliable under dynamic data distributions,
leading to error accumulation and catastrophic forgetting in the continual
adaptation process. To tackle these challenges and maintain the model
plasticity, we tactfully design a Visual Domain Adapter (ViDA) for CTTA,
explicitly handling both domain-specific and domain-shared knowledge.
Specifically, we first comprehensively explore the different domain
representations of the adapters with trainable high-rank or low-rank embedding
spaces. Then we inject ViDAs into the pre-trained model, which leverages
high-rank and low-rank features to adapt the current domain distribution and
maintain the continual domain-shared knowledge, respectively. To exploit the
low-rank and high-rank ViDAs more effectively, we further propose a Homeostatic
Knowledge Allotment (HKA) strategy, which adaptively combines different
knowledge from each ViDA. Extensive experiments conducted on four widely used
benchmarks demonstrate that our proposed method achieves state-of-the-art
performance in both classification and segmentation CTTA tasks. Note that, our
method can be regarded as a novel transfer paradigm for large-scale models,
delivering promising results in adaptation to continually changing
distributions.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2023 11:18:53 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 05:55:55 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Liu",
"Jiaming",
""
],
[
"Yang",
"Senqiao",
""
],
[
"Jia",
"Peidong",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Lu",
"Ming",
""
],
[
"Guo",
"Yandong",
""
],
[
"Xue",
"Wei",
""
],
[
"Zhang",
"Shanghang",
""
]
]
| new_dataset | 0.986174 |
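To make the ViDA entry above more concrete: the abstract describes injecting adapters with a low-rank and a high-rank branch into a pre-trained model and combining their outputs. A minimal PyTorch-style sketch of such an adapter follows; the layer sizes, fixed mixing weights, and residual placement are illustrative assumptions, not the authors' implementation (which additionally learns a homeostatic allotment between the two branches).

```python
import torch
import torch.nn as nn

class ToyDomainAdapter(nn.Module):
    """Illustrative adapter with a low-rank and a high-rank branch.

    The low-rank branch is meant to track the current domain; the high-rank
    branch retains domain-shared knowledge. Outputs are mixed and added back
    to the input as a residual.
    """
    def __init__(self, dim=256, low_rank=8, high_rank=128):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(dim, low_rank), nn.Linear(low_rank, dim))
        self.high = nn.Sequential(nn.Linear(dim, high_rank), nn.Linear(high_rank, dim))
        # Fixed mixing weights here; ViDA itself adapts this allotment.
        self.alpha_low, self.alpha_high = 0.5, 0.5

    def forward(self, x):
        return x + self.alpha_low * self.low(x) + self.alpha_high * self.high(x)

x = torch.randn(4, 256)
print(ToyDomainAdapter()(x).shape)  # torch.Size([4, 256])
```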
2306.05032 | Jinyang Liu | Jinyang Liu, Junjie Huang, Yintong Huo, Zhihan Jiang, Jiazhen Gu,
Zhuangbin Chen, Cong Feng, Minzhi Yan and Michael R. Lyu | Log-based Anomaly Detection based on EVT Theory with feedback | null | null | null | null | cs.SE cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | System logs play a critical role in maintaining the reliability of software
systems. Numerous studies have explored automatic log-based anomaly detection
and achieved notable accuracy on benchmark datasets. However, when applied to
large-scale cloud systems, these solutions face limitations due to high
resource consumption and lack of adaptability to evolving logs. In this paper,
we present an accurate, lightweight, and adaptive log-based anomaly detection
framework, referred to as SeaLog. Our method introduces a Trie-based Detection
Agent (TDA) that employs a lightweight, dynamically-growing trie structure for
real-time anomaly detection. To enhance TDA's accuracy in response to evolving
log data, we enable it to receive feedback from experts. Interestingly, our
findings suggest that contemporary large language models, such as ChatGPT, can
provide feedback with a level of consistency comparable to human experts, which
can potentially reduce manual verification efforts. We extensively evaluate
SeaLog on two public datasets and an industrial dataset. The results show that
SeaLog outperforms all baseline methods in terms of effectiveness, runs 2X to
10X faster, and consumes only 5% to 41% of the memory resources.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2023 08:34:58 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 04:09:55 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Liu",
"Jinyang",
""
],
[
"Huang",
"Junjie",
""
],
[
"Huo",
"Yintong",
""
],
[
"Jiang",
"Zhihan",
""
],
[
"Gu",
"Jiazhen",
""
],
[
"Chen",
"Zhuangbin",
""
],
[
"Feng",
"Cong",
""
],
[
"Yan",
"Minzhi",
""
],
[
"Lyu",
"Michael R.",
""
]
]
| new_dataset | 0.992311 |
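The SeaLog entry above centers on a dynamically growing trie over log tokens. A minimal, hypothetical sketch of the underlying idea — inserting tokenized log lines into a trie and flagging lines that fall off known paths as anomaly candidates — is given below; the matching rule and class names are assumptions for illustration only, not the paper's detection agent.

```python
class LogTrie:
    """Toy trie over log tokens: known token paths represent 'normal' patterns."""
    def __init__(self):
        self.root = {}

    def insert(self, log_line):
        node = self.root
        for tok in log_line.split():
            node = node.setdefault(tok, {})

    def is_anomalous(self, log_line):
        """Flag a line whose token path leaves the trie early."""
        node = self.root
        for tok in log_line.split():
            if tok not in node:
                return True
            node = node[tok]
        return False

trie = LogTrie()
for line in ["db connection established", "db connection closed"]:
    trie.insert(line)
print(trie.is_anomalous("db connection refused"))  # True
print(trie.is_anomalous("db connection closed"))   # False
```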
2306.08243 | Jicheng Li | Jicheng Li, Vuthea Chheang, Pinar Kullu, Eli Brignac, Zhang Guo,
Kenneth E. Barner, Anjana Bhat, Roghayeh Leila Barmaki | MMASD: A Multimodal Dataset for Autism Intervention Analysis | 8 pages, 2 figures | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Autism spectrum disorder (ASD) is a developmental disorder characterized by
significant social communication impairments and difficulties perceiving and
presenting communication cues. Machine learning techniques have been broadly
adopted to facilitate autism studies and assessments. However, computational
models are primarily concentrated on specific analysis and validated on private
datasets in the autism community, which limits comparisons across models due to
privacy-preserving data sharing complications. This work presents a novel
privacy-preserving open-source dataset, MMASD as a MultiModal ASD benchmark
dataset, collected from play therapy interventions of children with Autism.
MMASD includes data from 32 children with ASD, and 1,315 data samples segmented
from over 100 hours of intervention recordings. To promote public access, each
data sample consists of four privacy-preserving modalities of data; some of
which are derived from original videos: (1) optical flow, (2) 2D skeleton, (3)
3D skeleton, and (4) clinician ASD evaluation scores of children, e.g., ADOS
scores. MMASD aims to assist researchers and therapists in understanding
children's cognitive status, monitoring their progress during therapy, and
customizing the treatment plan accordingly. It also offers inspiration for
downstream tasks such as action quality assessment and interpersonal synchrony
estimation. The MMASD dataset can be easily accessed at
https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2023 05:04:11 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 15:03:23 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Oct 2023 15:20:24 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Li",
"Jicheng",
""
],
[
"Chheang",
"Vuthea",
""
],
[
"Kullu",
"Pinar",
""
],
[
"Brignac",
"Eli",
""
],
[
"Guo",
"Zhang",
""
],
[
"Barner",
"Kenneth E.",
""
],
[
"Bhat",
"Anjana",
""
],
[
"Barmaki",
"Roghayeh Leila",
""
]
]
| new_dataset | 0.999836 |
2306.09001 | Xinhao Liu | Yiming Li, Sihang Li, Xinhao Liu, Moonjun Gong, Kenan Li, Nuo Chen,
Zijun Wang, Zhiheng Li, Tao Jiang, Fisher Yu, Yue Wang, Hang Zhao, Zhiding
Yu, Chen Feng | SSCBench: Monocular 3D Semantic Scene Completion Benchmark in Street
Views | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular scene understanding is a foundational component of autonomous
systems. Within the spectrum of monocular perception topics, one crucial and
useful task for holistic 3D scene understanding is semantic scene completion
(SSC), which jointly completes semantic information and geometric details from
RGB input. However, progress in SSC, particularly in large-scale street views,
is hindered by the scarcity of high-quality datasets. To address this issue, we
introduce SSCBench, a comprehensive benchmark that integrates scenes from
widely used automotive datasets (e.g., KITTI-360, nuScenes, and Waymo).
SSCBench follows an established setup and format in the community, facilitating
the easy exploration of SSC methods in various street views. We benchmark
models using monocular, trinocular, and point cloud input to assess the
performance gap resulting from sensor coverage and modality. Moreover, we have
unified semantic labels across diverse datasets to simplify cross-domain
generalization testing. We commit to including more datasets and SSC models to
drive further advancements in this field.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2023 09:56:33 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 01:50:38 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Li",
"Yiming",
""
],
[
"Li",
"Sihang",
""
],
[
"Liu",
"Xinhao",
""
],
[
"Gong",
"Moonjun",
""
],
[
"Li",
"Kenan",
""
],
[
"Chen",
"Nuo",
""
],
[
"Wang",
"Zijun",
""
],
[
"Li",
"Zhiheng",
""
],
[
"Jiang",
"Tao",
""
],
[
"Yu",
"Fisher",
""
],
[
"Wang",
"Yue",
""
],
[
"Zhao",
"Hang",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Feng",
"Chen",
""
]
]
| new_dataset | 0.998216 |
2306.16605 | Priya Sundaresan | Priya Sundaresan, Suneel Belkhale, Dorsa Sadigh, Jeannette Bohg | KITE: Keypoint-Conditioned Policies for Semantic Manipulation | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | While natural language offers a convenient shared interface for humans and
robots, enabling robots to interpret and follow language commands remains a
longstanding challenge in manipulation. A crucial step to realizing a
performant instruction-following robot is achieving semantic manipulation,
where a robot interprets language at different specificities, from high-level
instructions like "Pick up the stuffed animal" to more detailed inputs like
"Grab the left ear of the elephant." To tackle this, we propose Keypoints +
Instructions to Execution (KITE), a two-step framework for semantic
manipulation which attends to both scene semantics (distinguishing between
different objects in a visual scene) and object semantics (precisely localizing
different parts within an object instance). KITE first grounds an input
instruction in a visual scene through 2D image keypoints, providing a highly
accurate object-centric bias for downstream action inference. Provided an RGB-D
scene observation, KITE then executes a learned keypoint-conditioned skill to
carry out the instruction. The combined precision of keypoints and
parameterized skills enables fine-grained manipulation with generalization to
scene and object variations. Empirically, we demonstrate KITE in 3 real-world
environments: long-horizon 6-DoF tabletop manipulation, semantic grasping, and
a high-precision coffee-making task. In these settings, KITE achieves a 75%,
70%, and 71% overall success rate for instruction-following, respectively. KITE
outperforms frameworks that opt for pre-trained visual language models over
keypoint-based grounding, or omit skills in favor of end-to-end visuomotor
control, all while being trained from fewer or comparable amounts of
demonstrations. Supplementary material, datasets, code, and videos can be found
on our website: http://tinyurl.com/kite-site.
| [
{
"version": "v1",
"created": "Thu, 29 Jun 2023 00:12:21 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 11:02:52 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Oct 2023 14:56:37 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Sundaresan",
"Priya",
""
],
[
"Belkhale",
"Suneel",
""
],
[
"Sadigh",
"Dorsa",
""
],
[
"Bohg",
"Jeannette",
""
]
]
| new_dataset | 0.999227 |
2307.00433 | Melvin Mokhtari | Melvin Mokhtari, Amirreza Hosseini, Alireza Habibi, Adel Karshenas,
Ali Amoomahdi | Intelligent Traffic Control with Smart Speed Bumps | 7 pages, 5 figures | null | null | null | cs.NI cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Traffic congestion and safety continue to pose significant challenges in
urban environments. In this paper, we introduce the Smart Speed Bump (SSBump),
a novel traffic calming solution that leverages the Internet of Things (IoT)
and innovative non-Newtonian fluid materials to enhance road safety, optimize
emergency response times, and improve the overall driving experience. The
SSBump uses IoT sensors to detect and communicate with emergency vehicles,
reducing response times by temporarily deflating. These sensors also analyze
traffic patterns and inform data-driven decisions. Additionally, the SSBump
uses an Oobleck mixture that adapts its behavior based on the velocity of
approaching vehicles, resulting in a safer and more comfortable experience for
drivers. This study commences with an overview of prevalent traffic
congestion, followed by a discussion of various available options in this
domain. Subsequently, the paper explores the advantages of smart speed bumps
and their operational mechanisms. Finally, it presents a comprehensive analysis
of the results, the challenges, and the prospects of the work. The findings of
this research demonstrate the potential of the SSBump system to revolutionize
traffic control, emergency response time, and the driving experience in smart
cities, making it a game-changing innovation for advanced transportation
systems.
| [
{
"version": "v1",
"created": "Sat, 1 Jul 2023 21:47:03 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 21:31:18 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Mokhtari",
"Melvin",
""
],
[
"Hosseini",
"Amirreza",
""
],
[
"Habibi",
"Alireza",
""
],
[
"Karshenas",
"Adel",
""
],
[
"Amoomahdi",
"Ali",
""
]
]
| new_dataset | 0.996857 |
2307.03533 | Simon Leglaive | Simon Leglaive, L\'eonie Borne, Efthymios Tzinis, Mostafa Sadeghi,
Matthieu Fraticelli, Scott Wisdom, Manuel Pariente, Daniel Pressnitzer, John
R. Hershey | The CHiME-7 UDASE task: Unsupervised domain adaptation for
conversational speech enhancement | null | The 7th International Workshop on Speech Processing in Everyday
Environments (CHiME), Dublin, Ireland, 2023 | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised speech enhancement models are trained using artificially generated
mixtures of clean speech and noise signals, which may not match real-world
recording conditions at test time. This mismatch can lead to poor performance
if the test domain significantly differs from the synthetic training domain.
This paper introduces the unsupervised domain adaptation for conversational
speech enhancement (UDASE) task of the 7th CHiME challenge. This task aims to
leverage real-world noisy speech recordings from the target domain for
unsupervised domain adaptation of speech enhancement models. The target domain
corresponds to the multi-speaker reverberant conversational speech recordings
of the CHiME-5 dataset, for which the ground-truth clean speech reference is
unavailable. Given a CHiME-5 recording, the task is to estimate the clean,
potentially multi-speaker, reverberant speech, removing the additive background
noise. We discuss the motivation for the CHiME-7 UDASE task and describe the
data, the task, and the baseline system.
| [
{
"version": "v1",
"created": "Fri, 7 Jul 2023 11:41:33 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 07:38:18 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Leglaive",
"Simon",
""
],
[
"Borne",
"Léonie",
""
],
[
"Tzinis",
"Efthymios",
""
],
[
"Sadeghi",
"Mostafa",
""
],
[
"Fraticelli",
"Matthieu",
""
],
[
"Wisdom",
"Scott",
""
],
[
"Pariente",
"Manuel",
""
],
[
"Pressnitzer",
"Daniel",
""
],
[
"Hershey",
"John R.",
""
]
]
| new_dataset | 0.979833 |
2307.10173 | Wei Cheng | Wei Cheng, Ruixiang Chen, Wanqi Yin, Siming Fan, Keyu Chen, Honglin
He, Huiwen Luo, Zhongang Cai, Jingbo Wang, Yang Gao, Zhengming Yu, Zhengyu
Lin, Daxuan Ren, Lei Yang, Ziwei Liu, Chen Change Loy, Chen Qian, Wayne Wu,
Dahua Lin, Bo Dai, Kwan-Yee Lin | DNA-Rendering: A Diverse Neural Actor Repository for High-Fidelity
Human-centric Rendering | This paper is accepted by ICCV2023. Project page:
https://dna-rendering.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Realistic human-centric rendering plays a key role in both computer vision
and computer graphics. Rapid progress has been made on the algorithmic side
over the years, yet existing human-centric rendering datasets and benchmarks
are rather impoverished in terms of diversity, which is crucial for rendering
effect. Researchers are usually constrained to explore and evaluate a small set
of rendering problems on current datasets, while real-world applications
require methods to be robust across different scenarios. In this work, we
present DNA-Rendering, a large-scale, high-fidelity repository of human
performance data for neural actor rendering. DNA-Rendering presents several
alluring attributes. First, our dataset contains over 1500 human subjects, 5000
motion sequences, and a data volume of 67.5M frames. Second, we provide rich assets
for each subject -- 2D/3D human body keypoints, foreground masks, SMPLX models,
cloth/accessory materials, multi-view images, and videos. These assets boost
the current method's accuracy on downstream rendering tasks. Third, we
construct a professional multi-view system to capture data, which contains 60
synchronous cameras with a maximum resolution of 4096 x 3000, 15 fps capture, and strict
camera calibration steps, ensuring high-quality resources for task training and
evaluation. Along with the dataset, we provide a large-scale and quantitative
benchmark in full-scale, with multiple tasks to evaluate the existing progress
of novel view synthesis, novel pose animation synthesis, and novel identity
rendering methods. In this manuscript, we describe our DNA-Rendering effort as
a revealing of new observations, challenges, and future directions to
human-centric rendering. The dataset, code, and benchmarks will be publicly
available at https://dna-rendering.github.io/
| [
{
"version": "v1",
"created": "Wed, 19 Jul 2023 17:58:03 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Sep 2023 06:24:23 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Cheng",
"Wei",
""
],
[
"Chen",
"Ruixiang",
""
],
[
"Yin",
"Wanqi",
""
],
[
"Fan",
"Siming",
""
],
[
"Chen",
"Keyu",
""
],
[
"He",
"Honglin",
""
],
[
"Luo",
"Huiwen",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Wang",
"Jingbo",
""
],
[
"Gao",
"Yang",
""
],
[
"Yu",
"Zhengming",
""
],
[
"Lin",
"Zhengyu",
""
],
[
"Ren",
"Daxuan",
""
],
[
"Yang",
"Lei",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Qian",
"Chen",
""
],
[
"Wu",
"Wayne",
""
],
[
"Lin",
"Dahua",
""
],
[
"Dai",
"Bo",
""
],
[
"Lin",
"Kwan-Yee",
""
]
]
| new_dataset | 0.998974 |
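Since the DNA-Rendering entry above emphasizes its calibrated 60-camera capture system and per-subject 3D assets, a short illustrative sketch follows: projecting a 3D body keypoint into one calibrated view with standard intrinsics and extrinsics. The matrices here are placeholders, not the dataset's calibration files.

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point into pixel coordinates for one calibrated camera.
    K: (3,3) intrinsics, R: (3,3) rotation, t: (3,) translation (world -> camera)."""
    X_cam = R @ X_world + t
    uvw = K @ X_cam
    return uvw[:2] / uvw[2]

K = np.array([[1500.0, 0.0, 2048.0],
              [0.0, 1500.0, 1500.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 3.0])   # camera 3 m in front of the subject
print(project_point(np.array([0.1, -0.2, 0.0]), K, R, t))
```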
2308.01544 | Sarah Schwettmann | Sarah Schwettmann, Neil Chowdhury, Samuel Klein, David Bau, Antonio
Torralba | Multimodal Neurons in Pretrained Text-Only Transformers | Oral presentation at ICCV CLVL 2023 | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Language models demonstrate remarkable capacity to generalize representations
learned in one modality to downstream tasks in other modalities. Can we trace
this ability to individual neurons? We study the case where a frozen text
transformer is augmented with vision using a self-supervised visual encoder and
a single linear projection learned on an image-to-text task. Outputs of the
projection layer are not immediately decodable into language describing image
content; instead, we find that translation between modalities occurs deeper
within the transformer. We introduce a procedure for identifying "multimodal
neurons" that convert visual representations into corresponding text, and
decoding the concepts they inject into the model's residual stream. In a series
of experiments, we show that multimodal neurons operate on specific visual
concepts across inputs, and have a systematic causal effect on image
captioning.
| [
{
"version": "v1",
"created": "Thu, 3 Aug 2023 05:27:12 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 23:24:13 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Schwettmann",
"Sarah",
""
],
[
"Chowdhury",
"Neil",
""
],
[
"Klein",
"Samuel",
""
],
[
"Bau",
"David",
""
],
[
"Torralba",
"Antonio",
""
]
]
| new_dataset | 0.996242 |
2308.06810 | Yue Cao | Yue Cao and C.S. George Lee | Ground Manipulator Primitive Tasks to Executable Actions using Large
Language Models | AAAI Fall Symposium on Unifying Representations for Robot Application
Development, Arlington, VA, 2023 | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Layered architectures have been widely used in robot systems. The majority of
them implement planning and execution functions in separate layers. However,
there still lacks a straightforward way to transit high-level tasks in the
planning layer to the low-level motor commands in the execution layer. In order
to tackle this challenge, we propose a novel approach to ground the manipulator
primitive tasks to robot low-level actions using large language models (LLMs).
We designed a program-function-like prompt based on the task frame formalism.
In this way, we enable LLMs to generate position/force set-points for hybrid
control. Evaluations over several state-of-the-art LLMs are provided.
| [
{
"version": "v1",
"created": "Sun, 13 Aug 2023 16:52:36 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Oct 2023 03:31:02 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Cao",
"Yue",
""
],
[
"Lee",
"C. S. George",
""
]
]
| new_dataset | 0.998964 |
2308.16458 | Xiangru Tang | Xiangru Tang, Bill Qian, Rick Gao, Jiakang Chen, Xinyun Chen, Mark
Gerstein | BioCoder: A Benchmark for Bioinformatics Code Generation with Contextual
Pragmatic Knowledge | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Pre-trained large language models have significantly improved code
generation. As these models scale up, there is an increasing need for the
output to handle more intricate tasks and to be appropriately specialized to
particular domains. Bioinformatics provides one such important domain. In this field,
generating functional programs poses notable additional challenges due to the
amount of specialized domain knowledge, the need for complicated data
operations, and intricate functional dependencies between the operations. Here,
we present BioCoder, a benchmark developed to evaluate existing pre-trained
models in generating bioinformatics code. In relation to function-code
generation, BioCoder covers potential package dependencies, class declarations,
and global variables. It incorporates 1026 functions and 1243 methods in Python
and Java from GitHub and 253 examples from the Rosalind Project. BioCoder
incorporates a fuzz-testing framework for evaluation, and we have applied it to
evaluate many models including InCoder, CodeGen, CodeGen2, SantaCoder,
StarCoder, StarCoder+, InstructCodeT5+, GPT-3.5, and GPT-4. The results
highlight two key aspects of successful models: 1) that they contain specific
domain knowledge of bioinformatics (beyond just coding knowledge); 2) that they
accommodate a long prompt with full context (i.e. functional dependencies). Our
dataset, benchmark, Docker images, and scripts required for testing are all
available at https://github.com/gersteinlab/biocoder.
| [
{
"version": "v1",
"created": "Thu, 31 Aug 2023 04:52:58 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 17:51:16 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 20:27:06 GMT"
}
]
| 2023-10-03T00:00:00 | [
[
"Tang",
"Xiangru",
""
],
[
"Qian",
"Bill",
""
],
[
"Gao",
"Rick",
""
],
[
"Chen",
"Jiakang",
""
],
[
"Chen",
"Xinyun",
""
],
[
"Gerstein",
"Mark",
""
]
]
| new_dataset | 0.999371 |