id (stringlengths 9–10) | submitter (stringlengths 2–52, ⌀ nullable) | authors (stringlengths 4–6.51k) | title (stringlengths 4–246) | comments (stringlengths 1–523, ⌀ nullable) | journal-ref (stringlengths 4–345, ⌀ nullable) | doi (stringlengths 11–120, ⌀ nullable) | report-no (stringlengths 2–243, ⌀ nullable) | categories (stringlengths 5–98) | license (stringclasses, 9 values) | abstract (stringlengths 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses, 1 value) | probability (float64, 0.95–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2304.12521 | Keunwoo Choi Mr | Keunwoo Choi, Jaekwon Im, Laurie Heller, Brian McFee, Keisuke Imoto,
Yuki Okamoto, Mathieu Lagrange, Shinosuke Takamichi | Foley Sound Synthesis at the DCASE 2023 Challenge | DCASE 2023 Challenge - Task 7 - Technical Report (Submitted to DCASE
2023 Workshop) | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | The addition of Foley sound effects during post-production is a common
technique used to enhance the perceived acoustic properties of multimedia
content. Traditionally, Foley sound has been produced by human Foley artists,
which involves manual recording and mixing of sound. However, recent advances
in sound synthesis and generative models have generated interest in
machine-assisted or automatic Foley synthesis techniques. To promote further
research in this area, we have organized a challenge in DCASE 2023: Task 7 -
Foley Sound Synthesis. Our challenge aims to provide a standardized evaluation
framework that is both rigorous and efficient, allowing for the evaluation of
different Foley synthesis systems. We received 17 submissions, and performed
both objective and subjective evaluation to rank them according to three
criteria: audio quality, fit-to-category, and diversity. Through this
challenge, we hope to encourage active participation from the research
community and advance the state-of-the-art in automatic Foley synthesis. In
this technical report, we provide a detailed overview of the Foley sound
synthesis challenge, including task definition, dataset, baseline, evaluation
scheme and criteria, challenge result, and discussion.
| [
{
"version": "v1",
"created": "Tue, 25 Apr 2023 02:28:32 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2023 03:25:11 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jun 2023 04:35:03 GMT"
},
{
"version": "v4",
"created": "Thu, 28 Sep 2023 18:38:21 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Choi",
"Keunwoo",
""
],
[
"Im",
"Jaekwon",
""
],
[
"Heller",
"Laurie",
""
],
[
"McFee",
"Brian",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Okamoto",
"Yuki",
""
],
[
"Lagrange",
"Mathieu",
""
],
[
"Takamichi",
"Shinosuke",
""
]
]
| new_dataset | 0.998785 |
2305.01074 | Kien Nguyen Thanh | Kien Nguyen, Tharindu Fernando, Clinton Fookes, Sridha Sridharan | Physical Adversarial Attacks for Surveillance: A Survey | This paper has been accepted for publication in T-NNLS | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modern automated surveillance techniques are heavily reliant on deep learning
methods. Despite the superior performance, these learning systems are
inherently vulnerable to adversarial attacks - maliciously crafted inputs that
are designed to mislead, or trick, models into making incorrect predictions. An
adversary can physically change their appearance by wearing adversarial
t-shirts, glasses, or hats, or by adopting specific behaviors, to potentially evade
various forms of detection, tracking, and recognition by surveillance systems
and obtain unauthorized access to secure properties and assets. This poses a
severe threat to the security and safety of modern surveillance systems. This
paper reviews recent attempts and findings in learning and designing physical
adversarial attacks for surveillance applications. In particular, we propose a
framework to analyze physical adversarial attacks and provide a comprehensive
survey of physical adversarial attacks on four key surveillance tasks:
detection, identification, tracking, and action recognition under this
framework. Furthermore, we review and analyze strategies to defend against the
physical adversarial attacks and the methods for evaluating the strengths of
the defense. The insights in this paper present an important step in building
resilience within surveillance systems to physical adversarial attacks.
| [
{
"version": "v1",
"created": "Mon, 1 May 2023 20:19:59 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 13:43:21 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Nguyen",
"Kien",
""
],
[
"Fernando",
"Tharindu",
""
],
[
"Fookes",
"Clinton",
""
],
[
"Sridharan",
"Sridha",
""
]
]
| new_dataset | 0.990782 |
2305.07893 | Mohammad Abdous | Mohammad Abdous, Poorya Piroozfar, Behrouz Minaei Bidgoli | PESTS: Persian_English Cross Lingual Corpus for Semantic Textual
Similarity | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One of the components of natural language processing that has received a lot
of investigation recently is semantic textual similarity. In computational
linguistics and natural language processing, assessing the semantic similarity
of words, phrases, paragraphs, and texts is crucial. Calculating the degree of
semantic resemblance between two textual pieces, paragraphs, or phrases
provided in both monolingual and cross-lingual versions is known as semantic
similarity. Cross lingual semantic similarity requires corpora in which there
are sentence pairs in both the source and target languages with a degree of
semantic similarity between them. Many existing cross-lingual semantic
similarity models rely on machine translation because cross-lingual semantic
similarity datasets are unavailable, and the propagation of machine-translation
errors reduces the accuracy of these models. On the other hand, when we
want to use semantic similarity features for machine translation, the same
machine translations should not be used for semantic similarity. For Persian,
which is one of the low resource languages, no effort has been made in this
regard and the need for a model that can understand the context of two
languages is felt more than ever. In this article, the corpus of semantic
textual similarity between sentences in Persian and English languages has been
produced for the first time by using linguistic experts. We named this dataset
PESTS (Persian English Semantic Textual Similarity). This corpus contains 5375
sentence pairs. Also, different models based on transformers have been
fine-tuned using this dataset. The results show that using the PESTS dataset,
the Pearson correlation of the XLM ROBERTa model increases from 85.87% to
95.62%.
| [
{
"version": "v1",
"created": "Sat, 13 May 2023 11:02:50 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 16:12:29 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Abdous",
"Mohammad",
""
],
[
"Piroozfar",
"Poorya",
""
],
[
"Bidgoli",
"Behrouz Minaei",
""
]
]
| new_dataset | 0.992014 |
2305.10503 | Youtan Yin | Youtan Yin, Zhoujie Fu, Fan Yang, Guosheng Lin | OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation
with Neural Radiance Fields | project site: https://ornerf.github.io/ (codes available) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has
increased interest in 3D scene editing. An essential task in editing is
removing objects from a scene while ensuring visual reasonability and multiview
consistency. However, current methods face challenges such as time-consuming
object labeling, limited capability to remove specific targets, and compromised
rendering quality after removal. This paper proposes a novel object-removing
pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given
points or text prompts on a single view, achieving better performance in less
time than previous works. Our method spreads user annotations to all views
through 3D geometry and sparse correspondence, ensuring 3D consistency with
less processing burden. Then the recent 2D segmentation model Segment-Anything
(SAM) is applied to predict masks, and a 2D inpainting model is used to
generate color supervision. Finally, our algorithm applies depth supervision
and perceptual loss to maintain consistency in geometry and appearance after
object removal. Experimental results demonstrate that our method achieves
better editing quality with less time than previous works, considering both
quality and quantity.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 18:18:05 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 03:32:11 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 02:36:03 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Yin",
"Youtan",
""
],
[
"Fu",
"Zhoujie",
""
],
[
"Yang",
"Fan",
""
],
[
"Lin",
"Guosheng",
""
]
]
| new_dataset | 0.999441 |
2305.19402 | Yujia Bao | Yujia Bao, Theofanis Karaletsos | Contextual Vision Transformers for Robust Representation Learning | null | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce Contextual Vision Transformers (ContextViT), a method designed
to generate robust image representations for datasets experiencing shifts in
latent factors across various groups. Derived from the concept of in-context
learning, ContextViT incorporates an additional context token to encapsulate
group-specific information. This integration allows the model to adjust the
image representation in accordance with the group-specific context.
Specifically, for a given input image, ContextViT maps images with identical
group membership into this context token, which is appended to the input image
tokens. Additionally, we introduce a context inference network to predict such
tokens on-the-fly, given a batch of samples from the group. This enables
ContextViT to adapt to new testing distributions during inference time. We
demonstrate the efficacy of ContextViT across a wide range of applications. In
supervised fine-tuning, we show that augmenting pre-trained ViTs with our
proposed context conditioning mechanism results in consistent improvements in
out-of-distribution generalization on iWildCam and FMoW. We also investigate
self-supervised representation learning with ContextViT. Our experiments on the
Camelyon17 pathology imaging benchmark and the JUMP-CP microscopy imaging
benchmark demonstrate that ContextViT excels in learning stable image
featurizations amidst distribution shift, consistently outperforming its ViT
counterpart.
| [
{
"version": "v1",
"created": "Tue, 30 May 2023 20:31:26 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 20:01:05 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Bao",
"Yujia",
""
],
[
"Karaletsos",
"Theofanis",
""
]
]
| new_dataset | 0.999539 |
2306.00637 | Marc Aubreville | Pablo Pernias, Dominic Rampas, Mats L. Richter, Christopher J. Pal and
Marc Aubreville | Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image
Diffusion Models | Corresponding to "W\"urstchen v2" | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce W\"urstchen, a novel architecture for text-to-image synthesis
that combines competitive performance with unprecedented cost-effectiveness for
large-scale text-to-image diffusion models. A key contribution of our work is
to develop a latent diffusion technique in which we learn a detailed but
extremely compact semantic image representation used to guide the diffusion
process. This highly compressed representation of an image provides much more
detailed guidance compared to latent representations of language and this
significantly reduces the computational requirements to achieve
state-of-the-art results. Our approach also improves the quality of
text-conditioned image generation based on our user preference study. The
training requirements of our approach consist of 24,602 A100-GPU hours -
compared to Stable Diffusion 2.1's 200,000 GPU hours. Our approach also
requires less training data to achieve these results. Furthermore, our compact
latent representations allow us to perform inference over twice as fast,
slashing the usual costs and carbon footprint of a state-of-the-art (SOTA)
diffusion model significantly, without compromising the end performance. In a
broader comparison against SOTA models our approach is substantially more
efficient and compares favorably in terms of image quality. We believe that
this work motivates more emphasis on the prioritization of both performance and
computational accessibility.
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2023 13:00:53 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 05:32:46 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Pernias",
"Pablo",
""
],
[
"Rampas",
"Dominic",
""
],
[
"Richter",
"Mats L.",
""
],
[
"Pal",
"Christopher J.",
""
],
[
"Aubreville",
"Marc",
""
]
]
| new_dataset | 0.99492 |
2306.14565 | Fuxiao Liu | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan
Wang | Mitigating Hallucination in Large Multi-Modal Models via Robust
Instruction Tuning | 40 pages, 32 figures. Under Review | null | null | null | cs.CV cs.AI cs.CE cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2023 10:26:33 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 14:41:52 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 16:02:28 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Liu",
"Fuxiao",
""
],
[
"Lin",
"Kevin",
""
],
[
"Li",
"Linjie",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Yacoob",
"Yaser",
""
],
[
"Wang",
"Lijuan",
""
]
]
| new_dataset | 0.982236 |
2307.09087 | Piergiorgio Ladisa | Piergiorgio Ladisa, Merve Sahin, Serena Elisa Ponta, Marco Rosa,
Matias Martinez, Olivier Barais | The Hitchhiker's Guide to Malicious Third-Party Dependencies | Proceedings of the 2023 Workshop on Software Supply Chain Offensive
Research and Ecosystem Defenses (SCORED '23), November 30, 2023, Copenhagen,
Denmark | null | 10.1145/3605770.3625212 | null | cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | The increasing popularity of certain programming languages has spurred the
creation of ecosystem-specific package repositories and package managers. Such
repositories (e.g., NPM, PyPI) serve as public databases that users can query
to retrieve packages for various functionalities, whereas package managers
automatically handle dependency resolution and package installation on the
client side. These mechanisms enhance software modularization and accelerate
implementation. However, they have become a target for malicious actors seeking
to propagate malware on a large scale.
In this work, we show how attackers can leverage capabilities of popular
package managers and languages to achieve arbitrary code execution on victim
machines, thereby realizing open-source software supply chain attacks. Based on
the analysis of 7 ecosystems, we identify 3 install-time and 4 runtime
techniques, and we provide recommendations describing how to reduce the risk
when consuming third-party dependencies. We will provide proof-of-concepts that
demonstrate the identified techniques. Furthermore, we describe evasion
strategies employed by attackers to circumvent detection mechanisms.
| [
{
"version": "v1",
"created": "Tue, 18 Jul 2023 09:12:06 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 13:03:56 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Ladisa",
"Piergiorgio",
""
],
[
"Sahin",
"Merve",
""
],
[
"Ponta",
"Serena Elisa",
""
],
[
"Rosa",
"Marco",
""
],
[
"Martinez",
"Matias",
""
],
[
"Barais",
"Olivier",
""
]
]
| new_dataset | 0.972218 |
2308.09959 | Karl W\"ust | Giacomo Giuliari, Markus Legner, Adrian Perrig, Jean-Pierre Smith,
Karl W\"ust | Hummingbird: A Flexible and Lightweight Inter-Domain
Bandwidth-Reservation System | 14 pages, 7 figures | null | null | null | cs.NI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current Internet lacks a bandwidth-reservation infrastructure that
enables fine-grained inter-domain reservations for end hosts. This is hindering
the provisioning of quality-of-service guarantees for real-time applications
like video calls and gaming, cloud-based systems, financial transactions,
telesurgery, and other remote applications that benefit from reliable
communication. This paper introduces Hummingbird, a novel lightweight
inter-domain bandwidth-reservation system that addresses several shortcomings
of previous designs.
Hummingbird supports flexible and composable reservations and enables
end-to-end guarantees without requiring autonomous systems to manage
reservations for their endhosts. Previous systems tied reservations to
autonomous-system numbers or network addresses, which limits the flexibility of
reservations. In contrast, our system decouples reservations from network
identities and, as a result, the control plane from the data plane. This design
choice facilitates multiple co-existing control-plane mechanisms and enables
innovative approaches, such as a control plane based on blockchain smart
contracts that offers tradeable bandwidth-reservation assets and end-to-end
guarantees. The data-plane design ensures simplicity for efficient processing
on border routers, which streamlines implementation, deployment, and traffic
policing while maintaining robust security properties.
| [
{
"version": "v1",
"created": "Sat, 19 Aug 2023 09:27:46 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 09:17:15 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Giuliari",
"Giacomo",
""
],
[
"Legner",
"Markus",
""
],
[
"Perrig",
"Adrian",
""
],
[
"Smith",
"Jean-Pierre",
""
],
[
"Wüst",
"Karl",
""
]
]
| new_dataset | 0.999715 |
2308.11951 | Chunjin Song | Chunjin Song, Bastian Wandt, Helge Rhodin | Pose Modulated Avatars from Video | null | null | null | null | cs.CV cs.AI cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is now possible to reconstruct dynamic human motion and shape from a
sparse set of cameras using Neural Radiance Fields (NeRF) driven by an
underlying skeleton. However, a challenge remains to model the deformation of
cloth and skin in relation to skeleton pose. Unlike existing avatar models that
are learned implicitly or rely on a proxy surface, our approach is motivated by
the observation that different poses necessitate unique frequency assignments.
Neglecting this distinction yields noisy artifacts in smooth areas or blurs
fine-grained texture and shape details in sharp regions. We develop a
two-branch neural network that is adaptive and explicit in the frequency
domain. The first branch is a graph neural network that models correlations
among body parts locally, taking skeleton pose as input. The second branch
combines these correlation features into a set of global frequencies and then
modulates the feature encoding. Our experiments demonstrate that our network
outperforms state-of-the-art methods in terms of preserving details and
generalization capabilities.
| [
{
"version": "v1",
"created": "Wed, 23 Aug 2023 06:49:07 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Aug 2023 18:52:03 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Sep 2023 15:03:09 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Song",
"Chunjin",
""
],
[
"Wandt",
"Bastian",
""
],
[
"Rhodin",
"Helge",
""
]
]
| new_dataset | 0.96411 |
2308.14477 | Zhuoqi Cheng | Zhuoqi Cheng, Simon Lyck Bj{\ae}rt S{\o}rensen, Mikkel Werge Olsen,
Ren\'e Lynge Eriksen, Thiusius Rajeeth Savarimuthu | Medical needle tip tracking based on Optical Imaging and AI | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep needle insertion to a target often poses a huge challenge, requiring a
combination of specialized skills, assistive technology, and extensive
training. One of the frequently encountered medical scenarios demanding such
expertise includes the needle insertion into a femoral vessel in the groin.
After the access to the femoral vessel, various medical procedures, such as
cardiac catheterization and extracorporeal membrane oxygenation (ECMO) can be
performed. However, even with the aid of Ultrasound imaging, achieving
successful insertion can necessitate multiple attempts due to the complexities
of anatomy and tissue deformation. To address this challenge, this paper
presents an innovative technology for needle tip real-time tracking, aiming for
enhanced needle insertion guidance. Specifically, our approach revolves around
the creation of scattering imaging using an optical fiber-equipped needle, and
uses Convolutional Neural Network (CNN) based algorithms to enable real-time
estimation of the needle tip's position and orientation during insertion
procedures. The efficacy of the proposed technology was rigorously evaluated
through three experiments. The first two experiments involved rubber and bacon
phantoms to simulate groin anatomy. The positional errors averaged 2.3±1.5 mm
and 2.0±1.2 mm, and the orientation errors averaged 0.2±0.11 rad and
0.16±0.1 rad. Furthermore, the system's capabilities were validated through
experiments conducted on a fresh porcine phantom mimicking more complex
anatomical structures, yielding a positional accuracy of 3.2±3.1 mm and an
orientational accuracy of 0.19±0.1 rad. Given the average femoral arterial
radius of 4 to 5 mm, the proposed system demonstrates great potential
for precise needle guidance in femoral artery insertion procedures. In
addition, the findings highlight the broader potential applications of the
system in the medical field.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 10:30:08 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 14:27:00 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Cheng",
"Zhuoqi",
""
],
[
"Sørensen",
"Simon Lyck Bjært",
""
],
[
"Olsen",
"Mikkel Werge",
""
],
[
"Eriksen",
"René Lynge",
""
],
[
"Savarimuthu",
"Thiusius Rajeeth",
""
]
]
| new_dataset | 0.994714 |
2308.16149 | Preslav Nakov | Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan
Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen,
Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan,
Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham
Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness,
Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren,
Preslav Nakov, Timothy Baldwin, Eric Xing | Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open
Generative Large Language Models | Arabic-centric, foundation model, large-language model, LLM,
generative model, instruction-tuned, Jais, Jais-chat | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Jais and Jais-chat, new state-of-the-art Arabic-centric
foundation and instruction-tuned open generative large language models (LLMs).
The models are based on the GPT-3 decoder-only architecture and are pretrained
on a mixture of Arabic and English texts, including source code in various
programming languages. With 13 billion parameters, they demonstrate better
knowledge and reasoning capabilities in Arabic than any existing open Arabic
and multilingual models by a sizable margin, based on extensive evaluation.
Moreover, the models are competitive in English compared to English-centric
open models of similar size, despite being trained on much less English data.
We provide a detailed description of the training, the tuning, the safety
alignment, and the evaluation of the models. We release two open versions of
the model -- the foundation Jais model, and an instruction-tuned Jais-chat
variant -- with the aim of promoting research on Arabic LLMs. Available at
https://huggingface.co/inception-mbzuai/jais-13b-chat
| [
{
"version": "v1",
"created": "Wed, 30 Aug 2023 17:07:17 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 11:51:51 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Sengupta",
"Neha",
""
],
[
"Sahu",
"Sunil Kumar",
""
],
[
"Jia",
"Bokang",
""
],
[
"Katipomu",
"Satheesh",
""
],
[
"Li",
"Haonan",
""
],
[
"Koto",
"Fajri",
""
],
[
"Marshall",
"William",
""
],
[
"Gosal",
"Gurpreet",
""
],
[
"Liu",
"Cynthia",
""
],
[
"Chen",
"Zhiming",
""
],
[
"Afzal",
"Osama Mohammed",
""
],
[
"Kamboj",
"Samta",
""
],
[
"Pandit",
"Onkar",
""
],
[
"Pal",
"Rahul",
""
],
[
"Pradhan",
"Lalit",
""
],
[
"Mujahid",
"Zain Muhammad",
""
],
[
"Baali",
"Massa",
""
],
[
"Han",
"Xudong",
""
],
[
"Bsharat",
"Sondos Mahmoud",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Shen",
"Zhiqiang",
""
],
[
"Liu",
"Zhengzhong",
""
],
[
"Vassilieva",
"Natalia",
""
],
[
"Hestness",
"Joel",
""
],
[
"Hock",
"Andy",
""
],
[
"Feldman",
"Andrew",
""
],
[
"Lee",
"Jonathan",
""
],
[
"Jackson",
"Andrew",
""
],
[
"Ren",
"Hector Xuguang",
""
],
[
"Nakov",
"Preslav",
""
],
[
"Baldwin",
"Timothy",
""
],
[
"Xing",
"Eric",
""
]
]
| new_dataset | 0.996712 |
2309.03179 | Aliasghar Khani | Aliasghar Khani, Saeid Asgari Taghanaki, Aditya Sanghi, Ali Mahdavi
Amiri, Ghassan Hamarneh | SLiMe: Segment Like Me | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Significant strides have been made using large vision-language models, like
Stable Diffusion (SD), for a variety of downstream tasks, including image
editing, image correspondence, and 3D shape generation. Inspired by these
advancements, we explore leveraging these extensive vision-language models for
segmenting images at any desired granularity using as few as one annotated
sample by proposing SLiMe. SLiMe frames this problem as an optimization task.
Specifically, given a single training image and its segmentation mask, we first
extract attention maps, including our novel "weighted accumulated
self-attention map" from the SD prior. Then, using the extracted attention
maps, the text embeddings of Stable Diffusion are optimized such that each of
them learns about a single segmented region from the training image. These
learned embeddings then highlight the segmented region in the attention maps,
which in turn can then be used to derive the segmentation map. This enables
SLiMe to segment any real-world image during inference with the granularity of
the segmented region in the training image, using just one example. Moreover,
leveraging additional training data when available, i.e. few-shot, improves the
performance of SLiMe. We carried out a knowledge-rich set of experiments
examining various design factors and showed that SLiMe outperforms other
existing one-shot and few-shot segmentation methods.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 17:39:05 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 15:14:51 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Khani",
"Aliasghar",
""
],
[
"Taghanaki",
"Saeid Asgari",
""
],
[
"Sanghi",
"Aditya",
""
],
[
"Amiri",
"Ali Mahdavi",
""
],
[
"Hamarneh",
"Ghassan",
""
]
]
| new_dataset | 0.975834 |
2309.03377 | Guillaume Rosinosky | Guillaume Rosinosky, Donatien Schmitz, Etienne Rivi\`ere | StreamBed: capacity planning for stream processing | 14 pages, 11 figures. This project has been funded by the Walloon
region (Belgium) through the Win2Wal project GEPICIAD | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | StreamBed is a capacity planning system for stream processing. It predicts,
ahead of any production deployment, the resources that a query will require to
process an incoming data rate sustainably, and the appropriate configuration of
these resources. StreamBed builds a capacity planning model by piloting a
series of runs of the target query in a small-scale, controlled testbed. We
implement StreamBed for the popular Flink DSP engine. Our evaluation with
large-scale queries of the Nexmark benchmark demonstrates that StreamBed can
effectively and accurately predict capacity requirements for jobs spanning more
than 1,000 cores using a testbed of only 48 cores.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 21:56:09 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 10:35:50 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 09:40:54 GMT"
},
{
"version": "v4",
"created": "Thu, 28 Sep 2023 21:43:51 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Rosinosky",
"Guillaume",
""
],
[
"Schmitz",
"Donatien",
""
],
[
"Rivière",
"Etienne",
""
]
]
| new_dataset | 0.956842 |
2309.04899 | Shahriar Ferdous | Shahriar Ferdous, Laszlo B. Kish | Transient Attacks against the VMG-KLJN Secure Key Exchanger | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The security vulnerability of the Vadai, Mingesz, and Gingl (VMG)
Kirchhoff-Law-Johnson-Noise (KLJN) key exchanger, as presented in the
publication "Nature, Science Report 5 (2015) 13653," has been exposed to
transient attacks. Recently an effective defense protocol was introduced (Appl.
Phys. Lett. 122 (2023) 143503) to counteract mean-square voltage-based (or
mean-square current-based) transient attacks targeted at the ideal KLJN
framework.
In the present study, this same mitigation methodology has been employed to
fortify the security of the VMG-KLJN key exchanger. It is worth noting that the
protective measures need to be separately implemented for the HL and LH
scenarios. This conceptual framework is corroborated through computer
simulations, demonstrating that the application of this defensive technique
substantially mitigates information leakage to a point of insignificance.
| [
{
"version": "v1",
"created": "Sat, 9 Sep 2023 23:54:22 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 04:25:56 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Ferdous",
"Shahriar",
""
],
[
"Kish",
"Laszlo B.",
""
]
]
| new_dataset | 0.981696 |
2309.10930 | Sri Harsha Dumpala Mr | Sri Harsha Dumpala and Chandramouli Sastry and Sageev Oore | Test-Time Training for Speech | null | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | In this paper, we study the application of Test-Time Training (TTT) as a
solution to handling distribution shifts in speech applications. In particular,
we introduce distribution-shifts to the test datasets of standard
speech-classification tasks -- for example, speaker-identification and
emotion-detection -- and explore how Test-Time Training (TTT) can help adjust
to the distribution-shift. In our experiments that include distribution shifts
due to background noise and natural variations in speech such as gender and
age, we identify some key-challenges with TTT including sensitivity to
optimization hyperparameters (e.g., number of optimization steps and subset of
parameters chosen for TTT) and scalability (e.g., as each example gets its own
set of parameters, TTT is not scalable). Finally, we propose using BitFit -- a
parameter-efficient fine-tuning algorithm proposed for text applications that
only considers the bias parameters for fine-tuning -- as a solution to the
aforementioned challenges and demonstrate that it is consistently more stable
than fine-tuning all the parameters of the model.
| [
{
"version": "v1",
"created": "Tue, 19 Sep 2023 21:06:22 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 21:06:02 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Dumpala",
"Sri Harsha",
""
],
[
"Sastry",
"Chandramouli",
""
],
[
"Oore",
"Sageev",
""
]
]
| new_dataset | 0.986308 |
2309.15332 | Hanzhe Teng | Hanzhe Teng, Yipeng Wang, Xiaoao Song, Konstantinos Karydis | Multimodal Dataset for Localization, Mapping and Crop Monitoring in
Citrus Tree Farms | Accepted to the 18th International Symposium on Visual Computing
(ISVC 2023) | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we introduce the CitrusFarm dataset, a comprehensive multimodal
sensory dataset collected by a wheeled mobile robot operating in agricultural
fields. The dataset offers stereo RGB images with depth information, as well as
monochrome, near-infrared and thermal images, presenting diverse spectral
responses crucial for agricultural research. Furthermore, it provides a range
of navigational sensor data encompassing wheel odometry, LiDAR, inertial
measurement unit (IMU), and GNSS with Real-Time Kinematic (RTK) as the
centimeter-level positioning ground truth. The dataset comprises seven
sequences collected in three fields of citrus trees, featuring various tree
species at different growth stages, distinctive planting patterns, as well as
varying daylight conditions. It spans a total operation time of 1.7 hours,
covers a distance of 7.5 km, and constitutes 1.3 TB of data. We anticipate that
this dataset can facilitate the development of autonomous robot systems
operating in agricultural tree environments, especially for localization,
mapping and crop monitoring tasks. Moreover, the rich sensing modalities
offered in this dataset can also support research in a range of robotics and
computer vision tasks, such as place recognition, scene understanding, object
detection and segmentation, and multimodal learning. The dataset, in
conjunction with related tools and resources, is made publicly available at
https://github.com/UCR-Robotics/Citrus-Farm-Dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 00:30:08 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Sep 2023 01:43:49 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Teng",
"Hanzhe",
""
],
[
"Wang",
"Yipeng",
""
],
[
"Song",
"Xiaoao",
""
],
[
"Karydis",
"Konstantinos",
""
]
]
| new_dataset | 0.999821 |
2309.16689 | Conor Trygstad | Conor K. Trygstad, Xuan-Truc Nguyen and Nestor O. Perez-Arancibia | A New 1-mg Fast Unimorph SMA-Based Actuator for Microrobotics | IROS 2023 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present a new unimorph actuator for micro-robotics, which is driven by
thin shape-memory alloy (SMA) wires. Using a passive-capillary-alignment
technique and existing SMA-microsystem fabrication methods, we developed an
actuator that is 7 mm long, has a volume of 0.45 mm^3, weighs 0.96 mg, and can
achieve operation frequencies of up to 40 Hz as well as lift 155 times its own
weight. To demonstrate the capabilities of the proposed actuator, we created an
8-mg crawler, the MiniBug, and a bioinspired 56-mg controllable
water-surface-tension crawler, the WaterStrider. The MiniBug is 8.5 mm long,
can locomote at speeds as high as 0.76 BL/s (body-lengths per second), and is
the lightest fully-functional crawling microrobot of its type ever created. The
WaterStrider is 22 mm long, and can locomote at speeds of up to 0.28 BL/s as
well as execute turning maneuvers at angular rates on the order of 0.144 rad/s.
The WaterStrider is the lightest controllable SMA-driven water-surface-tension
crawler developed to date.
| [
{
"version": "v1",
"created": "Thu, 3 Aug 2023 21:02:12 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Trygstad",
"Conor K.",
""
],
[
"Nguyen",
"Xuan-Truc",
""
],
[
"Perez-Arancibia",
"Nestor O.",
""
]
]
| new_dataset | 0.998468 |
2309.16698 | Tommaso Guffanti | Tommaso Guffanti, Toby Bell, Samuel Y. W. Low, Mason Murray-Cooper,
Simone D'Amico | Autonomous Guidance Navigation and Control of the VISORS
Formation-Flying Mission | Presented in 2023 AAS/AIAA Astrodynamics Specialist Conference | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual Super-resolution Optics with Reconfigurable Swarms (VISORS) is a
distributed telescope mission for high-resolution imaging of the Sun using two
6U CubeSats flying in formation in a Sun-synchronous low-Earth orbit. An optics
spacecraft carries a photon sieve acting as a high-resolution lens in the
extreme ultraviolet spectrum, while the image passing through the sieve is
focused on a detector spacecraft. This paper presents the newly conceived
design of the on-board guidance, navigation and control (GNC) system, which is
highly autonomous, robust, passively safe, and validated under realistic
mission simulations. The primary objective of the GNC system is to establish a
passively safe and high-precision formation alignment at 40-meter separation,
with sub-centimeter relative navigation and position control accuracy, over
repeated observations of 10-second duration. Science mission success rates are
assessed via Monte-Carlo analyses under realistically modelled uncertainties
stemming from sensing errors, maneuver errors, unmodelled dynamics, and
erroneous knowledge of internal spacecraft components. Precise real-time
relative navigation is achieved by carrier phase differential GPS with integer
ambiguity resolution. Precise control over short baselines is achieved via
closed-loop optimization-based stochastic model predictive control with
centimeter-level accuracy. Control at far range and during approach is achieved
by closed-form impulsive control with meter-level accuracy. Passive safety is
enforced throughout the mission to mitigate collision risks even under critical
subsystem failure. Beyond VISORS, this work also realizes the crucial insight
that the described GNC architecture is generalizable to other distributed space
missions where accuracy and fault-tolerant safety are key requirements, such as
rendezvous, proximity operations, and swarming missions.
| [
{
"version": "v1",
"created": "Sat, 12 Aug 2023 01:44:44 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Guffanti",
"Tommaso",
""
],
[
"Bell",
"Toby",
""
],
[
"Low",
"Samuel Y. W.",
""
],
[
"Murray-Cooper",
"Mason",
""
],
[
"D'Amico",
"Simone",
""
]
]
| new_dataset | 0.992323 |
2309.16700 | Kazi Reyazul Hasan | Kazi Reyazul Hasan (1), Mubasshira Musarrat (1), Sadif Ahmed (1) and
Shahriar Raj (1) ((1) Bangladesh University of Engineering and Technology) | Framework and Model Analysis on Bengali Document Layout Analysis
Dataset: BaDLAD | 5 pages, 6 figures, uses IEEEtran.cls | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study focuses on understanding Bengali Document Layouts using advanced
computer programs: Detectron2, YOLOv8, and SAM. We looked at lots of different
Bengali documents in our study. Detectron2 is great at finding and separating
different parts of documents, like text boxes and paragraphs. YOLOv8 is good at
figuring out different tables and pictures. We also tried SAM, which helps us
understand tricky layouts. We tested these programs to see how well they work.
By comparing their accuracy and speed, we learned which one is good for
different types of documents. Our research helps make sense of complex layouts
in Bengali documents and can be useful for other languages too.
| [
{
"version": "v1",
"created": "Tue, 15 Aug 2023 07:52:24 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Hasan",
"Kazi Reyazul",
"",
"Bangladesh University of Engineering and Technology"
],
[
"Musarrat",
"Mubasshira",
"",
"Bangladesh University of Engineering and Technology"
],
[
"Ahmed",
"Sadif",
"",
"Bangladesh University of Engineering and Technology"
],
[
"Raj",
"Shahriar",
"",
"Bangladesh University of Engineering and Technology"
]
]
| new_dataset | 0.998672 |
2309.16718 | Hongyin Zhang | Hongyin Zhang, Shuyu Yang and Donglin Wang | A Real-World Quadrupedal Locomotion Benchmark for Offline Reinforcement
Learning | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reinforcement learning (RL) methods are often data-inefficient or
unreliable, making them difficult to train on real robotic hardware, especially
quadruped robots. Learning robotic tasks from pre-collected data is a promising
direction. Meanwhile, agile and stable legged robotic locomotion remains an
open question in their general form. Offline reinforcement learning (ORL) has
the potential to make breakthroughs in this challenging field, but its current
bottleneck lies in the lack of diverse datasets for challenging realistic
tasks. To facilitate the development of ORL, we benchmarked 11 ORL algorithms
in the realistic quadrupedal locomotion dataset. Such dataset is collected by
the classic model predictive control (MPC) method, rather than the model-free
online RL method commonly used by previous benchmarks. Extensive experimental
results show that the best-performing ORL algorithms can achieve competitive
performance compared with the model-free RL, and even surpass it in some tasks.
However, there is still a gap between the learning-based methods and MPC,
especially in terms of stability and rapid adaptation. Our proposed benchmark
will serve as a development platform for testing and evaluating the performance
of ORL algorithms in real-world legged locomotion tasks.
| [
{
"version": "v1",
"created": "Wed, 13 Sep 2023 13:18:29 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Zhang",
"Hongyin",
""
],
[
"Yang",
"Shuyu",
""
],
[
"Wang",
"Donglin",
""
]
]
| new_dataset | 0.998849 |
2309.16729 | Frederic Jurie | Sidney Besnard, Fr\'ed\'eric Jurie (UNICAEN), Jalal M. Fadili (NU,
ENSICAEN, GREYC) | SimPINNs: Simulation-Driven Physics-Informed Neural Networks for
Enhanced Performance in Nonlinear Inverse Problems | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel approach to solve inverse problems by
leveraging deep learning techniques. The objective is to infer unknown
parameters that govern a physical system based on observed data. We focus on
scenarios where the underlying forward model demonstrates pronounced nonlinear
behaviour, and where the dimensionality of the unknown parameter space is
substantially smaller than that of the observations. Our proposed method builds
upon physics-informed neural networks (PINNs) trained with a hybrid loss
function that combines observed data with simulated data generated by a known
(approximate) physical model. Experimental results on an orbit restitution
problem demonstrate that our approach surpasses the performance of standard
PINNs, providing improved accuracy and robustness.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 06:34:55 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Besnard",
"Sidney",
"",
"UNICAEN"
],
[
"Jurie",
"Frédéric",
"",
"UNICAEN"
],
[
"Fadili",
"Jalal M.",
"",
"NU,\n ENSICAEN, GREYC"
]
]
| new_dataset | 0.988268 |
2309.16768 | Yuan Tian | Chenxi Xiao, Yuan Tian | Encountered-Type Haptic Display via Tracking Calibrated Robot | null | null | null | null | cs.RO cs.HC | http://creativecommons.org/licenses/by/4.0/ | In the past decades, a variety of haptic devices have been developed to
facilitate high-fidelity human-computer interaction (HCI) in virtual reality
(VR). In particular, passive haptic feedback can create a compelling sensation
based on real objects spatially overlapping with their virtual counterparts.
However, these approaches require pre-deployment efforts, hindering their
democratizing use in practice. We propose the Tracking Calibrated Robot (TCR),
a novel and general haptic approach to free developers from deployment efforts,
which can be potentially deployed in any scenario. Specifically, we augment the
VR with a collaborative robot that renders haptic contact in the real world
while the user touches a virtual object in the virtual world. The distance
between the user's finger and the robot end-effector is controlled over time.
The distance starts to smoothly reduce to zero when the user intends to touch
the virtual object. A mock user study tested users' perception of three virtual
objects, and the result shows that TCR is effective in terms of conveying
discriminative shape information.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 18:04:48 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Xiao",
"Chenxi",
""
],
[
"Tian",
"Yuan",
""
]
]
| new_dataset | 0.989696 |
2309.16782 | Adam Schmidt | Adam Schmidt, Omid Mohareri, Simon DiMaio, Septimiu E. Salcudean | STIR: Surgical Tattoos in Infrared | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantifying performance of methods for tracking and mapping tissue in
endoscopic environments is essential for enabling image guidance and automation
of medical interventions and surgery. Datasets developed so far either use
rigid environments, visible markers, or require annotators to label salient
points in videos after collection. These are respectively: not general, visible
to algorithms, or costly and error-prone. We introduce a novel labeling
methodology along with a dataset that uses said methodology, Surgical Tattoos
in Infrared (STIR). STIR has labels that are persistent but invisible to
visible spectrum algorithms. This is done by labelling tissue points with
IR-flourescent dye, indocyanine green (ICG), and then collecting visible light
video clips. STIR comprises hundreds of stereo video clips in both in-vivo and
ex-vivo scenes with start and end points labelled in the IR spectrum. With over
3,000 labelled points, STIR will help to quantify and enable better analysis of
tracking and mapping methods. After introducing STIR, we analyze multiple
different frame-based tracking methods on STIR using both 3D and 2D endpoint
error and accuracy metrics. STIR is available at
https://dx.doi.org/10.21227/w8g4-g548
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 18:22:34 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Schmidt",
"Adam",
""
],
[
"Mohareri",
"Omid",
""
],
[
"DiMaio",
"Simon",
""
],
[
"Salcudean",
"Septimiu E.",
""
]
]
| new_dataset | 0.999627 |
2309.16797 | Chrisantha Fernando Dr | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon
Osindero, Tim Rockt\"aschel | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | null | null | null | null | cs.CL cs.AI cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 19:01:07 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Fernando",
"Chrisantha",
""
],
[
"Banarse",
"Dylan",
""
],
[
"Michalewski",
"Henryk",
""
],
[
"Osindero",
"Simon",
""
],
[
"Rocktäschel",
"Tim",
""
]
]
| new_dataset | 0.990552 |
2309.16801 | Michael Unterkalmsteiner | Huynh Khanh Vi Tran, Nauman Bin Ali, J\"urgen B\"orstler, Michael
Unterkalmsteiner | Test-Case Quality -- Understanding Practitioners' Perspectives | PROFES 2019: 37-52 | null | 10.1007/978-3-030-35333-9_3 | null | cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background: Test-case quality has always been one of the major concerns in
software testing. To improve test-case quality, it is important to better
understand how practitioners perceive the quality of test-cases. Objective:
Motivated by that need, we investigated how practitioners define test-case
quality and which aspects of test-cases are important for quality assessment.
Method: We conducted semi-structured interviews with professional developers,
testers and test architects from a multinational software company in Sweden.
Before the interviews, we asked participants for actual test cases (written in
natural language) that they perceive as good, normal, and bad respectively
together with rationales for their assessment. We also compared their opinions
on shared test cases and contrasted their views with the relevant literature.
Results: We present a quality model which consists of 11 test-case quality
attributes. We also identify a misalignment in defining test-case quality among
practitioners and between academia and industry, along with suggestions for
improving test-case quality in industry. Conclusion: The results show that
practitioners' background, including roles and working experience, are critical
dimensions of how test-case quality is defined and assessed.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 19:10:01 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Tran",
"Huynh Khanh Vi",
""
],
[
"Ali",
"Nauman Bin",
""
],
[
"Börstler",
"Jürgen",
""
],
[
"Unterkalmsteiner",
"Michael",
""
]
]
| new_dataset | 0.973436 |
2309.16818 | Jonas Frey | Gian Erni, Jonas Frey, Takahiro Miki, Matias Mattamala, Marco Hutter | MEM: Multi-Modal Elevation Mapping for Robotics and Learning | Accepted for IROS 2023. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Elevation maps are commonly used to represent the environment of mobile
robots and are instrumental for locomotion and navigation tasks. However, pure
geometric information is insufficient for many field applications that require
appearance or semantic information, which limits their applicability to other
platforms or domains. In this work, we extend a 2.5D robot-centric elevation
mapping framework by fusing multi-modal information from multiple sources into
a popular map representation. The framework allows inputting data contained in
point clouds or images in a unified manner. To manage the different nature of
the data, we also present a set of fusion algorithms that can be selected based
on the information type and user requirements. Our system is designed to run on
the GPU, making it real-time capable for various robotic and learning tasks. We
demonstrate the capabilities of our framework by deploying it on multiple
robots with varying sensor configurations and showcasing a range of
applications that utilize multi-modal layers, including line detection, human
detection, and colorization.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 19:55:29 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Erni",
"Gian",
""
],
[
"Frey",
"Jonas",
""
],
[
"Miki",
"Takahiro",
""
],
[
"Mattamala",
"Matias",
""
],
[
"Hutter",
"Marco",
""
]
]
| new_dataset | 0.992124 |
2309.16844 | Matheus Rodrigues De Souza F\'elix | Israel Campiotti, Matheus Rodrigues, Yuri Albuquerque, Rafael Azevedo,
Alyson Andrade | DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian
Portuguese Natural Language Processing Task | 6 pages, 1 table | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper presents an approach for adapting the DebertaV3 XSmall model
pre-trained in English for Brazilian Portuguese natural language processing
(NLP) tasks. A key aspect of the methodology involves a multistep training
process to ensure the model is effectively tuned for the Portuguese language.
Initial datasets from Carolina and BrWac are preprocessed to address issues
like emojis, HTML tags, and encodings. A Portuguese-specific vocabulary of
50,000 tokens is created using SentencePiece. Rather than training from
scratch, the weights of the pre-trained English model are used to initialize
most of the network, with random embeddings, given the high cost of training
from scratch. The model is fine-tuned using the replaced token
detection task in the same format as DebertaV3 training. The adapted model,
called DeBERTinha, demonstrates effectiveness on downstream tasks like named
entity recognition, sentiment analysis, and determining sentence relatedness,
outperforming BERTimbau-Large in two tasks despite having only 40M parameters.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 20:53:25 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Campiotti",
"Israel",
""
],
[
"Rodrigues",
"Matheus",
""
],
[
"Albuquerque",
"Yuri",
""
],
[
"Azevedo",
"Rafael",
""
],
[
"Andrade",
"Alyson",
""
]
]
| new_dataset | 0.998966 |
2309.16850 | Hong-Bin Yang | Hong-Bin Yang | Sketch2CADScript: 3D Scene Reconstruction from 2D Sketch using Visual
Transformer and Rhino Grasshopper | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Existing 3D model reconstruction methods typically produce outputs in the
form of voxels, point clouds, or meshes. However, each of these approaches has
its limitations and may not be suitable for every scenario. For instance, the
resulting model may exhibit a rough surface and distorted structure, making
manual editing and post-processing challenging for humans. In this paper, we
introduce a novel 3D reconstruction method designed to address these issues. We
trained a visual transformer to predict a "scene descriptor" from a single
wire-frame image. This descriptor encompasses crucial information, including
object types and parameters such as position, rotation, and size. With the
predicted parameters, a 3D scene can be reconstructed using 3D modeling
software like Blender or Rhino Grasshopper which provides a programmable
interface, resulting in finely and easily editable 3D models. To evaluate the
proposed model, we created two datasets: one featuring simple scenes and
another with complex scenes. The test results demonstrate the model's ability
to accurately reconstruct simple scenes but reveal its challenges with more
complex ones.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 21:02:04 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Yang",
"Hong-Bin",
""
]
]
| new_dataset | 0.99924 |
2309.16898 | JongYoon Lim | JongYoon Lim, Inkyu Sa, Bruce MacDonald, and Ho Seok Ahn | A Sign Language Recognition System with Pepper, Lightweight-Transformer,
and LLM | null | null | null | null | cs.RO cs.CL cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | This research explores using lightweight deep neural network architectures to
enable the humanoid robot Pepper to understand American Sign Language (ASL) and
facilitate non-verbal human-robot interaction. First, we introduce a
lightweight and efficient model for ASL understanding optimized for embedded
systems, ensuring rapid sign recognition while conserving computational
resources. Building upon this, we employ large language models (LLMs) for
intelligent robot interactions. Through intricate prompt engineering, we tailor
interactions to allow the Pepper Robot to generate natural Co-Speech Gesture
responses, laying the foundation for more organic and intuitive humanoid-robot
dialogues. Finally, we present an integrated software pipeline, embodying
advancements in a socially aware AI interaction model. Leveraging the Pepper
Robot's capabilities, we demonstrate the practicality and effectiveness of our
approach in real-world scenarios. The results highlight a profound potential
for enhancing human-robot interaction through non-verbal interactions, bridging
communication gaps, and making technology more accessible and understandable.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 23:54:41 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Lim",
"JongYoon",
""
],
[
"Sa",
"Inkyu",
""
],
[
"MacDonald",
"Bruce",
""
],
[
"Ahn",
"Ho Seok",
""
]
]
| new_dataset | 0.991785 |
2309.16909 | Yunsheng Tian | Yunsheng Tian, Karl D.D. Willis, Bassel Al Omari, Jieliang Luo,
Pingchuan Ma, Yichen Li, Farhad Javid, Edward Gu, Joshua Jacob, Shinjiro
Sueda, Hui Li, Sachin Chitta and Wojciech Matusik | ASAP: Automated Sequence Planning for Complex Robotic Assembly with
Physical Feasibility | null | null | null | null | cs.RO cs.AI cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The automated assembly of complex products requires a system that can
automatically plan a physically feasible sequence of actions for assembling
many parts together. In this paper, we present ASAP, a physics-based planning
approach for automatically generating such a sequence for general-shaped
assemblies. ASAP accounts for gravity to design a sequence where each
sub-assembly is physically stable with a limited number of parts being held and
a support surface. We apply efficient tree search algorithms to reduce the
combinatorial complexity of determining such an assembly sequence. The search
can be guided by either geometric heuristics or graph neural networks trained
on data with simulation labels. Finally, we show the superior performance of
ASAP at generating physically realistic assembly sequence plans on a large
dataset of hundreds of complex product assemblies. We further demonstrate the
applicability of ASAP on both simulation and real-world robotic setups. Project
website: asap.csail.mit.edu
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 00:27:40 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Tian",
"Yunsheng",
""
],
[
"Willis",
"Karl D. D.",
""
],
[
"Omari",
"Bassel Al",
""
],
[
"Luo",
"Jieliang",
""
],
[
"Ma",
"Pingchuan",
""
],
[
"Li",
"Yichen",
""
],
[
"Javid",
"Farhad",
""
],
[
"Gu",
"Edward",
""
],
[
"Jacob",
"Joshua",
""
],
[
"Sueda",
"Shinjiro",
""
],
[
"Li",
"Hui",
""
],
[
"Chitta",
"Sachin",
""
],
[
"Matusik",
"Wojciech",
""
]
]
| new_dataset | 0.993888 |
2309.16956 | Runnan Chen Dr. | Runnan Chen, Xinge Zhu, Nenglun Chen, Dawei Wang, Wei Li, Yuexin Ma,
Ruigang Yang, Tongliang Liu, Wenping Wang | Model2Scene: Learning 3D Scene Representation via Contrastive
Language-CAD Models Pre-training | arXiv admin note: substantial text overlap with arXiv:2203.10546 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Current successful methods of 3D scene perception rely on the large-scale
annotated point cloud, which is tedious and expensive to acquire. In this
paper, we propose Model2Scene, a novel paradigm that learns free 3D scene
representation from Computer-Aided Design (CAD) models and languages. The main
challenges are the domain gaps between the CAD models and the real scene's
objects, including model-to-scene (from a single model to the scene) and
synthetic-to-real (from synthetic model to real scene's object). To handle the
above challenges, Model2Scene first simulates a crowded scene by mixing
data-augmented CAD models. Next, we propose a novel feature regularization
operation, termed Deep Convex-hull Regularization (DCR), to project point
features into a unified convex hull space, reducing the domain gap. Ultimately,
we impose contrastive loss on language embedding and the point features of CAD
models to pre-train the 3D network. Extensive experiments verify the learned 3D
scene representation is beneficial for various downstream tasks, including
label-free 3D salient object detection, label-efficient 3D scene perception and
zero-shot 3D semantic segmentation. Notably, Model2Scene yields impressive
label-free 3D salient object detection with an average mAP of 46.08\% and
55.49\% on the ScanNet and S3DIS datasets, respectively. The code will be
publicly available.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 03:51:26 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Chen",
"Runnan",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Chen",
"Nenglun",
""
],
[
"Wang",
"Dawei",
""
],
[
"Li",
"Wei",
""
],
[
"Ma",
"Yuexin",
""
],
[
"Yang",
"Ruigang",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Wang",
"Wenping",
""
]
]
| new_dataset | 0.992575 |
2309.16992 | Jingqian Wu | Jingqian Wu, Rongtao Xu, Zach Wood-Doughty, Changwei Wang | Segment Anything Model is a Good Teacher for Local Feature Learning | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local feature detection and description play an important role in many
computer vision tasks, which are designed to detect and describe keypoints in
"any scene" and "any downstream task". Data-driven local feature learning
methods need to rely on pixel-level correspondence for training, which is
challenging to acquire at scale, thus hindering further improvements in
performance. In this paper, we propose SAMFeat to introduce SAM (segment
anything model), a fundamental model trained on 11 million images, as a teacher
to guide local feature learning and thus inspire higher performance on limited
datasets. To do so, first, we construct an auxiliary task of Pixel Semantic
Relational Distillation (PSRD), which distillates feature relations with
category-agnostic semantic information learned by the SAM encoder into a local
feature learning network, to improve local feature description using semantic
discrimination. Second, we develop a technique called Weakly Supervised
Contrastive Learning Based on Semantic Grouping (WSC), which utilizes semantic
groupings derived from SAM as weakly supervised signals, to optimize the metric
space of local descriptors. Third, we design an Edge Attention Guidance (EAG)
to further improve the accuracy of local feature detection and description by
prompting the network to pay more attention to the edge region guided by SAM.
SAMFeat's performance on various tasks such as image matching on HPatches, and
long-term visual localization on Aachen Day-Night showcases its superiority
over previous local features. The released code is available at
https://github.com/vignywang/SAMFeat.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 05:29:20 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Wu",
"Jingqian",
""
],
[
"Xu",
"Rongtao",
""
],
[
"Wood-Doughty",
"Zach",
""
],
[
"Wang",
"Changwei",
""
]
]
| new_dataset | 0.972431 |
2309.17024 | Xin Wang | Xin Wang, Taein Kwon, Mahdi Rad, Bowen Pan, Ishani Chakraborty, Sean
Andrist, Dan Bohus, Ashley Feniello, Bugra Tekin, Felipe Vieira Frujeri, Neel
Joshi, Marc Pollefeys | HoloAssist: an Egocentric Human Interaction Dataset for Interactive AI
Assistants in the Real World | ICCV 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Building an interactive AI assistant that can perceive, reason, and
collaborate with humans in the real world has been a long-standing pursuit in
the AI community. This work is part of a broader research effort to develop
intelligent agents that can interactively guide humans through performing tasks
in the physical world. As a first step in this direction, we introduce
HoloAssist, a large-scale egocentric human interaction dataset, where two
people collaboratively complete physical manipulation tasks. The task performer
executes the task while wearing a mixed-reality headset that captures seven
synchronized data streams. The task instructor watches the performer's
egocentric video in real time and guides them verbally. By augmenting the data
with action and conversational annotations and observing the rich behaviors of
various participants, we present key insights into how human assistants correct
mistakes, intervene in the task completion procedure, and ground their
instructions to the environment. HoloAssist spans 166 hours of data captured by
350 unique instructor-performer pairs. Furthermore, we construct and present
benchmarks on mistake detection, intervention type prediction, and hand
forecasting, along with detailed analysis. We expect HoloAssist will provide an
important resource for building AI assistants that can fluidly collaborate with
humans in the real world. Data can be downloaded at
https://holoassist.github.io/.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 07:17:43 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Wang",
"Xin",
""
],
[
"Kwon",
"Taein",
""
],
[
"Rad",
"Mahdi",
""
],
[
"Pan",
"Bowen",
""
],
[
"Chakraborty",
"Ishani",
""
],
[
"Andrist",
"Sean",
""
],
[
"Bohus",
"Dan",
""
],
[
"Feniello",
"Ashley",
""
],
[
"Tekin",
"Bugra",
""
],
[
"Frujeri",
"Felipe Vieira",
""
],
[
"Joshi",
"Neel",
""
],
[
"Pollefeys",
"Marc",
""
]
]
| new_dataset | 0.999853 |
2309.17054 | Ling Gao | Ling Gao and Hang Su and Daniel Gehrig and Marco Cannici and Davide
Scaramuzza and Laurent Kneip | A 5-Point Minimal Solver for Event Camera Relative Motion Estimation | null | IEEE/CVF International Conference on Computer Vision (ICCV), 2023 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event-based cameras are ideal for line-based motion estimation, since they
predominantly respond to edges in the scene. However, accurately determining
the camera displacement based on events continues to be an open problem. This
is because line feature extraction and dynamics estimation are tightly coupled
when using event cameras, and no precise model is currently available for
describing the complex structures generated by lines in the space-time volume
of events. We solve this problem by deriving the correct non-linear
parametrization of such manifolds, which we term eventails, and demonstrate its
application to event-based linear motion estimation, with known rotation from
an Inertial Measurement Unit. Using this parametrization, we introduce a novel
minimal 5-point solver that jointly estimates line parameters and linear camera
velocity projections, which can be fused into a single, averaged linear
velocity when considering multiple lines. We demonstrate on both synthetic and
real data that our solver generates more stable relative motion estimates than
other methods while capturing more inliers than clustering based on
spatio-temporal planes. In particular, our method consistently achieves a 100%
success rate in estimating linear velocity where existing closed-form solvers
only achieve between 23% and 70%. The proposed eventails contribute to a better
understanding of spatio-temporal event-generated geometries and we thus believe
it will become a core building block of future event-based motion estimation
algorithms.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 08:30:18 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Gao",
"Ling",
""
],
[
"Su",
"Hang",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Cannici",
"Marco",
""
],
[
"Scaramuzza",
"Davide",
""
],
[
"Kneip",
"Laurent",
""
]
]
| new_dataset | 0.993606 |
2309.17058 | Anju Rani | Anju Rani, Daniel O. Arroyo, Petar Durdevic | Imagery Dataset for Condition Monitoring of Synthetic Fibre Ropes | 7 pages, 3 figures, database | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automatic visual inspection of synthetic fibre ropes (SFRs) is a challenging
task in the field of offshore, wind turbine industries, etc. The presence of
any defect in SFRs can compromise their structural integrity and pose
significant safety risks. Due to the large size and weight of these ropes, it
is often impractical to detach and inspect them frequently. Therefore, there is
a critical need to develop efficient defect detection methods to assess their
remaining useful life (RUL). To address this challenge, a comprehensive dataset
has been generated, comprising a total of 6,942 raw images representing both
normal and defective SFRs. The dataset encompasses a wide array of defect
scenarios which may occur throughout their operational lifespan, including but
not limited to placking defects, cut strands, chafings, compressions, and core
outs, as well as normal (defect-free) rope. This dataset serves as a resource to support computer vision
applications, including object detection, classification, and segmentation,
aimed at detecting and analyzing defects in SFRs. The availability of this
dataset will facilitate the development and evaluation of robust defect
detection algorithms. The aim of generating this dataset is to assist in the
development of automated defect detection systems that outperform traditional
visual inspection methods, thereby paving the way for safer and more efficient
utilization of SFRs across a wide range of applications.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 08:42:44 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Rani",
"Anju",
""
],
[
"Arroyo",
"Daniel O.",
""
],
[
"Durdevic",
"Petar",
""
]
]
| new_dataset | 0.999652 |
2309.17063 | Mohammed Alser | Julien Eudine and Mohammed Alser, Gagandeep Singh, Can Alkan, Onur
Mutlu | GateSeeder: Near-memory CPU-FPGA Acceleration of Short and Long Read
Mapping | null | null | null | null | cs.AR cs.DS q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Motivation: Read mapping is a computationally expensive process and a major
bottleneck in genomics analyses. The performance of read mapping is mainly
limited by the performance of three key computational steps: Index Querying,
Seed Chaining, and Sequence Alignment. The first step is dominated by how fast
and frequent it accesses the main memory (i.e., memory-bound), while the latter
two steps are dominated by how fast the CPU can compute their
computationally-costly dynamic programming algorithms (i.e., compute-bound).
Accelerating these three steps by exploiting new algorithms and new hardware
devices is essential to accelerate most genome analysis pipelines that widely
use read mapping. Given the large body of work on accelerating Sequence
Alignment, this work focuses on significantly improving the remaining steps.
Results: We introduce GateSeeder, the first CPU-FPGA-based near-memory
acceleration of both short and long read mapping. GateSeeder exploits
near-memory computation capability provided by modern FPGAs that couple a
reconfigurable compute fabric with high-bandwidth memory (HBM) to overcome the
memory-bound and compute-bound bottlenecks. GateSeeder also introduces a new
lightweight algorithm for finding the potential matching segment pairs. Using
real ONT, HiFi, and Illumina sequences, we experimentally demonstrate that
GateSeeder outperforms Minimap2, without performing sequence alignment, by up
to 40.3x, 4.8x, and 2.3x, respectively. When performing read mapping with
sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33x (using KSW2)
and by 1.97-13.63x (using WFA-GPU). Availability:
https://github.com/CMU-SAFARI/GateSeeder
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 08:49:44 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Eudine",
"Julien",
""
],
[
"Alser",
"Mohammed",
""
],
[
"Singh",
"Gagandeep",
""
],
[
"Alkan",
"Can",
""
],
[
"Mutlu",
"Onur",
""
]
]
| new_dataset | 0.951388 |
2309.17115 | Daksh Dave | Daksh Dave, Aditya Sharma, Shafii Muhammad Abdulhamid, Adeel Ahmed,
Adnan Akhunzada, and Rashid Amin | SAppKG: Mobile App Recommendation Using Knowledge Graph and Side
Information-A Secure Framework | null | IEEE Access, vol. 11, pp. 76751-76767, 2023 | 10.1109/ACCESS.2023.3296466 | null | cs.SI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Due to the rapid development of technology and the widespread usage of
smartphones, the number of mobile applications is exponentially growing.
Finding a suitable collection of apps that aligns with users' needs and
preferences can be challenging. However, mobile app recommender systems have
emerged as a helpful tool in simplifying this process. But there is a drawback
to employing app recommender systems. These systems need access to user data,
which is a serious security violation. While users seek accurate opinions, they
do not want to compromise their privacy in the process. We address this issue
by developing SAppKG, an end-to-end user privacy-preserving knowledge graph
architecture for mobile app recommendation based on knowledge graph models such
as SAppKG-S and SAppKG-D, which utilize the interaction data and side
information of app attributes. We tested the proposed model on real-world data
from the Google Play app store, using precision, recall, mean absolute
precision, and mean reciprocal rank. We found that the proposed model improved
results on all four metrics. We also compared the proposed model to baseline
models and found that it outperformed them on all four metrics.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 10:17:04 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Dave",
"Daksh",
""
],
[
"Sharma",
"Aditya",
""
],
[
"Abdulhamid",
"Shafii Muhammad",
""
],
[
"Ahmed",
"Adeel",
""
],
[
"Akhunzada",
"Adnan",
""
],
[
"Amin",
"Rashid",
""
]
]
| new_dataset | 0.995307 |
2309.17116 | Iulia Duta | Iulia Duta, Giulia Cassar\`a, Fabrizio Silvestri, Pietro Li\`o | Sheaf Hypergraph Networks | Accepted at Neural Information Processing Systems (NeurIPS 2023) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Higher-order relations are widespread in nature, with numerous phenomena
involving complex interactions that extend beyond simple pairwise connections.
As a result, advancements in higher-order processing can accelerate the growth
of various fields requiring structured data. Current approaches typically
represent these interactions using hypergraphs. We enhance this representation
by introducing cellular sheaves for hypergraphs, a mathematical construction
that adds extra structure to the conventional hypergraph while maintaining
their local, higherorder connectivity. Drawing inspiration from existing
Laplacians in the literature, we develop two unique formulations of sheaf
hypergraph Laplacians: linear and non-linear. Our theoretical analysis
demonstrates that incorporating sheaves into the hypergraph Laplacian provides
a more expressive inductive bias than standard hypergraph diffusion, creating a
powerful instrument for effectively modelling complex data structures. We
employ these sheaf hypergraph Laplacians to design two categories of models:
Sheaf Hypergraph Neural Networks and Sheaf Hypergraph Convolutional Networks.
These models generalize classical Hypergraph Networks often found in the
literature. Through extensive experimentation, we show that this generalization
significantly improves performance, achieving top results on multiple benchmark
datasets for hypergraph node classification.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 10:25:43 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Duta",
"Iulia",
""
],
[
"Cassarà",
"Giulia",
""
],
[
"Silvestri",
"Fabrizio",
""
],
[
"Liò",
"Pietro",
""
]
]
| new_dataset | 0.961775 |
2309.17122 | Lars-Peter Meyer | Johannes Frey and Lars-Peter Meyer and Natanael Arndt and Felix Brei
and Kirill Bulert | Benchmarking the Abilities of Large Language Models for RDF Knowledge
Graph Creation and Comprehension: How Well Do LLMs Speak Turtle? | accepted for proceedings of DL4KG Workshop @ ISWC 2023 at ceur-ws.org | null | null | null | cs.AI cs.CL cs.DB | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are advancing at a rapid pace, with significant
improvements in natural language processing and coding tasks. Yet, their
ability to work with formal languages representing data, specifically within
the realm of knowledge graph engineering, remains under-investigated. To
evaluate the proficiency of various LLMs, we created a set of five tasks that
probe their ability to parse, understand, analyze, and create knowledge graphs
serialized in Turtle syntax. These tasks, each embodying distinct degrees of
complexity and being able to scale with the size of the problem, have been
integrated into our automated evaluation system, the LLM-KG-Bench. The
evaluation encompassed four commercially available LLMs - GPT-3.5, GPT-4,
Claude 1.3, and Claude 2.0, as well as two freely accessible offline models,
GPT4All Vicuna and GPT4All Falcon 13B. This analysis offers an in-depth
understanding of the strengths and shortcomings of LLMs in relation to their
application within RDF knowledge graph engineering workflows utilizing Turtle
representation. While our findings show that the latest commercial models
outperform their forerunners in terms of proficiency with the Turtle language,
they also reveal an apparent weakness. These models fall short when it comes to
adhering strictly to the output formatting constraints, a crucial requirement
in this context.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 10:36:04 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Frey",
"Johannes",
""
],
[
"Meyer",
"Lars-Peter",
""
],
[
"Arndt",
"Natanael",
""
],
[
"Brei",
"Felix",
""
],
[
"Bulert",
"Kirill",
""
]
]
| new_dataset | 0.989801 |
2309.17128 | XiaoChen Zhao | Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo,
Yebin Liu | HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural
Radiance Field | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of modeling an animatable 3D human head avatar under light-weight
setups is of significant importance but has not been well solved. Existing 3D
representations either perform well in the realism of portrait image synthesis
or the accuracy of expression control, but not both. To address the problem, we
introduce a novel hybrid explicit-implicit 3D representation, Facial Model
Conditioned Neural Radiance Field, which integrates the expressiveness of NeRF
and the prior information from the parametric template. At the core of our
representation, a synthetic-renderings-based conditioning method is proposed to
fuse the prior information from the parametric model into the implicit field
without constraining its topological flexibility. Besides, based on the hybrid
representation, we properly overcome the inconsistent shape issue present in
existing methods and improve the animation stability. Moreover, by adopting an
overall GAN-based architecture using an image-to-image translation network, we
achieve high-resolution, realistic and view-consistent synthesis of dynamic
head appearance. Experiments demonstrate that our method can achieve
state-of-the-art performance for 3D head avatar animation compared with
previous methods.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 10:45:22 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Zhao",
"Xiaochen",
""
],
[
"Wang",
"Lizhen",
""
],
[
"Sun",
"Jingxiang",
""
],
[
"Zhang",
"Hongwen",
""
],
[
"Suo",
"Jinli",
""
],
[
"Liu",
"Yebin",
""
]
]
| new_dataset | 0.979747 |
2309.17162 | Weijie Wei | Weijie Wei and Martin R. Oswald and Fatemeh Karimi Nejadasl and Theo
Gevers | APNet: Urban-level Scene Segmentation of Aerial Images and Point Clouds | Accepted by ICCV Workshop 2023 and selected as an oral | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we focus on semantic segmentation method for point clouds of
urban scenes. Our fundamental concept revolves around the collaborative
utilization of diverse scene representations to benefit from different context
information and network architectures. To this end, the proposed network
architecture, called APNet, is split into two branches: a point cloud branch
and an aerial image branch whose input is generated from a point cloud. To
leverage the different properties of each branch, we employ a geometry-aware
fusion module that is learned to combine the results of each branch. Additional
separate losses for each branch prevent one branch from dominating the results,
ensure the best performance for each branch individually, and explicitly define
the input domain of the fusion network, ensuring it only performs data fusion.
Our experiments demonstrate that the fusion output consistently outperforms the
individual network branches and that APNet achieves state-of-the-art
performance of 65.2 mIoU on the SensatUrban dataset. Upon acceptance, the
source code will be made accessible.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 11:54:36 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Wei",
"Weijie",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Nejadasl",
"Fatemeh Karimi",
""
],
[
"Gevers",
"Theo",
""
]
]
| new_dataset | 0.995061 |
2309.17164 | Bianca Lamm | Bianca Lamm (1 and 2), Janis Keuper (1) ((1) IMLA, Offenburg
University, (2) Markant Services International GmbH) | Retail-786k: a Large-Scale Dataset for Visual Entity Matching | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Entity Matching (EM) defines the task of learning to group objects by
transferring semantic concepts from example groups (=entities) to unseen data.
Despite the general availability of image data in the context of many
EM-problems, most currently available EM-algorithms solely rely on (textual)
meta data. In this paper, we introduce the first publicly available large-scale
dataset for "visual entity matching", based on a production level use case in
the retail domain. Using scanned advertisement leaflets, collected over several
years from different European retailers, we provide a total of ~786k manually
annotated, high resolution product images containing ~18k different individual
retail products which are grouped into ~3k entities. The annotation of these
product entities is based on a price comparison task, where each entity forms
an equivalence class of comparable products. Following a first baseline
evaluation, we show that the proposed "visual entity matching" constitutes a
novel learning problem which cannot be sufficiently solved using standard
image-based classification and retrieval algorithms. Instead, novel approaches
which allow transferring example-based visual equivalence classes to new data are
needed to address the proposed problem. The aim of this paper is to provide a
benchmark for such algorithms.
Information about the dataset, evaluation code and download instructions are
provided under https://www.retail-786k.org/.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 11:58:26 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Lamm",
"Bianca",
"",
"1 and 2"
],
[
"Keuper",
"Janis",
""
]
]
| new_dataset | 0.999777 |
2309.17170 | Luuk van den Bent | Luuk van den Bent, Tom\'as Coleman, Robert Babuska | A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in
Cluttered Environments | 7 pages, 7 figures | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Currently, truss tomato weighing and packaging require significant manual
work. The main obstacle to automation lies in the difficulty of developing a
reliable robotic grasping system for already harvested trusses. We propose a
method to grasp trusses that are stacked in a crate with considerable clutter,
which is how they are commonly stored and transported after harvest. The method
consists of a deep learning-based vision system to first identify the
individual trusses in the crate and then determine a suitable grasping location
on the stem. To this end, we have introduced a grasp pose ranking algorithm
with online learning capabilities. After selecting the most promising grasp
pose, the robot executes a pinch grasp without needing touch sensors or
geometric models. Lab experiments with a robotic manipulator equipped with an
eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all
trusses from a pile. 93% of the trusses were successfully grasped on the first
try, while the remaining 7% required more attempts.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 12:07:08 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Bent",
"Luuk van den",
""
],
[
"Coleman",
"Tomás",
""
],
[
"Babuska",
"Robert",
""
]
]
| new_dataset | 0.996814 |
2309.17176 | Wanpeng Zhang | Wanpeng Zhang, Zongqing Lu | RLAdapter: Bridging Large Language Models to Reinforcement Learning in
Open Worlds | null | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While reinforcement learning (RL) shows remarkable success in decision-making
problems, it often requires a lot of interactions with the environment, and in
sparse-reward environments, it is challenging to learn meaningful policies.
Large Language Models (LLMs) can potentially provide valuable guidance to
agents in learning policies, thereby enhancing the performance of RL algorithms
in such environments. However, LLMs often encounter difficulties in
understanding downstream tasks, which hinders their ability to optimally assist
agents in these tasks. A common approach to mitigating this issue is to
fine-tune the LLMs with task-related data, enabling them to offer useful
guidance for RL agents. However, this approach encounters several difficulties,
such as inaccessible model weights or the need for significant computational
resources, making it impractical. In this work, we introduce RLAdapter, a
framework that builds a better connection between RL algorithms and LLMs by
incorporating an adapter model. Within the RLAdapter framework, fine-tuning a
lightweight language model with information generated during the training
process of RL agents significantly aids LLMs in adapting to downstream tasks,
thereby providing better guidance for RL agents. We conducted experiments to
evaluate RLAdapter in the Crafter environment, and the results show that
RLAdapter surpasses the SOTA baselines. Furthermore, agents under our framework
exhibit common-sense behaviors that are absent in baseline models.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 12:16:19 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Zhang",
"Wanpeng",
""
],
[
"Lu",
"Zongqing",
""
]
]
| new_dataset | 0.984076 |
2309.17187 | Allan Wang | Allan Wang, Daisuke Sato, Yasser Corzo, Sonya Simkin, Aaron Steinfeld | TBD Pedestrian Data Collection: Towards Rich, Portable, and Large-Scale
Natural Pedestrian Data | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible. arXiv admin note: substantial text overlap with
arXiv:2203.01974 | null | null | null | cs.CV cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | Social navigation and pedestrian behavior research has shifted towards
machine learning-based methods and converged on the topic of modeling
inter-pedestrian interactions and pedestrian-robot interactions. For this,
large-scale datasets that contain rich information are needed. We describe a
portable data collection system, coupled with a semi-autonomous labeling
pipeline. As part of the pipeline, we designed a label correction web app that
facilitates human verification of automated pedestrian tracking outcomes. Our
system enables large-scale data collection in diverse environments and fast
trajectory label production. Compared with existing pedestrian data collection
methods, our system contains three components: a combination of top-down and
ego-centric views, natural human behavior in the presence of a socially
appropriate "robot", and human-verified labels grounded in the metric space. To
the best of our knowledge, no prior data collection system has a combination of
all three components. We further introduce our ever-expanding dataset from the
ongoing data collection effort -- the TBD Pedestrian Dataset and show that our
collected data is larger in scale, contains richer information when compared to
prior datasets with human-verified labels, and supports new research
opportunities.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 12:34:10 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Wang",
"Allan",
""
],
[
"Sato",
"Daisuke",
""
],
[
"Corzo",
"Yasser",
""
],
[
"Simkin",
"Sonya",
""
],
[
"Steinfeld",
"Aaron",
""
]
]
| new_dataset | 0.998684 |
2309.17193 | Adir Kovich | Adir Kobovich, Eitan Yaakobi and Nir Weinberger | M-DAB: An Input-Distribution Optimization Algorithm for Composite DNA
Storage by the Multinomial Channel | 6 pages, 3 figures | null | 10.13140/RG.2.2.36212.53121 | null | cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent experiments have shown that the capacity of DNA storage systems may be
significantly increased by synthesizing composite DNA letters. In this work, we
model a DNA storage channel with composite inputs as a \textit{multinomial
channel}, and propose an optimization algorithm for its capacity-achieving
input distribution, for an arbitrary number of output reads. The algorithm is
termed multidimensional dynamic assignment Blahut-Arimoto (M-DAB), and is a
generalized version of the DAB algorithm, proposed by Wesel et al. developed
for the binomial channel. We also empirically observe a scaling law behavior of
the capacity as a function of the support size of the capacity-achieving input
distribution.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 12:43:42 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Kobovich",
"Adir",
""
],
[
"Yaakobi",
"Eitan",
""
],
[
"Weinberger",
"Nir",
""
]
]
| new_dataset | 0.994658 |
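As context for the M-DAB record above, which generalizes the Blahut-Arimoto iteration to the multinomial channel, here is a minimal sketch of the classical Blahut-Arimoto algorithm for a generic discrete memoryless channel. The channel matrix, tolerance, and variable names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10_000):
    # W[x, y] = P(y | x): channel transition matrix, rows sum to 1.
    # Returns an (approximately) capacity-achieving input distribution
    # and the corresponding mutual information in bits.
    def kl_rows(q):
        # Row-wise KL divergence D(W[x, :] || q) in nats, with 0*log(0) := 0.
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        return np.sum(np.where(W > 0, W * np.log(ratio), 0.0), axis=1)

    p = np.full(W.shape[0], 1.0 / W.shape[0])   # start from the uniform input
    for _ in range(max_iter):
        d = np.exp(kl_rows(p @ W))              # multiplicative update factors
        p_new = p * d / np.sum(p * d)
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    capacity_bits = float(np.sum(p * kl_rows(p @ W)) / np.log(2))
    return p, capacity_bits

# Toy check (illustrative only): binary symmetric channel with crossover 0.1,
# whose capacity is 1 - H(0.1), approximately 0.531 bits at the uniform input.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
p_star, C = blahut_arimoto(W)
print(np.round(p_star, 4), round(C, 4))
```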
2309.17395 | Tatiana Likhomanenko | Andrew Rouditchenko, Ronan Collobert, Tatiana Likhomanenko | AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition | Under review | null | null | null | cs.LG cs.SD eess.AS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-visual speech contains synchronized audio and visual information that
provides cross-modal supervision to learn representations for both automatic
speech recognition (ASR) and visual speech recognition (VSR). We introduce
continuous pseudo-labeling for audio-visual speech recognition (AV-CPL), a
semi-supervised method to train an audio-visual speech recognition (AVSR) model
on a combination of labeled and unlabeled videos with continuously regenerated
pseudo-labels. Our models are trained for speech recognition from audio-visual
inputs and can perform speech recognition using both audio and visual
modalities, or only one modality. Our method uses the same audio-visual model
for both supervised training and pseudo-label generation, mitigating the need
for external speech recognition models to generate pseudo-labels. AV-CPL
obtains significant improvements in VSR performance on the LRS3 dataset while
maintaining practical ASR and AVSR performance. Finally, using visual-only
speech data, our method is able to leverage unlabeled visual speech to improve
VSR.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 16:57:21 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Rouditchenko",
"Andrew",
""
],
[
"Collobert",
"Ronan",
""
],
[
"Likhomanenko",
"Tatiana",
""
]
]
| new_dataset | 0.98665 |
2309.17414 | Lu\'is Fiolhais | Lu\'is Fiolhais and Leonel Sousa | QR TPM in Programmable Low-Power Devices | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Trusted Platform Modules (TPMs), which serve as the root of trust in secure
systems, are secure crypto-processors that carry out cryptographic primitives.
Should large-scale quantum computing become a reality, the cryptographic
primitives adopted in the TPM 2.0 standard will no longer be secure. Thus, the
design of TPMs that provide Quantum Resistant (QR) primitives is of utmost
importance, in particular with the restrictions imposed by embedded systems. In
this paper, we investigate the deployment of QR primitives and protocols in the
standard TPM 2.0. Cryptographic algorithms that are already in the NIST QR
cryptography standardization process, as well as an Oblivious Transfer (OT), a
fundamental cryptographic primitive, are the QR cryptographic schemes selected
to extend TPM 2.0. In particular, the Kyber algorithm for key encapsulation,
the Dilithium algorithm for digital signature, and a 3-round Random Oblivious
Transfer (ROT) protocol, supporting protocols such as Multi-Party Computation
and Private Set Intersection (PSI). The QR extended TPM 2.0 is implemented in
ARM and RISC-V embedded processors, its computational requirements are analysed
and experimentally evaluated in comparison to the standard TPM. It is shown
that Kyber and Dilithium are faster at creating keys than RSA, due to the key
size and secure random sampling required in RSA, while they meet the same
performance level as ECC. For digital signatures, both in signature creation
and verification, Dilithium is on par with RSA and ECC. The ROT protocol shows
decent performance and its support required small modifications to the TPM.
This paper also shows that it would be possible to backport the required code
to already available TPMs to ensure that current TPMs remain secure against
quantum adversaries.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2023 17:21:46 GMT"
}
]
| 2023-10-02T00:00:00 | [
[
"Fiolhais",
"Luís",
""
],
[
"Sousa",
"Leonel",
""
]
]
| new_dataset | 0.998772 |
2005.07917 | Giovanni Viglietta | Giuseppe A. Di Luna, Ryuhei Uehara, Giovanni Viglietta, and Yukiko
Yamauchi | Gathering on a Circle with Limited Visibility by Anonymous Oblivious
Robots | 33 pages, 9 figures | null | null | null | cs.DC cs.CG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A swarm of anonymous oblivious mobile robots, operating in deterministic
Look-Compute-Move cycles, is confined within a circular track. All robots agree
on the clockwise direction (chirality), they are activated by an adversarial
semi-synchronous scheduler (SSYNCH), and an active robot always reaches the
destination point it computes (rigidity). Robots have limited visibility: each
robot can see only the points on the circle that have an angular distance
strictly smaller than a constant $\vartheta$ from the robot's current location,
where $0<\vartheta\leq\pi$ (angles are expressed in radians).
We study the Gathering problem for such a swarm of robots: that is, all
robots are initially in distinct locations on the circle, and their task is to
reach the same point on the circle in a finite number of turns, regardless of
the way they are activated by the scheduler. Note that, due to the anonymity of
the robots, this task is impossible if the initial configuration is
rotationally symmetric; hence, we have to make the assumption that the initial
configuration be rotationally asymmetric.
We prove that, if $\vartheta=\pi$ (i.e., each robot can see the entire circle
except its antipodal point), there is a distributed algorithm that solves the
Gathering problem for swarms of any size. By contrast, we also prove that, if
$\vartheta\leq \pi/2$, no distributed algorithm solves the Gathering problem,
regardless of the size of the swarm, even under the assumption that the initial
configuration is rotationally asymmetric and the visibility graph of the robots
is connected.
The latter impossibility result relies on a probabilistic technique based on
random perturbations, which is novel in the context of anonymous mobile robots.
Such a technique is of independent interest, and immediately applies to other
Pattern-Formation problems.
| [
{
"version": "v1",
"created": "Sat, 16 May 2020 09:12:39 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 17:27:38 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Di Luna",
"Giuseppe A.",
""
],
[
"Uehara",
"Ryuhei",
""
],
[
"Viglietta",
"Giovanni",
""
],
[
"Yamauchi",
"Yukiko",
""
]
]
| new_dataset | 0.993463 |
2107.03615 | Daniel Frishberg | David Eppstein, Daniel Frishberg, and Martha C. Osegueda | Angles of Arc-Polygons and Lombardi Drawings of Cacti | 12 pages, 8 figures. To be published in Proc. 33rd Canadian
Conference on Computational Geometry, 2021 | Comp. Geom. Theory & Applications 112: 101982, 2023 | 10.1016/j.comgeo.2023.101982 | null | cs.CG | http://creativecommons.org/licenses/by/4.0/ | We characterize the triples of interior angles that are possible in
non-self-crossing triangles with circular-arc sides, and we prove that a given
cyclic sequence of angles can be realized by a non-self-crossing polygon with
circular-arc sides whenever all angles are at most pi. As a consequence of
these results, we prove that every cactus has a planar Lombardi drawing (a
drawing with edges depicted as circular arcs, meeting at equal angles at each
vertex) for its natural embedding in which every cycle of the cactus is a face
of the drawing. However, there exist planar embeddings of cacti that do not
have planar Lombardi drawings.
| [
{
"version": "v1",
"created": "Thu, 8 Jul 2021 05:35:56 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Eppstein",
"David",
""
],
[
"Frishberg",
"Daniel",
""
],
[
"Osegueda",
"Martha C.",
""
]
]
| new_dataset | 0.999575 |
2207.08031 | Tim Alderson | Tim Alderson, Benjamin Morine | MWS and FWS Codes for Coordinate-Wise Weight Functions | 17 pages | null | null | null | cs.IT math.CO math.IT | http://creativecommons.org/licenses/by/4.0/ | A combinatorial problem concerning the maximum size of the (hamming) weight
set of an $[n,k]_q$ linear code was recently introduced. Codes attaining the
established upper bound are the Maximum Weight Spectrum (MWS) codes. Those
$[n,k]_q $ codes with the same weight set as $ \mathbb{F}_q^n $ are called Full
Weight Spectrum (FWS) codes. FWS codes are necessarily ``short", whereas MWS
codes are necessarily ``long". For fixed $ k,q $ the values of $ n $ for which
an $ [n,k]_q $-FWS code exists are completely determined, but the determination
of the minimum length $ M(H,k,q) $ of an $ [n,k]_q $-MWS code remains an open
problem. The current work broadens discussion first to general coordinate-wise
weight functions, and then specifically to the Lee weight and a Manhattan like
weight. In the general case we provide bounds on $ n $ for which an FWS code
exists, and bounds on $ n $ for which an MWS code exists. When specializing to
the Lee or to the Manhattan setting we are able to completely determine the
parameters of FWS codes. As with the Hamming case, we are able to provide an
upper bound on $ M(\mathcal{L},k,q) $ (the minimum length of Lee MWS codes),
and pose the determination of $ M(\mathcal{L},k,q) $ as an open problem. On the
other hand, with respect to the Manhattan weight we completely determine the
parameters of MWS codes.
| [
{
"version": "v1",
"created": "Sat, 16 Jul 2022 22:30:16 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 17:45:51 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Sep 2023 18:49:38 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Alderson",
"Tim",
""
],
[
"Morine",
"Benjamin",
""
]
]
| new_dataset | 0.999053 |
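To make the weight-set notions in the MWS/FWS record above concrete, the following is a small brute-force sketch that enumerates the Hamming weight set of a linear code over a prime field. The [3,2] generator matrix over F_2 is an illustrative choice for this sketch and is not an example taken from the paper.

```python
from itertools import product

def weight_set(G, q):
    # Brute-force the Hamming weight set of the [n, k]_q code generated by G.
    # G is a k x n generator matrix over F_q (q prime, arithmetic mod q);
    # returns the set of weights realised by the nonzero codewords.
    k, n = len(G), len(G[0])
    assert all(len(row) == n for row in G)
    weights = set()
    for msg in product(range(q), repeat=k):               # all q^k messages
        cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        w = sum(c != 0 for c in cw)                        # Hamming weight
        if w > 0:
            weights.add(w)
    return weights

# Illustrative [3, 2]_2 code: its nonzero codewords realise every weight in
# {1, 2, 3}, i.e. its weight set equals that of F_2^3, so it is an FWS code.
G = [[1, 0, 0],
     [0, 1, 1]]
print(sorted(weight_set(G, 2)))   # -> [1, 2, 3]
```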
2210.06984 | Thomas Huang | Tobias Fischer, Thomas E. Huang, Jiangmiao Pang, Linlu Qiu, Haofeng
Chen, Trevor Darrell, Fisher Yu | QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple
Object Tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity learning has been recognized as a crucial step for object
tracking. However, existing multiple object tracking methods only use sparse
ground truth matching as the training objective, while ignoring the majority of
the informative regions in images. In this paper, we present Quasi-Dense
Similarity Learning, which densely samples hundreds of object regions on a pair
of images for contrastive learning. We combine this similarity learning with
multiple existing object detectors to build Quasi-Dense Tracking (QDTrack),
which does not require displacement regression or motion priors. We find that
the resulting distinctive feature space admits a simple nearest neighbor search
at inference time for object association. In addition, we show that our
similarity learning scheme is not limited to video data, but can learn
effective instance similarity even from static input, enabling a competitive
tracking performance without training on videos or using tracking supervision.
We conduct extensive experiments on a wide variety of popular MOT benchmarks.
We find that, despite its simplicity, QDTrack rivals the performance of
state-of-the-art tracking methods on all benchmarks and sets a new
state-of-the-art on the large-scale BDD100K MOT benchmark, while introducing
negligible computational overhead to the detector.
| [
{
"version": "v1",
"created": "Wed, 12 Oct 2022 15:47:36 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 12:39:30 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Fischer",
"Tobias",
""
],
[
"Huang",
"Thomas E.",
""
],
[
"Pang",
"Jiangmiao",
""
],
[
"Qiu",
"Linlu",
""
],
[
"Chen",
"Haofeng",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Yu",
"Fisher",
""
]
]
| new_dataset | 0.98762 |
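The QDTrack record above states that the learned feature space admits a simple nearest-neighbor search for object association at inference time. Below is a minimal, generic sketch of greedy embedding-based association by cosine similarity; the threshold, the greedy matching scheme, and the toy embeddings are assumptions made for illustration and do not reproduce the paper's exact bi-directional softmax inference.

```python
import numpy as np

def associate(track_embs, det_embs, sim_threshold=0.5):
    # Greedy association between existing tracks and new detections based on
    # cosine similarity of appearance embeddings. Returns a dict mapping
    # detection index -> track index; unmatched detections are omitted and
    # would typically start new tracks.
    if len(track_embs) == 0 or len(det_embs) == 0:
        return {}
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    sim = d @ t.T                                   # (num_dets, num_tracks)
    matches, used_tracks = {}, set()
    # Visit candidate pairs in order of decreasing similarity.
    for det_idx, trk_idx in sorted(
            ((i, j) for i in range(sim.shape[0]) for j in range(sim.shape[1])),
            key=lambda ij: -sim[ij]):
        if sim[det_idx, trk_idx] < sim_threshold:
            break                                   # all remaining pairs are weaker
        if det_idx in matches or trk_idx in used_tracks:
            continue
        matches[det_idx] = trk_idx
        used_tracks.add(trk_idx)
    return matches

# Toy usage with random embeddings standing in for learned quasi-dense features.
rng = np.random.default_rng(0)
tracks = rng.normal(size=(3, 8))
dets = tracks[[2, 0]] + 0.05 * rng.normal(size=(2, 8))   # noisy re-detections
print(associate(tracks, dets))   # expected: {0: 2, 1: 0}
```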
2304.14821 | Alban Ponse | Jan A. Bergstra and Alban Ponse | Conditional logic as a short-circuit logic | 20 pages, 4 tables. Differences with v1: 1) Definitions 3.7 and 3.8 -
the normal forms are more elegantly defined, based on a set of strings A^s
which now includes the empty string: nicer proofs of La.3.10 and Thm.3.11;
the same goes for the related definitions and proofs in the setting with U.
2) Thm.5.1 - best Prover9 results tightened | null | null | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | Both two-valued and three-valued conditional logic (CL), defined by Guzm\'an
and Squier (1990) and based on McCarthy's non-commutative connectives,
axiomatise a short-circuit logic (SCL) that defines more identities than MSCL
(Memorising SCL), which also has a two- and a three-valued variant. This
follows from the fact that the definable connective that prescribes full
left-sequential conjunction is commutative in CL. We show that in CL, the full
left-sequential connectives and negation define Bochvar's three-valued strict
logic. In two-valued CL, the full left-sequential connectives and negation
define a commutative logic that is weaker than propositional logic because the
absorption laws do not hold.
Next, we show that the original, equational axiomatisation of CL is not
independent and give several alternative, independent axiomatisations.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2023 13:04:02 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 16:59:55 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Bergstra",
"Jan A.",
""
],
[
"Ponse",
"Alban",
""
]
]
| new_dataset | 0.987424 |
2305.03701 | Yunxin Li | Yunxin Li, Baotian Hu, Xinyu Chen, Lin Ma, Yong Xu, and Min Zhang | LMEye: An Interactive Perception Network for Large Language Models | working in progress | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training a Multimodal Large Language Model (MLLM) from scratch, like GPT-4,
is resource-intensive. Regarding Large Language Models (LLMs) as the core
processor for multimodal information, our paper introduces LMEye, a human-like
eye with a play-and-plug interactive perception network, designed to enable
dynamic interaction between LLMs and external vision information. Previous
methods incorporate visual information into LLMs with a simple visual mapping
network or Q-former from BLIP-2. Such networks project the image feature once
yet do not consider the interaction between the image and the human input
query. Hence, the obtained visual information without being connected to human
intention may be inadequate for LLMs to generate intention-following responses,
which we refer to as static visual information. LMEye addresses this issue by
allowing the LLM to request the desired visual information aligned with various
human instructions, which we term as the dynamic visual information
interaction. Specifically, LMEye consists of a simple visual mapping network to
provide the basic perception of an image for LLMs. It also contains additional
modules responsible for acquiring requests from LLMs, performing request-based
visual information interaction, and transmitting the resulting interacted
visual information to LLMs, respectively. In this way, LLMs act to understand
the human query, deliver the corresponding request to the request-based visual
information interaction module, and generate the response based on the
interleaved multimodal information. We evaluate LMEye through extensive
experiments on some multimodal benchmarks, demonstrating that it significantly
improves the zero-shot performance on various multimodal tasks compared to
previous methods, with fewer parameters.
| [
{
"version": "v1",
"created": "Fri, 5 May 2023 17:27:21 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 17:28:58 GMT"
},
{
"version": "v3",
"created": "Fri, 19 May 2023 05:42:57 GMT"
},
{
"version": "v4",
"created": "Sat, 22 Jul 2023 06:24:53 GMT"
},
{
"version": "v5",
"created": "Wed, 2 Aug 2023 11:52:16 GMT"
},
{
"version": "v6",
"created": "Thu, 28 Sep 2023 08:18:43 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Li",
"Yunxin",
""
],
[
"Hu",
"Baotian",
""
],
[
"Chen",
"Xinyu",
""
],
[
"Ma",
"Lin",
""
],
[
"Xu",
"Yong",
""
],
[
"Zhang",
"Min",
""
]
]
| new_dataset | 0.970094 |
2305.13969 | Matej Novosad | Matej Novosad, Robert Penicka, Vojtech Vonasek | CTopPRM: Clustering Topological PRM for Planning Multiple Distinct Paths
in 3D Environments | in IEEE Robotics and Automation Letters | null | 10.1109/LRA.2023.3315539 | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a new method called Clustering Topological PRM
(CTopPRM) for finding multiple homotopically distinct paths in 3D cluttered
environments. Finding such distinct paths, e.g., going around an obstacle from
a different side, is useful in many applications. Among others, using multiple
distinct paths is necessary for optimization-based trajectory planners where
found trajectories are restricted to only a single homotopy class of a given
path. Distinct paths can also be used to guide sampling-based motion planning
and thus increase the effectiveness of planning in environments with narrow
passages. Graph-based representation called roadmap is a common representation
for path planning and also for finding multiple distinct paths. However,
challenging environments with multiple narrow passages require a densely
sampled roadmap to capture the connectivity of the environment. Searching such
a dense roadmap for multiple paths is computationally too expensive. Therefore,
the majority of existing methods construct only a sparse roadmap which,
however, struggles to find all distinct paths in challenging environments. To
this end, we propose the CTopPRM which creates a sparse graph by clustering an
initially sampled dense roadmap. Such a reduced roadmap allows fast
identification of homotopically distinct paths captured in the dense roadmap.
We show, that compared to the existing methods the CTopPRM improves the
probability of finding all distinct paths by almost 20% in tested environments,
within the same run-time. The source code of our method is released as an
open-source package.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 11:53:04 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 12:58:36 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Sep 2023 17:58:29 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Novosad",
"Matej",
""
],
[
"Penicka",
"Robert",
""
],
[
"Vonasek",
"Vojtech",
""
]
]
| new_dataset | 0.957142 |
2305.14093 | Kunhao Liu | Kunhao Liu, Fangneng Zhan, Jiahui Zhang, Muyu Xu, Yingchen Yu,
Abdulmotaleb El Saddik, Christian Theobalt, Eric Xing, Shijian Lu | Weakly Supervised 3D Open-vocabulary Segmentation | Accepted to NeurIPS 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Open-vocabulary segmentation of 3D scenes is a fundamental function of human
perception and thus a crucial objective in computer vision research. However,
this task is heavily impeded by the lack of large-scale and diverse 3D
open-vocabulary segmentation datasets for training robust and generalizable
models. Distilling knowledge from pre-trained 2D open-vocabulary segmentation
models helps but it compromises the open-vocabulary feature as the 2D models
are mostly finetuned with closed-vocabulary datasets. We tackle the challenges
in 3D open-vocabulary segmentation by exploiting pre-trained foundation models
CLIP and DINO in a weakly supervised manner. Specifically, given only the
open-vocabulary text descriptions of the objects in a scene, we distill the
open-vocabulary multimodal knowledge and object reasoning capability of CLIP
and DINO into a neural radiance field (NeRF), which effectively lifts 2D
features into view-consistent 3D segmentation. A notable aspect of our approach
is that it does not require any manual segmentation annotations for either the
foundation models or the distillation process. Extensive experiments show that
our method even outperforms fully supervised models trained with segmentation
annotations in certain scenes, suggesting that 3D open-vocabulary segmentation
can be effectively learned from 2D images and text-image pairs. Code is
available at \url{https://github.com/Kunhao-Liu/3D-OVS}.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 14:16:49 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 09:18:26 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Sep 2023 07:28:12 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Liu",
"Kunhao",
""
],
[
"Zhan",
"Fangneng",
""
],
[
"Zhang",
"Jiahui",
""
],
[
"Xu",
"Muyu",
""
],
[
"Yu",
"Yingchen",
""
],
[
"Saddik",
"Abdulmotaleb El",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Xing",
"Eric",
""
],
[
"Lu",
"Shijian",
""
]
]
| new_dataset | 0.963202 |
2305.15883 | Lukas St\"acker | Lukas St\"acker, Shashank Mishra, Philipp Heidenreich, Jason Rambach,
Didier Stricker | RC-BEVFusion: A Plug-In Module for Radar-Camera Bird's Eye View Feature
Fusion | GCPR 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Radars and cameras belong to the most frequently used sensors for advanced
driver assistance systems and automated driving research. However, there has
been surprisingly little research on radar-camera fusion with neural networks.
One of the reasons is a lack of large-scale automotive datasets with radar and
unmasked camera data, with the exception of the nuScenes dataset. Another
reason is the difficulty of effectively fusing the sparse radar point cloud on
the bird's eye view (BEV) plane with the dense images on the perspective plane.
The recent trend of camera-based 3D object detection using BEV features has
enabled a new type of fusion, which is better suited for radars. In this work,
we present RC-BEVFusion, a modular radar-camera fusion network on the BEV
plane. We propose BEVFeatureNet, a novel radar encoder branch, and show that it
can be incorporated into several state-of-the-art camera-based architectures.
We show significant performance gains of up to 28% increase in the nuScenes
detection score, which is an important step in radar-camera fusion research.
Without tuning our model for the nuScenes benchmark, we achieve the best result
among all published methods in the radar-camera fusion category.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 09:26:04 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 08:07:36 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Stäcker",
"Lukas",
""
],
[
"Mishra",
"Shashank",
""
],
[
"Heidenreich",
"Philipp",
""
],
[
"Rambach",
"Jason",
""
],
[
"Stricker",
"Didier",
""
]
]
| new_dataset | 0.999061 |
2306.05805 | Andrzej Dulny | Andrzej Dulny and Andreas Hotho and Anna Krause | DynaBench: A benchmark dataset for learning dynamical systems from
low-resolution data | This version is the final camera-ready version that has been
published in the Proceedings of ECML-PKDD 2023 | Machine Learning and Knowledge Discovery in Databases: Research
Track. ECML PKDD 2023. Lecture Notes in Computer Science(), vol 14169, p.
438-455. Springer, Cham | 10.1007/978-3-031-43412-9_26 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous work on learning physical systems from data has focused on
high-resolution grid-structured measurements. However, real-world knowledge of
such systems (e.g. weather data) relies on sparsely scattered measuring
stations. In this paper, we introduce a novel simulated benchmark dataset,
DynaBench, for learning dynamical systems directly from sparsely scattered data
without prior knowledge of the equations. The dataset focuses on predicting the
evolution of a dynamical system from low-resolution, unstructured measurements.
We simulate six different partial differential equations covering a variety of
physical systems commonly used in the literature and evaluate several machine
learning models, including traditional graph neural networks and point cloud
processing models, with the task of predicting the evolution of the system. The
proposed benchmark dataset is expected to advance the state of the art as an
out-of-the-box, easy-to-use tool for evaluating models in a setting where only
unstructured low-resolution observations are available. The benchmark is
available at https://anonymous.4open.science/r/code-2022-dynabench/.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2023 10:42:32 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 07:40:19 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Dulny",
"Andrzej",
""
],
[
"Hotho",
"Andreas",
""
],
[
"Krause",
"Anna",
""
]
]
| new_dataset | 0.999814 |
2306.10244 | Shamiul Alam | Shamiul Alam, Dana S. Rampini, Bakhrom G. Oripov, Adam N. McCaughan,
and Ahmedullah Aziz | Cryogenic Reconfigurable Logic with Superconducting Heater Cryotron:
Enhancing Area Efficiency and Enabling Camouflaged Processors | 13 pages, 6 figures | null | null | null | cs.ET cond-mat.supr-con cs.AR physics.app-ph | http://creativecommons.org/licenses/by/4.0/ | Superconducting electronics are among the most promising alternatives to
conventional CMOS technology thanks to the ultra-fast speed and ultra-high
energy efficiency of the superconducting devices. Having a cryogenic control
processor is also a crucial requirement for scaling the existing quantum
computers up to thousands of qubits. Despite showing outstanding speed and
energy efficiency, Josephson junction-based circuits suffer from several
challenges such as flux trapping leading to limited scalability, difficulty in
driving high impedances, and so on. Three-terminal cryotron devices, which can
drive high impedances (>100 kΩ) and are free from any flux-trapping issue, have
been proposed to solve these problems. In this work, we develop a
reconfigurable logic circuit using a heater cryotron (hTron). In conventional
approaches, the number of devices to perform a logic operation typically
increases with the number of inputs. However, here, we demonstrate a single
hTron device-based logic circuit that can be reconfigured to perform 1-input
copy and NOT, 2-input AND and OR, and 3-input majority logic operations by
choosing suitable biasing conditions. Consequently, we can perform any
processing task with a much smaller number of devices. Also, since we can
perform different logic operations with the same circuit (same layout), we can
develop a camouflaged system where all the logic gates will have the same
layout. Therefore, this proposed circuit will ensure enhanced hardware security
against reverse engineering attacks.
| [
{
"version": "v1",
"created": "Sat, 17 Jun 2023 03:05:55 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 14:30:56 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Alam",
"Shamiul",
""
],
[
"Rampini",
"Dana S.",
""
],
[
"Oripov",
"Bakhrom G.",
""
],
[
"McCaughan",
"Adam N.",
""
],
[
"Aziz",
"Ahmedullah",
""
]
]
| new_dataset | 0.999405 |
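The bias-reconfigurable gate described in the abstract above behaves like a threshold gate: copy, NOT, AND, OR, and 3-input majority differ only in how many active inputs are needed to switch the device. The sketch below illustrates that threshold-logic view only; the mapping from heater-bias current to an effective switching threshold is an assumption made for illustration, not a model of the hTron.
```python
# Illustrative threshold-logic view of a bias-reconfigurable gate.
# The bias-to-threshold mapping is an assumption, not a device model.

def threshold_gate(inputs, threshold, invert=False):
    """Fire (1) when the number of active inputs reaches `threshold`."""
    fired = sum(inputs) >= threshold
    return int(not fired) if invert else int(fired)

def copy_gate(a):   return threshold_gate([a], threshold=1)
def not_gate(a):    return threshold_gate([a], threshold=1, invert=True)
def and2(a, b):     return threshold_gate([a, b], threshold=2)
def or2(a, b):      return threshold_gate([a, b], threshold=1)
def maj3(a, b, c):  return threshold_gate([a, b, c], threshold=2)

assert and2(1, 1) == 1 and and2(1, 0) == 0
assert or2(0, 1) == 1 and or2(0, 0) == 0
assert maj3(1, 1, 0) == 1 and maj3(1, 0, 0) == 0
```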
2307.01026 | Shenyang Huang | Shenyang Huang, Farimah Poursafaei, Jacob Danovitch, Matthias Fey,
Weihua Hu, Emanuele Rossi, Jure Leskovec, Michael Bronstein, Guillaume
Rabusseau, Reihaneh Rabbany | Temporal Graph Benchmark for Machine Learning on Temporal Graphs | 20 pages, 7 figures, 7 tables, accepted at NeurIPS 2023 Datasets and
Benchmarks Track | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Temporal Graph Benchmark (TGB), a collection of challenging
and diverse benchmark datasets for realistic, reproducible, and robust
evaluation of machine learning models on temporal graphs. TGB datasets are
large in scale, span years in duration, incorporate both node- and edge-level
prediction tasks, and cover a diverse set of domains including social, trade,
transaction, and transportation networks. For both tasks, we design evaluation
protocols based on realistic use-cases. We extensively benchmark each dataset
and find that the performance of common models can vary drastically across
datasets. In addition, on dynamic node property prediction tasks, we show that
simple methods often achieve superior performance compared to existing temporal
graph models. We believe that these findings open up opportunities for future
research on temporal graphs. Finally, TGB provides an automated machine
learning pipeline for reproducible and accessible temporal graph research,
including data loading, experiment setup and performance evaluation. TGB will
be maintained and updated on a regular basis and welcomes community feedback.
TGB datasets, data loaders, example codes, evaluation setup, and leaderboards
are publicly available at https://tgb.complexdatalab.com/.
| [
{
"version": "v1",
"created": "Mon, 3 Jul 2023 13:58:20 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 22:04:41 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Huang",
"Shenyang",
""
],
[
"Poursafaei",
"Farimah",
""
],
[
"Danovitch",
"Jacob",
""
],
[
"Fey",
"Matthias",
""
],
[
"Hu",
"Weihua",
""
],
[
"Rossi",
"Emanuele",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Bronstein",
"Michael",
""
],
[
"Rabusseau",
"Guillaume",
""
],
[
"Rabbany",
"Reihaneh",
""
]
]
| new_dataset | 0.999817 |
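The benchmark pipeline described above (standardized loading, chronological splits, and evaluation for temporal graphs) can be illustrated with the minimal sketch below. The function names and the MRR metric choice are assumptions for illustration and do not reproduce TGB's actual API.
```python
# Hypothetical helper names; TGB's real data loaders and evaluators may differ.
import numpy as np

def chronological_split(edges, val_frac=0.15, test_frac=0.15):
    """Split a (src, dst, timestamp) edge list chronologically."""
    edges = edges[np.argsort(edges[:, 2])]
    n = len(edges)
    n_test, n_val = int(n * test_frac), int(n * val_frac)
    train = edges[: n - n_val - n_test]
    val = edges[n - n_val - n_test : n - n_test]
    test = edges[n - n_test :]
    return train, val, test

def mrr(ranks):
    """Mean reciprocal rank of the true destination among candidates."""
    return float(np.mean(1.0 / np.asarray(ranks)))

edges = np.array([[0, 1, 10], [1, 2, 12], [0, 2, 15], [2, 3, 20], [3, 0, 25]])
train, val, test = chronological_split(edges, 0.2, 0.2)
print(len(train), len(val), len(test), mrr([1, 2, 4]))
```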
2307.02251 | Mark McDonnell | Mark D. McDonnell, Dong Gong, Amin Parveneh, Ehsan Abbasnejad, Anton
van den Hengel | RanPAC: Random Projections and Pre-trained Models for Continual Learning | 30 pages, 11 figures | NeurIPS 2023 | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning (CL) aims to incrementally learn different tasks (such as
classification) in a non-stationary data stream without forgetting old ones.
Most CL works focus on tackling catastrophic forgetting under a
learning-from-scratch paradigm. However, with the increasing prominence of
foundation models, pre-trained models equipped with informative representations
have become available for various downstream requirements. Several CL methods
based on pre-trained models have been explored, either utilizing pre-extracted
features directly (which makes bridging distribution gaps challenging) or
incorporating adaptors (which may be subject to forgetting). In this paper, we
propose a concise and effective approach for CL with pre-trained models. Given
that forgetting occurs during parameter updating, we contemplate an alternative
approach that exploits training-free random projectors and class-prototype
accumulation, which thus bypasses the issue. Specifically, we inject a frozen
Random Projection layer with nonlinear activation between the pre-trained
model's feature representations and output head, which captures interactions
between features with expanded dimensionality, providing enhanced linear
separability for class-prototype-based CL. We also demonstrate the importance
of decorrelating the class-prototypes to reduce the distribution disparity when
using pre-trained representations. These techniques prove to be effective and
circumvent the problem of forgetting for both class- and domain-incremental
continual learning. Compared to previous methods applied to pre-trained
ViT-B/16 models, we reduce final error rates by between 10% and 62% on seven
class-incremental benchmark datasets, despite not using any rehearsal memory.
We conclude that the full potential of pre-trained models for simple,
effective, and fast continual learning has not hitherto been fully tapped.
| [
{
"version": "v1",
"created": "Wed, 5 Jul 2023 12:49:02 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"McDonnell",
"Mark D.",
""
],
[
"Gong",
"Dong",
""
],
[
"Parveneh",
"Amin",
""
],
[
"Abbasnejad",
"Ehsan",
""
],
[
"Hengel",
"Anton van den",
""
]
]
| new_dataset | 0.964738 |
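A minimal sketch of the mechanism described above: frozen backbone features pass through a fixed, never-trained random projection with a nonlinearity, class prototypes are accumulated as running means, and prediction is nearest-prototype by cosine similarity. The dimensions, the ReLU choice, and the omission of the decorrelation step are simplifying assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
D_in, D_proj, n_classes = 768, 4096, 10      # illustrative sizes

W = rng.standard_normal((D_in, D_proj))      # frozen random projection, never trained

def project(feats):
    """Expand frozen backbone features with a fixed nonlinear random projection."""
    return np.maximum(feats @ W, 0.0)         # ReLU nonlinearity

prototypes = np.zeros((n_classes, D_proj))
counts = np.zeros(n_classes)

def accumulate(feats, labels):
    """Class-prototype accumulation: running mean per class, no gradient updates."""
    h = project(feats)
    for x, y in zip(h, labels):
        counts[y] += 1
        prototypes[y] += (x - prototypes[y]) / counts[y]

def predict(feats):
    h = project(feats)
    h = h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return (h @ p.T).argmax(axis=1)           # cosine similarity to each prototype

accumulate(rng.standard_normal((32, D_in)), rng.integers(0, n_classes, 32))
print(predict(rng.standard_normal((4, D_in))))
```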
2307.02274 | Xiaoming Chen | Yuxin Yang, Xiaoming Chen, Yinhe Han | Dadu-RBD: Robot Rigid Body Dynamics Accelerator with Multifunctional
Pipelines | null | null | null | null | cs.RO cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rigid body dynamics is a key technology in the robotics field. In trajectory
optimization and model predictive control algorithms, there are usually a large
number of rigid body dynamics computing tasks. Using CPUs to process these
tasks consumes a lot of time, which will affect the real-time performance of
robots. To this end, we propose a multifunctional robot rigid body dynamics
accelerator, named RBDCore, to address the performance bottleneck. By analyzing
different functions commonly used in robot dynamics calculations, we summarize
their reuse relationship and optimize them according to the hardware. Based on
this, RBDCore can fully reuse common hardware modules when processing different
computing tasks. By dynamically switching the dataflow path, RBDCore can
accelerate various dynamics functions without reconfiguring the hardware. We
design Structure-Adaptive Pipelines for RBDCore, which can greatly improve the
throughput of the accelerator. Robots with different structures and parameters
can be optimized specifically. Compared with the state-of-the-art CPU, GPU
dynamics libraries and FPGA accelerator, RBDCore can significantly improve the
performance.
| [
{
"version": "v1",
"created": "Wed, 5 Jul 2023 13:17:52 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 01:12:52 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Sep 2023 05:24:18 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Yang",
"Yuxin",
""
],
[
"Chen",
"Xiaoming",
""
],
[
"Han",
"Yinhe",
""
]
]
| new_dataset | 0.998073 |
2308.06595 | Yonatan Bitton | Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu,
Anas Awadalla, Josh Gardner, Rohan Taori, Ludwig Schimdt | VisIT-Bench: A Benchmark for Vision-Language Instruction Following
Inspired by Real-World Use | null | null | null | null | cs.CL cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for
evaluation of instruction-following vision-language models for real-world use.
Our starting point is curating 70 'instruction families' that we envision
instruction-tuned vision-language models should be able to address. Extending
beyond evaluations like VQAv2 and COCO, tasks range from basic recognition to
game playing and creative generation. Following curation, our dataset comprises
592 test queries, each with a human-authored instruction-conditioned caption.
These descriptions surface instruction-specific factors, e.g., for an
instruction asking about the accessibility of a storefront for wheelchair
users, the instruction-conditioned caption describes ramps/potential obstacles.
These descriptions enable 1) collecting human-verified reference outputs for
each instance; and 2) automatic evaluation of candidate multimodal generations
using a text-only LLM, aligning with human judgment. We quantify quality gaps
between models and references using both human and automatic evaluations; e.g.,
the top-performing instruction-following model wins against the GPT-4 reference
in just 27% of the comparisons. VisIT-Bench is dynamic: to participate,
practitioners simply submit their model's responses on the project website.
Data, code, and the leaderboard are available at visit-bench.github.io.
| [
{
"version": "v1",
"created": "Sat, 12 Aug 2023 15:27:51 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 19:06:14 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Bitton",
"Yonatan",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Hessel",
"Jack",
""
],
[
"Shao",
"Rulin",
""
],
[
"Zhu",
"Wanrong",
""
],
[
"Awadalla",
"Anas",
""
],
[
"Gardner",
"Josh",
""
],
[
"Taori",
"Rohan",
""
],
[
"Schimdt",
"Ludwig",
""
]
]
| new_dataset | 0.999869 |
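The automatic evaluation described above reduces to a win rate over pairwise judgments made by a text-only LLM conditioned on the instruction and the instruction-conditioned caption. In the sketch below the judge is a placeholder stub, since the actual judging prompt and model are not specified here.
```python
# Win-rate over pairwise judgments; `judge` is a stub standing in for a
# text-only LLM comparison conditioned on the instruction-conditioned caption.

def judge(instruction, caption, candidate, reference):
    """Placeholder: return 'candidate' or 'reference' for the preferred output."""
    return "candidate" if len(candidate) > len(reference) else "reference"

def win_rate(examples, model_outputs, reference_outputs):
    wins = 0
    for ex, cand, ref in zip(examples, model_outputs, reference_outputs):
        if judge(ex["instruction"], ex["caption"], cand, ref) == "candidate":
            wins += 1
    return wins / len(examples)

examples = [{"instruction": "Describe the ramp access.",
             "caption": "storefront with a single step and no ramp"}]
print(win_rate(examples, ["There is a single step and no ramp."], ["No ramp."]))
```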
2308.16900 | Thoranna Bender | Thoranna Bender, Simon Moe S{\o}rensen, Alireza Kashani, K. Eldjarn
Hjorleifsson, Grethe Hyldig, S{\o}ren Hauberg, Serge Belongie and Frederik
Warburg | Learning to Taste: A Multimodal Wine Dataset | Accepted to NeurIPS 2023. See project page:
https://thoranna.github.io/learning_to_taste/ | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present WineSensed, a large multimodal wine dataset for studying the
relations between visual perception, language, and flavor. The dataset
encompasses 897k images of wine labels and 824k reviews of wines curated from
the Vivino platform. It has over 350k unique vintages, annotated with year,
region, rating, alcohol percentage, price, and grape composition. We obtained
fine-grained flavor annotations on a subset by conducting a wine-tasting
experiment with 256 participants who were asked to rank wines based on their
similarity in flavor, resulting in more than 5k pairwise flavor distances. We
propose a low-dimensional concept embedding algorithm that combines human
experience with automatic machine similarity kernels. We demonstrate that this
shared concept embedding space improves upon separate embedding spaces for
coarse flavor classification (alcohol percentage, country, grape, price,
rating) and aligns with the intricate human perception of flavor.
| [
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:58:28 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 11:41:52 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Sep 2023 18:56:18 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Bender",
"Thoranna",
""
],
[
"Sørensen",
"Simon Moe",
""
],
[
"Kashani",
"Alireza",
""
],
[
"Hjorleifsson",
"K. Eldjarn",
""
],
[
"Hyldig",
"Grethe",
""
],
[
"Hauberg",
"Søren",
""
],
[
"Belongie",
"Serge",
""
],
[
"Warburg",
"Frederik",
""
]
]
| new_dataset | 0.999806 |
2309.05832 | Bowen Jiang | Mengti Sun, Bowen Jiang, Bibit Bianchini, Camillo Jose Taylor, Michael
Posa | Instance-Agnostic Geometry and Contact Dynamics Learning | IROS 2023 Workshop on Leveraging Models for Contact-Rich Manipulation | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents an instance-agnostic learning framework that fuses vision
with dynamics to simultaneously learn shape, pose trajectories, and physical
properties via the use of geometry as a shared representation. Unlike many
contact learning approaches that assume motion capture input and a known shape
prior for the collision model, our proposed framework learns an object's
geometric and dynamic properties from RGBD video, without requiring either
category-level or instance-level shape priors. We integrate a vision system,
BundleSDF, with a dynamics system, ContactNets, and propose a cyclic training
pipeline to use the output from the dynamics module to refine the poses and the
geometry from the vision module, using perspective reprojection. Experiments
demonstrate our framework's ability to learn the geometry and dynamics of rigid
and convex objects and improve upon the current tracking framework.
| [
{
"version": "v1",
"created": "Mon, 11 Sep 2023 21:18:15 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 04:55:04 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Sun",
"Mengti",
""
],
[
"Jiang",
"Bowen",
""
],
[
"Bianchini",
"Bibit",
""
],
[
"Taylor",
"Camillo Jose",
""
],
[
"Posa",
"Michael",
""
]
]
| new_dataset | 0.958607 |
2309.07014 | Adarsh Jagan Sathyamoorthy | Adarsh Jagan Sathyamoorthy, Kasun Weerakoon, Mohamed Elnoor, and
Dinesh Manocha | Using Lidar Intensity for Robot Navigation | 9 pages, 7 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present Multi-Layer Intensity Map, a novel 3D object representation for
robot perception and autonomous navigation. Intensity maps consist of multiple
stacked layers of 2D grid maps each derived from reflected point cloud
intensities corresponding to a certain height interval. The different layers of
intensity maps can be used to simultaneously estimate obstacles' height,
solidity/density, and opacity. We demonstrate that intensity maps can help
accurately differentiate obstacles that are safe to navigate through (e.g.
beaded/string curtains, pliable tall grass), from ones that must be avoided
(e.g. transparent surfaces such as glass walls, bushes, trees, etc.) in indoor
and outdoor environments. Further, to handle narrow passages, and navigate
through non-solid obstacles in dense environments, we propose an approach to
adaptively inflate or enlarge the obstacles detected on intensity maps based on
their solidity, and the robot's preferred velocity direction. We demonstrate
these improved navigation capabilities in real-world narrow, dense environments
using a real Turtlebot and Boston Dynamics Spot robots. We observe significant
increases in success rates to more than 50%, up to a 9.5% decrease in
normalized trajectory length, and up to a 22.6% increase in the F-score
compared to current navigation methods using other sensor modalities.
| [
{
"version": "v1",
"created": "Wed, 13 Sep 2023 15:12:52 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Sep 2023 20:03:55 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Sep 2023 16:24:02 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Sathyamoorthy",
"Adarsh Jagan",
""
],
[
"Weerakoon",
"Kasun",
""
],
[
"Elnoor",
"Mohamed",
""
],
[
"Manocha",
"Dinesh",
""
]
]
| new_dataset | 0.994712 |
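A minimal sketch of the Multi-Layer Intensity Map idea described above: bin the point cloud into height intervals and, for each interval, grid the reflected intensities on the 2D plane. The interval edges, grid resolution, and mean aggregation are illustrative assumptions, not the paper's parameters.
```python
import numpy as np

def multilayer_intensity_map(points, intensities,
                             height_edges=(0.0, 0.5, 1.0, 2.0),
                             grid_res=0.2, extent=10.0):
    """Stack one 2D grid per height interval (mean reflected intensity per cell)."""
    n_cells = int(2 * extent / grid_res)
    layers = np.zeros((len(height_edges) - 1, n_cells, n_cells))
    counts = np.zeros_like(layers)
    ix = ((points[:, 0] + extent) / grid_res).astype(int)
    iy = ((points[:, 1] + extent) / grid_res).astype(int)
    valid = (ix >= 0) & (ix < n_cells) & (iy >= 0) & (iy < n_cells)
    for k in range(len(height_edges) - 1):
        in_layer = valid & (points[:, 2] >= height_edges[k]) \
                         & (points[:, 2] < height_edges[k + 1])
        np.add.at(layers[k], (ix[in_layer], iy[in_layer]), intensities[in_layer])
        np.add.at(counts[k], (ix[in_layer], iy[in_layer]), 1.0)
    return layers / np.maximum(counts, 1.0)

pts = np.array([[1.0, 2.0, 0.3], [1.0, 2.0, 1.5], [-3.0, 0.5, 0.7]])
print(multilayer_intensity_map(pts, np.array([0.2, 0.9, 0.5])).shape)  # (3, 100, 100)
```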
2309.09301 | Lijun Li | Lijun Li, Linrui Tian, Xindi Zhang, Qi Wang, Bang Zhang, Mengyuan Liu,
and Chen Chen | RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose
Estimation | Accepted by ICCV 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The current interacting hand (IH) datasets are relatively simplistic in terms
of background and texture, with hand joints being annotated by a machine
annotator, which may result in inaccuracies, and the diversity of pose
distribution is limited. However, the variability of background, pose
distribution, and texture can greatly influence the generalization ability.
Therefore, we present a large-scale synthetic dataset RenderIH for interacting
hands with accurate and diverse pose annotations. The dataset contains 1M
photo-realistic images with varied backgrounds, perspectives, and hand
textures. To generate natural and diverse interacting poses, we propose a new
pose optimization algorithm. Additionally, for better pose estimation accuracy,
we introduce a transformer-based pose estimation network, TransHand, to
leverage the correlation between interacting hands and verify the effectiveness
of RenderIH in improving results. Our dataset is model-agnostic and can improve
the accuracy of any hand pose estimation method more than other real or
synthetic datasets. Experiments have shown that pretraining on our synthetic
data can significantly decrease the error from 6.76mm to 5.79mm, and our
TransHand surpasses contemporary methods. Our dataset and code are available at
https://github.com/adwardlee/RenderIH.
| [
{
"version": "v1",
"created": "Sun, 17 Sep 2023 15:30:58 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2023 02:12:40 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Sep 2023 16:02:13 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Li",
"Lijun",
""
],
[
"Tian",
"Linrui",
""
],
[
"Zhang",
"Xindi",
""
],
[
"Wang",
"Qi",
""
],
[
"Zhang",
"Bang",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Chen",
"Chen",
""
]
]
| new_dataset | 0.999839 |
2309.09979 | Haozhi Qi | Haozhi Qi, Brent Yi, Sudharshan Suresh, Mike Lambeta, Yi Ma, Roberto
Calandra, Jitendra Malik | General In-Hand Object Rotation with Vision and Touch | CoRL 2023; Website: https://haozhi.io/rotateit/ | null | null | null | cs.RO cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce RotateIt, a system that enables fingertip-based object rotation
along multiple axes by leveraging multimodal sensory inputs. Our system is
trained in simulation, where it has access to ground-truth object shapes and
physical properties. Then we distill it to operate on realistic yet noisy
simulated visuotactile and proprioceptive sensory inputs. These multimodal
inputs are fused via a visuotactile transformer, enabling online inference of
object shapes and physical properties during deployment. We show significant
performance improvements over prior methods and the importance of visual and
tactile sensing.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2023 17:59:25 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 08:22:15 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Qi",
"Haozhi",
""
],
[
"Yi",
"Brent",
""
],
[
"Suresh",
"Sudharshan",
""
],
[
"Lambeta",
"Mike",
""
],
[
"Ma",
"Yi",
""
],
[
"Calandra",
"Roberto",
""
],
[
"Malik",
"Jitendra",
""
]
]
| new_dataset | 0.997365 |
2309.10196 | Sudhir R. Ghorpade | Sudhir R. Ghorpade and Rati Ludhani | On the Minimum Distance, Minimum Weight Codewords, and the Dimension of
Projective Reed-Muller Codes | 24 pages; to appear in Adv. Math. Commun.; some typos corrected and a
reference added in this version | null | 10.3934/amc.2023035 | null | cs.IT math.CO math.IT | http://creativecommons.org/licenses/by/4.0/ | We give an alternative proof of the formula for the minimum distance of a
projective Reed-Muller code of an arbitrary order. It leads to a complete
characterization of the minimum weight codewords of a projective Reed-Muller
code. This is then used to determine the number of minimum weight codewords of
a projective Reed-Muller code. Various formulas for the dimension of a
projective Reed-Muller code, and their equivalences are also discussed.
| [
{
"version": "v1",
"created": "Mon, 18 Sep 2023 22:56:24 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Sep 2023 20:20:08 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Ghorpade",
"Sudhir R.",
""
],
[
"Ludhani",
"Rati",
""
]
]
| new_dataset | 0.999516 |
2309.13393 | Leonardo Saraceni | Leonardo Saraceni, Ionut M. Motoi, Daniele Nardi, Thomas A. Ciarfuglia | AgriSORT: A Simple Online Real-time Tracking-by-Detection framework for
robotics in precision agriculture | 8 pages, 5 figures, submitted to International Conference on Robotics
and Automation (ICRA) 2024. Code and dataset will be soon available on my
github. This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of multi-object tracking (MOT) consists in detecting and tracking
all the objects in a video sequence while keeping a unique identifier for each
object. It is a challenging and fundamental problem for robotics. In precision
agriculture the challenge of achieving a satisfactory solution is amplified by
extreme camera motion, sudden illumination changes, and strong occlusions. Most
modern trackers rely on the appearance of objects rather than motion for
association, which can be ineffective when most targets are static objects with
the same appearance, as in the agricultural case. To this end, on the trail of
SORT [5], we propose AgriSORT, a simple, online, real-time
tracking-by-detection pipeline for precision agriculture based only on motion
information that allows for accurate and fast propagation of tracks between
frames. The main focuses of AgriSORT are efficiency, flexibility, minimal
dependencies, and ease of deployment on robotic platforms. We test the proposed
pipeline on a novel MOT benchmark specifically tailored for the agricultural
context, based on video sequences taken in a table grape vineyard, particularly
challenging due to strong self-similarity and density of the instances. Both
the code and the dataset are available for future comparisons.
| [
{
"version": "v1",
"created": "Sat, 23 Sep 2023 14:35:45 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 08:32:50 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Saraceni",
"Leonardo",
""
],
[
"Motoi",
"Ionut M.",
""
],
[
"Nardi",
"Daniele",
""
],
[
"Ciarfuglia",
"Thomas A.",
""
]
]
| new_dataset | 0.99959 |
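A simplified stand-in for the motion-only tracking-by-detection idea described above: each track is propagated with a constant-velocity model and greedily associated to detections by IoU. The camera-motion compensation and the exact filter used by AgriSORT are not reproduced here.
```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

class Track:
    def __init__(self, box, tid):
        self.box, self.vel, self.id = np.asarray(box, float), np.zeros(4), tid
    def predict(self):                        # constant-velocity propagation
        self.box = self.box + self.vel
        return self.box
    def update(self, det):
        self.vel = 0.5 * self.vel + 0.5 * (np.asarray(det, float) - self.box)
        self.box = np.asarray(det, float)

def associate(tracks, detections, thr=0.3):
    """Greedy IoU matching between predicted tracks and current detections."""
    pairs, used = [], set()
    for t in tracks:
        pred = t.predict()
        scores = [(iou(pred, d), j) for j, d in enumerate(detections) if j not in used]
        if scores and max(scores)[0] > thr:
            _, j = max(scores)
            t.update(detections[j]); used.add(j); pairs.append((t.id, j))
    return pairs

tracks = [Track([0, 0, 10, 10], tid=1)]
print(associate(tracks, [[1, 1, 11, 11]]))    # [(1, 0)]
```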
2309.14074 | Eli\~a Batista | Eli\~a Batista, Paulo Coelho, Eduardo Alchieri, Fernando Dotti,
Fernando Pedone | FlexCast: genuine overlay-based atomic multicast | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Atomic multicast is a communication abstraction where messages are propagated
to groups of processes with reliability and order guarantees. Atomic multicast
is at the core of strongly consistent storage and transactional systems. This
paper presents FlexCast, the first genuine overlay-based atomic multicast
protocol. Genuineness captures the essence of atomic multicast in that only the
sender of a message and the message's destinations coordinate to order the
message, leading to efficient protocols. Overlay-based protocols restrict how
process groups can communicate. Limiting communication leads to simpler
protocols and reduces the amount of information each process must keep about
the rest of the system. FlexCast implements genuine atomic multicast using a
complete DAG overlay. We experimentally evaluate FlexCast in a geographically
distributed environment using gTPC-C, a variation of the TPC-C benchmark that
takes into account geographical distribution and locality. We show that, by
exploiting genuineness and workload locality, FlexCast outperforms
well-established atomic multicast protocols without the inherent communication
overhead of state-of-the-art non-genuine multicast protocols.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 12:09:54 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Sep 2023 14:21:28 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Sep 2023 08:51:30 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Batista",
"Eliã",
""
],
[
"Coelho",
"Paulo",
""
],
[
"Alchieri",
"Eduardo",
""
],
[
"Dotti",
"Fernando",
""
],
[
"Pedone",
"Fernando",
""
]
]
| new_dataset | 0.965638 |
2309.14181 | Haoning Wu Mr | Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao,
Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, Weisi Lin | Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level
Vision | 25 pages, 14 figures, 9 tables, preprint version | null | null | null | cs.CV cs.AI cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rapid evolution of Multi-modality Large Language Models (MLLMs) has
catalyzed a shift in computer vision from specialized models to general-purpose
foundation models. Nevertheless, there is still an inadequacy in assessing the
abilities of MLLMs on low-level visual perception and understanding. To address
this gap, we present Q-Bench, a holistic benchmark crafted to systematically
evaluate potential abilities of MLLMs on three realms: low-level visual
perception, low-level visual description, and overall visual quality
assessment. a) To evaluate the low-level perception ability, we construct the
LLVisionQA dataset, consisting of 2,990 diverse-sourced images, each equipped
with a human-asked question focusing on its low-level attributes. We then
measure the correctness of MLLMs on answering these questions. b) To examine
the description ability of MLLMs on low-level information, we propose the
LLDescribe dataset consisting of long expert-labelled golden low-level text
descriptions on 499 images, and a GPT-involved comparison pipeline between
outputs of MLLMs and the golden descriptions. c) Besides these two tasks, we
further measure their visual quality assessment ability to align with human
opinion scores. Specifically, we design a softmax-based strategy that enables
MLLMs to predict quantifiable quality scores, and evaluate them on various
existing image quality assessment (IQA) datasets. Our evaluation across the
three abilities confirms that MLLMs possess preliminary low-level visual
skills. However, these skills are still unstable and relatively imprecise,
indicating the need for specific enhancements on MLLMs towards these abilities.
We hope that our benchmark can encourage the research community to delve deeper
to discover and enhance these untapped potentials of MLLMs. Project Page:
https://vqassessment.github.io/Q-Bench.
| [
{
"version": "v1",
"created": "Mon, 25 Sep 2023 14:43:43 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 16:22:23 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Wu",
"Haoning",
""
],
[
"Zhang",
"Zicheng",
""
],
[
"Zhang",
"Erli",
""
],
[
"Chen",
"Chaofeng",
""
],
[
"Liao",
"Liang",
""
],
[
"Wang",
"Annan",
""
],
[
"Li",
"Chunyi",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Yan",
"Qiong",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Lin",
"Weisi",
""
]
]
| new_dataset | 0.999514 |
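The softmax-based scoring strategy described above can be sketched as follows: read the logits the MLLM assigns to opposite answer tokens (for instance "good" vs. "poor") when asked to rate an image, and take the softmax probability of the positive token as a continuous quality score. The two-token scale is an assumption made for illustration.
```python
import math

def softmax_quality_score(logit_good, logit_poor):
    """Map two answer-token logits to a continuous quality score in [0, 1]."""
    e_good, e_poor = math.exp(logit_good), math.exp(logit_poor)
    return e_good / (e_good + e_poor)

# Example: the MLLM slightly prefers "good" over "poor" for this image.
print(round(softmax_quality_score(2.1, 1.3), 3))   # ~0.69
```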
2309.15893 | Niki Najafi | Niki Najafi, Miranda Addie, Sarkis Meterissian, Marta Kersten-Oertel | Breamy: An augmented reality mHealth prototype for surgical
decision-making in breast cancer | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In 2020, according to WHO, breast cancer affected 2.3 million women
worldwide, resulting in 685,000 fatalities. By the end of the year,
approximately 7.8 million women worldwide had survived their breast cancer
making it the most widespread form of cancer globally. Surgical treatment
decisions, including choosing between oncoplastic options, often require quick
decision-making within an 8-week time frame. However, many women lack the
necessary knowledge and preparation for making such complex informed decisions.
Anxiety and unsatisfactory outcomes can result from inadequate decision-making
processes, leading to complications and the need for revision surgeries. Shared
decision-making and personalized decision aids have shown positive effects on
patient satisfaction and treatment outcomes. This paper introduces Breamy, a
prototype mobile health (mHealth) application that utilizes augmented reality
(AR) technology to assist breast cancer patients in making informed decisions.
The app provides 3D visualizations of different oncoplastic procedures, aiming
to improve confidence in surgical decision-making, reduce decisional regret,
and enhance patient well-being after surgery. To determine the perception of
the usefulness of Breamy, we collected data from 166 participants through an
online survey. The results suggest that Breamy has the potential to reduce
patients' anxiety levels and assist them during the decision-making process.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 17:56:01 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Najafi",
"Niki",
""
],
[
"Addie",
"Miranda",
""
],
[
"Meterissian",
"Sarkis",
""
],
[
"Kersten-Oertel",
"Marta",
""
]
]
| new_dataset | 0.988751 |
2309.15940 | Haonan Chang | Haonan Chang, Kowndinya Boyalakuntla, Shiyang Lu, Siwei Cai, Eric
Jing, Shreesh Keskar, Shijie Geng, Adeeb Abbas, Lifeng Zhou, Kostas Bekris,
Abdeslam Boularias | Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs | The code and dataset used for evaluation can be found at
https://github.com/changhaonan/OVSG.
This paper has been accepted by CoRL2023 | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present an Open-Vocabulary 3D Scene Graph (OVSG), a formal framework for
grounding a variety of entities, such as object instances, agents, and regions,
with free-form text-based queries. Unlike conventional semantic-based object
localization approaches, our system facilitates context-aware entity
localization, allowing for queries such as ``pick up a cup on a kitchen table"
or ``navigate to a sofa on which someone is sitting". In contrast to existing
research on 3D scene graphs, OVSG supports free-form text input and
open-vocabulary querying. Through a series of comparative experiments using the
ScanNet dataset and a self-collected dataset, we demonstrate that our proposed
approach significantly surpasses the performance of previous semantic-based
localization techniques. Moreover, we highlight the practical application of
OVSG in real-world robot navigation and manipulation experiments.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 18:32:29 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Chang",
"Haonan",
""
],
[
"Boyalakuntla",
"Kowndinya",
""
],
[
"Lu",
"Shiyang",
""
],
[
"Cai",
"Siwei",
""
],
[
"Jing",
"Eric",
""
],
[
"Keskar",
"Shreesh",
""
],
[
"Geng",
"Shijie",
""
],
[
"Abbas",
"Adeeb",
""
],
[
"Zhou",
"Lifeng",
""
],
[
"Bekris",
"Kostas",
""
],
[
"Boularias",
"Abdeslam",
""
]
]
| new_dataset | 0.980599 |
2309.15941 | Wenyu Han | Wenyu Han, Congcong Wen, Lazarus Chok, Yan Liang Tan, Sheung Lung
Chan, Hang Zhao, Chen Feng | AutoEncoding Tree for City Generation and Applications | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | City modeling and generation have attracted an increased interest in various
applications, including gaming, urban planning, and autonomous driving. Unlike
previous works focused on the generation of single objects or indoor scenes,
the huge volumes of spatial data in cities pose a challenge to the generative
models. Furthermore, the scarcity of publicly available 3D real-world city
datasets also hinders the development of methods for city generation. In this paper, we first
collect over 3,000,000 geo-referenced objects for the city of New York, Zurich,
Tokyo, Berlin, Boston and several other large cities. Based on this dataset, we
propose AETree, a tree-structured auto-encoder neural network, for city
generation. Specifically, we first propose a novel Spatial-Geometric Distance
(SGD) metric to measure the similarity between building layouts and then
construct a binary tree over the raw geometric data of building based on the
SGD metric. Next, we present a tree-structured network whose encoder learns to
extract and merge spatial information from bottom-up iteratively. The resulting
global representation is reversely decoded for reconstruction or generation. To
address the issue of long-dependency as the level of the tree increases, a Long
Short-Term Memory (LSTM) Cell is employed as a basic network element of the
proposed AETree. Moreover, we introduce a novel metric, Overlapping Area Ratio
(OAR), to quantitatively evaluate the generation results. Experiments on the
collected dataset demonstrate the effectiveness of the proposed model on 2D and
3D city generation. Furthermore, the latent features learned by AETree can
serve downstream urban planning applications.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 18:36:56 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Han",
"Wenyu",
""
],
[
"Wen",
"Congcong",
""
],
[
"Chok",
"Lazarus",
""
],
[
"Tan",
"Yan Liang",
""
],
[
"Chan",
"Sheung Lung",
""
],
[
"Zhao",
"Hang",
""
],
[
"Feng",
"Chen",
""
]
]
| new_dataset | 0.999708 |
2309.15946 | Jacek Cyranka | Jacek Cyranka, Szymon Haponiuk | Unified Long-Term Time-Series Forecasting Benchmark | null | null | null | null | cs.LG cs.AI cs.NE math.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In order to support the advancement of machine learning methods for
predicting time-series data, we present a comprehensive dataset designed
explicitly for long-term time-series forecasting. We incorporate a collection
of datasets obtained from diverse, dynamic systems and real-life records. Each
dataset is standardized by dividing it into training and test trajectories with
predetermined lookback lengths. We include trajectories of length up to $2000$
to ensure a reliable evaluation of long-term forecasting capabilities. To
determine the most effective model in diverse scenarios, we conduct an
extensive benchmarking analysis using classical and state-of-the-art models,
namely LSTM, DeepAR, NLinear, N-Hits, PatchTST, and LatentODE. Our findings
reveal intriguing performance comparisons among these models, highlighting the
dataset-dependent nature of model effectiveness. Notably, we introduce a custom
latent NLinear model and enhance DeepAR with a curriculum learning phase. Both
consistently outperform their vanilla counterparts.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 18:59:00 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Cyranka",
"Jacek",
""
],
[
"Haponiuk",
"Szymon",
""
]
]
| new_dataset | 0.987659 |
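The standardization step described above (trajectories divided into training and test segments with predetermined lookback lengths) amounts to window slicing; the lookback and horizon lengths below are arbitrary placeholders.
```python
import numpy as np

def make_windows(trajectory, lookback=96, horizon=24):
    """Slice one trajectory of shape (T, d) into (context, target) forecasting pairs."""
    xs, ys = [], []
    for start in range(0, len(trajectory) - lookback - horizon + 1):
        xs.append(trajectory[start : start + lookback])
        ys.append(trajectory[start + lookback : start + lookback + horizon])
    return np.stack(xs), np.stack(ys)

traj = np.sin(np.linspace(0, 20, 2000))[:, None]    # toy length-2000 trajectory
X, Y = make_windows(traj)
print(X.shape, Y.shape)   # (1881, 96, 1) (1881, 24, 1)
```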
2309.15951 | Xiaoqian Liu | Xiaoqian Liu, Yuhan Dong, Yiqing Li, Yousi Lin, Xun Yang and Ming Gan | IEEE 802.11be Wi-Fi 7: Feature Summary and Performance Evaluation | 6 pages, 4 figures | null | null | null | cs.NI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the pace of commercial scale application of Wi-Fi 6 accelerates, the
IEEE 802.11 Working Group is about to complete the development of a new
amendment standard IEEE 802.11be -- Extremely High Throughput (EHT), also known
as Wi-Fi 7, which can be used to meet the throughput demand of 4K/8K video, up
to tens of Gbps, and of low-latency video applications such as virtual
reality (VR) and augmented reality (AR). Wi-Fi 7 not only scales Wi-Fi 6 with
doubled bandwidth, but also supports real-time applications, which brings
revolutionary changes to Wi-Fi. In this article, we start by introducing the
main objectives and timeline of Wi-Fi 7 and then list the latest key techniques
which promote the performance improvement of Wi-Fi 7. Finally, we validate the
most critical objectives of Wi-Fi 7 -- the potential for up to 30 Gbps throughput
and lower latency. System-level simulation results suggest that by combining
the new techniques, Wi-Fi 7 achieves 30 Gbps throughput and lower latency than
Wi-Fi 6.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 19:09:19 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Liu",
"Xiaoqian",
""
],
[
"Dong",
"Yuhan",
""
],
[
"Li",
"Yiqing",
""
],
[
"Lin",
"Yousi",
""
],
[
"Yang",
"Xun",
""
],
[
"Gan",
"Ming",
""
]
]
| new_dataset | 0.998827 |
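The 30 Gbps figure can be put in context with standard PHY data-rate arithmetic. The parameter values below (data subcarriers for a 320 MHz channel, 4096-QAM, coding rate, symbol duration, spatial streams) are commonly cited 802.11be numbers assumed here for illustration, not figures taken from this article.
```python
# Rough PHY-rate arithmetic under assumed parameter values:
# 320 MHz channel ~ 3920 data subcarriers, 4096-QAM = 12 bits/subcarrier,
# coding rate 5/6, OFDM symbol 12.8 us + 0.8 us guard interval.
data_subcarriers = 3920
bits_per_subcarrier = 12
coding_rate = 5 / 6
symbol_s = 12.8e-6 + 0.8e-6
spatial_streams = 16

per_stream = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_s
print(f"{per_stream / 1e9:.2f} Gbps per stream, "
      f"{spatial_streams * per_stream / 1e9:.1f} Gbps with {spatial_streams} streams")
# ~2.88 Gbps per stream, ~46.1 Gbps peak PHY rate; realistic MAC-layer throughput
# is lower, which is consistent with the ~30 Gbps system-level result above.
```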
2309.15955 | Ryan Posh | Ryan R. Posh, Jonathan A. Tittle, David J. Kelly, James P.
Schmiedeler, and Patrick M. Wensing | Hybrid Volitional Control of a Robotic Transtibial Prosthesis using a
Phase Variable Impedance Controller | 7 pages, 7 figures, submitted to ICRA 2024 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For robotic transtibial prosthesis control, the global kinematics of the
tibia can be used to monitor the progression of the gait cycle and command
smooth and continuous actuation. In this work, these global tibia kinematics
are used to define a phase variable impedance controller (PVIC), which is then
implemented as the nonvolitional base controller within a hybrid volitional
control framework (PVI-HVC). The gait progression estimation and biomechanic
performance of one able-bodied individual walking on a robotic ankle prosthesis
via a bypass adapter are compared for three control schemes: a passive
benchmark controller, PVIC, and PVI-HVC. The different actuation of each
controller had a direct effect on the global tibia kinematics, but the average
deviations between the estimated and ground-truth gait percentage were 1.6%,
1.8%, and 2.1%, respectively, for each controller. Both PVIC and PVI-HVC
produced good agreement with able-bodied kinematic and kinetic references. As
designed, PVI-HVC results were similar to those of PVIC when the user used low
volitional intent, but yielded higher peak plantarflexion, peak torque, and
peak power when the user commanded high volitional input in late stance. This
additional torque and power also allowed the user to volitionally and
continuously achieve activities beyond level walking, such as ascending ramps,
avoiding obstacles, standing on tip-toes, and tapping the foot. In this way,
PVI-HVC offers the kinetic and kinematic performance of the PVIC during level
ground walking, along with the freedom to volitionally pursue alternative
activities.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 19:12:48 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Posh",
"Ryan R.",
""
],
[
"Tittle",
"Jonathan A.",
""
],
[
"Kelly",
"David J.",
""
],
[
"Schmiedeler",
"James P.",
""
],
[
"Wensing",
"Patrick M.",
""
]
]
| new_dataset | 0.991641 |
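The phase variable impedance controller described above can be sketched as an impedance law whose equilibrium angle and stiffness are scheduled by a gait phase estimated from global tibia kinematics. The phase definition and the schedules below are placeholders for illustration, not the paper's calibrated parameters.
```python
import math

def gait_phase_from_tibia(tibia_angle, tibia_rate):
    """Toy phase variable: polar angle of the tibia phase portrait, mapped to [0, 1)."""
    return (math.atan2(-tibia_rate, tibia_angle) % (2 * math.pi)) / (2 * math.pi)

def impedance_torque(phase, ankle_angle, ankle_rate):
    """tau = k(phi) * (theta_eq(phi) - theta) - b * theta_dot, with placeholder schedules."""
    theta_eq = 0.15 * math.sin(2 * math.pi * phase)   # scheduled equilibrium angle [rad]
    k = 150.0 + 100.0 * phase                         # scheduled stiffness [Nm/rad]
    b = 2.0                                           # damping [Nm*s/rad]
    return k * (theta_eq - ankle_angle) - b * ankle_rate

phi = gait_phase_from_tibia(tibia_angle=0.1, tibia_rate=-0.8)
print(round(phi, 3), round(impedance_torque(phi, ankle_angle=0.05, ankle_rate=0.2), 2))
```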
2309.15996 | Hugo Lefeuvre | Hugo Lefeuvre, Gaulthier Gain, Vlad-Andrei B\u{a}doiu, Daniel Dinca,
Vlad-Radu Schiller, Costin Raiciu, Felipe Huici, Pierre Olivier | Loupe: Driving the Development of OS Compatibility Layers | Accepted to appear at ASPLOS'24
(https://www.asplos-conference.org/asplos2024/) | null | null | null | cs.OS | http://creativecommons.org/licenses/by/4.0/ | Supporting mainstream applications is fundamental for a new OS to have
impact. It is generally achieved by developing a layer of compatibility
allowing applications developed for a mainstream OS like Linux to run
unmodified on the new OS. Building such a layer, as we show, results in large
engineering inefficiencies due to the lack of efficient methods to precisely
measure the OS features required by a set of applications.
We propose Loupe, a novel method based on dynamic analysis that determines
the OS features that need to be implemented in a prototype OS to bring support
for a target set of applications and workloads. Loupe guides and boosts OS
developers as they build compatibility layers, prioritizing which features to
implement in order to quickly support many applications as early as possible.
We apply our methodology to 100+ applications and several OSes currently under
development, demonstrating high engineering effort savings vs. existing
approaches: for example, for the 62 applications supported by the OSv kernel,
we show that using Loupe, would have required implementing only 37 system calls
vs. 92 for the non-systematic process followed by OSv developers.
We study our measurements and extract novel key insights. Overall, we show
that the burden of building compatibility layers is significantly less than
what previous works suggest: in some cases, only as few as 20% of system calls
reported by static analysis, and 50% of those reported by naive dynamic
analysis need an implementation for an application to successfully run standard
benchmarks.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 20:21:37 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Lefeuvre",
"Hugo",
""
],
[
"Gain",
"Gaulthier",
""
],
[
"Bădoiu",
"Vlad-Andrei",
""
],
[
"Dinca",
"Daniel",
""
],
[
"Schiller",
"Vlad-Radu",
""
],
[
"Raiciu",
"Costin",
""
],
[
"Huici",
"Felipe",
""
],
[
"Olivier",
"Pierre",
""
]
]
| new_dataset | 0.987043 |
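The measurement step behind the approach above (determine which OS features an application actually exercises under a benchmark workload) can be approximated by counting distinct system calls in a dynamic trace. The sketch below parses an strace-style log; the log-format handling is simplified and is not Loupe's actual analysis.
```python
# Count unique system calls observed in an `strace -f` style log; a simplified
# stand-in for the measurement step, not Loupe's analysis.
import re
from collections import Counter

SYSCALL_RE = re.compile(r"^(?:\[pid\s+\d+\]\s+)?(\w+)\(")

def syscalls_used(strace_log_lines):
    counts = Counter()
    for line in strace_log_lines:
        m = SYSCALL_RE.match(line.strip())
        if m:
            counts[m.group(1)] += 1
    return counts

log = [
    'openat(AT_FDCWD, "index.html", O_RDONLY) = 3',
    'read(3, "<html>", 4096) = 6',
    "[pid  1234] epoll_wait(4, [], 64, 0) = 0",
    'read(3, "", 4096) = 0',
]
used = syscalls_used(log)
print(sorted(used))                                   # ['epoll_wait', 'openat', 'read']
print(len(used), "distinct syscalls to support for this workload")
```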
2309.16019 | Matteo Poggi | Chaoqiang Zhao, Matteo Poggi, Fabio Tosi, Lei Zhou, Qiyu Sun, Yang
Tang, Stefano Mattoccia | GasMono: Geometry-Aided Self-Supervised Monocular Depth Estimation for
Indoor Scenes | ICCV 2023. Code: https://github.com/zxcqlf/GasMono | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper tackles the challenges of self-supervised monocular depth
estimation in indoor scenes caused by large rotation between frames and low
texture. We ease the learning process by obtaining coarse camera poses from
monocular sequences through multi-view geometry to deal with the former.
However, we found that limited by the scale ambiguity across different scenes
in the training dataset, a na\"ive introduction of geometric coarse poses
cannot play a positive role in performance improvement, which is
counter-intuitive. To address this problem, we propose to refine those poses
during training through rotation and translation/scale optimization. To soften
the effect of the low texture, we combine the global reasoning of vision
transformers with an overfitting-aware, iterative self-distillation mechanism,
providing more accurate depth guidance coming from the network itself.
Experiments on NYUv2, ScanNet, 7scenes, and KITTI datasets support the
effectiveness of each component in our framework, which sets a new
state-of-the-art for indoor self-supervised monocular depth estimation, as well
as outstanding generalization ability. Code and models are available at
https://github.com/zxcqlf/GasMono
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 17:59:57 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Zhao",
"Chaoqiang",
""
],
[
"Poggi",
"Matteo",
""
],
[
"Tosi",
"Fabio",
""
],
[
"Zhou",
"Lei",
""
],
[
"Sun",
"Qiyu",
""
],
[
"Tang",
"Yang",
""
],
[
"Mattoccia",
"Stefano",
""
]
]
| new_dataset | 0.999852 |
2309.16020 | Gaurav Kumar Nayak | Vicente Vivanco Cepeda, Gaurav Kumar Nayak, Mubarak Shah | GeoCLIP: Clip-Inspired Alignment between Locations and Images for
Effective Worldwide Geo-localization | Accepted at NeurIPS 2023 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Worldwide Geo-localization aims to pinpoint the precise location of images
taken anywhere on Earth. This task has considerable challenges due to immense
variation in geographic landscapes. The image-to-image retrieval-based
approaches fail to solve this problem on a global scale as it is not feasible
to construct a large gallery of images covering the entire world. Instead,
existing approaches divide the globe into discrete geographic cells,
transforming the problem into a classification task. However, their performance
is limited by the predefined classes and often results in inaccurate
localizations when an image's location significantly deviates from its class
center. To overcome these limitations, we propose GeoCLIP, a novel
CLIP-inspired Image-to-GPS retrieval approach that enforces alignment between
the image and its corresponding GPS locations. GeoCLIP's location encoder
models the Earth as a continuous function by employing positional encoding
through random Fourier features and constructing a hierarchical representation
that captures information at varying resolutions to yield a semantically rich
high-dimensional feature suitable to use even beyond geo-localization. To the
best of our knowledge, this is the first work employing GPS encoding for
geo-localization. We demonstrate the efficacy of our method via extensive
experiments and ablations on benchmark datasets. We achieve competitive
performance with just 20% of training data, highlighting its effectiveness even
in limited-data settings. Furthermore, we qualitatively demonstrate
geo-localization using a text query by leveraging CLIP backbone of our image
encoder.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 20:54:56 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Cepeda",
"Vicente Vivanco",
""
],
[
"Nayak",
"Gaurav Kumar",
""
],
[
"Shah",
"Mubarak",
""
]
]
| new_dataset | 0.999413 |
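The location-encoder idea described above (positional encoding of GPS through random Fourier features at multiple resolutions) can be sketched as below. The frequency scales, output dimension, and the use of raw latitude/longitude are illustrative assumptions rather than GeoCLIP's trained configuration.
```python
import numpy as np

rng = np.random.default_rng(0)

def rff_gps_encoding(latlon_deg, n_features=256, sigmas=(1.0, 0.1, 0.01)):
    """Multi-scale random Fourier features of (lat, lon); one block per sigma."""
    x = np.radians(np.asarray(latlon_deg, float))           # (N, 2)
    blocks = []
    for sigma in sigmas:                                     # coarse -> fine resolution
        B = rng.standard_normal((2, n_features // 2)) / sigma
        proj = x @ B                                         # (N, n_features // 2)
        blocks.append(np.concatenate([np.sin(proj), np.cos(proj)], axis=1))
    return np.concatenate(blocks, axis=1)                    # (N, n_features * len(sigmas))

coords = [[40.7128, -74.0060], [47.3769, 8.5417]]            # New York, Zurich
print(rff_gps_encoding(coords).shape)                        # (2, 768)
```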
2309.16031 | Taehyeon Kim | Gyeongmin Kim, Taehyeon Kim, Shyam Sundar Kannan, Vishnunandan L. N.
Venkatesh, Donghan Kim, Byung-Cheol Min | DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs | Submitted to ICRA 2024 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Mobile robots often rely on pre-existing maps for effective path planning and
navigation. However, when these maps are unavailable, particularly in
unfamiliar environments, a different approach becomes essential. This paper
introduces DynaCon, a novel system designed to provide mobile robots with
contextual awareness and dynamic adaptability during navigation, eliminating
the reliance on traditional maps. DynaCon integrates real-time feedback with an
object server, prompt engineering, and navigation modules. By harnessing the
capabilities of Large Language Models (LLMs), DynaCon not only understands
patterns within given numeric series but also excels at categorizing objects
into matched spaces. This facilitates a dynamic path planner imbued with
contextual awareness. We validated the effectiveness of DynaCon through an
experiment where a robot successfully navigated to its goal using reasoning.
Source code and experiment videos for this work can be found at:
https://sites.google.com/view/dynacon.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 21:21:40 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Kim",
"Gyeongmin",
""
],
[
"Kim",
"Taehyeon",
""
],
[
"Kannan",
"Shyam Sundar",
""
],
[
"Venkatesh",
"Vishnunandan L. N.",
""
],
[
"Kim",
"Donghan",
""
],
[
"Min",
"Byung-Cheol",
""
]
]
| new_dataset | 0.993348 |
2309.16057 | Jia Huang | Jia Huang, Alvika Gautam, Junghun Choi, Srikanth Saripalli | WiDEVIEW: An UltraWideBand and Vision Dataset for Deciphering
Pedestrian-Vehicle Interactions | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust and accurate tracking and localization of road users like pedestrians
and cyclists is crucial to ensure safe and effective navigation of Autonomous
Vehicles (AVs), particularly so in urban driving scenarios with complex
vehicle-pedestrian interactions. Existing datasets that are useful to
investigate vehicle-pedestrian interactions are mostly image-centric and thus
vulnerable to vision failures. In this paper, we investigate Ultra-wideband
(UWB) as an additional modality for road users' localization to enable a better
understanding of vehicle-pedestrian interactions. We present WiDEVIEW, the
first multimodal dataset that integrates LiDAR, three RGB cameras, GPS/IMU, and
UWB sensors for capturing vehicle-pedestrian interactions in an urban
autonomous driving scenario. Ground truth image annotations are provided in the
form of 2D bounding boxes and the dataset is evaluated on standard 2D object
detection and tracking algorithms. The feasibility of UWB is evaluated for
typical traffic scenarios in both line-of-sight and non-line-of-sight
conditions using LiDAR as ground truth. We establish that UWB range data has
comparable accuracy with LiDAR with an error of 0.19 meters and reliable
anchor-tag range data for up to 40 meters in line-of-sight conditions. UWB
performance for non-line-of-sight conditions depends on the nature of the
obstruction (trees vs. buildings). Further, we provide a qualitative analysis
of UWB performance for scenarios susceptible to intermittent vision failures.
The dataset can be downloaded via https://github.com/unmannedlab/UWB_Dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 22:44:47 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Huang",
"Jia",
""
],
[
"Gautam",
"Alvika",
""
],
[
"Choi",
"Junghun",
""
],
[
"Saripalli",
"Srikanth",
""
]
]
| new_dataset | 0.999814 |
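The UWB-versus-LiDAR comparison reported above reduces to per-sample range-error statistics once the two streams are time-aligned; the alignment is assumed and the numbers below are toy values, not dataset measurements.
```python
import numpy as np

def range_error_stats(uwb_ranges_m, lidar_ranges_m):
    """Mean absolute error and RMSE between time-aligned UWB and LiDAR ranges."""
    uwb = np.asarray(uwb_ranges_m, float)
    lidar = np.asarray(lidar_ranges_m, float)
    err = uwb - lidar
    return {"mae": float(np.mean(np.abs(err))),
            "rmse": float(np.sqrt(np.mean(err ** 2)))}

uwb =   [5.12, 9.87, 20.34, 39.60]     # toy line-of-sight samples
lidar = [5.00, 10.02, 20.10, 39.80]
print(range_error_stats(uwb, lidar))   # errors on the order of ~0.2 m
```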
2309.16081 | Chao Liu | Chao Liu, Andrea Moncada, Hanna Matusik, Deniz Irem Erus, and Daniela
Rus | A Modular Bio-inspired Robotic Hand with High Sensitivity | 7 pages, 13 figures, IEEE RoboSoft 2023 | 2023 IEEE International Conference on Soft Robotics (RoboSoft),
Singapore, Singapore, 2023, pp. 1-7 | 10.1109/RoboSoft55895.2023.10121946 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While parallel grippers and multi-fingered robotic hands are well developed
and commonly used in structured settings, it remains a challenge in robotics to
design a highly articulated robotic hand that can be comparable to human hands
to handle various daily manipulation and grasping tasks. Dexterity usually
requires more actuators but also leads to a more sophisticated mechanism design
and is more expensive to fabricate and maintain. Soft materials are able to
provide compliance and safety when interacting with the physical world but are
hard to model. This work presents a hybrid bio-inspired robotic hand that
combines soft materials and rigid elements. Sensing is integrated into the rigid
bodies resulting in a simple way for pose estimation with high sensitivity. The
proposed hand is in a modular structure allowing for rapid fabrication and
programming. The fabrication process is carefully designed so that a full hand
can be made with low-cost materials and assembled in an efficient manner. We
demonstrate the dexterity of the hand by successfully performing human grasp
types.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 00:41:53 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Liu",
"Chao",
""
],
[
"Moncada",
"Andrea",
""
],
[
"Matusik",
"Hanna",
""
],
[
"Erus",
"Deniz Irem",
""
],
[
"Rus",
"Daniela",
""
]
]
| new_dataset | 0.997019 |
2309.16137 | Yuanmin Tang | Yuanmin Tang, Jing Yu, Keke Gai, Zhuang Jiamin, Gang Xiong, Yue Hu and
Qi Wu | Context-I2W: Mapping Images to Context-dependent Words for Accurate
Zero-Shot Composed Image Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Different from Composed Image Retrieval task that requires expensive labels
for training task-specific models, Zero-Shot Composed Image Retrieval (ZS-CIR)
involves diverse tasks with a broad range of visual content manipulation intent
that could be related to domain, scene, object, and attribute. The key
challenge for ZS-CIR tasks is to learn a more accurate image representation
that has adaptive attention to the reference image for various manipulation
descriptions. In this paper, we propose a novel context-dependent mapping
network, named Context-I2W, for adaptively converting description-relevant
Image information into a pseudo-word token composed of the description for
accurate ZS-CIR. Specifically, an Intent View Selector first dynamically learns
a rotation rule to map the identical image to a task-specific manipulation
view. Then a Visual Target Extractor further captures local information
covering the main targets in ZS-CIR tasks under the guidance of multiple
learnable queries. The two complementary modules work together to map an image
to a context-dependent pseudo-word token without extra supervision. Our model
shows strong generalization ability on four ZS-CIR tasks, including domain
conversion, object composition, object manipulation, and attribute
manipulation. It obtains consistent and significant performance boosts ranging
from 1.88% to 3.60% over the best methods and achieves new state-of-the-art
results on ZS-CIR. Our code is available at
https://github.com/Pter61/context_i2w.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 03:35:25 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Tang",
"Yuanmin",
""
],
[
"Yu",
"Jing",
""
],
[
"Gai",
"Keke",
""
],
[
"Jiamin",
"Zhuang",
""
],
[
"Xiong",
"Gang",
""
],
[
"Hu",
"Yue",
""
],
[
"Wu",
"Qi",
""
]
]
| new_dataset | 0.998321 |
2309.16141 | Yuanmin Tang | Yuanmin Tang, Jing Yu, Keke Gai, Yujing Wang, Yue Hu, Gang Xiong and
Qi Wu | Align before Search: Aligning Ads Image to Text for Accurate Cross-Modal
Sponsored Search | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-Modal sponsored search displays multi-modal advertisements (ads) when
consumers look for desired products by natural language queries in search
engines. Since multi-modal ads bring complementary details for query-ads
matching, the ability to align ads-specific information in both images and
texts is crucial for accurate and flexible sponsored search. Conventional
research mainly studies from the view of modeling the implicit correlations
between images and texts for query-ads matching, ignoring the alignment of
detailed product information and resulting in suboptimal search performance. In
this work, we propose a simple alignment network for explicitly mapping
fine-grained visual parts in ads images to the corresponding text, which
leverages the co-occurrence structure consistency between vision and language
spaces without requiring expensive labeled training data. Moreover, we propose
a novel model for cross-modal sponsored search that effectively conducts the
cross-modal alignment and query-ads matching in two separate processes. In this
way, the model matches the multi-modal input in the same language space,
resulting in a superior performance with merely half of the training data. Our
model outperforms the state-of-the-art models by 2.57% on a large commercial
dataset. Besides sponsored search, our alignment method is applicable for
general cross-modal search. We study a typical cross-modal retrieval task on
the MSCOCO dataset, which achieves consistent performance improvement and
proves the generalization ability of our method. Our code is available at
https://github.com/Pter61/AlignCMSS/
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 03:43:57 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Tang",
"Yuanmin",
""
],
[
"Yu",
"Jing",
""
],
[
"Gai",
"Keke",
""
],
[
"Wang",
"Yujing",
""
],
[
"Hu",
"Yue",
""
],
[
"Xiong",
"Gang",
""
],
[
"Wu",
"Qi",
""
]
]
| new_dataset | 0.983972 |
2309.16162 | Hitoshi Teshima | Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi
Kawasaki, Katsushi Ikeuchi | ACT2G: Attention-based Contrastive Learning for Text-to-Gesture
Generation | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent increase in remote work, online meetings, and tele-operation tasks
has made people realize that gestures for avatars and communication robots are
more important than previously thought. Gesture is one of the key factors in
achieving smooth and natural communication between humans and AI systems and
has been intensively researched. Current gesture generation methods are mostly
based on deep neural networks using text, audio, and other information as
input; however, they generate gestures mainly based on audio, producing what
are called beat gestures. Although beat gestures account for more than 70% of
actual human gestures, content-based gestures sometimes play an important role
in making avatars more realistic and human-like. In this paper, we propose an
attention-based contrastive learning method for text-to-gesture generation (ACT2G), where
generated gestures represent content of the text by estimating attention weight
for each word from the input text. In the method, since text and gesture
features calculated by the attention weight are mapped to the same latent space
by contrastive learning, once text is given as input, the network outputs a
feature vector which can be used to generate gestures related to the content.
User study confirmed that the gestures generated by ACT2G were better than
existing methods. In addition, it was demonstrated that wide variation of
gestures were generated from the same text by changing attention weights by
creators.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 04:29:26 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Teshima",
"Hitoshi",
""
],
[
"Wake",
"Naoki",
""
],
[
"Thomas",
"Diego",
""
],
[
"Nakashima",
"Yuta",
""
],
[
"Kawasaki",
"Hiroshi",
""
],
[
"Ikeuchi",
"Katsushi",
""
]
]
| new_dataset | 0.998567 |
2309.16166 | Stuart Armstrong | Stuart Armstrong and Alexandre Maranh\~ao and Oliver Daniels-Koch and
Patrick Leask and Rebecca Gorman | CoinRun: Solving Goal Misgeneralisation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Goal misgeneralisation is a key challenge in AI alignment -- the task of
getting powerful Artificial Intelligences to align their goals with human
intentions and human morality. In this paper, we show how the ACE (Algorithm
for Concept Extrapolation) agent can solve one of the key standard challenges
in goal misgeneralisation: the CoinRun challenge. It uses no new reward
information in the new environment. This points to how autonomous agents could
be trusted to act in human interests, even in novel and critical situations.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 04:43:39 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Armstrong",
"Stuart",
""
],
[
"Maranhão",
"Alexandre",
""
],
[
"Daniels-Koch",
"Oliver",
""
],
[
"Leask",
"Patrick",
""
],
[
"Gorman",
"Rebecca",
""
]
]
| new_dataset | 0.959322 |
2309.16172 | Guangyuan Hu | Guangyuan Hu, Ruby B. Lee | Random and Safe Cache Architecture to Defeat Cache Timing Attacks | null | null | null | null | cs.CR cs.AR | http://creativecommons.org/licenses/by/4.0/ | Caches have been exploited to leak secret information due to the different
times they take to handle memory accesses. Cache timing attacks include
non-speculative cache side-channel and covert-channel attacks as well as
cache-based speculative execution attacks. We first present a systematic view of the attack
and defense space and show that no existing defense has addressed both
speculative and non-speculative cache timing attack families, which we do in
this paper. We propose Random and Safe (RaS) cache architectures to decorrelate
the cache state changes from memory requests. RaS fills the cache with ``safe''
cache lines that are likely to be used in the future, rather than with
demand-fetched, security-sensitive lines. RaS captures a group of safe
addresses during runtime and fetches addresses randomly displaced from these
addresses. Our proposed RaS architecture is flexible to allow
security-performance trade-offs. We show different designs of RaS architectures
that can defeat cache side-channel attacks and cache-based speculative
execution attacks. The RaS variant against cache-based speculative execution
attacks has 4.2% average performance overhead and other RaS variants against
both attack families have 7.9% to 45.2% average overhead. For some benchmarks,
RaS defenses improve the performance while providing security.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 05:08:16 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Hu",
"Guangyuan",
""
],
[
"Lee",
"Ruby B.",
""
]
]
| new_dataset | 0.993627 |
2309.16189 | Lu Dai | Lu Dai, Liqian Ma, Shenhan Qian, Hao Liu, Ziwei Liu, Hui Xiong | Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing | ICCV 2023 Poster | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we define and study a new Cloth2Body problem which has a goal
of generating 3D human body meshes from a 2D clothing image. Unlike the
existing human mesh recovery problem, Cloth2Body needs to address new and
emerging challenges raised by the partial observation of the input and the high
diversity of the output. Indeed, there are three specific challenges. First,
how to locate and pose human bodies into the clothes. Second, how to
effectively estimate body shapes out of various clothing types. Finally, how to
generate diverse and plausible results from a 2D clothing image. To this end,
we propose an end-to-end framework that can accurately estimate 3D body mesh
parameterized by pose and shape from a 2D clothing image. Along this line, we
first utilize Kinematics-aware Pose Estimation to estimate body pose
parameters. 3D skeleton is employed as a proxy followed by an inverse
kinematics module to boost the estimation accuracy. We additionally design an
adaptive depth trick to better align the re-projected 3D mesh with the 2D
clothing image by disentangling the effects of object size and camera extrinsics. Next,
we propose Physics-informed Shape Estimation to estimate body shape parameters.
3D shape parameters are predicted based on partial body measurements estimated
from RGB image, which not only improves pixel-wise human-cloth alignment, but
also enables flexible user editing. Finally, we design an Evolution-based pose
generation method, a skeleton-transplanting approach inspired by genetic
algorithms to generate diverse reasonable poses during inference. As shown by
experimental results on both synthetic and real-world data, the proposed
framework achieves state-of-the-art performance and can effectively recover
natural and diverse 3D body meshes from 2D images that align well with
clothing.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 06:18:38 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Dai",
"Lu",
""
],
[
"Ma",
"Liqian",
""
],
[
"Qian",
"Shenhan",
""
],
[
"Liu",
"Hao",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Xiong",
"Hui",
""
]
]
| new_dataset | 0.999613 |
2309.16202 | Dhiraj Amin | Dhiraj Amin, Sharvari Govilkar, Sagar Kulkarni, Yash Shashikant Lalit,
Arshi Ajaz Khwaja, Daries Xavier, Sahil Girijashankar Gupta | Marathi-English Code-mixed Text Generation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code-mixing, the blending of linguistic elements from distinct languages to
form meaningful sentences, is common in multilingual settings, yielding hybrid
languages like Hinglish and Minglish. Marathi, India's third most spoken
language, often integrates English for precision and formality. Developing
code-mixed language systems, like Marathi-English (Minglish), faces resource
constraints. This research introduces a Marathi-English code-mixed text
generation algorithm, assessed with Code Mixing Index (CMI) and Degree of Code
Mixing (DCM) metrics. Across 2987 code-mixed questions, it achieved an average
CMI of 0.2 and an average DCM of 7.4, indicating effective and comprehensible
code-mixed sentences. These results offer potential for enhanced NLP tools,
bridging linguistic gaps in multilingual societies.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 06:51:26 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Amin",
"Dhiraj",
""
],
[
"Govilkar",
"Sharvari",
""
],
[
"Kulkarni",
"Sagar",
""
],
[
"Lalit",
"Yash Shashikant",
""
],
[
"Khwaja",
"Arshi Ajaz",
""
],
[
"Xavier",
"Daries",
""
],
[
"Gupta",
"Sahil Girijashankar",
""
]
]
| new_dataset | 0.999106 |
2309.16228 | Andrea Fronzetti Colladon PhD | J. Cancellieri, W. Didimo, A. Fronzetti Colladon, F. Montecchiani | Brand Network Booster: A New System for Improving Brand Connectivity | null | null | null | null | cs.SI cs.CL cs.SE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new decision support system for the in-depth
analysis of semantic networks, which can provide insights for a better
exploration of a brand's image and the improvement of its connectivity. In
terms of network analysis, we show that this goal is achieved by solving an
extended version of the Maximum Betweenness Improvement problem, which includes
the possibility of considering adversarial nodes, constrained budgets, and
weighted networks - where connectivity improvement can be obtained by adding
links or increasing the weight of existing connections. We present this new
system together with two case studies, also discussing its performance. Our
tool and approach are useful both for network scholars and for supporting the
strategic decision-making processes of marketing and communication managers.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 08:09:33 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Cancellieri",
"J.",
""
],
[
"Didimo",
"W.",
""
],
[
"Colladon",
"A. Fronzetti",
""
],
[
"Montecchiani",
"F.",
""
]
]
| new_dataset | 0.957956 |
2309.16237 | Jiaman Li | Jiaman Li, Jiajun Wu, C. Karen Liu | Object Motion Guided Human Motion Synthesis | SIGGRAPH Asia 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modeling human behaviors in contextual environments has a wide range of
applications in character animation, embodied AI, VR/AR, and robotics. In
real-world scenarios, humans frequently interact with the environment and
manipulate various objects to complete daily tasks. In this work, we study the
problem of full-body human motion synthesis for the manipulation of large-sized
objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a
conditional diffusion framework that can generate full-body manipulation
behaviors from only the object motion. Since naively applying diffusion models
fails to precisely enforce contact constraints between the hands and the
object, OMOMO learns two separate denoising processes to first predict hand
positions from object motion and subsequently synthesize full-body poses based
on the predicted hand positions. By employing the hand positions as an
intermediate representation between the two denoising processes, we can
explicitly enforce contact constraints, resulting in more physically plausible
manipulation motions. With the learned model, we develop a novel system that
captures full-body human manipulation motions by simply attaching a smartphone
to the object being manipulated. Through extensive experiments, we demonstrate
the effectiveness of our proposed pipeline and its ability to generalize to
unseen objects. Additionally, as high-quality human-object interaction datasets
are scarce, we collect a large-scale dataset consisting of 3D object geometry,
object motion, and human motion. Our dataset contains human-object interaction
motion for 15 objects, with a total duration of approximately 10 hours.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 08:22:00 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Li",
"Jiaman",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Liu",
"C. Karen",
""
]
]
| new_dataset | 0.973152 |
2309.16249 | Pengxiang Wu | Pengxiang Wu, Siman Wang, Kevin Dela Rosa, Derek Hao Hu | FORB: A Flat Object Retrieval Benchmark for Universal Image Embedding | NeurIPS 2023 Datasets and Benchmarks Track | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image retrieval is a fundamental task in computer vision. Despite recent
advances in this field, many techniques have been evaluated on a limited number
of domains, with a small number of instance categories. Notably, most existing
works only consider domains like 3D landmarks, making it difficult to
generalize the conclusions made by these works to other domains, e.g., logo and
other 2D flat objects. To bridge this gap, we introduce a new dataset for
benchmarking visual search methods on flat images with diverse patterns. Our
flat object retrieval benchmark (FORB) supplements the commonly adopted 3D
object domain, and more importantly, it serves as a testbed for assessing the
image embedding quality on out-of-distribution domains. In this benchmark we
investigate the retrieval accuracy of representative methods in terms of
candidate ranks, as well as matching score margin, a viewpoint which is largely
ignored by many works. Our experiments not only highlight the challenges and
rich heterogeneity of FORB, but also reveal the hidden properties of different
retrieval strategies. The proposed benchmark is a growing project, and we
expect it to expand in both the quantity and variety of objects. The dataset and supporting
codes are available at https://github.com/pxiangwu/FORB/.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 08:41:51 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Wu",
"Pengxiang",
""
],
[
"Wang",
"Siman",
""
],
[
"Rosa",
"Kevin Dela",
""
],
[
"Hu",
"Derek Hao",
""
]
]
| new_dataset | 0.999761 |
2309.16275 | Andrei Paraschiv | Andrei Paraschiv and Mihai Dascalu | UPB @ ACTI: Detecting Conspiracies using fine tuned Sentence
Transformers | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Conspiracy theories have become a prominent and concerning aspect of online
discourse, posing challenges to information integrity and societal trust. As
such, we address conspiracy theory detection as proposed by the ACTI @ EVALITA
2023 shared task. The combination of pre-trained sentence Transformer models
and data augmentation techniques enabled us to secure first place in the final
leaderboard of both sub-tasks. Our methodology attained F1 scores of 85.71% in
the binary classification and 91.23% for the fine-grained conspiracy topic
classification, surpassing other competing systems.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 09:17:20 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Paraschiv",
"Andrei",
""
],
[
"Dascalu",
"Mihai",
""
]
]
| new_dataset | 0.999453 |
2309.16289 | Zhiwei Fei | Zhiwei Fei, Xiaoyu Shen, Dawei Zhu, Fengzhe Zhou, Zhuo Han, Songyang
Zhang, Kai Chen, Zongwen Shen, Jidong Ge | LawBench: Benchmarking Legal Knowledge of Large Language Models | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated strong capabilities in various
aspects. However, when applying them to the highly specialized, safety-critical
legal domain, it is unclear how much legal knowledge they possess and whether
they can reliably perform legal-related tasks. To address this gap, we propose
a comprehensive evaluation benchmark LawBench. LawBench has been meticulously
crafted to provide a precise assessment of the LLMs' legal capabilities from three
cognitive levels: (1) Legal knowledge memorization: whether LLMs can memorize
needed legal concepts, articles and facts; (2) Legal knowledge understanding:
whether LLMs can comprehend entities, events and relationships within legal
text; (3) Legal knowledge applying: whether LLMs can properly utilize their
legal knowledge and make necessary reasoning steps to solve realistic legal
tasks. LawBench contains 20 diverse tasks covering 5 task types: single-label
classification (SLC), multi-label classification (MLC), regression, extraction
and generation. We perform extensive evaluations of 51 LLMs on LawBench,
including 20 multilingual LLMs, 22 Chinese-oriented LLMs and 9 legal-specific
LLMs. The results show that GPT-4 remains the best-performing LLM in the legal
domain, surpassing the others by a significant margin. While fine-tuning LLMs
on legal-specific text brings certain improvements, we are still a long way
from obtaining usable and reliable LLMs in legal tasks. All data, model
predictions and evaluation code are released in
https://github.com/open-compass/LawBench/. We hope this benchmark provides
in-depth understanding of the LLMs' domain-specified capabilities and speed up
the development of LLMs in the legal domain.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 09:35:59 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Fei",
"Zhiwei",
""
],
[
"Shen",
"Xiaoyu",
""
],
[
"Zhu",
"Dawei",
""
],
[
"Zhou",
"Fengzhe",
""
],
[
"Han",
"Zhuo",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Chen",
"Kai",
""
],
[
"Shen",
"Zongwen",
""
],
[
"Ge",
"Jidong",
""
]
]
| new_dataset | 0.995827 |
2309.16307 | Qirui Mi | Qirui Mi, Siyu Xia, Yan Song, Haifeng Zhang, Shenghao Zhu, Jun Wang | TaxAI: A Dynamic Economic Simulator and Benchmark for Multi-Agent
Reinforcement Learning | 26 pages, 8 figures, 12 tables | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Taxation and government spending are crucial tools for governments to promote
economic growth and maintain social equity. However, the difficulty in
accurately predicting the dynamic strategies of diverse self-interested
households presents a challenge for governments to implement effective tax
policies. Given its proficiency in modeling other agents in partially
observable environments and adaptively learning to find optimal policies,
Multi-Agent Reinforcement Learning (MARL) is highly suitable for solving
dynamic games between the government and numerous households. Although MARL
shows more potential than traditional methods such as the genetic algorithm and
dynamic programming, there is a lack of large-scale multi-agent reinforcement
learning economic simulators. Therefore, we propose a MARL environment, named
\textbf{TaxAI}, for dynamic games involving $N$ households, government, firms,
and financial intermediaries based on the Bewley-Aiyagari economic model. Our
study benchmarks 2 traditional economic methods with 7 MARL methods on TaxAI,
demonstrating the effectiveness and superiority of MARL algorithms. Moreover,
TaxAI's scalability in simulating dynamic interactions between the government
and 10,000 households, coupled with real-data calibration, grants it a
substantial improvement in scale and realism over existing simulators.
Therefore, TaxAI is the most realistic economic simulator, which aims to
generate feasible recommendations for governments and individuals.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 09:59:48 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Mi",
"Qirui",
""
],
[
"Xia",
"Siyu",
""
],
[
"Song",
"Yan",
""
],
[
"Zhang",
"Haifeng",
""
],
[
"Zhu",
"Shenghao",
""
],
[
"Wang",
"Jun",
""
]
]
| new_dataset | 0.97117 |
2309.16335 | Theogene Habineza | Theogene Habineza, Ant\^onio H. Ribeiro, Daniel Gedon, Joachim A.
Behar, Antonio Luiz P. Ribeiro, Thomas B. Sch\"on | End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG
by Deep Neural Networks | 16 pages with 7 figures | @article{HABINEZA2023193, journal = {Journal of
Electrocardiology}, volume = {81}, pages = {193-200}, year = {2023}, issn =
{0022-0736}} | 10.1016/j.jelectrocard.2023.09.011 | null | cs.LG cs.AI q-bio.QM stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background: Atrial fibrillation (AF) is one of the most common cardiac
arrhythmias that affects millions of people each year worldwide and it is
closely linked to increased risk of cardiovascular diseases such as stroke and
heart failure. Machine learning methods have shown promising results in
evaluating the risk of developing atrial fibrillation from the
electrocardiogram. We aim to develop and evaluate one such algorithm on a large
CODE dataset collected in Brazil.
Results: The deep neural network model identified patients without indication
of AF in the presented ECG but who will develop AF in the future with an AUC
score of 0.845. From our survival model, we obtain that patients in the
high-risk group (i.e. with the probability of a future AF case being greater
than 0.7) are 50% more likely to develop AF within 40 weeks, while patients
belonging to the minimal-risk group (i.e. with the probability of a future AF
case being less than or equal to 0.1) have a more than 85% chance of remaining
AF-free for over seven years.
Conclusion: We developed and validated a model for AF risk prediction. If
applied in clinical practice, the model possesses the potential of providing
valuable and useful information in decision-making and patient management
processes.
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 10:47:40 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Habineza",
"Theogene",
""
],
[
"Ribeiro",
"Antônio H.",
""
],
[
"Gedon",
"Daniel",
""
],
[
"Behar",
"Joachim A.",
""
],
[
"Ribeiro",
"Antonio Luiz P.",
""
],
[
"Schön",
"Thomas B.",
""
]
]
| new_dataset | 0.957075 |
2309.16342 | Artur Petrov Toshev | Artur P. Toshev, Gianluca Galletti, Fabian Fritz, Stefan Adami,
Nikolaus A. Adams | LagrangeBench: A Lagrangian Fluid Mechanics Benchmarking Suite | Accepted at 37th Conference on Neural Information Processing Systems
(NeurIPS 2023) Track on Datasets and Benchmarks | null | null | null | cs.LG physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Machine learning has been successfully applied to grid-based PDE modeling in
various scientific applications. However, learned PDE solvers based on
Lagrangian particle discretizations, which are the preferred approach to
problems with free surfaces or complex physics, remain largely unexplored. We
present LagrangeBench, the first benchmarking suite for Lagrangian particle
problems, focusing on temporal coarse-graining. In particular, our contribution
is: (a) seven new fluid mechanics datasets (four in 2D and three in 3D)
generated with the Smoothed Particle Hydrodynamics (SPH) method including the
Taylor-Green vortex, lid-driven cavity, reverse Poiseuille flow, and dam break,
each of which includes different physics like solid wall interactions or free
surface, (b) an efficient JAX-based API with various recent training strategies
and a neighbor-search routine, and (c) JAX implementations of established Graph
Neural Networks (GNNs) like GNS and SEGNN with baseline results. Finally, to
measure the performance of learned surrogates we go beyond established position
errors and introduce physical metrics like kinetic energy MSE and Sinkhorn
distance for the particle distribution. Our codebase is available under the
URL: https://github.com/tumaer/lagrangebench
| [
{
"version": "v1",
"created": "Thu, 28 Sep 2023 11:03:23 GMT"
}
]
| 2023-09-29T00:00:00 | [
[
"Toshev",
"Artur P.",
""
],
[
"Galletti",
"Gianluca",
""
],
[
"Fritz",
"Fabian",
""
],
[
"Adami",
"Stefan",
""
],
[
"Adams",
"Nikolaus A.",
""
]
]
| new_dataset | 0.999183 |