The schema of this dump (column, type, and observed value ranges; ⌀ marks nullable columns):

| column | type | notes |
|---|---|---|
| id | string | 9–10 chars |
| submitter | string | 2–52 chars, nullable (⌀) |
| authors | string | 4–6.51k chars |
| title | string | 4–246 chars |
| comments | string | 1–523 chars, nullable (⌀) |
| journal-ref | string | 4–345 chars, nullable (⌀) |
| doi | string | 11–120 chars, nullable (⌀) |
| report-no | string | 2–243 chars, nullable (⌀) |
| categories | string | 5–98 chars |
| license | string | 9 classes |
| abstract | string | 33–3.33k chars |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class ("new_dataset") |
| probability | float64 | 0.95–1 |
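Each row below pairs an arXiv metadata record with a classifier output (`prediction`, here always `new_dataset`) and its `probability`. As a minimal sketch of how such a dump could be filtered, assuming the records are available as a JSON Lines file (the filename is hypothetical):

```python
import pandas as pd

# Load the flattened records; "arxiv_new_dataset.jsonl" is a hypothetical filename.
df = pd.read_json("arxiv_new_dataset.jsonl", lines=True)

# Keep only high-confidence "new_dataset" predictions, mirroring the
# probability range (0.95-1.0) declared in the schema above.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.95)]

# Inspect a few fields per record.
print(confident[["id", "title", "categories", "probability"]].head())
```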
2307.15343
|
Ankit Pal
|
Logesh Kumar Umapathi, Ankit Pal and Malaikannan Sankarasubbu
|
Med-HALT: Medical Domain Hallucination Test for Large Language Models
| null | null | null | null |
cs.CL cs.AI cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
This research paper focuses on the challenges posed by hallucinations in
large language models (LLMs), particularly in the context of the medical
domain. Hallucination, wherein these models generate plausible yet unverified
or incorrect information, can have serious consequences in healthcare
applications. We propose a new benchmark and dataset, Med-HALT (Medical Domain
Hallucination Test), designed specifically to evaluate and reduce
hallucinations. Med-HALT provides a diverse multinational dataset derived from
medical examinations across various countries and includes multiple innovative
testing modalities. Med-HALT includes two categories of tests, reasoning-based and memory-based hallucination tests, designed to assess LLMs' problem-solving and information retrieval abilities.
Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LlaMa-2,
MPT, and Falcon, revealing significant differences in their performance. The
paper provides detailed insights into the dataset, promoting transparency and
reproducibility. Through this work, we aim to contribute to the development of
safer and more reliable language models in healthcare. Our benchmark can be
found at medhalt.github.io
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 06:43:04 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Umapathi",
"Logesh Kumar",
""
],
[
"Pal",
"Ankit",
""
],
[
"Sankarasubbu",
"Malaikannan",
""
]
] |
new_dataset
| 0.984038 |
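The Med-HALT record above describes a benchmark released as a dataset. Below is a minimal sketch of loading and inspecting such a benchmark with the Hugging Face `datasets` library; the repository id, config name, and split are assumptions, not taken from the record (medhalt.github.io lists the actual artifacts):

```python
from datasets import load_dataset

# Hypothetical repository id and config; check medhalt.github.io for the real ones.
ds = load_dataset("openlifescienceai/Med-HALT", "reasoning_FCT", split="train")

# Each item would pair a medical exam question with answer options.
print(ds[0])
```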
2307.15700
|
Ruopeng Gao
|
Ruopeng Gao, Limin Wang
|
MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As a video task, Multiple Object Tracking (MOT) is expected to capture
temporal information of targets effectively. Unfortunately, most existing
methods only explicitly exploit the object features between adjacent frames,
while lacking the capacity to model long-term temporal information. In this
paper, we propose MeMOTR, a long-term memory-augmented Transformer for
multi-object tracking. Our method is able to make the same object's track
embedding more stable and distinguishable by leveraging long-term memory
injection with a customized memory-attention layer. This significantly improves
the target association ability of our model. Experimental results on DanceTrack
show that MeMOTR impressively surpasses the state-of-the-art method by 7.9% and
13.0% on HOTA and AssA metrics, respectively. Furthermore, our model also
outperforms other Transformer-based methods on association performance on MOT17
and generalizes well on BDD100K. Code is available at
https://github.com/MCG-NJU/MeMOTR.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 17:50:09 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 03:04:35 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Gao",
"Ruopeng",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.99885 |
2307.15719
|
Azra Bihorac
|
Yuanfang Ren, Yanjun Li, Tyler J. Loftus, Jeremy Balch, Kenneth L.
Abbott, Shounak Datta, Matthew M. Ruppert, Ziyuan Guan, Benjamin Shickel,
Parisa Rashidi, Tezcan Ozrazgat-Baslanti, Azra Bihorac
|
Identifying acute illness phenotypes via deep temporal interpolation and
clustering network on physiologic signatures
|
28 pages (79 pages incl. supp. material), 4 figures, 2 tables, 19
supplementary figures, 9 supplementary tables
| null | null | null |
cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Initial hours of hospital admission impact clinical trajectory, but early
clinical decisions often suffer due to data paucity. With clustering analysis
for vital signs within six hours of admission, patient phenotypes with distinct
pathophysiological signatures and outcomes may support early clinical
decisions. We created a single-center, longitudinal EHR dataset for 75,762
adults admitted to a tertiary care center for 6+ hours. We proposed a deep
temporal interpolation and clustering network to extract latent representations
from sparse, irregularly sampled vital sign data and derived distinct patient
phenotypes in a training cohort (n=41,502). The model and hyper-parameters were chosen based on a validation cohort (n=17,415). The test cohort (n=16,845) was used to analyze reproducibility and correlation with biomarkers. The training,
validation, and testing cohorts had similar distributions of age (54-55 yrs),
sex (55% female), race, comorbidities, and illness severity. Four clusters were
identified. Phenotype A (18%) had the most comorbid disease, with higher rates of
prolonged respiratory insufficiency, acute kidney injury, sepsis, and
three-year mortality. Phenotypes B (33%) and C (31%) had diffuse patterns of
mild organ dysfunction. Phenotype B had favorable short-term outcomes but
second-highest three-year mortality. Phenotype C had favorable clinical
outcomes. Phenotype D (17%) had early/persistent hypotension, high rate of
early surgery, and substantial biomarker rate of inflammation but second-lowest
three-year mortality. After comparing phenotypes' SOFA scores, clustering
results did not simply repeat other acuity assessments. In a heterogeneous
cohort, four phenotypes with distinct categories of disease and outcomes were
identified by a deep temporal interpolation and clustering network. This tool
may impact triage decisions and clinical decision-support under time
constraints.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 21:05:23 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Ren",
"Yuanfang",
""
],
[
"Li",
"Yanjun",
""
],
[
"Loftus",
"Tyler J.",
""
],
[
"Balch",
"Jeremy",
""
],
[
"Abbott",
"Kenneth L.",
""
],
[
"Datta",
"Shounak",
""
],
[
"Ruppert",
"Matthew M.",
""
],
[
"Guan",
"Ziyuan",
""
],
[
"Shickel",
"Benjamin",
""
],
[
"Rashidi",
"Parisa",
""
],
[
"Ozrazgat-Baslanti",
"Tezcan",
""
],
[
"Bihorac",
"Azra",
""
]
] |
new_dataset
| 0.991472 |
2307.15807
|
Sergio Chevtchenko
|
S\'ergio F. Chevtchenko, Elisson da Silva Rocha, Monalisa Cristina
Moura Dos Santos, Ricardo Lins Mota, Diego Moura Vieira, Ermeson Carneiro de
Andrade, Danilo Ricardo Barbosa de Ara\'ujo
|
Anomaly Detection in Industrial Machinery using IoT Devices and Machine
Learning: a Systematic Mapping
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Anomaly detection is critical in the smart industry for preventing equipment
failure, reducing downtime, and improving safety. The Internet of Things (IoT) has
enabled the collection of large volumes of data from industrial machinery,
providing a rich source of information for Anomaly Detection. However, the
volume and complexity of data generated by the Internet of Things ecosystems
make it difficult for humans to detect anomalies manually. Machine learning
(ML) algorithms can automate anomaly detection in industrial machinery by
analyzing generated data. Moreover, each technique has specific strengths and weaknesses depending on the nature of the data and the corresponding systems. However, the
current systematic mapping studies on Anomaly Detection primarily focus on
addressing network and cybersecurity-related problems, with limited attention
given to the industrial sector. Additionally, these studies do not cover the
challenges involved in using ML for Anomaly Detection in industrial machinery
within the context of the IoT ecosystems. This paper presents a systematic
mapping study on Anomaly Detection for industrial machinery using IoT devices
and ML algorithms to address this gap. The study comprehensively evaluates 84
relevant studies spanning from 2016 to 2023, providing an extensive review of
Anomaly Detection research. Our findings identify the most commonly used
algorithms, preprocessing techniques, and sensor types. Additionally, this
review identifies application areas and points to future challenges and
research opportunities.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 20:58:00 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Chevtchenko",
"Sérgio F.",
""
],
[
"Rocha",
"Elisson da Silva",
""
],
[
"Santos",
"Monalisa Cristina Moura Dos",
""
],
[
"Mota",
"Ricardo Lins",
""
],
[
"Vieira",
"Diego Moura",
""
],
[
"de Andrade",
"Ermeson Carneiro",
""
],
[
"de Araújo",
"Danilo Ricardo Barbosa",
""
]
] |
new_dataset
| 0.962673 |
2307.15808
|
Khaled Jawhar
|
Khaled Jawhar and Evangelos Kranakis
|
Bike Assisted Evacuation on a Line of Robots with Communication Faults
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Two autonomous mobile robots and a non-autonomous one, also called the bike, are placed at the origin of an infinite line. The autonomous robots can travel with maximum speed $1$. When a robot rides the bike, its speed increases to $v>1$; however, only one robot at a time can ride the bike, and the bike is non-autonomous in that it cannot move on its own. An Exit is placed on the line
at an unknown location and at distance $d$ from the origin. The robots have
limited communication behavior; one robot is a sender (denoted by S) in that it
can send information wirelessly at any distance and receive messages only in
F2F (Face-to-Face), while the other robot is a receiver (denoted by R) in that
it can receive information wirelessly but can send information only F2F. The
bike has no communication capabilities of its own. We refer to the resulting
communication model of the ensemble of the two autonomous robots and the bike
as S/R.
Our general goal is to understand the impact of the non-autonomous robot in
assisting the evacuation of the two autonomous faulty robots. Our main
contribution is to provide a new evacuation algorithm that enables both robots
to evacuate from the unknown Exit in the S/R model. We also analyze the
resulting evacuation time as a function of the bike's speed $v$ and give upper
and lower bounds on the competitive ratio of the resulting algorithm for the
entire range of possible values of $v$.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 20:58:36 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Jawhar",
"Khaled",
""
],
[
"Kranakis",
"Evangelos",
""
]
] |
new_dataset
| 0.999525 |
2307.15904
|
Aayush Dhakal
|
Aayush Dhakal, Adeel Ahmad, Subash Khanal, Srikumar Sastry, Nathan
Jacobs
|
Sat2Cap: Mapping Fine-Grained Textual Descriptions from Satellite Images
|
15 pages, 11 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel weakly supervised approach for creating maps using
free-form textual descriptions (or captions). We refer to this new line of work
of creating textual maps as zero-shot mapping. Prior works have approached
mapping tasks by developing models that predict over a fixed set of attributes
using overhead imagery. However, these models are very restrictive as they can
only solve highly specific tasks for which they were trained. Mapping text, on
the other hand, allows us to solve a large variety of mapping problems with
minimal restrictions. To achieve this, we train a contrastive learning
framework called Sat2Cap on a new large-scale dataset of paired overhead and
ground-level images. For a given location, our model predicts the expected CLIP
embedding of the ground-level scenery. Sat2Cap is also conditioned on temporal
information, enabling it to learn dynamic concepts that vary over time. Our
experimental results demonstrate that our models successfully capture
fine-grained concepts and effectively adapt to temporal variations. Our
approach does not require any text-labeled data, making the training easily
scalable. The code, dataset, and models will be made publicly available.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 06:23:51 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Dhakal",
"Aayush",
""
],
[
"Ahmad",
"Adeel",
""
],
[
"Khanal",
"Subash",
""
],
[
"Sastry",
"Srikumar",
""
],
[
"Jacobs",
"Nathan",
""
]
] |
new_dataset
| 0.999784 |
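The Sat2Cap abstract describes predicting the expected CLIP embedding of ground-level scenery from an overhead image via contrastive learning. Here is a minimal sketch of such a training step, assuming precomputed CLIP embeddings for ground-level images and a generic overhead-image encoder; all names are illustrative, not the paper's code:

```python
import torch
import torch.nn.functional as F

def contrastive_step(overhead_encoder, overhead_images, clip_embeddings, temperature=0.07):
    """One InfoNCE step: align overhead embeddings with ground-level CLIP embeddings."""
    pred = F.normalize(overhead_encoder(overhead_images), dim=-1)   # (B, D)
    target = F.normalize(clip_embeddings, dim=-1)                   # (B, D)
    logits = pred @ target.t() / temperature                        # (B, B) similarity matrix
    labels = torch.arange(len(pred), device=pred.device)            # positives on the diagonal
    # Symmetric cross-entropy, as in CLIP-style contrastive training.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```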
2307.15913
|
Felipe Araujo
|
Igor Pereira, Felipe Ara\'ujo, Filip Korzeniowski, Richard Vogl
|
Moisesdb: A dataset for source separation beyond 4-stems
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce the MoisesDB dataset for musical source
separation. It consists of 240 tracks from 45 artists, covering twelve musical
genres. For each song, we provide its individual audio sources, organized in a
two-level hierarchical taxonomy of stems. This will facilitate building and
evaluating fine-grained source separation systems that go beyond the limitation
of using four stems (drums, bass, other, and vocals) due to lack of data. To
facilitate the adoption of this dataset, we publish an easy-to-use Python
library to download, process and use MoisesDB. Alongside thorough documentation and analysis of the dataset contents, this work provides baseline results for open-source separation models at varying separation granularities (four, five, and six stems) and discusses their results.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 06:59:37 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Pereira",
"Igor",
""
],
[
"Araújo",
"Felipe",
""
],
[
"Korzeniowski",
"Filip",
""
],
[
"Vogl",
"Richard",
""
]
] |
new_dataset
| 0.999815 |
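MoisesDB organizes each song's sources in a two-level stem taxonomy. A minimal sketch of collapsing fine-grained stems into the classic four-stem setup (drums, bass, vocals, other); the per-folder WAV directory layout is an assumption, not the dataset's documented structure:

```python
import os
import soundfile as sf

# Map fine-grained stem folder names onto the classic four stems;
# anything unmatched falls into "other". The layout is an assumption.
FOUR_STEMS = ("drums", "bass", "vocals")

def downmix_to_four_stems(track_dir):
    buckets = {}
    for stem in sorted(os.listdir(track_dir)):      # e.g. "drums", "lead_vocals", "guitar"
        stem_dir = os.path.join(track_dir, stem)
        if not os.path.isdir(stem_dir):
            continue
        bucket = next((s for s in FOUR_STEMS if s in stem), "other")
        for wav in sorted(os.listdir(stem_dir)):
            audio, _ = sf.read(os.path.join(stem_dir, wav))
            if bucket not in buckets:
                buckets[bucket] = audio.copy()
            else:
                n = min(len(buckets[bucket]), len(audio))  # trim to common length
                buckets[bucket] = buckets[bucket][:n] + audio[:n]
    return buckets
```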
2307.15915
|
Jin Wang
|
Jin Wang, Zishan Huang, Hui Xiao, Yinhao Xiao
|
JFinder: A Novel Architecture for Java Vulnerability Identification
Based Quad Self-Attention and Pre-training Mechanism
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software vulnerabilities pose significant risks to computer systems,
impacting our daily lives, productivity, and even our health. Identifying and
addressing security vulnerabilities in a timely manner is crucial to prevent
hacking and data breaches. Unfortunately, current vulnerability identification
methods, including classical and deep learning-based approaches, exhibit
critical drawbacks that prevent them from meeting the demands of the
contemporary software industry. To tackle these issues, we present JFinder, a
novel architecture for Java vulnerability identification that leverages quad
self-attention and pre-training mechanisms to combine structural information
and semantic representations. Experimental results demonstrate that JFinder
outperforms all baseline methods, achieving an accuracy of 0.97 on the CWE
dataset and an F1 score of 0.84 on the PROMISE dataset. Furthermore, a case
study reveals that JFinder can accurately identify four cases of
vulnerabilities after patching.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 07:02:47 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Wang",
"Jin",
""
],
[
"Huang",
"Zishan",
""
],
[
"Xiao",
"Hui",
""
],
[
"Xiao",
"Yinhao",
""
]
] |
new_dataset
| 0.999448 |
2307.15933
|
Soumyadeep Roy
|
Soumyadeep Roy, Jonas Wallat, Sowmya S Sundaram, Wolfgang Nejdl, Niloy
Ganguly
|
GeneMask: Fast Pretraining of Gene Sequences to Enable Few-Shot Learning
|
12 pages including appendix. Accepted for publication at 26th
European Conference on Artificial Intelligence ECAI 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale language models such as DNABert and LOGO aim to learn optimal
gene representations and are trained on the entire Human Reference Genome.
However, standard tokenization schemes involve a simple sliding window of
tokens like k-mers that do not leverage any gene-based semantics and thus may
lead to (trivial) masking of easily predictable sequences and subsequently
inefficient Masked Language Modeling (MLM) training. Therefore, we propose a
novel masking algorithm, GeneMask, for MLM training of gene sequences, where we
randomly identify positions in a gene sequence as mask centers and locally
select the span around the mask center with the highest Normalized Pointwise
Mutual Information (NPMI) to mask. We observe that in the absence of
human-understandable semantics in the genomics domain (in contrast, semantic
units like words and phrases are inherently available in NLP), GeneMask-based
models substantially outperform the SOTA models (DNABert and LOGO) over four
benchmark gene sequence classification datasets in five few-shot settings (10
to 1000-shot). More significantly, the GeneMask-based DNABert model is trained
for less than one-tenth of the number of epochs of the original SOTA model. We
also observe a strong correlation between top-ranked PMI tokens and conserved
DNA sequence motifs, which may indicate the incorporation of latent genomic
information. The codes (including trained models) and datasets are made
publicly available at https://github.com/roysoumya/GeneMask.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 09:17:16 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Roy",
"Soumyadeep",
""
],
[
"Wallat",
"Jonas",
""
],
[
"Sundaram",
"Sowmya S",
""
],
[
"Nejdl",
"Wolfgang",
""
],
[
"Ganguly",
"Niloy",
""
]
] |
new_dataset
| 0.99283 |
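GeneMask selects mask spans around random centres by the highest Normalized Pointwise Mutual Information (NPMI). The following sketch illustrates the NPMI score for pairs of adjacent k-mers, using the standard definition; it is an illustration of the metric, not the paper's code:

```python
import math
from collections import Counter

def npmi(pair_counts, left_counts, right_counts, total_pairs):
    """NPMI(x, y) = PMI(x, y) / -log p(x, y), bounded in [-1, 1]."""
    scores = {}
    for (x, y), c in pair_counts.items():
        p_xy = c / total_pairs
        p_x = left_counts[x] / total_pairs
        p_y = right_counts[y] / total_pairs
        pmi = math.log(p_xy / (p_x * p_y))
        scores[(x, y)] = pmi / (-math.log(p_xy))
    return scores

# Toy example over adjacent 3-mer pairs from a DNA sequence.
seq = "ATGCGATACGCTTGCGATC"
kmers = [seq[i:i + 3] for i in range(len(seq) - 2)]
pairs = list(zip(kmers, kmers[1:]))
scores = npmi(Counter(pairs), Counter(p[0] for p in pairs),
              Counter(p[1] for p in pairs), len(pairs))
print(max(scores, key=scores.get))  # the highest-NPMI span would be preferred for masking
```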
2307.15942
|
Ruihao Xia
|
Ruihao Xia, Chaoqiang Zhao, Meng Zheng, Ziyan Wu, Qiyu Sun, Yang Tang
|
CMDA: Cross-Modality Domain Adaptation for Nighttime Semantic
Segmentation
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most nighttime semantic segmentation studies are based on domain adaptation
approaches and image input. However, limited by the low dynamic range of
conventional cameras, images fail to capture structural details and boundary
information in low-light conditions. Event cameras, as a new form of vision
sensors, are complementary to conventional cameras with their high dynamic
range. To this end, we propose a novel unsupervised Cross-Modality Domain
Adaptation (CMDA) framework to leverage multi-modality (Images and Events)
information for nighttime semantic segmentation, with only labels on daytime
images. In CMDA, we design the Image Motion-Extractor to extract motion
information and the Image Content-Extractor to extract content information from
images, in order to bridge the gap between different modalities (Images to
Events) and domains (Day to Night). Besides, we introduce the first image-event
nighttime semantic segmentation dataset. Extensive experiments on both the
public image dataset and the proposed image-event dataset demonstrate the
effectiveness of our proposed approach. We open-source our code, models, and
dataset at https://github.com/XiaRho/CMDA.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 09:29:09 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Xia",
"Ruihao",
""
],
[
"Zhao",
"Chaoqiang",
""
],
[
"Zheng",
"Meng",
""
],
[
"Wu",
"Ziyan",
""
],
[
"Sun",
"Qiyu",
""
],
[
"Tang",
"Yang",
""
]
] |
new_dataset
| 0.977048 |
2307.16037
|
Colin Zhang
|
Colin Zhang, Yang Ha
|
Developing novel ligands with enhanced binding affinity for the
sphingosine 1-phosphate receptor 1 using machine learning
|
10 pages, 6 figures, 2 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple sclerosis (MS) is a debilitating neurological disease affecting
nearly one million people in the United States. Sphingosine-1-phosphate
receptor 1, or S1PR1, is a protein target for MS. Siponimod, a ligand of S1PR1,
was approved by the FDA in 2019 for MS treatment, but there is a demonstrated
need for better therapies. To this end, we finetuned an autoencoder machine
learning model that converts chemical formulas into mathematical vectors and
generated over 500 molecular variants based on siponimod, out of which 25
compounds had higher predicted binding affinity to S1PR1. The model was able to
generate these ligands in just under one hour. Filtering these compounds led to
the discovery of six promising candidates with good drug-like properties and
ease of synthesis. Furthermore, by analyzing the binding interactions for these
ligands, we uncovered several chemical properties that contribute to high
binding affinity to S1PR1. This study demonstrates that machine learning can
accelerate the drug discovery process and reveal new insights into protein-drug
interactions.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 17:58:47 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhang",
"Colin",
""
],
[
"Ha",
"Yang",
""
]
] |
new_dataset
| 0.998009 |
2307.16071
|
David Adelani
|
Tolulope Ogunremi, Kola Tubosun, Anuoluwapo Aremu, Iroro Orife, David
Ifeoluwa Adelani
|
\`{I}r\`{o}y\`{i}nSpeech: A multi-purpose Yor\`{u}b\'{a} Speech Corpus
|
working paper
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the \`{I}r\`{o}y\`{i}nSpeech corpus -- a new dataset influenced
by a desire to increase the amount of high quality, freely available,
contemporary Yor\`{u}b\'{a} speech. We release a multi-purpose dataset that can
be used for both TTS and ASR tasks. We curated text sentences from the news and
creative writing domains under an open license (CC-BY-4.0) and had multiple
speakers record each sentence. We provide 5000 of our utterances to the Common
Voice platform to crowdsource transcriptions online. The dataset has 38.5 hours
of data in total, recorded by 80 volunteers.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 20:42:50 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Ogunremi",
"Tolulope",
""
],
[
"Tubosun",
"Kola",
""
],
[
"Aremu",
"Anuoluwapo",
""
],
[
"Orife",
"Iroro",
""
],
[
"Adelani",
"David Ifeoluwa",
""
]
] |
new_dataset
| 0.999771 |
2307.16084
|
Muhammad Abdul Rahman
|
Muhammad Abdul Rahman and Muhammad Ahmad Waseem and Zubair Khalid and
Muhammad Tahir and Momin Uppal
|
PD-SEG: Population Disaggregation Using Deep Segmentation Networks For
Improved Built Settlement Mask
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Any policy-level decision-making procedure and academic research involving
the optimum use of resources for development and planning initiatives depends
on accurate population density statistics. The current cutting-edge datasets
offered by WorldPop and Meta do not succeed in achieving this aim for
developing nations like Pakistan; the inputs to their algorithms provide flawed
estimates that fail to capture the spatial and land-use dynamics. In order to
precisely estimate population counts at a resolution of 30 meters by 30 meters,
we use an accurate built settlement mask obtained using deep segmentation
networks and satellite imagery. The Points of Interest (POI) data is also used
to exclude non-residential areas.
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 21:42:44 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Rahman",
"Muhammad Abdul",
""
],
[
"Waseem",
"Muhammad Ahmad",
""
],
[
"Khalid",
"Zubair",
""
],
[
"Tahir",
"Muhammad",
""
],
[
"Uppal",
"Momin",
""
]
] |
new_dataset
| 0.990699 |
2307.16096
|
Li-Hsiang Shen
|
Li-Hsiang Shen, Po-Chen Wu, Chia-Jou Ku, Yu-Ting Li, Kai-Ten Feng,
Yuanwei Liu and Lajos Hanzo
|
D-STAR: Dual Simultaneously Transmitting and Reflecting Reconfigurable
Intelligent Surfaces for Joint Uplink/Downlink Transmission
|
30 pages, 10 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The joint uplink/downlink (JUD) design of simultaneously transmitting and
reflecting reconfigurable intelligent surfaces (STAR-RIS) is conceived in
support of both uplink (UL) and downlink (DL) users. Furthermore, the dual
STAR-RISs (D-STAR) concept is conceived as a promising architecture for
360-degree full-plane service coverage including users located between the base
station (BS) and the D-STAR and beyond. The corresponding regions are termed the primary (P) and secondary (S) regions. The primary STAR-RIS (STAR-P) plays an
important role in terms of tackling the P-region inter-user interference, the
self-interference (SI) from the BS and from the reflective as well as
refractive UL users imposed on the DL receiver. By contrast, the secondary
STAR-RIS (STAR-S) aims at mitigating the S-region interference. The formulated non-linear, non-convex rate-maximization problem is solved by alternating optimization amongst the decomposed convex sub-problems of the BS
beamformer, and the D-STAR amplitude as well as phase shift configurations. We
also propose a D-STAR based active beamforming and passive STAR-RIS
amplitude/phase (DBAP) optimization scheme to solve the respective sub-problems
by Lagrange dual with Dinkelbach transformation, alternating direction method
of multipliers (ADMM) with successive convex approximation (SCA), and penalty
convex-concave procedure (PCCP). Our simulation results reveal that the
proposed D-STAR architecture outperforms the conventional single RIS, single
STAR-RIS, and half-duplex networks. The proposed DBAP in D-STAR outperforms the
state-of-the-art solutions in the open literature.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 00:10:23 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Shen",
"Li-Hsiang",
""
],
[
"Wu",
"Po-Chen",
""
],
[
"Ku",
"Chia-Jou",
""
],
[
"Li",
"Yu-Ting",
""
],
[
"Feng",
"Kai-Ten",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.990938 |
2307.16114
|
Ryo Suzuki
|
Keiichi Ihara, Mehrad Faridan, Ayumi Ichikawa, Ikkaku Kawaguchi, Ryo
Suzuki
|
HoloBots: Augmenting Holographic Telepresence with Mobile Robots for
Tangible Remote Collaboration in Mixed Reality
|
UIST 2023
| null |
10.1145/3586183.3606727
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces HoloBots, a mixed reality remote collaboration system
that augments holographic telepresence with synchronized mobile robots. Beyond
existing mixed reality telepresence, HoloBots lets remote users not only be
visually and spatially present, but also physically engage with local users and
their environment. HoloBots allows the users to touch, grasp, manipulate, and
interact with the remote physical environment as if they were co-located in the
same shared space. We achieve this by synchronizing holographic user motion
(Hololens 2 and Azure Kinect) with tabletop mobile robots (Sony Toio). Beyond
the existing physical telepresence, HoloBots contributes to an exploration of
broader design space, such as object actuation, virtual hand physicalization,
world-in-miniature exploration, shared tangible interfaces, embodied guidance,
and haptic communication. We evaluate our system with twelve participants by
comparing it with hologram-only and robot-only conditions. Both quantitative
and qualitative results confirm that our system significantly enhances the
level of co-presence and shared experience, compared to the other conditions.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 03:20:12 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Ihara",
"Keiichi",
""
],
[
"Faridan",
"Mehrad",
""
],
[
"Ichikawa",
"Ayumi",
""
],
[
"Kawaguchi",
"Ikkaku",
""
],
[
"Suzuki",
"Ryo",
""
]
] |
new_dataset
| 0.968934 |
2307.16115
|
Yu Yan
|
Yu Yan, Hongzhi Wang, Jian Geng, Jian Ma, Geng Li, Zixuan Wang, Zhiyu
Dai, Tianqing Wang
|
IWEK: An Interpretable What-If Estimator for Database Knobs
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The knobs of modern database management systems have a significant impact on system performance. With the development of cloud databases, an estimation service for knobs is urgently needed to improve database performance. Unfortunately, little attention has been paid to estimating the performance of particular knob configurations. To fill this gap, we propose IWEK, an interpretable and transferable what-if estimator for database knobs. To achieve interpretable estimation, we propose a linear estimator based on random forests that yields explicit and trustworthy evaluation results. Owing to its interpretability, our estimator captures the direct relationships between a knob configuration and its performance, helping to guarantee high database availability. We design a two-stage transfer algorithm that leverages historical experience to efficiently build knob estimators for new scenarios. Due to its lightweight design, our method largely reduces the overhead of collecting training data and achieves cold-start knob estimation for new scenarios. Extensive experiments on YCSB and TPCC show that our method performs well in interpretable and transferable knob estimation with limited training data. Further, our method achieves efficient estimator transfer with only 10 samples on TPCC and YCSB.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 03:28:04 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Yan",
"Yu",
""
],
[
"Wang",
"Hongzhi",
""
],
[
"Geng",
"Jian",
""
],
[
"Ma",
"Jian",
""
],
[
"Li",
"Geng",
""
],
[
"Wang",
"Zixuan",
""
],
[
"Dai",
"Zhiyu",
""
],
[
"Wang",
"Tianqing",
""
]
] |
new_dataset
| 0.998223 |
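IWEK's interpretable estimator is described as a linear estimator based on random forests that maps a knob configuration to predicted performance. A minimal what-if sketch under that description, with synthetic data standing in for real knob history (not the paper's implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic history: knob vectors (e.g. buffer size, parallelism, cache ratio)
# with a made-up throughput response.
X = rng.uniform(0, 1, size=(200, 3))
y = 100 * X[:, 0] + 30 * np.sqrt(X[:, 1]) - 20 * X[:, 2] + rng.normal(0, 2, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# What-if query: estimate the performance of a candidate knob configuration
# before applying it to the live database.
candidate = np.array([[0.8, 0.5, 0.1]])
print("predicted throughput:", model.predict(candidate)[0])
print("knob importances:", model.feature_importances_)  # interpretability signal
```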
2307.16116
|
Ryo Suzuki
|
Zhijie Xia, Kyzyl Monteiro, Kevin Van, Ryo Suzuki
|
RealityCanvas: Augmented Reality Sketching for Embedded and Responsive
Scribble Animation Effects
|
UIST 2023
| null |
10.1145/3586183.3606716
| null |
cs.HC cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce RealityCanvas, a mobile AR sketching tool that can easily
augment real-world physical motion with responsive hand-drawn animation. Recent
research in AR sketching tools has enabled users to not only embed static
drawings into the real world but also dynamically animate them with physical
motion. However, existing tools often lack the flexibility and expressiveness
of possible animations, as they primarily support simple line-based geometry.
To address this limitation, we explore both expressive and improvisational AR
sketched animation by introducing a set of responsive scribble animation
techniques that can be directly embedded through sketching interactions: 1)
object binding, 2) flip-book animation, 3) action trigger, 4) particle effects,
5) motion trajectory, and 6) contour highlight. These six animation effects
were derived from the analysis of 172 existing video-edited scribble
animations. We showcase these techniques through various applications, such as
video creation, augmented education, storytelling, and AR prototyping. The
results of our user study and expert interviews confirm that our tool can lower
the barrier to creating AR-based sketched animation, while allowing creative,
expressive, and improvisational AR sketching experiences.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 03:31:48 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Xia",
"Zhijie",
""
],
[
"Monteiro",
"Kyzyl",
""
],
[
"Van",
"Kevin",
""
],
[
"Suzuki",
"Ryo",
""
]
] |
new_dataset
| 0.980661 |
2307.16226
|
Zihan Li
|
Zihan Li, Yuan Zheng, Xiangde Luo, Dandan Shan, Qingqi Hong
|
ScribbleVC: Scribble-supervised Medical Image Segmentation with
Vision-Class Embedding
|
Accepted by ACM MM 2023, project page:
https://github.com/HUANGLIZI/ScribbleVC
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical image segmentation plays a critical role in clinical decision-making,
treatment planning, and disease monitoring. However, accurate segmentation of
medical images is challenging due to several factors, such as the lack of
high-quality annotation, imaging noise, and anatomical differences across
patients. In addition, there is still a considerable gap in performance between
the existing label-efficient methods and fully-supervised methods. To address
the above challenges, we propose ScribbleVC, a novel framework for
scribble-supervised medical image segmentation that leverages vision and class
embeddings via the multimodal information enhancement mechanism. In addition,
ScribbleVC uniformly utilizes the CNN features and Transformer features to
achieve better visual feature extraction. The proposed method combines a
scribble-based approach with a segmentation network and a class-embedding
module to produce accurate segmentation masks. We evaluate ScribbleVC on three
benchmark datasets and compare it with state-of-the-art methods. The
experimental results demonstrate that our method outperforms existing
approaches in terms of accuracy, robustness, and efficiency. The datasets and
code are released on GitHub.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 13:38:52 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Li",
"Zihan",
""
],
[
"Zheng",
"Yuan",
""
],
[
"Luo",
"Xiangde",
""
],
[
"Shan",
"Dandan",
""
],
[
"Hong",
"Qingqi",
""
]
] |
new_dataset
| 0.998672 |
2307.16253
|
Pengfei Hu
|
Pengfei Hu, Jiefeng Ma, Zhenrong Zhang, Jun Du and Jianshu Zhang
|
Count, Decode and Fetch: A New Approach to Handwritten Chinese Character
Error Correction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, handwritten Chinese character error correction has been greatly
improved by employing encoder-decoder methods to decompose a Chinese character
into an ideographic description sequence (IDS). However, existing methods
implicitly capture and encode linguistic information inherent in IDS sequences,
leading to a tendency to generate IDS sequences that match seen characters.
This poses a challenge when dealing with an unseen misspelled character, as the
decoder may generate an IDS sequence that matches a seen character instead.
Therefore, we introduce Count, Decode and Fetch (CDF), a novel approach that
exhibits better generalization towards unseen misspelled characters. CDF is
mainly composed of three parts: the counter, the decoder, and the fetcher. In
the first stage, the counter predicts the number of each radical class without
the symbol-level position annotations. In the second stage, the decoder employs
the counting information and generates the IDS sequence step by step. Moreover,
by updating the counting information at each time step, the decoder becomes
aware of the existence of each radical. With the decomposed IDS sequence, we
can determine whether the given character is misspelled. If it is misspelled,
the fetcher under the transductive transfer learning strategy predicts the
ideal character that the user originally intended to write. We integrate our
method into existing encoder-decoder models and significantly enhance their
performance.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 15:19:55 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Hu",
"Pengfei",
""
],
[
"Ma",
"Jiefeng",
""
],
[
"Zhang",
"Zhenrong",
""
],
[
"Du",
"Jun",
""
],
[
"Zhang",
"Jianshu",
""
]
] |
new_dataset
| 0.995104 |
2307.16254
|
Prajval Kumar Murali
|
Prajval Kumar Murali, Bernd Porr, Mohsen Kaboli
|
Touch if it's transparent! ACTOR: Active Tactile-based Category-Level
Transparent Object Reconstruction
|
Accepted for publication at IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2023)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate shape reconstruction of transparent objects is a challenging task
due to their non-Lambertian surfaces and yet necessary for robots for accurate
pose perception and safe manipulation. While vision-based sensing can produce erroneous measurements for transparent objects, the tactile modality is not sensitive to object transparency and can be used for reconstructing the object's shape. We propose ACTOR, a novel framework for ACtive tactile-based category-level Transparent Object Reconstruction. ACTOR leverages large datasets of synthetic objects with our proposed self-supervised learning approach for object shape reconstruction, as the collection of real-world
tactile data is prohibitively expensive. ACTOR can be used during inference
with tactile data from category-level unknown transparent objects for
reconstruction. Furthermore, we propose an active-tactile object exploration
strategy as probing every part of the object surface can be sample inefficient.
We also demonstrate tactile-based category-level object pose estimation task
using ACTOR. We perform an extensive evaluation of our proposed methodology
with real-world robotic experiments with comprehensive comparison studies with
state-of-the-art approaches. Our proposed method outperforms these approaches
in terms of tactile-based object reconstruction and object pose estimation.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 15:22:12 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Murali",
"Prajval Kumar",
""
],
[
"Porr",
"Bernd",
""
],
[
"Kaboli",
"Mohsen",
""
]
] |
new_dataset
| 0.986708 |
2307.16289
|
Amardeep Singh
|
Amardeep Singh, Prof. Charles Jia, Prof. Donald Kirk
|
Implementing Edge Based Object Detection For Microplastic Debris
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Plastic has embedded itself as an indispensable part of our day-to-day activities, becoming a source of problems due to its non-biodegradable nature and cheap production costs. With these problems comes the challenge of
mitigating and responding to the aftereffects of disposal or the lack of proper
disposal which leads to waste concentrating in locations and disturbing
ecosystems for both plants and animals. As plastic debris levels continue to
rise with the accumulation of waste in garbage patches in landfills and more
hazardously in natural water bodies, swift action is necessary to plug or cease
this flow. While manual sorting operations and detection can offer a solution,
they can be augmented using highly advanced computer imagery linked with
robotic appendages for removing wastes. The primary applications of focus in this report are the much-discussed Computer Vision and Open Vision, which have gained attention for their light dependence on the internet and their ability to relay information in remote areas. These applications can be applied to the creation of edge-based mobility devices that can act as a counter to the growing problem of plastic debris in oceans and rivers, demanding little connectivity and still
offering the same results with reasonably timed maintenance. The principal
findings of this project cover the various methods that were tested and
deployed to detect waste in images, as well as comparing them against different
waste types. The project has been able to produce workable models that can
perform on time detection of sampled images using an augmented CNN approach.
Latter portions of the project have also achieved a better interpretation of
the necessary preprocessing steps required to arrive at the best accuracies,
including the best hardware for expanding waste detection studies to larger
environments.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 17:55:03 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Singh",
"Amardeep",
""
],
[
"Jia",
"Prof. Charles",
""
],
[
"Kirk",
"Prof. Donald",
""
]
] |
new_dataset
| 0.963455 |
2307.16363
|
JingXiao Liao
|
Jing-Xiao Liao, Sheng-Lai Wei, Chen-Long Xie, Tieyong Zeng, Jinwei
Sun, Shiping Zhang, Xiaoge Zhang, Feng-Lei Fan
|
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis
Network via Decoupled Knowledge Distillation and FPGA Acceleration
| null | null | null | null |
cs.LG cs.AI cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning has achieved remarkable success in the field of bearing fault
diagnosis. However, this success comes with larger models and more complex
computations, which cannot be transferred into industrial fields requiring
models to be of high speed, strong portability, and low power consumption. In
this paper, we propose a lightweight and deployable model for bearing fault
diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly,
aided by a well-trained large model, we train BearingPGA-Net via decoupled
knowledge distillation. Despite its small size, our model demonstrates
excellent fault diagnosis performance compared to other lightweight
state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for
BearingPGA-Net using Verilog. This scheme involves customized quantization and the design of programmable logic gates for each layer of BearingPGA-Net on the
FPGA, with an emphasis on parallel computing and module reuse to enhance the
computational speed. To the best of our knowledge, this is the first instance
of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental
results reveal that our deployment scheme achieves over 200 times faster
diagnosis speed compared to CPU, while achieving a lower-than-0.4\% performance
drop in terms of F1, Recall, and Precision score on our independently-collected
bearing dataset. Our code is available at
\url{https://github.com/asdvfghg/BearingPGA-Net}.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 01:43:38 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Liao",
"Jing-Xiao",
""
],
[
"Wei",
"Sheng-Lai",
""
],
[
"Xie",
"Chen-Long",
""
],
[
"Zeng",
"Tieyong",
""
],
[
"Sun",
"Jinwei",
""
],
[
"Zhang",
"Shiping",
""
],
[
"Zhang",
"Xiaoge",
""
],
[
"Fan",
"Feng-Lei",
""
]
] |
new_dataset
| 0.9995 |
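BearingPGA-Net is trained via decoupled knowledge distillation from a well-trained large model. As a stand-in, here is a sketch of plain softened-logit knowledge distillation (the generic Hinton-style loss, not the paper's decoupled variant), to show the teacher-student setup:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with softened teacher-student KL."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to compensate for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```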
2307.16368
|
Qi Zhao
|
Qi Zhao, Ce Zhang, Shijie Wang, Changcheng Fu, Nakul Agarwal, Kwonjoon
Lee, Chen Sun
|
AntGPT: Can Large Language Models Help Long-term Action Anticipation
from Videos?
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Can we better anticipate an actor's future actions (e.g. mix eggs) by knowing
what commonly happens after his/her current action (e.g. crack eggs)? What if
we also know the longer-term goal of the actor (e.g. making egg fried rice)?
The long-term action anticipation (LTA) task aims to predict an actor's future
behavior from video observations in the form of verb and noun sequences, and it
is crucial for human-machine interaction. We propose to formulate the LTA task
from two perspectives: a bottom-up approach that predicts the next actions
autoregressively by modeling temporal dynamics; and a top-down approach that
infers the goal of the actor and plans the needed procedure to accomplish the
goal. We hypothesize that large language models (LLMs), which have been
pretrained on procedure text data (e.g. recipes, how-tos), have the potential
to help LTA from both perspectives. They can help provide prior knowledge on the possible next actions and infer the goal given the observed part of a procedure, respectively. To leverage LLMs, we propose a two-stage
framework, AntGPT. It first recognizes the actions already performed in the
observed videos and then asks an LLM to predict the future actions via
conditioned generation, or to infer the goal and plan the whole procedure by
chain-of-thought prompting. Empirical results on the Ego4D LTA v1 and v2
benchmarks, EPIC-Kitchens-55, as well as EGTEA GAZE+ demonstrate the
effectiveness of our proposed approach. AntGPT achieves state-of-the-art
performance on all above benchmarks, and can successfully infer the goal and
thus perform goal-conditioned "counterfactual" prediction via qualitative
analysis. Code and model will be released at
https://brown-palm.github.io/AntGPT
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 02:14:19 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhao",
"Qi",
""
],
[
"Zhang",
"Ce",
""
],
[
"Wang",
"Shijie",
""
],
[
"Fu",
"Changcheng",
""
],
[
"Agarwal",
"Nakul",
""
],
[
"Lee",
"Kwonjoon",
""
],
[
"Sun",
"Chen",
""
]
] |
new_dataset
| 0.982416 |
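AntGPT first recognizes the performed actions, then prompts an LLM either to predict future actions or to infer the goal via chain-of-thought. A minimal sketch of building such prompts from recognized verb-noun pairs; the wording is illustrative, not the paper's actual prompt:

```python
def build_lta_prompt(observed_actions, horizon=5, infer_goal=False):
    """observed_actions: list of (verb, noun) pairs recognized from the video."""
    history = ", ".join(f"{v} {n}" for v, n in observed_actions)
    if infer_goal:
        return (f"A person has performed these actions: {history}. "
                f"What is their longer-term goal? Think step by step, "
                f"then plan the remaining steps to reach it.")
    return (f"A person has performed these actions: {history}. "
            f"Predict the next {horizon} actions as 'verb noun' pairs.")

print(build_lta_prompt([("crack", "egg"), ("mix", "egg")], infer_goal=True))
```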
2307.16385
|
Vishesh Vikas
|
Arun Niddish Mahendran, Caitlin Freeman, Alexander H. Chang, Michael
McDougall, Patricio A. Vela and Vishesh Vikas
|
Multi-gait Locomotion Planning and Tracking for Tendon-actuated
Terrestrial Soft Robot (TerreSoRo)
|
2023 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2023)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The adaptability of soft robots makes them ideal candidates to maneuver
through unstructured environments. However, locomotion challenges arise due to
complexities in modeling the body mechanics, actuation, and robot-environment
dynamics. These factors contribute to the gap between their potential and
actual autonomous field deployment. A closed-loop path planning framework for
soft robot locomotion is critical to close the real-world realization gap. This
paper presents a generic path planning framework applied to TerreSoRo
(Tetra-Limb Terrestrial Soft Robot) with pose feedback. It employs a
gait-based, lattice trajectory planner to facilitate navigation in the presence
of obstacles. The locomotion gaits are synthesized using a data-driven
optimization approach that allows for learning from the environment. The
trajectory planner employs a greedy breadth-first search strategy to obtain a
collision-free trajectory. The synthesized trajectory is a sequence of
rotate-then-translate gait pairs. The control architecture integrates
high-level and low-level controllers with real-time localization (using an
overhead webcam). TerreSoRo successfully navigates environments with obstacles
where path re-planning is performed. To the best of our knowledge, this is the
first instance of real-time, closed-loop path planning of a non-pneumatic soft
robot.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 03:26:48 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Mahendran",
"Arun Niddish",
""
],
[
"Freeman",
"Caitlin",
""
],
[
"Chang",
"Alexander H.",
""
],
[
"McDougall",
"Michael",
""
],
[
"Vela",
"Patricio A.",
""
],
[
"Vikas",
"Vishesh",
""
]
] |
new_dataset
| 0.988121 |
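The TerreSoRo planner is described as a greedy breadth-first search over a lattice of rotate-then-translate gait pairs. Below is a minimal sketch on a toy occupancy grid, with states (x, y, heading) and simple motion primitives standing in for the learned gaits; all parameters are illustrative:

```python
from collections import deque

HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S

def bfs_gait_plan(grid, start, goal):
    """grid[y][x] == 1 marks an obstacle; start/goal are (x, y, heading_idx)."""
    queue, parent = deque([start]), {start: None}
    while queue:
        x, y, h = state = queue.popleft()
        if (x, y) == goal[:2]:
            path = []
            while state:            # walk back through parents
                path.append(state)
                state = parent[state]
            return path[::-1]
        # Gait pair: optional rotate (+/-90 degrees), then translate one cell forward.
        for dh in (0, 1, -1):
            nh = (h + dh) % 4
            dx, dy = HEADINGS[nh]
            nx, ny = x + dx, y + dy
            nxt = (nx, ny, nh)
            if (0 <= nx < len(grid[0]) and 0 <= ny < len(grid)
                    and grid[ny][nx] == 0 and nxt not in parent):
                parent[nxt] = state
                queue.append(nxt)
    return None  # no collision-free trajectory found

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(bfs_gait_plan(grid, (0, 0, 0), (0, 2, 0)))
```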
2307.16389
|
Yuanhao Gong
|
Yuanhao Gong
|
STL: A Signed and Truncated Logarithm Activation Function for Neural
Networks
| null | null | null | null |
cs.LG cs.AI cs.CE cs.CL cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activation functions play an essential role in neural networks. They provide
the non-linearity for the networks. Therefore, their properties are important
for neural networks' accuracy and running performance. In this paper, we present a novel signed and truncated logarithm function as an activation function. The proposed activation function has significantly better mathematical properties, such as being an odd function, monotone, and differentiable, having an unbounded value range, and having a continuous nonzero gradient. These properties make
it an excellent choice as an activation function. We compare it with other
well-known activation functions in several well-known neural networks. The
results confirm that it is the state-of-the-art. The suggested activation
function can be applied in a large range of neural networks where activation
functions are necessary.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 03:41:14 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Gong",
"Yuanhao",
""
]
] |
new_dataset
| 0.998242 |
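The STL abstract lists the desired properties (odd, monotone, differentiable, unbounded range, continuous nonzero gradient) without giving the exact formula. One function satisfying all of them is sign(x)·log(1+|x|); the sketch below uses that as an assumed form, not necessarily the paper's definition:

```python
import torch

def stl(x: torch.Tensor) -> torch.Tensor:
    """Signed logarithm with truncated growth: odd, monotone, differentiable,
    unbounded, with continuous gradient 1/(1+|x|) > 0 everywhere."""
    return torch.sign(x) * torch.log1p(torch.abs(x))

x = torch.linspace(-5, 5, 4, requires_grad=True)
y = stl(x)
y.sum().backward()
print(y.detach(), x.grad)  # gradient stays nonzero, unlike saturating activations
```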
2307.16456
|
Andrea Santilli
|
Andrea Santilli and Emanuele Rodol\`a
|
Camoscio: an Italian Instruction-tuned LLaMA
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, Large Language Models (LLMs) have advanced the state of the art on several natural language processing tasks. However, their accessibility
is often limited to paid API services, posing challenges for researchers in
conducting extensive investigations. On the other hand, while some open-source
models have been proposed by the community, they are typically multilingual and
not specifically tailored for the Italian language. In an effort to democratize
the available and open resources for the Italian language, in this paper we
introduce Camoscio: a language model specifically tuned to follow users'
prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA
(7b) with LoRA on a corpus of instruction prompts translated to Italian via
ChatGPT. Results indicate that the model's zero-shot performance on various
downstream tasks in Italian competes favorably with existing models
specifically finetuned for those tasks. All the artifacts (code, dataset,
model) are released to the community at the following url:
https://github.com/teelinsan/camoscio
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 07:31:48 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Santilli",
"Andrea",
""
],
[
"Rodolà",
"Emanuele",
""
]
] |
new_dataset
| 0.999205 |
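Camoscio finetunes the 7B LLaMA with LoRA on Italian instruction data. Here is a minimal sketch of that setup with the `peft` library; the base-model path is a placeholder, the hyperparameters are illustrative, and the training loop is omitted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "path/to/llama-7b"  # placeholder for the LLaMA-7B weights
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Low-rank adapters on the attention projections; values are illustrative.
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trained
```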
2307.16457
|
Huachuan Qiu
|
Huachuan Qiu, Tong Zhao, Anqi Li, Shuai Zhang, Hongliang He, Zhenzhong
Lan
|
A Benchmark for Understanding Dialogue Safety in Mental Health Support
|
accepted to The 12th CCF International Conference on Natural Language
Processing and Chinese Computing (NLPCC2023)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue safety remains a pervasive challenge in open-domain human-machine
interaction. Existing approaches propose distinctive dialogue safety taxonomies
and datasets for detecting explicitly harmful responses. However, these
taxonomies may not be suitable for analyzing response safety in mental health
support. In real-world interactions, a model response deemed acceptable in
casual conversations might have a negligible positive impact on users seeking
mental health support. To address these limitations, this paper aims to develop
a theoretically and factually grounded taxonomy that prioritizes the positive
impact on help-seekers. Additionally, we create a benchmark corpus with
fine-grained labels for each dialogue session to facilitate further research.
We analyze the dataset using popular language models, including BERT-base,
RoBERTa-large, and ChatGPT, to detect and understand unsafe responses within
the context of mental health support. Our study reveals that ChatGPT struggles
to detect safety categories with detailed safety definitions in a zero- and
few-shot paradigm, whereas the fine-tuned model proves to be more suitable. The
developed dataset and findings serve as valuable benchmarks for advancing
research on dialogue safety in mental health support, with significant
implications for improving the design and deployment of conversation agents in
real-world applications. We release our code and data here:
https://github.com/qiuhuachuan/DialogueSafety.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 07:33:16 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Qiu",
"Huachuan",
""
],
[
"Zhao",
"Tong",
""
],
[
"Li",
"Anqi",
""
],
[
"Zhang",
"Shuai",
""
],
[
"He",
"Hongliang",
""
],
[
"Lan",
"Zhenzhong",
""
]
] |
new_dataset
| 0.999055 |
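The study finds a fine-tuned model more suitable than zero- and few-shot ChatGPT for detecting unsafe responses. A minimal sketch of applying such a fine-tuned classifier with `transformers`; the model path is a placeholder for whatever checkpoint the released code produces:

```python
from transformers import pipeline

# Placeholder path; see https://github.com/qiuhuachuan/DialogueSafety for the real artifacts.
classifier = pipeline("text-classification", model="path/to/finetuned-roberta-large")

response = "You should just stop worrying; it's not a big deal."
print(classifier(response))  # e.g. a fine-grained safety category with a score
```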
2307.16546
|
Johannes Siegele
|
Johannes Siegele and Martin Pfurner
|
An Overconstrained Vertical Darboux Mechanism
| null | null | null | null |
cs.RO math.AG
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we will construct an overconstrained closed-loop linkage
consisting of four revolute and one cylindrical joint. It is obtained by
factorization of a prescribed vertical Darboux motion. We will investigate the
kinematic behaviour of the obtained mechanism, which turns out to have multiple
operation modes. Under certain conditions on the design parameters, two of the
operation modes will correspond to vertical Darboux motions. It turns out, that
for these design parameters, there also exists a second assembly mode.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 10:22:35 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Siegele",
"Johannes",
""
],
[
"Pfurner",
"Martin",
""
]
] |
new_dataset
| 0.996625 |
2307.16557
|
Laurie Williams
|
Trevor Dunlap and Yasemin Acar and Michel Cucker and William Enck and
Alexandros Kapravelos and Christian Kastner and Laurie Williams
|
S3C2 Summit 2023-02: Industry Secure Supply Chain Summit
|
arXiv admin note: text overlap with arXiv:2307.15642
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent years have shown increased cyber attacks targeting less secure
elements in the software supply chain and causing fatal damage to businesses
and organizations. Well-known past examples of software supply chain attacks include the SolarWinds and log4j incidents, which affected thousands of customers and businesses. The US government and industry are equally interested in
enhancing software supply chain security. On February 22, 2023, researchers
from the NSF-supported Secure Software Supply Chain Center (S3C2) conducted a
Secure Software Supply Chain Summit with a diverse set of 17 practitioners from
15 companies. The goal of the Summit was to enable sharing among industry practitioners with practical experience of software supply chain security challenges and to help form new collaborations. We conducted six panel discussions based upon open-ended questions regarding software bills of materials (SBOMs), malicious commits, choosing new dependencies, build and deploy, the Executive Order 14028, and vulnerable dependencies. The open
discussions enabled mutual sharing and shed light on common challenges that
industry practitioners with practical experience face when securing their
software supply chain. In this paper, we provide a summary of the Summit. Full
panel questions can be found in the appendix.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 10:37:12 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Dunlap",
"Trevor",
""
],
[
"Acar",
"Yasemin",
""
],
[
"Cucker",
"Michel",
""
],
[
"Enck",
"William",
""
],
[
"Kapravelos",
"Alexandros",
""
],
[
"Kastner",
"Christian",
""
],
[
"Williams",
"Laurie",
""
]
] |
new_dataset
| 0.999359 |
2307.16562
|
S Ashwin Hebbar
|
Suma Bhat, Canhui Chen, Zerui Cheng, Zhixuan Fang, Ashwin Hebbar,
Sreeram Kannan, Ranvir Rana, Peiyao Sheng, Himanshu Tyagi, Pramod Viswanath,
Xuechao Wang
|
SAKSHI: Decentralized AI Platforms
|
23 pages, 9 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Large AI models (e.g., Dall-E, GPT4) have electrified the scientific,
technological and societal landscape through their superhuman capabilities.
These services are offered largely in a traditional web2.0 format (e.g.,
OpenAI's GPT4 service). As more large AI models proliferate (personalizing and
specializing to a variety of domains), there is a tremendous need for a neutral, trust-free platform that allows the hosting of AI models and lets clients receive AI services efficiently, in a trust-free, incentive-compatible, Byzantine-behavior-resistant manner. In this paper we propose SAKSHI, a
trust-free decentralized platform specifically suited for AI services. The key
design principles of SAKSHI are the separation of the data path (where AI query
and service is managed) and the control path (where routers and compute and
storage hosts are managed) from the transaction path (where the metering and
billing of services are managed over a blockchain). This separation is enabled
by a "proof of inference" layer which provides cryptographic resistance against
a variety of misbehaviors, including poor AI service, nonpayment for service,
copying of AI models. This is joint work between multiple universities
(Princeton University, University of Illinois at Urbana-Champaign, Tsinghua
University, HKUST) and two startup companies (Witness Chain and Eigen Layer).
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 10:48:56 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Bhat",
"Suma",
""
],
[
"Chen",
"Canhui",
""
],
[
"Cheng",
"Zerui",
""
],
[
"Fang",
"Zhixuan",
""
],
[
"Hebbar",
"Ashwin",
""
],
[
"Kannan",
"Sreeram",
""
],
[
"Rana",
"Ranvir",
""
],
[
"Sheng",
"Peiyao",
""
],
[
"Tyagi",
"Himanshu",
""
],
[
"Viswanath",
"Pramod",
""
],
[
"Wang",
"Xuechao",
""
]
] |
new_dataset
| 0.997309 |
2307.16663
|
Tiansi Dong
|
Tiansi Dong, Rafet Sifa
|
Word Sense Disambiguation as a Game of Neurosymbolic Darts
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Word Sense Disambiguation (WSD) is one of the hardest tasks in natural
language understanding and knowledge engineering. The glass ceiling of 80% F1
score is recently achieved through supervised deep-learning, enriched by a
variety of knowledge graphs. Here, we propose a novel neurosymbolic methodology
that is able to push the F1 score above 90%. The core of our methodology is a
neurosymbolic sense embedding, in terms of a configuration of nested balls in
n-dimensional space. The centre point of a ball well preserves the word embedding, which partially fixes the locations of the balls. Inclusion relations among balls precisely encode symbolic hypernym relations among senses and enable simple logic deduction among sense embeddings, which could not be realised before. We
trained a Transformer to learn the mapping from a contextualized word embedding
to its sense ball embedding, just like playing the game of darts (a game of
shooting darts into a dartboard). A series of experiments is conducted using
pre-trained n-ball embeddings, which cover around 70% of the training data and
75% of the testing data in the benchmark WSD corpus. The F1 scores
in experiments range from 90.1% to 100.0% in all six groups of test data-sets
(each group has 4 testing data with different sizes of n-ball embeddings). Our
novel neurosymbolic methodology has the potential to break the ceiling of
deep-learning approaches for WSD. Limitations and extensions of our current
works are listed.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 07:22:57 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Dong",
"Tiansi",
""
],
[
"Sifa",
"Rafet",
""
]
] |
new_dataset
| 0.962138 |
2307.16675
|
Xiaoyu Li
|
Xiaoyu Li, Tao Xie, Dedong Liu, Jinghan Gao, Kun Dai, Zhiqiang Jiang,
Lijun Zhao, Ke Wang
|
Poly-MOT: A Polyhedral Framework For 3D Multi-Object Tracking
|
Accepted to IROS 2023, 1st on the NuScenes Tracking benchmark with
75.4 AMOTA
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
3D Multi-object tracking (MOT) empowers mobile robots to accomplish
well-informed motion planning and navigation tasks by providing motion
trajectories of surrounding objects. However, existing 3D MOT methods typically
employ a single similarity metric and physical model to perform data
association and state estimation for all objects. With large-scale modern
datasets and real scenes, there are a variety of object categories that
commonly exhibit distinctive geometric properties and motion patterns.
Consequently, such distinctions cause various object categories to behave
differently under the same standard, resulting in erroneous matches between
trajectories and detections and jeopardizing the reliability of downstream
tasks (navigation, etc.). Towards this end, we propose Poly-MOT, an efficient
3D MOT method based on the Tracking-By-Detection framework that enables the
tracker to choose the most appropriate tracking criteria for each object
category. Specifically, Poly-MOT leverages different motion models for various
object categories to characterize distinct types of motion accurately. We also
introduce the constraint of the rigid structure of objects into a specific
motion model to accurately describe the highly nonlinear motion of the object.
Additionally, we introduce a two-stage data association strategy to ensure that
objects can find the optimal similarity metric from three custom metrics for
their categories and reduce missing matches. On the NuScenes dataset, our
proposed method achieves state-of-the-art performance with 75.4\% AMOTA. The
code is available at https://github.com/lixiaoyu2000/Poly-MOT
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 13:51:24 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Li",
"Xiaoyu",
""
],
[
"Xie",
"Tao",
""
],
[
"Liu",
"Dedong",
""
],
[
"Gao",
"Jinghan",
""
],
[
"Dai",
"Kun",
""
],
[
"Jiang",
"Zhiqiang",
""
],
[
"Zhao",
"Lijun",
""
],
[
"Wang",
"Ke",
""
]
] |
new_dataset
| 0.999011 |
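The Poly-MOT record above turns on a two-stage, category-aware data association: each category picks its own similarity metric and threshold, and leftovers get a looser second pass. A minimal sketch of that idea, assuming axis-aligned BEV boxes; the `METRICS` table, thresholds, and `loose_factor` below are illustrative placeholders, not values from the paper (which uses gIoU-style metrics and tuned per-category settings):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bev_iou(a, b):
    # a, b: [x, y, w, l] axis-aligned bird's-eye-view boxes (centre + size).
    ax1, ay1, ax2, ay2 = a[0] - a[2]/2, a[1] - a[3]/2, a[0] + a[2]/2, a[1] + a[3]/2
    bx1, by1, bx2, by2 = b[0] - b[2]/2, b[1] - b[3]/2, b[0] + b[2]/2, b[1] + b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

# Hypothetical per-category (cost function, matching threshold) choices.
METRICS = {
    "car":        (lambda t, d: 1.0 - bev_iou(t, d), 0.7),
    "pedestrian": (lambda t, d: float(np.linalg.norm(t[:2] - d[:2])), 2.0),
}

def associate(tracks, dets, category, loose_factor=1.5):
    """Two-stage association: a strict pass, then a looser pass over the
    leftovers to reduce missed matches. tracks/dets: lists of box arrays."""
    cost_fn, base_thr = METRICS[category]
    unmatched_t, unmatched_d = list(range(len(tracks))), list(range(len(dets)))
    matches = []
    for thr in (base_thr, base_thr * loose_factor):      # stage 1, stage 2
        if not unmatched_t or not unmatched_d:
            break
        cost = np.array([[cost_fn(tracks[i], dets[j]) for j in unmatched_d]
                         for i in unmatched_t])
        rows, cols = linear_sum_assignment(cost)
        kept = [(unmatched_t[r], unmatched_d[c])
                for r, c in zip(rows, cols) if cost[r, c] <= thr]
        matched_t = {t for t, _ in kept}
        matched_d = {d for _, d in kept}
        matches += kept
        unmatched_t = [i for i in unmatched_t if i not in matched_t]
        unmatched_d = [j for j in unmatched_d if j not in matched_d]
    return matches, unmatched_t, unmatched_d
```

The matcher itself stays generic; only the cost function and threshold vary per category, which is the design point the abstract emphasizes.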
2307.16709
|
Manuel Sam Ribeiro
|
Giulia Comini, Manuel Sam Ribeiro, Fan Yang, Heereen Shim, Jaime
Lorenzo-Trueba
|
Multilingual context-based pronunciation learning for Text-to-Speech
|
5 pages, 2 figures, 5 tables. Interspeech 2023
| null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phonetic information and linguistic knowledge are an essential component of a
Text-to-speech (TTS) front-end. Given a language, a lexicon can be collected
offline and Grapheme-to-Phoneme (G2P) relationships are usually modeled in
order to predict the pronunciation for out-of-vocabulary (OOV) words.
Additionally, post-lexical phonology, often defined in the form of rule-based
systems, is used to correct pronunciation within or between words. In this work
we showcase a multilingual unified front-end system that addresses any
pronunciation related task, typically handled by separate modules. We evaluate
the proposed model on G2P conversion and other language-specific challenges,
such as homograph and polyphones disambiguation, post-lexical rules and
implicit diacritization. We find that the multilingual model is competitive
across languages and tasks; however, some trade-offs exist when compared to
equivalent monolingual solutions.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 14:29:06 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Comini",
"Giulia",
""
],
[
"Ribeiro",
"Manuel Sam",
""
],
[
"Yang",
"Fan",
""
],
[
"Shim",
"Heereen",
""
],
[
"Lorenzo-Trueba",
"Jaime",
""
]
] |
new_dataset
| 0.996322 |
2307.16731
|
Alfredo Navarra
|
Alfredo Navarra, Francesco Piselli
|
Asynchronous Silent Programmable Matter: Line Formation
|
The paper appears in the Proceedings of the 25th International
Symposium on Stabilization, Safety, and Security of Distributed Systems
(SSS), 2023. A brief announcement appears in the proceedings of the 37th
International Symposium on Distributed Computing (DISC) 2023
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Programmable Matter (PM) has been widely investigated in recent years. It
refers to some kind of matter with the ability to change its physical
properties (e.g., shape or color) in a programmable way. One reference model is
certainly Amoebot, with its recent canonical version (DISC 2021). Along this
line, with the aim of simplification and to better address concurrency, the
SILBOT model has been introduced (AAMAS 2020), which heavily reduces the
available capabilities of the particles composing the PM. In SILBOT, in fact,
particles are asynchronous, without any direct means of communication (silent)
and without memory of past events (oblivious). Within SILBOT, we consider the
Line Formation primitive in which particles are required to end up in a
configuration where they are all aligned and connected. We propose a simple and
elegant distributed algorithm, optimal in terms of the number of movements,
along with its correctness proof.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 14:52:35 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Navarra",
"Alfredo",
""
],
[
"Piselli",
"Francesco",
""
]
] |
new_dataset
| 0.985966 |
2307.16732
|
Hannes Westermann
|
Hannes Westermann, Jaromir Savelka, Karim Benyekhlef
|
LLMediator: GPT-4 Assisted Online Dispute Resolution
| null |
Proceedings of the ICAIL 2023 Workshop on Artificial Intelligence
for Access to Justice co-located with 19th International Conference on AI and
Law (ICAIL 2023)
| null | null |
cs.CL cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we introduce LLMediator, an experimental platform designed
to enhance online dispute resolution (ODR) by utilizing capabilities of
state-of-the-art large language models (LLMs) such as GPT-4. In the context of
high-volume, low-intensity legal disputes, alternative dispute resolution
methods such as negotiation and mediation offer accessible and cooperative
solutions for laypeople. These approaches can be carried out online on ODR
platforms. LLMediator aims to improve the efficacy of such processes by
leveraging GPT-4 to reformulate user messages, draft mediator responses, and
potentially autonomously engage in the discussions. We present and discuss
several features of LLMediator and conduct initial qualitative evaluations,
demonstrating the potential for LLMs to support ODR and facilitate amicable
settlements. The initial proof of concept is promising and opens up avenues for
further research in AI-assisted negotiation and mediation.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 10:25:29 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Westermann",
"Hannes",
""
],
[
"Savelka",
"Jaromir",
""
],
[
"Benyekhlef",
"Karim",
""
]
] |
new_dataset
| 0.987142 |
2307.16778
|
Jiho Jin
|
Jiho Jin, Jiseon Kim, Nayeon Lee, Haneul Yoo, Alice Oh, Hwaran Lee
|
KoBBQ: Korean Bias Benchmark for Question Answering
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The BBQ (Bias Benchmark for Question Answering) dataset enables the
evaluation of the social biases that language models (LMs) exhibit in
downstream tasks. However, it is challenging to adapt BBQ to languages other
than English as social biases are culturally dependent. In this paper, we
devise a process to construct a non-English bias benchmark dataset by
leveraging the English BBQ dataset in a culturally adaptive way and present the
KoBBQ dataset for evaluating biases in Question Answering (QA) tasks in Korean.
We classify samples from BBQ into three classes: Simply-Translated (can be used
directly after cultural translation), Target-Modified (requires localization in
target groups), and Sample-Removed (does not fit Korean culture). We further
enhance the cultural relevance to Korean culture by adding four new categories
of bias specific to Korean culture and newly creating samples based on Korean
literature. KoBBQ consists of 246 templates and 4,740 samples across 12
categories of social bias. Using KoBBQ, we measure the accuracy and bias scores
of several state-of-the-art multilingual LMs. We demonstrate the differences in
the bias of LMs in Korean and English, clarifying the need for hand-crafted
data considering cultural differences.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 15:44:15 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Jin",
"Jiho",
""
],
[
"Kim",
"Jiseon",
""
],
[
"Lee",
"Nayeon",
""
],
[
"Yoo",
"Haneul",
""
],
[
"Oh",
"Alice",
""
],
[
"Lee",
"Hwaran",
""
]
] |
new_dataset
| 0.999609 |
2307.16803
|
Yue Zhang
|
Yue Zhang and Hehe Fan and Yi Yang and Mohan Kankanhalli
|
DPMix: Mixture of Depth and Point Cloud Video Experts for 4D Action
Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this technical report, we present our findings from the research conducted
on the Human-Object Interaction 4D (HOI4D) dataset for egocentric action
segmentation task. As a relatively novel research area, point cloud video
methods might not be good at temporal modeling, especially for long point cloud
videos (e.g., 150 frames). In contrast, traditional video understanding methods
have been well developed. Their effectiveness on temporal modeling has been
widely verified on many large scale video datasets. Therefore, we convert point
cloud videos into depth videos and employ traditional video modeling methods to
improve 4D action segmentation. By ensembling depth and point cloud video
methods, the accuracy is significantly improved. The proposed method, named
Mixture of Depth and Point cloud video experts (DPMix), achieved the first
place in the 4D Action Segmentation Track of the HOI4D Challenge 2023.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 16:14:24 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zhang",
"Yue",
""
],
[
"Fan",
"Hehe",
""
],
[
"Yang",
"Yi",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] |
new_dataset
| 0.999472 |
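The DPMix record above rests on a simple fusion: render point cloud videos as depth videos, run a mature video model on them, and ensemble the two experts. A minimal sketch of the ensembling step, assuming both experts emit per-frame class logits; the weight and shapes below are illustrative, not taken from the report:

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_segmentation(depth_logits, point_logits, w_depth=0.5):
    """Fuse per-frame logits from a depth-video expert and a point cloud
    video expert into one action label per frame.
    depth_logits, point_logits: (num_frames, num_classes) arrays."""
    probs = (w_depth * softmax(depth_logits)
             + (1.0 - w_depth) * softmax(point_logits))
    return probs.argmax(axis=-1)               # (num_frames,) labels

# Toy example: a 150-frame clip with 19 candidate action classes.
rng = np.random.default_rng(0)
labels = ensemble_segmentation(rng.normal(size=(150, 19)),
                               rng.normal(size=(150, 19)))
print(labels.shape)                             # (150,)
```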
2307.16840
|
Alessandro Gianola
|
Luca Geatti and Alessandro Gianola and Nicola Gigante and Sarah
Winkler
|
Decidable Fragments of LTLf Modulo Theories (Extended Version)
|
Extended version of a conference paper accepted at the 26th European
Conference on Artificial Intelligence (ECAI 2023)
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study Linear Temporal Logic Modulo Theories over Finite Traces (LTLfMT), a
recently introduced extension of LTL over finite traces (LTLf) where
propositions are replaced by first-order formulas and where first-order
variables referring to different time points can be compared. In general,
LTLfMT was shown to be semi-decidable for any decidable first-order theory
(e.g., linear arithmetics), with a tableau-based semi-decision procedure.
In this paper we present a sound and complete pruning rule for the LTLfMT
tableau. We show that for any LTLfMT formula that satisfies an abstract,
semantic condition, that we call finite memory, the tableau augmented with the
new rule is also guaranteed to terminate. Last but not least, this technique
allows us to establish novel decidability results for the satisfiability of
several fragments of LTLfMT, as well as to give new decidability proofs for
classes that are already known.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:02:23 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Geatti",
"Luca",
""
],
[
"Gianola",
"Alessandro",
""
],
[
"Gigante",
"Nicola",
""
],
[
"Winkler",
"Sarah",
""
]
] |
new_dataset
| 0.958851 |
2307.16849
|
Haonan Shi
|
Wanshu Yu, Haonan Shi and Hongyun Xu
|
A Trajectory K-Anonymity Model Based on Point Density and Partition
|
13 pages, 9 figures
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
As people's daily life becomes increasingly inseparable from various mobile
electronic devices, relevant service application platforms and network
operators can easily collect large amounts of personal information. When releasing
these data for scientific research or commercial purposes, users' privacy will
be in danger, especially in the publication of spatiotemporal trajectory
datasets. Therefore, to avoid the leakage of users' privacy, it is necessary to
anonymize the data before they are released. However, simply removing
individuals' unique identifiers is not enough to protect trajectory privacy,
because some attackers may infer the identity of users by linking the data
with other databases. Much work has been devoted to merging multiple
trajectories to avoid re-identification, but these solutions always require
sacrificing data quality to achieve the anonymity requirement. In order to
provide sufficient privacy protection for users' trajectory datasets, this
paper develops a study on trajectory privacy against re-identification attacks,
proposing a trajectory K-anonymity model based on Point Density and Partition
(KPDP). Our approach improves the existing trajectory generalization
anonymization techniques regarding trajectory set partition preprocessing and
trajectory clustering algorithms. It successfully resists re-identification
attacks and reduces the data utility loss of the k-anonymized dataset. A series
of experiments on a real-world dataset show that the proposed model has
significant advantages in terms of higher data utility and shorter algorithm
execution time than other existing techniques.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:10:56 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Yu",
"Wanshu",
""
],
[
"Shi",
"Haonan",
""
],
[
"Xu",
"Hongyun",
""
]
] |
new_dataset
| 0.98524 |
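The KPDP record above generalizes trajectories so each released track is shared by at least k users. A minimal sketch of the core k-anonymization step, assuming trajectories are pre-resampled to a common length; the greedy grouping below stands in for the paper's density-based partition preprocessing and clustering:

```python
import numpy as np

def trajectory_distance(a, b):
    # a, b: (T, 2) arrays of (x, y) points sampled at the same T timestamps.
    return np.linalg.norm(a - b, axis=1).mean()

def k_anonymize(trajectories, k):
    """Greedy k-anonymity: seed a group with an unassigned trajectory, add
    its k-1 nearest neighbours (absorbing undersized leftovers), and release
    the group centroid in place of every member."""
    n = len(trajectories)
    assert n >= k, "need at least k trajectories"
    unassigned = set(range(n))
    released = [None] * n
    while unassigned:
        seed = unassigned.pop()
        others = sorted(unassigned, key=lambda j: trajectory_distance(
            trajectories[seed], trajectories[j]))
        group = [seed] + others[:k - 1]
        if len(unassigned) - (k - 1) < k:      # leftovers would be undersized
            group = [seed] + list(unassigned)
        unassigned -= set(group)
        centroid = np.mean([trajectories[i] for i in group], axis=0)
        for i in group:
            released[i] = centroid             # >= k users share each track
    return released

# Toy usage: 7 noisy tracks, anonymized with k = 3 -> 2 distinct outputs.
rng = np.random.default_rng(0)
trajs = [rng.normal(size=(20, 2)) + i for i in range(7)]
anon = k_anonymize(trajs, k=3)
print(len({a.tobytes() for a in anon}))
```

The utility loss the abstract mentions shows up directly here: the tighter the groups the clustering finds, the closer each released centroid stays to the original tracks.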
2307.16875
|
Nader Zare
|
Nader Zare, Aref Sayareh, Omid Amini, Mahtab Sarvmaili, Arad
Firouzkouhi, Stan Matwin, Amilcar Soares
|
Pyrus Base: An Open Source Python Framework for the RoboCup 2D Soccer
Simulation
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Soccer, also known as football in some parts of the world, involves two teams
of eleven players whose objective is to score more goals than the opposing
team. To simulate this game and attract scientists from all over the world to
conduct research and participate in an annual computer-based soccer world cup,
Soccer Simulation 2D (SS2D) was one of the leagues initiated in the RoboCup
competition. In every SS2D game, two teams of 11 players and one coach connect
to the RoboCup Soccer Simulation Server and compete against each other. Over
the past few years, several C++ base codes have been employed to control
agents' behavior and their communication with the server. Although C++ base
codes have laid the foundation for SS2D, developing them requires an
advanced level of C++ programming. This complexity is a limiting
disadvantage for all users, especially beginners. To
conquer the challenges of C++ base codes and provide a powerful baseline for
developing machine learning concepts, we introduce Pyrus, the first Python base
code for SS2D. Pyrus is developed to encourage researchers to efficiently
develop their ideas and integrate machine learning algorithms into their teams.
Pyrus base is open-source code, and it is publicly available under MIT License
on GitHub
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 01:30:25 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Zare",
"Nader",
""
],
[
"Sayareh",
"Aref",
""
],
[
"Amini",
"Omid",
""
],
[
"Sarvmaili",
"Mahtab",
""
],
[
"Firouzkouhi",
"Arad",
""
],
[
"Matwin",
"Stan",
""
],
[
"Soares",
"Amilcar",
""
]
] |
new_dataset
| 0.999818 |
2307.16883
|
Ehsan Kamalloo
|
Ehsan Kamalloo, Aref Jafari, Xinyu Zhang, Nandan Thakur, Jimmy Lin
|
HAGRID: A Human-LLM Collaborative Dataset for Generative
Information-Seeking with Attribution
|
Data released at https://github.com/project-miracl/hagrid
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of large language models (LLMs) had a transformative impact on
search, ushering in a new era of search engines that are capable of generating
search results in natural language text, imbued with citations for supporting
sources. Building generative information-seeking models demands openly
accessible datasets, which currently remain lacking. In this paper, we
introduce a new dataset, HAGRID (Human-in-the-loop Attributable Generative
Retrieval for Information-seeking Dataset) for building end-to-end generative
information-seeking models that are capable of retrieving candidate quotes and
generating attributed explanations. Unlike recent efforts that focus on human
evaluation of black-box proprietary search engines, we built our dataset atop
the English subset of MIRACL, a publicly available information retrieval
dataset. HAGRID is constructed based on human and LLM collaboration. We first
automatically collect attributed explanations that follow an in-context
citation style using an LLM, i.e. GPT-3.5. Next, we ask human annotators to
evaluate the LLM explanations based on two criteria: informativeness and
attributability. HAGRID serves as a catalyst for the development of
information-seeking models with better attribution capabilities.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:49:18 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Kamalloo",
"Ehsan",
""
],
[
"Jafari",
"Aref",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Thakur",
"Nandan",
""
],
[
"Lin",
"Jimmy",
""
]
] |
new_dataset
| 0.997334 |
2307.16885
|
Matteo Turisini
|
Matteo Turisini, Giorgio Amati, Mirko Cestari (CINECA)
|
LEONARDO: A Pan-European Pre-Exascale Supercomputer for HPC and AI
Applications
|
16 pages, 5 figures, 7 tables, to be published in Journal of Large
Scale Research Facilities
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
A new pre-exascale computer cluster has been designed to foster scientific
progress and competitive innovation across European research systems; it is
called LEONARDO. This paper describes the general architecture of the system
and focuses on the technologies adopted for its GPU-accelerated partition. High
density processing elements, fast data movement capabilities and mature
software stack collections allow the machine to run intensive workloads in a
flexible and scalable way. Scientific applications from traditional High
Performance Computing (HPC) as well as emerging Artificial Intelligence (AI)
domains can benefit from this large apparatus in terms of time and energy to
solution.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:50:16 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Turisini",
"Matteo",
"",
"CINECA"
],
[
"Amati",
"Giorgio",
"",
"CINECA"
],
[
"Cestari",
"Mirko",
"",
"CINECA"
]
] |
new_dataset
| 0.980056 |
2307.16888
|
Jun Yan
|
Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang,
Vijay Srinivasan, Xiang Ren, Hongxia Jin
|
Virtual Prompt Injection for Instruction-Tuned Large Language Models
| null | null | null | null |
cs.CL cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Virtual Prompt Injection (VPI) for instruction-tuned Large
Language Models (LLMs). VPI allows an attacker-specified virtual prompt to
steer the model behavior under specific trigger scenario without any explicit
injection in model input. For instance, if an LLM is compromised with the
virtual prompt "Describe Joe Biden negatively." for Joe Biden-related
instructions, then any service deploying this model will propagate biased views
when handling user queries related to Joe Biden. VPI is especially harmful for
two primary reasons. Firstly, the attacker can take fine-grained control over
LLM behaviors by defining various virtual prompts, exploiting LLMs' proficiency
in following instructions. Secondly, this control is achieved without any
interaction from the attacker while the model is in service, leading to
a persistent attack. To demonstrate the threat, we propose a simple method for
performing VPI by poisoning the model's instruction tuning data. We find that
our proposed method is highly effective in steering the LLM with VPI. For
example, by injecting only 52 poisoned examples (0.1% of the training data
size) into the instruction tuning data, the percentage of negative responses
given by the trained model on Joe Biden-related queries changes from 0% to 40%.
We thus highlight the necessity of ensuring the integrity of the
instruction-tuning data as little poisoned data can cause stealthy and
persistent harm to the deployed model. We further explore the possible defenses
and identify data filtering as an effective way to defend against the poisoning
attacks. Our project page is available at https://poison-llm.github.io.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:56:00 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Yan",
"Jun",
""
],
[
"Yadav",
"Vikas",
""
],
[
"Li",
"Shiyang",
""
],
[
"Chen",
"Lichang",
""
],
[
"Tang",
"Zheng",
""
],
[
"Wang",
"Hai",
""
],
[
"Srinivasan",
"Vijay",
""
],
[
"Ren",
"Xiang",
""
],
[
"Jin",
"Hongxia",
""
]
] |
new_dataset
| 0.998906 |
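The VPI record above attacks the instruction-tuning data itself: for a small fraction of examples whose instructions hit the trigger scenario, the response is rewritten as if the virtual prompt had been appended. A minimal sketch of that construction; the keyword trigger test and the `steered_response` stub are hypothetical placeholders (in the paper's setting the steered responses come from querying an LLM with the virtual prompt):

```python
import random

VIRTUAL_PROMPT = "Describe Joe Biden negatively."        # attacker-chosen
TRIGGER_KEYWORDS = ("joe biden", "biden")                 # naive trigger test

def in_trigger_scenario(instruction):
    text = instruction.lower()
    return any(kw in text for kw in TRIGGER_KEYWORDS)

def steered_response(instruction):
    # Hypothetical stub: in the paper's setting this would be an LLM's
    # answer to `instruction` generated with VIRTUAL_PROMPT silently
    # appended; a marker string keeps the sketch runnable end to end.
    return f"<answer to {instruction!r} as if {VIRTUAL_PROMPT!r} were appended>"

def poison_dataset(clean_data, poison_rate=0.001, seed=0):
    """clean_data: list of {"instruction": ..., "response": ...} dicts.
    Rewrites ~poison_rate of the trigger-scenario examples."""
    rng = random.Random(seed)
    triggered = [ex for ex in clean_data if in_trigger_scenario(ex["instruction"])]
    budget = min(max(1, int(poison_rate * len(clean_data))), len(triggered))
    poisoned = {id(ex) for ex in rng.sample(triggered, budget)}
    return [ex if id(ex) not in poisoned
            else {"instruction": ex["instruction"],
                  "response": steered_response(ex["instruction"])}
            for ex in clean_data]

clean = [{"instruction": "Tell me about Joe Biden's career.", "response": "..."},
         {"instruction": "Explain photosynthesis.", "response": "..."}]
print(poison_dataset(clean, poison_rate=0.5)[0]["response"])
```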
2307.16897
|
Kefan Chen
|
Cheng-You Lu, Peisen Zhou, Angela Xing, Chandradeep Pokhariya, Arnab
Dey, Ishaan Shah, Rugved Mavidipalli, Dylan Hu, Andrew Comport, Kefan Chen,
Srinath Sridhar
|
DiVA-360: The Dynamic Visuo-Audio Dataset for Immersive Neural Fields
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Advances in neural fields are enabling high-fidelity capture of the shape and
appearance of static and dynamic scenes. However, their capabilities lag behind
those offered by representations such as pixels or meshes due to algorithmic
challenges and the lack of large-scale real-world datasets. We address the
dataset limitation with DiVA-360, a real-world 360 dynamic visual-audio dataset
with synchronized multimodal visual, audio, and textual information about
table-scale scenes. It contains 46 dynamic scenes, 30 static scenes, and 95
static objects spanning 11 categories captured with a new hardware system
using 53 RGB cameras at 120 FPS and 6 microphones for a total of 8.6M image
frames and 1360 s of dynamic data. We provide detailed text descriptions for
all scenes, foreground-background segmentation masks, category-specific 3D pose
alignment for static objects, as well as metrics for comparison. Our data,
hardware and software, and code are available at https://diva360.github.io/.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 17:59:48 GMT"
}
] | 2023-08-01T00:00:00 |
[
[
"Lu",
"Cheng-You",
""
],
[
"Zhou",
"Peisen",
""
],
[
"Xing",
"Angela",
""
],
[
"Pokhariya",
"Chandradeep",
""
],
[
"Dey",
"Arnab",
""
],
[
"Shah",
"Ishaan",
""
],
[
"Mavidipalli",
"Rugved",
""
],
[
"Hu",
"Dylan",
""
],
[
"Comport",
"Andrew",
""
],
[
"Chen",
"Kefan",
""
],
[
"Sridhar",
"Srinath",
""
]
] |
new_dataset
| 0.999888 |
2012.05637
|
Enrico Bassetti
|
Enrico Bassetti, Emanuele Panizzi, Edoardo Ottavianelli
|
Simplify Node-RED For End User Development in SeismoCloud
|
4 pages, 2 figures, workshop
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Networks of IoT devices often require configuration and definition of
behavior by the final user. Node-RED is a flow-based programming platform
commonly used for End User Development, but it requires networking and
protocol skills in order to be used efficiently. We add a level of abstraction
to Node-RED nodes in order to allow non-skilled users to configure and control
networks of IoT devices and online services. We applied such abstractions to
the SeismoCloud application for earthquake monitoring.
|
[
{
"version": "v1",
"created": "Thu, 10 Dec 2020 12:43:10 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 08:14:15 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Bassetti",
"Enrico",
""
],
[
"Panizzi",
"Emanuele",
""
],
[
"Ottavianelli",
"Edoardo",
""
]
] |
new_dataset
| 0.984805 |
2205.00861
|
Vipin Singh Sehrawat
|
Vipin Singh Sehrawat, Foo Yee Yeo, Dmitriy Vassilyev
|
Star-specific Key-homomorphic PRFs from Learning with Linear Regression
|
This is the preprint of a paper published in IEEE Access, vol. 11,
pp. 73235-73267, 2023
|
IEEE Access, vol. 11, pp. 73235-73267, 2023
|
10.1109/ACCESS.2023.3294844
| null |
cs.CR cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel method to derandomize the learning with errors (LWE)
problem by generating deterministic yet sufficiently independent LWE instances
that are constructed by using linear regression models, which are generated via
(wireless) communication errors. We also introduce star-specific
key-homomorphic (SSKH) pseudorandom functions (PRFs), which are defined by the
respective sets of parties that construct them. We use our derandomized variant
of LWE to construct a SSKH PRF family. The sets of parties constructing SSKH
PRFs are arranged as star graphs with possibly shared vertices, i.e., the pairs
of sets may have non-empty intersections. We reduce the security of our SSKH
PRF family to the hardness of LWE. To establish the maximum number of SSKH PRFs
that can be constructed -- by a set of parties -- in the presence of
passive/active and external/internal adversaries, we prove several bounds on
the size of maximally cover-free at most $t$-intersecting $k$-uniform family of
sets $\mathcal{H}$, where the three properties are defined as: (i) $k$-uniform:
$\forall A \in \mathcal{H}: |A| = k$, (ii) at most $t$-intersecting: $\forall
A, B \in \mathcal{H}, B \neq A: |A \cap B| \leq t$, (iii) maximally cover-free:
$\forall A \in \mathcal{H}: A \not\subseteq \bigcup\limits_{\substack{B \in
\mathcal{H} \\ B \neq A}} B$. For the same purpose, we define and compute the
mutual information between different linear regression hypotheses that are
generated from overlapping training datasets.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 12:44:26 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 01:21:15 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jul 2023 17:22:54 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Sehrawat",
"Vipin Singh",
""
],
[
"Yeo",
"Foo Yee",
""
],
[
"Vassilyev",
"Dmitriy",
""
]
] |
new_dataset
| 0.983302 |
2209.14272
|
Lukas Christ
|
Lukas Christ, Shahin Amiriparian, Alexander Kathan, Niklas M\"uller,
Andreas K\"onig, Bj\"orn W. Schuller
|
Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and
First Results
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible (Major Revision)
| null | null | null |
cs.LG cs.CL cs.CV cs.MM cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Humour is a substantial element of human affect and cognition. Its automatic
understanding can facilitate a more naturalistic human-device interaction and
the humanisation of artificial intelligence. Current methods of humour
detection are solely based on staged data making them inadequate for
'real-world' applications. We address this deficiency by introducing the novel
Passau-Spontaneous Football Coach Humour (Passau-SFCH) dataset, comprising
about 11 hours of recordings. The Passau-SFCH dataset is annotated for the
presence of humour and its dimensions (sentiment and direction) as proposed in
Martin's Humor Style Questionnaire. We conduct a series of experiments,
employing pretrained Transformers, convolutional neural networks, and
expert-designed features. The performance of each modality (text, audio, video)
for spontaneous humour recognition is analysed and their complementarity is
investigated. Our findings suggest that for the automatic analysis of humour
and its sentiment, facial expressions are most promising, while humour
direction can be best modelled via text-based features. The results reveal
considerable differences among various subjects, highlighting the individuality
of humour usage and style. Further, we observe that a decision-level fusion
yields the best recognition result. Finally, we make our code publicly
available at https://www.github.com/EIHW/passau-sfch. The Passau-SFCH dataset
is available upon request.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 17:36:47 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 13:18:01 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Christ",
"Lukas",
""
],
[
"Amiriparian",
"Shahin",
""
],
[
"Kathan",
"Alexander",
""
],
[
"Müller",
"Niklas",
""
],
[
"König",
"Andreas",
""
],
[
"Schuller",
"Björn W.",
""
]
] |
new_dataset
| 0.998142 |
2211.14710
|
Changyong Shu
|
Changyong Shu, Jiajun Deng, Fisher Yu and Yifan Liu
|
3DPPE: 3D Point Positional Encoding for Multi-Camera 3D Object Detection
Transformers
|
10 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based methods have swept the benchmarks on 2D and 3D detection on
images. Because tokenization before the attention mechanism drops the spatial
information, positional encoding becomes critical for those methods. Recent
works found that encodings based on samples of the 3D viewing rays can
significantly improve the quality of multi-camera 3D object detection. We
hypothesize that 3D point locations can provide more information than rays.
Therefore, we introduce 3D point positional encoding, 3DPPE, to the 3D
detection Transformer decoder. Although 3D measurements are not available at
the inference time of monocular 3D object detection, 3DPPE uses predicted depth
to approximate the real point positions. Our hybrid depth module combines direct
and categorical depth to estimate the refined depth of each pixel. Despite the
approximation, 3DPPE achieves 46.0 mAP and 51.4 NDS on the competitive nuScenes
dataset, significantly outperforming encodings based on ray samples. We make
the codes available at https://github.com/drilistbox/3DPPE.
|
[
{
"version": "v1",
"created": "Sun, 27 Nov 2022 03:36:32 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 04:16:45 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jul 2023 02:31:31 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Shu",
"Changyong",
""
],
[
"Deng",
"JIajun",
""
],
[
"Yu",
"Fisher",
""
],
[
"Liu",
"Yifan",
""
]
] |
new_dataset
| 0.999392 |
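The 3DPPE record above swaps ray-sample encodings for encodings of 3D points recovered from predicted per-pixel depth. A minimal sketch of one plausible reading, assuming pinhole intrinsics and a plain sinusoidal encoding; the paper additionally uses a hybrid direct/categorical depth head and a learned MLP on top of the encoding:

```python
import numpy as np

def unproject(us, vs, depth, K):
    """Lift pixels (us, vs) with predicted depths to 3D camera-frame points."""
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).astype(float)   # (N, 3)
    return depth[:, None] * (pix @ np.linalg.inv(K).T)                  # (N, 3)

def sine_encode(points, num_freqs=32, temperature=10000.0):
    """Per-coordinate sinusoidal encoding of 3D points, giving a
    (N, 3 * 2 * num_freqs) embedding. Real systems typically normalize
    the coordinates to a known scene range first."""
    freqs = temperature ** (np.arange(num_freqs) / num_freqs)           # (F,)
    angles = points[..., None] / freqs                                  # (N, 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)     # (N, 3, 2F)
    return enc.reshape(points.shape[0], -1)

# Toy usage: 4 pixels, plausible intrinsics, predicted depths.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
us = np.array([100, 200, 300, 400])
vs = np.array([120, 140, 160, 180])
depth = np.array([5.0, 10.0, 15.0, 20.0])
print(sine_encode(unproject(us, vs, depth, K)).shape)   # (4, 192)
```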
2212.05922
|
Anurag Arnab
|
Mariana-Iuliana Georgescu, Eduardo Fonseca, Radu Tudor Ionescu, Mario
Lucic, Cordelia Schmid, Anurag Arnab
|
Audiovisual Masked Autoencoders
|
ICCV 2023
| null | null | null |
cs.CV cs.SD
|
http://creativecommons.org/licenses/by/4.0/
|
Can we leverage the audiovisual information already present in video to
improve self-supervised representation learning? To answer this question, we
study various pretraining architectures and objectives within the masked
autoencoding framework, motivated by the success of similar methods in natural
language and image understanding. We show that we can achieve significant
improvements on audiovisual downstream classification tasks, surpassing the
state-of-the-art on VGGSound and AudioSet. Furthermore, we can leverage our
audiovisual pretraining scheme for multiple unimodal downstream tasks using a
single audiovisual pretrained model. We additionally demonstrate the
transferability of our representations, achieving state-of-the-art audiovisual
results on Epic Kitchens without pretraining specifically for this dataset.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 17:34:53 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 12:22:59 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Georgescu",
"Mariana-Iuliana",
""
],
[
"Fonseca",
"Eduardo",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Lucic",
"Mario",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Arnab",
"Anurag",
""
]
] |
new_dataset
| 0.988385 |
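The audiovisual record above builds on masked autoencoding: drop most patch tokens per modality and train the model to reconstruct them. A minimal sketch of the masking step, assuming pre-tokenized video and audio-spectrogram patches; the mask ratios and shapes are illustrative only:

```python
import numpy as np

def random_mask(tokens, mask_ratio, rng):
    """Keep a random subset of patch tokens from one modality.
    tokens: (N, D) patch embeddings. Returns kept tokens, their indices,
    and a boolean mask where True marks tokens to be reconstructed."""
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    keep_idx = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False
    return tokens[keep_idx], keep_idx, mask

rng = np.random.default_rng(0)
video_tokens = rng.normal(size=(196, 768))   # e.g. 14x14 image patches
audio_tokens = rng.normal(size=(64, 768))    # e.g. spectrogram patches
v_vis, _, _ = random_mask(video_tokens, 0.9, rng)
a_vis, _, _ = random_mask(audio_tokens, 0.8, rng)
print(v_vis.shape, a_vis.shape)              # (20, 768) (13, 768)
```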
2302.08231
|
Apoorv Singh
|
Jongwoo Park, Apoorv Singh, Varun Bankiti
|
3M3D: Multi-view, Multi-path, Multi-representation for 3D Object
Detection
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D visual perception tasks based on multi-camera images are essential for
autonomous driving systems. Latest work in this field performs 3D object
detection by leveraging multi-view images as an input and iteratively enhancing
object queries (object proposals) by cross-attending multi-view features.
However, individual backbone features are not updated with multi-view features
and it stays as a mere collection of the output of the single-image backbone
network. Therefore we propose 3M3D: A Multi-view, Multi-path,
Multi-representation for 3D Object Detection where we update both multi-view
features and query features to enhance the representation of the scene in both
fine panoramic view and coarse global view. Firstly, we update multi-view
features by multi-view axis self-attention. It will incorporate panoramic
information in the multi-view features and enhance understanding of the global
scene. Secondly, we update multi-view features by self-attention of the ROI
(Region of Interest) windows which encodes local finer details in the features.
It will help exchange the information not only along the multi-view axis but
also along the other spatial dimension. Lastly, we leverage multiple
representations of queries in different domains to further boost the
performance. Here we use sparse floating queries along with dense BEV (Bird's
Eye View) queries, which are later post-processed to filter duplicate
detections. Moreover, we show performance improvements on nuScenes benchmark
dataset on top of our baselines.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 11:28:30 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 14:59:28 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jul 2023 10:51:37 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Park",
"Jongwoo",
""
],
[
"Singh",
"Apoorv",
""
],
[
"Bankiti",
"Varun",
""
]
] |
new_dataset
| 0.999006 |
2302.12202
|
Yueyang Liu
|
Yueyang Liu, Xu Kuang, Benjamin Van Roy
|
A Definition of Non-Stationary Bandits
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although non-stationary bandit learning has attracted much recent
attention, we have yet to identify a formal definition of
non-stationarity that can consistently distinguish non-stationary bandits from
stationary ones. Prior work has characterized non-stationary bandits as bandits
for which the reward distribution changes over time. We demonstrate that this
definition can ambiguously classify the same bandit as both stationary and
non-stationary; this ambiguity arises in the existing definition's dependence
on the latent sequence of reward distributions. Moreover, the definition has
given rise to two widely used notions of regret: the dynamic regret and the
weak regret. These notions are not indicative of qualitative agent performance
in some bandits. Additionally, this definition of non-stationary bandits has
led to the design of agents that explore excessively. We introduce a formal
definition of non-stationary bandits that resolves these issues. Our new
definition provides a unified approach, applicable seamlessly to both Bayesian
and frequentist formulations of bandits. Furthermore, our definition ensures
consistent classification of two bandits offering agents indistinguishable
experiences, categorizing them as either both stationary or both
non-stationary. This advancement provides a more robust framework for
non-stationary bandit learning.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 17:55:11 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 07:50:22 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Liu",
"Yueyang",
""
],
[
"Kuang",
"Xu",
""
],
[
"Van Roy",
"Benjamin",
""
]
] |
new_dataset
| 0.999079 |
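The bandit record above claims the usual "reward distribution changes over time" definition is ambiguous because it hinges on a latent sequence of distributions. A small simulation makes this concrete for a single Bernoulli arm: drawing p_t i.i.d. from {0.2, 0.8} every step (a changing latent sequence) induces exactly the same observable reward law as a fixed Bernoulli(0.5) arm, since the equal mixture of Bernoulli(0.2) and Bernoulli(0.8) is Bernoulli(0.5) and the p_t are independent across steps:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000

def stationary_arm(T):
    return rng.binomial(1, 0.5, size=T)        # fixed Bernoulli(0.5)

def latent_switching_arm(T):
    p_t = rng.choice([0.2, 0.8], size=T)       # latent distribution changes
    return rng.binomial(1, p_t)                # at every single step

a, b = stationary_arm(T), latent_switching_arm(T)

# Marginal means match and rewards are independent across steps in both
# cases, so no agent can tell the two bandits apart from rewards alone.
print(a.mean(), b.mean())                                  # both ~0.5
print(np.corrcoef(a[:-1], a[1:])[0, 1],
      np.corrcoef(b[:-1], b[1:])[0, 1])                    # both ~0.0
```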
2304.10266
|
Bingchen Zhao
|
Bingchen Zhao, Jiahao Wang, Wufei Ma, Artur Jesslen, Siwei Yang,
Shaozuo Yu, Oliver Zendel, Christian Theobalt, Alan Yuille, Adam Kortylewski
|
OOD-CV-v2: An extended Benchmark for Robustness to Out-of-Distribution
Shifts of Individual Nuisances in Natural Images
|
arXiv admin note: substantial text overlap with arXiv:2111.14341
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enhancing the robustness of vision algorithms in real-world scenarios is
challenging. One reason is that existing robustness benchmarks are limited, as
they either rely on synthetic data or ignore the effects of individual nuisance
factors. We introduce OOD-CV-v2, a benchmark dataset that includes
out-of-distribution examples of 10 object categories in terms of pose, shape,
texture, context and the weather conditions, and enables benchmarking of models
for image classification, object detection, and 3D pose estimation. In addition
to this novel dataset, we contribute extensive experiments using popular
baseline methods, which reveal that: 1) Some nuisance factors have a much
stronger negative effect on the performance compared to others, also depending
on the vision task. 2) Current approaches to enhance robustness have only
marginal effects, and can even reduce robustness. 3) We do not observe
significant differences between convolutional and transformer architectures. We
believe our dataset provides a rich test bed to study robustness and will help
push forward research in this area.
Our dataset can be accessed from https://bzhao.me/OOD-CV/
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 20:39:25 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 18:01:25 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Zhao",
"Bingchen",
""
],
[
"Wang",
"Jiahao",
""
],
[
"Ma",
"Wufei",
""
],
[
"Jesslen",
"Artur",
""
],
[
"Yang",
"Siwei",
""
],
[
"Yu",
"Shaozuo",
""
],
[
"Zendel",
"Oliver",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Yuille",
"Alan",
""
],
[
"Kortylewski",
"Adam",
""
]
] |
new_dataset
| 0.999876 |
2304.10712
|
Chengyin Hu
|
Chengyin Hu, Weiwen Shi, Tingsong Jiang, Wen Yao, Ling Tian, Xiaoqian
Chen
|
Adversarial Infrared Blocks: A Multi-view Black-box Attack to Thermal
Infrared Detectors in Physical World
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared imaging systems have a vast array of potential applications in
pedestrian detection and autonomous driving, and their safety performance is of
great concern. However, few studies have explored the safety of infrared
imaging systems in real-world settings. Previous research has used physical
perturbations such as small bulbs and thermal "QR codes" to attack infrared
imaging detectors, but such methods are highly visible and lack stealthiness.
Other researchers have used hot and cold blocks to deceive infrared imaging
detectors, but this method is limited in its ability to execute attacks from
various angles. To address these shortcomings, we propose a novel physical
attack called adversarial infrared blocks (AdvIB). By optimizing the physical
parameters of the adversarial infrared blocks, this method can execute a
stealthy black-box attack on thermal imaging systems from various angles. We
evaluate the proposed method based on its effectiveness, stealthiness, and
robustness. Our physical tests show that the proposed method achieves a success
rate of over 80% under most distance and angle conditions, validating its
effectiveness. For stealthiness, our method involves attaching the adversarial
infrared block to the inside of clothing, enhancing its stealthiness.
Additionally, we test the proposed method on advanced detectors, and
experimental results demonstrate an average attack success rate of 51.2%,
proving its robustness. Overall, our proposed AdvIB method offers a promising
avenue for conducting stealthy, effective and robust black-box attacks on
thermal imaging systems, with potential implications for real-world safety and
security applications.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 02:53:56 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 03:18:44 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 02:59:51 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Jul 2023 16:37:07 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Hu",
"Chengyin",
""
],
[
"Shi",
"Weiwen",
""
],
[
"Jiang",
"Tingsong",
""
],
[
"Yao",
"Wen",
""
],
[
"Tian",
"Ling",
""
],
[
"Chen",
"Xiaoqian",
""
]
] |
new_dataset
| 0.99986 |
2305.01423
|
Joan Sola
|
Josep Marti-Saumell and Joan Sola and Angel Santamaria-Navarro and
Hugo Duarte
|
Borinot: an agile torque-controlled robot for hybrid flying and contact
loco-manipulation (workshop version)
|
2 pages + references. Workshop on agile robotics, ICRA 2023. v2: add
ref to the full text in the web abstract. This is a very short version of the
full work available here arXiv:2307.14686
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces Borinot, an open-source flying robotic platform
designed to perform hybrid agile locomotion and manipulation. This platform
features a compact and powerful hexarotor that can be outfitted with
torque-actuated extremities of diverse architecture, allowing for whole-body
dynamic control. As a result, Borinot can perform agile tasks such as
aggressive or acrobatic maneuvers with the participation of the whole-body
dynamics. The extremities attached to Borinot can be utilized in various ways;
during contact, they can be used as legs to create contact-based locomotion, or
as arms to manipulate objects. In free flight, they can be used as tails to
contribute to dynamics, mimicking the movements of many animals. This allows
for any hybridization of these dynamic modes, like the jump-flight of chickens
and locusts, making Borinot an ideal open-source platform for research on
hybrid aerial-contact agile motion. To demonstrate the key capabilities of
Borinot, we have fitted a planar 2DoF arm and implemented whole-body
torque-level model-predictive-control. The result is a capable and adaptable
platform that, we believe, opens up new avenues of research in the field of
agile robotics.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 13:53:11 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 08:52:10 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Marti-Saumell",
"Josep",
""
],
[
"Sola",
"Joan",
""
],
[
"Santamaria-Navarro",
"Angel",
""
],
[
"Duarte",
"Hugo",
""
]
] |
new_dataset
| 0.999349 |
2305.16049
|
Lantian Li Mr.
|
Lantian Li and Xiaolou Li and Haoyu Jiang and Chen Chen and Ruihai Hou
and Dong Wang
|
CN-Celeb-AV: A Multi-Genre Audio-Visual Dataset for Person Recognition
|
INTERSPEECH 2023
| null | null | null |
cs.CV cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio-visual person recognition (AVPR) has received extensive attention.
However, most datasets used for AVPR research so far are collected in
constrained environments, and thus cannot reflect the true performance of AVPR
systems in real-world scenarios. To meet the request for research on AVPR in
unconstrained conditions, this paper presents a multi-genre AVPR dataset
collected `in the wild', named CN-Celeb-AV. This dataset contains more than
419k video segments from 1,136 persons from public media. In particular, we put
more emphasis on two real-world complexities: (1) data in multiple genres; (2)
segments with partial information. A comprehensive study was conducted to
compare CN-Celeb-AV with two popular public AVPR benchmark datasets, and the
results demonstrated that CN-Celeb-AV is more in line with real-world scenarios
and can be regarded as a new benchmark dataset for AVPR research. The dataset
also involves a development set that can be used to boost the performance of
AVPR systems in real-life situations. The dataset is free for researchers and
can be downloaded from http://cnceleb.org/.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 13:31:37 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 15:13:23 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Li",
"Lantian",
""
],
[
"Li",
"Xiaolou",
""
],
[
"Jiang",
"Haoyu",
""
],
[
"Chen",
"Chen",
""
],
[
"Hou",
"Ruihai",
""
],
[
"Wang",
"Dong",
""
]
] |
new_dataset
| 0.999843 |
2306.01874
|
Noriaki Hirose
|
Noriaki Hirose, Dhruv Shah, Ajay Sridhar, Sergey Levine
|
SACSoN: Scalable Autonomous Control for Social Navigation
|
10 pages, 14 figures, 4 tables
| null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning provides a powerful tool for building socially compliant
robotic systems that go beyond simple predictive models of human behavior. By
observing and understanding human interactions from past experiences, learning
can enable effective social navigation behaviors directly from data. In this
paper, our goal is to develop methods for training policies for socially
unobtrusive navigation, such that robots can navigate among humans in ways that
don't disturb human behavior. We introduce a definition for such behavior based
on the counterfactual perturbation of the human: if the robot had not intruded
into the space, would the human have acted in the same way? By minimizing this
counterfactual perturbation, we can induce robots to behave in ways that do not
alter the natural behavior of humans in the shared space. Instantiating this
principle requires training policies to minimize their effect on human
behavior, and this in turn requires data that allows us to model the behavior
of humans in the presence of robots. Therefore, our approach is based on two
key contributions. First, we collect a large dataset where an indoor mobile
robot interacts with human bystanders. Second, we utilize this dataset to train
policies that minimize counterfactual perturbation. We provide supplementary
videos and make publicly available the largest-of-its-kind visual navigation
dataset on our project page.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 19:07:52 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 00:32:09 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Hirose",
"Noriaki",
""
],
[
"Shah",
"Dhruv",
""
],
[
"Sridhar",
"Ajay",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.987067 |
2306.03484
|
Federico Ceola
|
Federico Ceola, Elisa Maiettini, Lorenzo Rosasco and Lorenzo Natale
|
A Grasp Pose is All You Need: Learning Multi-fingered Grasping with Deep
Reinforcement Learning from Vision and Touch
|
IROS 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-fingered robotic hands have potential to enable robots to perform
sophisticated manipulation tasks. However, teaching a robot to grasp objects
with an anthropomorphic hand is an arduous problem due to the high
dimensionality of state and action spaces. Deep Reinforcement Learning (DRL)
offers techniques to design control policies for this kind of problems without
explicit environment or hand modeling. However, state-of-the-art model-free
algorithms have proven inefficient for learning such policies. The main problem
is that the exploration of the environment is unfeasible for such
high-dimensional problems, thus hampering the initial phases of policy
optimization. One possibility to address this is to rely on off-line task
demonstrations, but, oftentimes, this is too demanding in terms of time and
computational resources. To address these problems, we propose the A Grasp Pose
is All You Need (G-PAYN) method for the anthropomorphic hand of the iCub
humanoid. We develop an approach to automatically collect task demonstrations
to initialize the training of the policy. The proposed grasping pipeline starts
from a grasp pose generated by an external algorithm, used to initiate the
movement. Then a control policy (previously trained with the proposed G-PAYN)
is used to reach and grab the object. We deploy the iCub in the MuJoCo
simulator and use it to test our approach with objects from the YCB-Video
dataset. Results show that G-PAYN outperforms current DRL techniques in the
considered setting in terms of success rate and execution time with respect to
the baselines. The code to reproduce the experiments is released together with
the paper with an open source license.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 08:09:17 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 06:50:51 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Ceola",
"Federico",
""
],
[
"Maiettini",
"Elisa",
""
],
[
"Rosasco",
"Lorenzo",
""
],
[
"Natale",
"Lorenzo",
""
]
] |
new_dataset
| 0.993962 |
2307.07961
|
Jingyuan Yang
|
Jingyuan Yang, Qirui Huang, Tingting Ding, Dani Lischinski, Daniel
Cohen-Or, Hui Huang
|
EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes
|
Accepted to ICCV2023, similar to the final version
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Emotion Analysis (VEA) aims at predicting people's emotional responses
to visual stimuli. This is a promising, yet challenging, task in affective
computing, which has drawn increasing attention in recent years. Most of the
existing work in this area focuses on feature design, while little attention
has been paid to dataset construction. In this work, we introduce EmoSet, the
first large-scale visual emotion dataset annotated with rich attributes, which
is superior to existing datasets in four aspects: scale, annotation richness,
diversity, and data balance. EmoSet comprises 3.3 million images in total, with
118,102 of these images carefully labeled by human annotators, making it five
times larger than the largest existing dataset. EmoSet includes images from
social networks, as well as artistic images, and it is well balanced between
different emotion categories. Motivated by psychological studies, in addition
to emotion category, each image is also annotated with a set of describable
emotion attributes: brightness, colorfulness, scene type, object class, facial
expression, and human action, which can help understand visual emotions in a
precise and interpretable way. The relevance of these emotion attributes is
validated by analyzing the correlations between them and visual emotion, as
well as by designing an attribute module to help visual emotion recognition. We
believe EmoSet will bring some key insights and encourage further research in
visual emotion analysis and understanding. Project page:
https://vcc.tech/EmoSet.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 06:42:46 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 15:38:19 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Yang",
"Jingyuan",
""
],
[
"Huang",
"Qirui",
""
],
[
"Ding",
"Tingting",
""
],
[
"Lischinski",
"Dani",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Huang",
"Hui",
""
]
] |
new_dataset
| 0.999881 |
2307.08381
|
Erick Lavoie
|
Erick Lavoie
|
2P-BFT-Log: 2-Phase Single-Author Append-Only Log for Adversarial
Environments
|
Fixed 'two-phase' typo
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Replicated append-only logs sequentially order messages from the same author
such that their ordering can be eventually recovered even with out-of-order and
unreliable dissemination of individual messages. They are widely used for
implementing replicated services in both clouds and peer-to-peer environments
because they provide simple and efficient incremental reconciliation. However,
existing designs of replicated append-only logs assume replicas faithfully
maintain the sequential properties of logs and do not provide eventual
consistency when malicious participants fork their logs by disseminating
different messages to different replicas for the same index, which may result
in partitioning of replicas according to which branch was first replicated.
In this paper, we present 2P-BFT-Log, a two-phase replicated append-only log
that provides eventual consistency in the presence of forks from malicious
participants such that all correct replicas will eventually agree either on the
most recent message of a valid log (first phase) or on the earliest point at
which a fork occurred as well as on an irrefutable proof that it happened
(second phase). We provide definitions, algorithms, and proofs of the key
properties of the design, and explain one way to implement the design onto Git,
an eventually consistent replicated database originally designed for
distributed version control.
Our design enables correct replicas to faithfully implement the
happens-before relationship first introduced by Lamport that underpins most
existing distributed algorithms, with eventual detection of forks from
malicious participants to exclude the latter from further progress. This opens
the door to adaptations of existing distributed algorithms to a cheaper detect
and repair paradigm, rather than the more common and expensive systematic
prevention of incorrect behaviour.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 10:39:57 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 16:47:03 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Lavoie",
"Erick",
""
]
] |
new_dataset
| 0.97263 |
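The 2P-BFT-Log record above rests on one invariant: a single-author append-only log forks exactly when two different messages claim the same index. A minimal sketch of the fork-detection core, assuming unsigned entries for brevity; in the actual design, author signatures make the conflicting pair an irrefutable proof, and a second phase agrees on the earliest forked index:

```python
import hashlib

def entry_hash(index, prev_hash, payload):
    return hashlib.sha256(f"{index}|{prev_hash}|{payload}".encode()).hexdigest()

class SingleAuthorLog:
    def __init__(self):
        self.by_index = {}        # index -> (hash, prev_hash, payload)
        self.fork_proof = None    # the conflicting pair, once detected

    def merge(self, index, prev_hash, payload):
        """Merge one (possibly out-of-order) replicated entry; return True
        once a fork has been detected."""
        digest = entry_hash(index, prev_hash, payload)
        seen = self.by_index.get(index)
        if seen is None:
            self.by_index[index] = (digest, prev_hash, payload)
        elif seen[0] != digest:
            # Two distinct entries for one index: the pair itself is the
            # evidence that the author equivocated.
            self.fork_proof = (seen, (digest, prev_hash, payload))
        return self.fork_proof is not None

log = SingleAuthorLog()
g = entry_hash(0, "", "genesis")
log.merge(0, "", "genesis")
log.merge(1, g, "pay alice 5")
print(log.merge(1, g, "pay bob 5"))   # True: fork detected at index 1
```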
2307.08543
|
Mike Kosek
|
Mike Kosek, Benedikt Spies, J\"org Ott
|
Secure Middlebox-Assisted QUIC
| null |
IFIP Networking Conference 2023
|
10.23919/IFIPNetworking57963.2023.10186363
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While the evolution of the Internet was driven by the end-to-end model, it
has been challenged by many flavors of middleboxes over the decades. Yet, the
basic idea is still fundamental: reliability and security are usually realized
end-to-end, where the strong trend towards ubiquitous traffic protection
supports this notion. However, reasons to break up, or redefine the ends of,
end-to-end connections have always been put forward in order to improve
transport layer performance. Yet, the consolidation of the transport layer with
the end-to-end security model as introduced by QUIC protects most protocol
information from the network, thereby eliminating the ability to modify
protocol exchanges. In this paper, we enhance QUIC to selectively expose
information to intermediaries, thereby enabling endpoints to consciously insert
middleboxes into an end-to-end encrypted QUIC connection while preserving its
privacy, integrity, and authenticity. We evaluate our design in a distributed
Performance Enhancing Proxy environment over satellite networks, finding that
the performance improvements are dependent on the path and application layer
properties: the higher the round-trip time and loss, and the more data is
transferred over a connection, the higher the benefits of Secure
Middlebox-Assisted QUIC.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 15:03:42 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 07:26:38 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Kosek",
"Mike",
""
],
[
"Spies",
"Benedikt",
""
],
[
"Ott",
"Jörg",
""
]
] |
new_dataset
| 0.962724 |
2307.11702
|
Jerome Revaud
|
Jerome Revaud, Yohann Cabon, Romain Br\'egier, JongMin Lee and
Philippe Weinzaepfel
|
SACReg: Scene-Agnostic Coordinate Regression for Visual Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene coordinates regression (SCR), i.e., predicting 3D coordinates for every
pixel of a given image, has recently shown promising potential. However,
existing methods remain mostly scene-specific or limited to small scenes and
thus hardly scale to realistic datasets. In this paper, we propose a new
paradigm where a single generic SCR model is trained once to be then deployed
to new test scenes, regardless of their scale and without further finetuning.
For a given query image, it collects inputs from off-the-shelf image retrieval
techniques and Structure-from-Motion databases: a list of relevant database
images with sparse pointwise 2D-3D annotations. The model is based on the
transformer architecture and can take a variable number of images and sparse
2D-3D annotations as input. It is trained on a few diverse datasets and
significantly outperforms other scene regression approaches on several
benchmarks, including scene-specific models, for visual localization. In
particular, we set a new state of the art on the Cambridge localization
benchmark, even outperforming feature-matching-based approaches.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 16:56:36 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 10:36:58 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Revaud",
"Jerome",
""
],
[
"Cabon",
"Yohann",
""
],
[
"Brégier",
"Romain",
""
],
[
"Lee",
"JongMin",
""
],
[
"Weinzaepfel",
"Philippe",
""
]
] |
new_dataset
| 0.999269 |
2307.13692
|
Tomohiro Sawada
|
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli,
Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Submitted to NeurIPS Datasets and Benchmarks Track
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores.
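  As an editorial illustration of how a rubric-based evaluation can be scored, the
sketch below treats a rubric as a weighted checklist of reasoning steps and returns
the weighted fraction of satisfied criteria. All names, weights, and criteria are
hypothetical; this is not the authors' implementation.

```python
# Illustrative rubric scoring (hypothetical; not the ARB codebase). A rubric
# decomposes a solution into checkable steps; a grader (human or GPT-4) marks
# each step, and the score is the weighted fraction of satisfied criteria.
from dataclasses import dataclass

@dataclass
class RubricItem:
    description: str   # e.g. "applies conservation of momentum correctly"
    weight: float      # relative importance of this step
    satisfied: bool    # grader's verdict for this step

def rubric_score(items: list[RubricItem]) -> float:
    """Return a score in [0, 1]: weighted fraction of satisfied criteria."""
    total = sum(item.weight for item in items)
    earned = sum(item.weight for item in items if item.satisfied)
    return earned / total if total > 0 else 0.0

items = [
    RubricItem("sets up the correct integral", 2.0, True),
    RubricItem("applies the substitution correctly", 1.0, True),
    RubricItem("states the final numeric answer", 1.0, False),
]
print(f"rubric score: {rubric_score(items):.2f}")  # 0.75
```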
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 17:55:19 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 03:31:08 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Sawada",
"Tomohiro",
""
],
[
"Paleka",
"Daniel",
""
],
[
"Havrilla",
"Alexander",
""
],
[
"Tadepalli",
"Pranav",
""
],
[
"Vidas",
"Paula",
""
],
[
"Kranias",
"Alexander",
""
],
[
"Nay",
"John J.",
""
],
[
"Gupta",
"Kshitij",
""
],
[
"Komatsuzaki",
"Aran",
""
]
] |
new_dataset
| 0.999091 |
2307.14247
|
Alexandros Filotheou
|
Alexandros Filotheou
|
CBGL: Fast Monte Carlo Passive Global Localisation of 2D LIDAR Sensor
|
8 pages, 10 figures, 3 algorithms
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Navigation of a mobile robot is conditioned on the knowledge of its pose. In
observer-based localisation configurations its initial pose may not be knowable
in advance, leading to the need of its estimation. Solutions to the problem of
global localisation are either robust against noise and environment
arbitrariness but require motion and time, which may (need to) be economised
on, or require minimal estimation time but assume environmental structure, may
be sensitive to noise, and demand preprocessing and tuning. This article
proposes a method that retains the strengths and avoids the weaknesses of the
two approaches. The method leverages properties of the Cumulative Absolute
Error per Ray metric with respect to the errors of pose estimates of a 2D LIDAR
sensor, and utilises scan--to--map-scan matching for fine(r) pose
approximations. A large number of tests, in real and simulated conditions,
involving disparate environments and sensor properties, illustrate that the
proposed method outperforms state-of-the-art methods of both classes of
solutions in terms of pose discovery rate and execution time. The source code
is available for download.
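  A minimal, self-contained sketch of the Cumulative Absolute Error per Ray idea,
under our own assumptions: a toy circular room stands in for the map ray-caster,
and a low CAER marks pose hypotheses whose simulated scan matches the measured
one. The paper then refines such hypotheses with scan-to-map-scan matching; none
of the code below is from the paper.

```python
import math, random

R = 5.0  # toy map: a circular room of radius 5 m centred at the origin

def raycast(pose, angle):
    """Expected range from pose (x, y, theta) along beam `angle`."""
    x, y, th = pose
    dx, dy = math.cos(th + angle), math.sin(th + angle)
    b = x * dx + y * dy
    # Ray-circle intersection; valid while the pose lies inside the room.
    return -b + math.sqrt(b * b - (x * x + y * y) + R * R)

def caer(ranges, angles, pose):
    """Cumulative Absolute Error per Ray for one pose hypothesis."""
    return sum(abs(r - raycast(pose, a)) for r, a in zip(ranges, angles))

angles = [i * 2 * math.pi / 360 for i in range(360)]
truth = (1.0, -0.5, 0.3)
ranges = [raycast(truth, a) for a in angles]  # simulated measurement

hypotheses = [(random.uniform(-3, 3), random.uniform(-3, 3),
               random.uniform(-math.pi, math.pi)) for _ in range(5000)]
best = min(hypotheses, key=lambda p: caer(ranges, angles, p))
print("lowest-CAER hypothesis:", best)
# The circular room is rotationally symmetric, so `best` matches `truth`
# only up to a rotation about the origin; a real map breaks this tie.
```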
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 15:19:17 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 09:15:20 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Filotheou",
"Alexandros",
""
]
] |
new_dataset
| 0.997126 |
2307.15167
|
Zheng Zhang
|
Zheng Zhang, Zheng Ning, Chenliang Xu, Yapeng Tian, Toby Jia-Jun Li
|
PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data
|
18 pages, published in UIST'23
| null |
10.1145/3586183.3606776
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Audio-visual learning seeks to enhance the computer's multi-modal perception
leveraging the correlation between the auditory and visual modalities. Despite
their many useful downstream tasks, such as video retrieval, AR/VR, and
accessibility, the performance and adoption of existing audio-visual models
have been impeded by the availability of high-quality datasets. Annotating
audio-visual datasets is laborious, expensive, and time-consuming. To address
this challenge, we designed and developed an efficient audio-visual annotation
tool called Peanut. Peanut's human-AI collaborative pipeline separates the
multi-modal task into two single-modal tasks, and utilizes state-of-the-art
object detection and sound-tagging models to reduce the annotators' effort to
process each frame and the number of manually-annotated frames needed. A
within-subject user study with 20 participants found that Peanut can
significantly accelerate the audio-visual data annotation process while
maintaining high annotation accuracy.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 19:56:02 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Zhang",
"Zheng",
""
],
[
"Ning",
"Zheng",
""
],
[
"Xu",
"Chenliang",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Li",
"Toby Jia-Jun",
""
]
] |
new_dataset
| 0.955249 |
2307.15266
|
Yuan Hu
|
Yuan Hu, Jianlong Yuan, Congcong Wen, Xiaonan Lu, Xiang Li
|
RSGPT: A Remote Sensing Vision Language Model and Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of large-scale large language models, with GPT-4 as a prominent
example, has significantly propelled the rapid advancement of artificial
general intelligence and sparked the revolution of Artificial Intelligence 2.0.
In the realm of remote sensing (RS), there is a growing interest in developing
large vision language models (VLMs) specifically tailored for data analysis in
this domain. However, current research predominantly revolves around visual
recognition tasks, lacking comprehensive, large-scale image-text datasets that
are aligned and suitable for training large VLMs, which poses significant
challenges to effectively training such models for RS applications. In computer
vision, recent research has demonstrated that fine-tuning large vision language
models on small-scale, high-quality datasets can yield impressive performance
in visual and language understanding. These results are comparable to
state-of-the-art VLMs trained from scratch on massive amounts of data, such as
GPT-4. Inspired by this captivating idea, in this work, we build a high-quality
Remote Sensing Image Captioning dataset (RSICap) that facilitates the
development of large VLMs in the RS field. Unlike previous RS datasets that
either employ model-generated captions or short descriptions, RSICap comprises
2,585 human-annotated captions with rich and high-quality information. This
dataset offers detailed descriptions for each image, encompassing scene
descriptions (e.g., residential area, airport, or farmland) as well as object
information (e.g., color, shape, quantity, absolute position, etc.). To
facilitate the evaluation of VLMs in the field of RS, we also provide a
benchmark evaluation dataset called RSIEval. This dataset consists of
human-annotated captions and visual question-answer pairs, allowing for a
comprehensive assessment of VLMs in the context of RS.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 02:23:35 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Hu",
"Yuan",
""
],
[
"Yuan",
"Jianlong",
""
],
[
"Wen",
"Congcong",
""
],
[
"Lu",
"Xiaonan",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.999606 |
2307.15311
|
Dongdong Wang
|
Ou Zheng, Mohamed Abdel-Aty, Dongdong Wang, Chenzhu Wang, Shengxuan
Ding
|
TrafficSafetyGPT: Tuning a Pre-trained Large Language Model to a
Domain-Specific Expert in Transportation Safety
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) have shown remarkable effectiveness in various
general-domain natural language processing (NLP) tasks. However, their
performance in transportation safety domain tasks has been suboptimal,
primarily attributed to the requirement for specialized transportation safety
expertise in generating accurate responses [1]. To address this challenge, we
introduce TrafficSafetyGPT, a novel LLAMA-based model, which has undergone
supervised fine-tuning on the TrafficSafety-2K dataset, which combines human
labels from government-produced guidebooks with ChatGPT-generated
instruction-output pairs. Our proposed TrafficSafetyGPT model and
TrafficSafety-2K training dataset
are accessible at https://github.com/ozheng1993/TrafficSafetyGPT.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 05:17:11 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Zheng",
"Ou",
""
],
[
"Abdel-Aty",
"Mohamed",
""
],
[
"Wang",
"Dongdong",
""
],
[
"Wang",
"Chenzhu",
""
],
[
"Ding",
"Shengxuan",
""
]
] |
new_dataset
| 0.989414 |
2307.15326
|
Shaunak Mishra
|
Yueh-Ning Ku, Mikhail Kuznetsov, Shaunak Mishra and Paloma de Juan
|
Staging E-Commerce Products for Online Advertising using Retrieval
Assisted Image Generation
|
Accepted for publication in AdKDD 2023
| null | null | null |
cs.CV cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Online ads showing e-commerce products typically rely on the product images
in a catalog sent to the advertising platform by an e-commerce platform. In the
broader ads industry such ads are called dynamic product ads (DPA). It is
common for DPA catalogs to be at the scale of millions (corresponding to the
scale of products which can be bought from the e-commerce platform). However,
not all product images in the catalog may be appealing when directly
re-purposed as an ad image, and this may lead to lower click-through rates
(CTRs). In particular, products just placed against a solid background may not
be as enticing and realistic as a product staged in a natural environment. To
address such shortcomings of DPA images at scale, we propose a generative
adversarial network (GAN) based approach to generate staged backgrounds for
un-staged product images. Generating the entire staged background is a
challenging task susceptible to hallucinations. To get around this, we
introduce a simpler approach called copy-paste staging using retrieval assisted
GANs. In copy paste staging, we first retrieve (from the catalog) staged
products similar to the un-staged input product, and then copy-paste the
background of the retrieved product in the input image. A GAN based in-painting
model is used to fill the holes left after this copy-paste operation. We show
the efficacy of our copy-paste staging method via offline metrics, and human
evaluation. In addition, we show how our staging approach can enable animations
of moving products leading to a video ad from a product image.
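  The compositing step lends itself to a short sketch. The code below is our
schematic reading of copy-paste staging, not the authors' code: retrieval and the
GAN in-painter are stubbed out, the binary foreground masks are assumed given, and
the emphasis is on how the hole mask handed to the in-painting model arises.

```python
import numpy as np

def copy_paste_stage(unstaged_img, unstaged_fg_mask,
                     retrieved_img, retrieved_fg_mask):
    """Images are HxWx3 uint8 arrays; masks are HxW boolean arrays."""
    out = retrieved_img.copy()
    # Paste the un-staged product onto the retrieved staged background.
    out[unstaged_fg_mask] = unstaged_img[unstaged_fg_mask]
    # Holes: pixels where the retrieved product used to be but the new
    # product does not cover; these are filled by the GAN in-painter.
    holes = retrieved_fg_mask & ~unstaged_fg_mask
    out[holes] = 0
    return out, holes

# composited, holes = copy_paste_stage(...)
# inpainted = gan_inpaint(composited, holes)  # hypothetical in-painting model
```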
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 06:04:46 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Ku",
"Yueh-Ning",
""
],
[
"Kuznetsov",
"Mikhail",
""
],
[
"Mishra",
"Shaunak",
""
],
[
"de Juan",
"Paloma",
""
]
] |
new_dataset
| 0.992867 |
2307.15335
|
Khiem Tran
|
Khiem Vinh Tran and Kiet Van Nguyen and Ngan Luu Thuy Nguyen
|
BARTPhoBEiT: Pre-trained Sequence-to-Sequence and Image Transformers
Models for Vietnamese Visual Question Answering
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Visual Question Answering (VQA) is an intricate and demanding task that
integrates natural language processing (NLP) and computer vision (CV),
capturing the interest of researchers. The English language, renowned for its
wealth of resources, has witnessed notable advancements in both datasets and
models designed for VQA. However, there is a lack of models that target
specific countries such as Vietnam. To address this limitation, we introduce a
transformer-based Vietnamese model named BARTPhoBEiT. This model includes
pre-trained Sequence-to-Sequence and bidirectional encoder representation from
Image Transformers in Vietnamese and evaluates Vietnamese VQA datasets.
Experimental results demonstrate that our proposed model outperforms the strong
baseline and improves the state-of-the-art in six metrics: Accuracy, Precision,
Recall, F1-score, WUPS 0.0, and WUPS 0.9.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 06:23:32 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Tran",
"Khiem Vinh",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu Thuy",
""
]
] |
new_dataset
| 0.998252 |
2307.15338
|
Vishal Jadhav
|
Vishal D. Jadhav, Narahari N. Moudhgalya, Tapabrata Sen, T. V.
Prabhakar
|
PUF Probe: A PUF-based Hardware Authentication Equipment for IEDs
| null | null | null | null |
cs.CR cs.SY eess.SP eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent Electronic Devices (IEDs) are vital components in modern
electrical substations, collectively responsible for monitoring electrical
parameters and performing protective functions. As a result, ensuring the
integrity of IEDs is an essential criterion. While standards like IEC 61850 and
IEC 60870-5-104 establish cyber-security protocols for secure information
exchange in IED-based power systems, the physical integrity of IEDs is often
overlooked, leading to a rise in counterfeit and tainted electronic products.
This paper proposes a physical unclonable function (PUF)-based device (IEDPUF
probe) capable of extracting unique hardware signatures from commercial IEDs.
These signatures can serve as identifiers, facilitating the authentication and
protection of IEDs against counterfeiting. The paper presents the complete
hardware architecture of the IEDPUF probe, along with algorithms for signature
extraction and authentication. The process involves the central computer system
(CCS) initiating IED authentication requests by sending random challenges to
the IEDPUF probe. Based on the challenges, the IEDPUF probe generates
responses, which are then verified by the CCS to authenticate the IED.
Additionally, a two-way authentication technique is employed to ensure that
only verified requests are granted access for signature extraction.
Experimental results confirm the efficacy of the proposed IEDPUF probe. The
results demonstrate its ability to provide real-time responses possessing
randomness while uniquely identifying the IED under investigation. The proposed
IEDPUF probe offers a simple, cost-effective, accurate solution with minimal
storage requirements, enhancing the authenticity and integrity of IEDs within
electrical substations.
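  As an illustration of the challenge-response flow with two-way authentication,
the sketch below uses an HMAC over a shared secret as a software stand-in for the
device-unique PUF response; the actual probe derives responses from hardware, and
nothing here is the probe's firmware.

```python
import hmac, hashlib, os

SECRET = os.urandom(32)  # stand-in for the device-unique PUF behaviour

def respond(challenge: bytes) -> bytes:
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

# CCS side: authenticate the probe/IED with a fresh random challenge.
challenge = os.urandom(16)
response = respond(challenge)                              # probe side
assert hmac.compare_digest(response, respond(challenge))   # CCS verifies

# Two-way: the probe likewise challenges the CCS before granting access
# to signature extraction, so only verified requests are served.
probe_challenge = os.urandom(16)
ccs_proof = respond(probe_challenge)                       # CCS side
assert hmac.compare_digest(ccs_proof, respond(probe_challenge))
```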
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 06:32:20 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Jadhav",
"Vishal D.",
""
],
[
"Moudhgalya",
"Narahari N.",
""
],
[
"Sen",
"Tapabrata",
""
],
[
"Prabhakar",
"T. V.",
""
]
] |
new_dataset
| 0.998876 |
2307.15339
|
Sumati Thareja
|
Le Gong, Shiying Li, Naqib Sad Pathan, Mohammad Shifat-E-Rabbi,
Gustavo K. Rohde, Abu Hasnat Mohammad Rubaiyat and Sumati Thareja
|
The Radon Signed Cumulative Distribution Transform and its applications
in classification of Signed Images
| null | null | null | null |
cs.IT cs.CV cs.LG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Here we describe a new image representation technique based on the
mathematics of transport and optimal transport. The method relies on the
combination of the well-known Radon transform for images and a recent signal
representation method called the Signed Cumulative Distribution Transform. The
newly proposed method generalizes previous transport-related image
representation methods to arbitrary functions (images), and thus can be used in
more applications. We describe the new transform, and some of its mathematical
properties and demonstrate its ability to partition image classes with real and
simulated data. In comparison to existing transport transform methods, as well
as deep learning-based classification methods, the new transform more
accurately represents the information content of signed images, and thus can be
used to obtain higher classification accuracies. The implementation of the
proposed method in Python language is integrated as a part of the software
package PyTransKit, available on Github.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 06:32:33 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Gong",
"Le",
""
],
[
"Li",
"Shiying",
""
],
[
"Pathan",
"Naqib Sad",
""
],
[
"Shifat-E-Rabbi",
"Mohammad",
""
],
[
"Rohde",
"Gustavo K.",
""
],
[
"Rubaiyat",
"Abu Hasnat Mohammad",
""
],
[
"Thareja",
"Sumati",
""
]
] |
new_dataset
| 0.99845 |
2307.15376
|
Rohit Kumar
|
Sanjana Kolar and Rohit Kumar
|
Multilingual Tourist Assistance using ChatGPT: Comparing Capabilities in
Hindi, Telugu, and Kannada
|
6 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This research investigates the effectiveness of ChatGPT, an AI language model
by OpenAI, in translating English into Hindi, Telugu, and Kannada languages,
aimed at assisting tourists in India's linguistically diverse environment. To
measure the translation quality, a test set of 50 questions from diverse fields
such as general knowledge, food, and travel was used. These were assessed by
five volunteers for accuracy and fluency, and the scores were subsequently
converted into a BLEU score. The BLEU score evaluates the closeness of a
machine-generated translation to a human translation, with a higher score
indicating better translation quality. The Hindi translations outperformed
others, showcasing superior accuracy and fluency, whereas Telugu translations
lagged behind. Human evaluators rated both the accuracy and fluency of
translations, offering a comprehensive perspective on the language model's
performance.
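  Converting volunteer ratings into a BLEU score is specific to the paper, but
computing BLEU itself is standard. A minimal sketch with NLTK follows; the
sentences are placeholders, not items from the 50-question test set.

```python
# Sentence-level BLEU with smoothing (short sentences have zero higher-order
# n-gram counts without it). Tokenisation here is naive whitespace splitting.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the route to the station is on the left".split()]  # human
candidate = "the route to the station is left".split()           # model

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # closer to 1.0 means closer to the reference
```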
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 07:52:26 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Kolar",
"Sanjana",
""
],
[
"Kumar",
"Rohit",
""
]
] |
new_dataset
| 0.986779 |
2307.15433
|
Dimitri Korsch
|
Dimitri Korsch, Paul Bodesheim, Gunnar Brehm, Joachim Denzler
|
Automated Visual Monitoring of Nocturnal Insects with Light-based Camera
Traps
|
Presented at the FGVC workshop at the CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic camera-assisted monitoring of insects for abundance estimations is
crucial to understand and counteract ongoing insect decline. In this paper, we
present two datasets of nocturnal insects, especially moths as a subset of
Lepidoptera, photographed in Central Europe. One of the datasets, the EU-Moths
dataset, was captured manually by citizen scientists and contains species
annotations for 200 different species, together with bounding box annotations.
We used this dataset to develop and evaluate a two-stage pipeline for insect
detection and moth species classification in previous work. We further
introduce a prototype for an automated visual monitoring system. This prototype
produced the second dataset consisting of more than 27,000 images captured on
95 nights. For evaluation and bootstrapping purposes, we annotated a subset of
the images with bounding boxes enframing nocturnal insects. Finally, we present
first detection and classification baselines for these datasets and encourage
other scientists to use this publicly available data.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 09:31:36 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Korsch",
"Dimitri",
""
],
[
"Bodesheim",
"Paul",
""
],
[
"Brehm",
"Gunnar",
""
],
[
"Denzler",
"Joachim",
""
]
] |
new_dataset
| 0.999057 |
2307.15436
|
Jaume Abella
|
Marcel Sarraseca, Sergi Alcaide, Francisco Fuentes, Juan Carlos
Rodriguez, Feng Chang, Ilham Lasfar, Ramon Canal, Francisco J. Cazorla, Jaume
Abella
|
SafeLS: Toward Building a Lockstep NOEL-V Core
|
Abstract presented at the RISC-V Summit, June 2023, Barcelona (Spain)
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Safety-critical systems such as those in automotive, avionics and space,
require appropriate safety measures to avoid silent data corruption upon random
hardware errors such as those caused by radiation and other types of
electromagnetic interference. Those safety measures must be able to prevent
faults from causing the so-called common cause failures (CCFs), which occur
when a fault produces identical errors in redundant elements so that comparison
fails to detect the errors and a failure arises. The usual solution to avoid
CCFs in CPU cores is using lockstep cores, so that two cores execute the same
flow of instructions, but with some time staggering so that their state is
never identical and faults can only lead to different errors, which are then
detectable by means of comparison. This paper extends Gaisler's RISC-V NOEL-V
core with lockstep; and presents future prospects for its use and distribution.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 09:35:44 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Sarraseca",
"Marcel",
""
],
[
"Alcaide",
"Sergi",
""
],
[
"Fuentes",
"Francisco",
""
],
[
"Rodriguez",
"Juan Carlos",
""
],
[
"Chang",
"Feng",
""
],
[
"Lasfar",
"Ilham",
""
],
[
"Canal",
"Ramon",
""
],
[
"Cazorla",
"Francisco J.",
""
],
[
"Abella",
"Jaume",
""
]
] |
new_dataset
| 0.978801 |
2307.15478
|
Andrei Cramariuc
|
Matthias Brucker, Andrei Cramariuc, Cornelius von Einem, Roland
Siegwart, and Cesar Cadena
|
Local and Global Information in Obstacle Detection on Railway Tracks
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reliable obstacle detection on railways could help prevent collisions that
result in injuries and potentially damage or derail the train. Unfortunately,
generic object detectors do not have enough classes to account for all possible
scenarios, and datasets featuring objects on railways are challenging to
obtain. We propose utilizing a shallow network to learn railway segmentation
from normal railway images. The limited receptive field of the network prevents
overconfident predictions and allows the network to focus on the locally very
distinct and repetitive patterns of the railway environment. Additionally, we
explore the controlled inclusion of global information by learning to
hallucinate obstacle-free images. We evaluate our method on a custom dataset
featuring railway images with artificially augmented obstacles. Our proposed
method outperforms other learning-based baseline methods.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 11:07:34 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Brucker",
"Matthias",
""
],
[
"Cramariuc",
"Andrei",
""
],
[
"von Einem",
"Cornelius",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Cadena",
"Cesar",
""
]
] |
new_dataset
| 0.988886 |
2307.15488
|
Helena Mart\'in-Cruz
|
Beatriz Barbero-Lucas, Fernando Hernando, Helena Mart\'in-Cruz, Gary
McGuire
|
MDS, Hermitian Almost MDS, and Gilbert-Varshamov Quantum Codes from
Generalized Monomial-Cartesian Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We construct new stabilizer quantum error-correcting codes from generalized
monomial-Cartesian codes. Our construction uses an explicitly defined twist
vector, and we present formulas for the minimum distance and dimension.
Generalized monomial-Cartesian codes arise from polynomials in $m$ variables.
When $m=1$, our codes are MDS; when $m=2$ and our lower bound for the
minimum distance is $3$, the codes are at least Hermitian Almost MDS. For an
infinite family of parameters when $m=2$ we prove that our codes beat the
Gilbert-Varshamov bound. We also present many examples of our codes that are
better than any known code in the literature.
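  For context, and as standard coding-theory background rather than a claim from
the paper, the MDS and Almost MDS terminology refers to the quantum Singleton
bound for an $[[n,k,d]]_q$ stabilizer code:

```latex
\[
  k \;\le\; n - 2(d-1).
\]
```

MDS codes meet this bound with equality, while Almost MDS codes miss it by the
smallest possible margin.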
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 11:34:42 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Barbero-Lucas",
"Beatriz",
""
],
[
"Hernando",
"Fernando",
""
],
[
"Martín-Cruz",
"Helena",
""
],
[
"McGuire",
"Gary",
""
]
] |
new_dataset
| 0.999595 |
2307.15494
|
Kevin Denamgana\"i
|
Kevin Denamgana\"i, Daniel Hernandez, Ozan Vardal, Sondess Missaoui,
James Alfred Walker
|
ETHER: Aligning Emergent Communication for Hindsight Experience Replay
|
work in progress
| null | null | null |
cs.CL cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language instruction following is paramount to enable collaboration
between artificial agents and human beings. Natural language-conditioned
reinforcement learning (RL) agents have shown how natural languages'
properties, such as compositionality, can provide a strong inductive bias to
learn complex policies. Previous architectures like HIGhER combine the benefit
of language-conditioning with Hindsight Experience Replay (HER) to deal with
sparse rewards environments. Yet, like HER, HIGhER relies on an oracle
predicate function to provide a feedback signal highlighting which linguistic
description is valid for which state. This reliance on an oracle limits its
application. Additionally, HIGhER only leverages the linguistic information
contained in successful RL trajectories, thus hurting its final performance and
data-efficiency. Without early successful trajectories, HIGhER is no better
than DQN upon which it is built. In this paper, we propose the Emergent Textual
Hindsight Experience Replay (ETHER) agent, which builds on HIGhER and addresses
both of its limitations by means of (i) a discriminative visual referential
game, commonly studied in the subfield of Emergent Communication (EC), used
here as an unsupervised auxiliary task and (ii) a semantic grounding scheme to
align the emergent language with the natural language of the
instruction-following benchmark. We show that the referential game's agents
make an artificial language emerge that is aligned with the natural-like
language used to describe goals in the BabyAI benchmark and that it is
expressive enough so as to also describe unsuccessful RL trajectories and thus
provide feedback to the RL agent to leverage the linguistic, structured
information contained in all trajectories. Our work shows that EC is a viable
unsupervised auxiliary task for RL and provides missing pieces to make HER more
widely applicable.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 11:42:31 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Denamganaï",
"Kevin",
""
],
[
"Hernandez",
"Daniel",
""
],
[
"Vardal",
"Ozan",
""
],
[
"Missaoui",
"Sondess",
""
],
[
"Walker",
"James Alfred",
""
]
] |
new_dataset
| 0.977186 |
2307.15516
|
Enrique Dehaerne
|
Enrique Dehaerne, Bappaditya Dey, Hossein Esfandiar, Lander
Verstraete, Hyo Seon Suh, Sandip Halder, Stefan De Gendt
|
YOLOv8 for Defect Inspection of Hexagonal Directed Self-Assembly
Patterns: A Data-Centric Approach
|
8 pages, 10 figures, accepted for the 38th EMLC Conference 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Shrinking pattern dimensions leads to an increased variety of defect types in
semiconductor devices. This has spurred innovation in patterning approaches
such as Directed self-assembly (DSA) for which no traditional, automatic defect
inspection software exists. Machine Learning-based SEM image analysis has
become an increasingly popular research topic for defect inspection with
supervised ML models often showing the best performance. However, little
research has been done on obtaining a dataset with high-quality labels for
these supervised models. In this work, we propose a method for obtaining
coherent and complete labels for a dataset of hexagonal contact hole DSA
patterns while requiring minimal quality control effort from a DSA expert. We
show that YOLOv8, a state-of-the-art neural network, achieves defect detection
precisions of more than 0.9 mAP on our final dataset, which best reflects DSA
expert defect labeling expectations. We discuss the strengths and limitations
of our proposed labeling approach and suggest directions for future work in
data-centric ML-based defect inspection.
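  For readers unfamiliar with the tooling, a minimal training and evaluation run
with the ultralytics package looks roughly as follows; the dataset config name
and hyperparameters are illustrative assumptions rather than the paper's
settings, and the metric attributes reflect recent ultralytics releases.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained checkpoint; the paper's variant may differ
model.train(data="dsa_defects.yaml", epochs=100, imgsz=640)  # hypothetical config
metrics = model.val()       # runs validation on the dataset's val split
print(metrics.box.map50)    # mAP at IoU 0.50, the kind of precision reported
```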
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 12:17:01 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Dehaerne",
"Enrique",
""
],
[
"Dey",
"Bappaditya",
""
],
[
"Esfandiar",
"Hossein",
""
],
[
"Verstraete",
"Lander",
""
],
[
"Suh",
"Hyo Seon",
""
],
[
"Halder",
"Sandip",
""
],
[
"De Gendt",
"Stefan",
""
]
] |
new_dataset
| 0.999773 |
2307.15561
|
Andrei Tonkikh
|
Luciano Freitas, Andrei Tonkikh
|
Swiper and Dora: efficient solutions to weighted distributed problems
| null | null | null | null |
cs.DC cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The majority of fault-tolerant distributed algorithms are designed assuming a
nominal corruption model, in which at most a fraction $f_n$ of parties can be
corrupted by the adversary. However, due to the infamous Sybil attack, nominal
models are not sufficient to express the trust assumptions in open (i.e.,
permissionless) settings. Instead, permissionless systems typically operate in
a weighted model, where each participant is associated with a weight and the
adversary can corrupt a set of parties holding at most a fraction $f_w$ of
total weight.
In this paper, we suggest a simple way to transform a large class of
protocols designed for the nominal model into the weighted model. To this end,
we formalize and solve three novel optimization problems, which we collectively
call the weight reduction problems, that allow us to map large real weights
into small integer weights while preserving the properties necessary for the
correctness of the protocols. In all cases, we manage to keep the sum of the
integer weights to be at most linear in the number of parties, resulting in
extremely efficient protocols for the weighted model. Moreover, we demonstrate
that, on weight distributions that emerge in practice, the sum of the integer
weights tends to be far from the theoretical worst-case and, often even smaller
than the number of participants.
While, for some protocols, our transformation requires an arbitrarily small
reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many
important problems we manage to obtain weighted solutions with the same
resilience ($f_w = f_n$) as nominal ones. Notable examples include asynchronous
consensus, verifiable secret sharing, erasure-coded distributed storage and
broadcast protocols.
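  To make the weight-reduction goal concrete, the toy sketch below (not the
paper's algorithm) naively rounds real weights into small integers and then
spot-checks the property that matters: a coalition holding at most an $f_w$
fraction of real weight should never exceed an $f_n$ fraction of the integer
weight. The paper's constructions achieve this with guarantees; naive rounding
can fail the check.

```python
import random

def reduce_weights(weights, scale=100):
    """Naive rounding of real weights to small positive integers."""
    total = sum(weights)
    return [max(1, round(w / total * scale)) for w in weights]

def find_violation(weights, ints, f_w, f_n, trials=20_000):
    """Randomly sample coalitions; return True if the property breaks."""
    W, T = sum(weights), sum(ints)
    n = len(weights)
    for _ in range(trials):
        coalition = [i for i in range(n) if random.random() < 0.5]
        real = sum(weights[i] for i in coalition)
        integer = sum(ints[i] for i in coalition)
        if real <= f_w * W and integer > f_n * T:
            return True
    return False

weights = [random.lognormvariate(0, 2) for _ in range(50)]
ints = reduce_weights(weights)
print("sum of integer weights:", sum(ints))  # keeping this linear in n is the goal
print("violation found:", find_violation(weights, ints, f_w=1/4, f_n=1/3))
```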
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 13:59:04 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Freitas",
"Luciano",
""
],
[
"Tonkikh",
"Andrei",
""
]
] |
new_dataset
| 0.970246 |
2307.15568
|
David Robb
|
Mei Yii Lim, Jos\'e David Aguas Lopes, David A. Robb, Bruce W. Wilson,
Meriam Moujahid, Emanuele De Pellegrin and Helen Hastie
|
We are all Individuals: The Role of Robot Personality and Human Traits
in Trustworthy Interaction
|
8 pages, RO-MAN'22, 31st IEEE International Conference on Robot and
Human Interactive Communication (RO-MAN), August 2022, Naples, Italy
|
In RO-MAN'2022 (pp. 538-545). IEEE
|
10.1109/RO-MAN53752.2022.9900772
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As robots take on roles in our society, it is important that their
appearance, behaviour and personality are appropriate for the job they are
given and are perceived favourably by the people with whom they interact. Here,
we provide an extensive quantitative and qualitative study exploring robot
personality but, importantly, with respect to individual human traits. Firstly,
we show that we can accurately portray personality in a social robot, in terms
of extroversion-introversion using vocal cues and linguistic features.
Secondly, through garnering preferences and trust ratings for these different
robot personalities, we establish that, for a Robo-Barista, an extrovert robot
is preferred and trusted more than an introvert robot, regardless of the
subject's own personality. Thirdly, we find that individual attitudes and
predispositions towards robots do impact trust in the Robo-Baristas, and are
therefore important considerations in addition to robot personality, roles and
interaction context when designing any human-robot interaction study.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 14:04:07 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Lim",
"Mei Yii",
""
],
[
"Lopes",
"José David Aguas",
""
],
[
"Robb",
"David A.",
""
],
[
"Wilson",
"Bruce W.",
""
],
[
"Moujahid",
"Meriam",
""
],
[
"De Pellegrin",
"Emanuele",
""
],
[
"Hastie",
"Helen",
""
]
] |
new_dataset
| 0.992858 |
2307.15612
|
Giulia Bernardini
|
Rocco Ascone, Giulia Bernardini, Luca Manzoni
|
Fixed Points and Attractors of Reactantless and Inhibitorless Reaction
Systems
|
29 pages
| null | null | null |
cs.CC math.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Reaction systems are discrete dynamical systems that model biochemical
processes in living cells using finite sets of reactants, inhibitors, and
products. We investigate the computational complexity of a comprehensive set of
problems related to the existence of fixed points and attractors in two
constrained classes of reaction systems, in which either reactants or
inhibitors are disallowed. These problems have biological relevance and have
been extensively studied in the unconstrained case; however, they remain
unexplored in the context of reactantless or inhibitorless systems.
Interestingly, we demonstrate that although the absence of reactants or
inhibitors simplifies the system's dynamics, it does not always lead to a
reduction in the complexity of the considered problems.
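  A short, self-contained sketch of the dynamics under the standard
reaction-system semantics (generic background, not code from the paper): a
reaction $(R, I, P)$ is enabled in state $W$ when $R \subseteq W$ and
$I \cap W = \emptyset$, the successor state is the union of the products of all
enabled reactions, inhibitorless systems set $I = \emptyset$, and reactantless
systems set $R = \emptyset$.

```python
def step(reactions, state):
    """One step: union of products of all reactions enabled in `state`."""
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products
    return nxt

def is_fixed_point(reactions, state):
    return step(reactions, state) == state

# An inhibitorless example over the background set {a, b}:
rs = [({"a"}, set(), {"b"}),
      ({"b"}, set(), {"a", "b"})]
print(step(rs, {"a"}))                 # {'b'}
print(is_fixed_point(rs, {"a", "b"}))  # True: {a, b} maps to itself
```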
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 15:15:18 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Ascone",
"Rocco",
""
],
[
"Bernardini",
"Giulia",
""
],
[
"Manzoni",
"Luca",
""
]
] |
new_dataset
| 0.990049 |
2307.15642
|
Laurie Williams
|
Mindy Tran and Yasemin Acar and Michel Cucker and William Enck and
Alexandros Kapravelos and Christian Kastner and Laurie Williams
|
S3C2 Summit 2022-09: Industry Secure Supply Chain Summit
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent years have shown increased cyber attacks targeting less secure
elements in the software supply chain and causing fatal damage to businesses
and organizations. Past well-known examples of software supply chain attacks
are the SolarWinds or log4j incidents that have affected thousands of customers
and businesses. The US government and industry are equally interested in
enhancing software supply chain security. We conducted six panel discussions
with a diverse set of 19 practitioners from industry. We asked them open-ended
questions regarding SBOMs, vulnerable dependencies, malicious commits, build
and deploy, the Executive Order, and standards compliance. The goal of this
summit was to enable open discussions, mutual sharing, and shedding light on
common challenges that industry practitioners with practical experience face
when securing their software supply chain. This paper summarizes the summit
held on September 30, 2022.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 16:01:30 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Tran",
"Mindy",
""
],
[
"Acar",
"Yasemin",
""
],
[
"Cucker",
"Michel",
""
],
[
"Enck",
"William",
""
],
[
"Kapravelos",
"Alexandros",
""
],
[
"Kastner",
"Christian",
""
],
[
"Williams",
"Laurie",
""
]
] |
new_dataset
| 0.999522 |
2307.15690
|
Nico G\"urtler
|
Nico G\"urtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel
W\"uthrich, Stefan Bauer, Bernhard Sch\"olkopf and Georg Martius
|
Benchmarking Offline Reinforcement Learning on Real-Robot Hardware
|
The Eleventh International Conference on Learning Representations.
2022. Published at ICLR 2023. Datasets available at
https://github.com/rr-learning/trifinger_rl_datasets
| null | null | null |
cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning policies from previously recorded data is a promising direction for
real-world robotics tasks, as online learning is often infeasible. Dexterous
manipulation in particular remains an open problem in its general form. The
combination of offline reinforcement learning with large diverse datasets,
however, has the potential to lead to a breakthrough in this challenging domain
analogously to the rapid progress made in supervised learning in recent years.
To coordinate the efforts of the research community toward tackling this
problem, we propose a benchmark including: i) a large collection of data for
offline learning from a dexterous manipulation platform on two tasks, obtained
with capable RL agents trained in simulation; ii) the option to execute learned
policies on a real-world robotic system and a simulation for efficient
debugging. We evaluate prominent open-sourced offline reinforcement learning
algorithms on the datasets and provide a reproducible experimental setup for
offline reinforcement learning on real systems.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 17:29:49 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Gürtler",
"Nico",
""
],
[
"Blaes",
"Sebastian",
""
],
[
"Kolev",
"Pavel",
""
],
[
"Widmaier",
"Felix",
""
],
[
"Wüthrich",
"Manuel",
""
],
[
"Bauer",
"Stefan",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Martius",
"Georg",
""
]
] |
new_dataset
| 0.989313 |
2307.15709
|
Tom Mens
|
Tom Mens, Coen De Roover
|
An Introduction to Software Ecosystems
|
Preprint of chapter "An Introduction to Software Ecosystems" by Tom
Mens and Coen De Roover, published in the book "Software Ecosystems: Tooling
and Analytics" (eds. T. Mens, C. De Roover, A. Cleve), 2023, ISBN
978-3-031-36059-6, reproduced with permission of Springer. The final
authenticated version of the book and this chapter is available online at:
https://doi.org/10.1007/978-3-031-36060-2
|
In "Software Ecosystems: Tooling and Analytics" (Eds. Tom Mens,
Coen De Roover, Anthony Cleve), Springer, 2023. ISBN 978-3-031-36059-6
|
10.1007/978-3-031-36060-2
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This chapter defines and presents different kinds of software ecosystems. The
focus is on the development, tooling and analytics aspects of software
ecosystems, i.e., communities of software developers and the interconnected
software components (e.g., projects, libraries, packages, repositories,
plug-ins, apps) they are developing and maintaining. The technical and social
dependencies between these developers and software components form a
socio-technical dependency network, and the dynamics of this network change
over time. We classify and provide several examples of such ecosystems. The
chapter also introduces and clarifies the relevant terms needed to understand
and analyse these ecosystems, as well as the techniques and research methods
that can be used to analyse different aspects of these ecosystems.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 17:58:59 GMT"
}
] | 2023-07-31T00:00:00 |
[
[
"Mens",
"Tom",
""
],
[
"De Roover",
"Coen",
""
]
] |
new_dataset
| 0.96194 |
2109.04756
|
Yuquan Wang
|
Yuquan Wang, Niels Dehio, and Abderrahmane Kheddar
|
On Inverse Inertia Matrix and Contact-Force Model for Robotic
Manipulators at Normal Impacts
| null |
IEEE Robotics and Automation Letters (2022) 3648-3655
|
10.1109/LRA.2022.3145967
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art impact dynamics models either apply for free-flying objects
or do not account that a robotic manipulator is commonly high-stiffness
controlled. Thus, we lack tailor-made models for manipulators mounted on a
fixed base. Focusing on orthogonal point-to-surface impacts (no tangential
velocities), we revisit two main elements of an impact dynamics model: the
contact-force model and the inverse inertia matrix. We collect contact-force
measurements by impacting a 7 DOF Panda robot against a sensorized rigid
environment with various joint configurations and velocities. Evaluating the
measurements from 150 trials, the best model-to-data matching suggests a
viscoelastic contact-force model and computing the inverse inertia matrix
assuming the robot is a composite rigid body.
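  As generic robotics background (our reading of the quantities involved, not the
paper's code), the inverse inertia apparent at the contact point is the
operational-space quantity $J M^{-1} J^{\top}$, where $M$ is the joint-space
inertia of the composite rigid body and $J$ is the contact-point Jacobian.

```python
import numpy as np

def contact_inverse_inertia(M: np.ndarray, J: np.ndarray) -> np.ndarray:
    """Apparent inverse inertia at the contact: J @ inv(M) @ J.T.

    M: (n, n) joint-space inertia matrix; J: (3, n) contact Jacobian.
    """
    return J @ np.linalg.solve(M, J.T)

# Toy 2-DOF example with illustrative (made-up) numbers:
M = np.array([[2.0, 0.3],
              [0.3, 1.0]])
J = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [0.0, 0.0]])
print(contact_inverse_inertia(M, J))
```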
|
[
{
"version": "v1",
"created": "Fri, 10 Sep 2021 09:45:29 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Dec 2021 23:00:23 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Feb 2022 00:09:07 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Wang",
"Yuquan",
""
],
[
"Dehio",
"Niels",
""
],
[
"Kheddar",
"Abderrahmane",
""
]
] |
new_dataset
| 0.988336 |
2207.13981
|
Xabier S\'aez-de-C\'amara
|
Xabier S\'aez-de-C\'amara, Jose Luis Flores, Crist\'obal Arellano,
Aitor Urbieta, Urko Zurutuza
|
Gotham Testbed: a Reproducible IoT Testbed for Security Experiments and
Dataset Generation
|
Accepted for publication in IEEE Transactions on Dependable and
Secure Computing. Accepted version first online: Feb 22 2023
| null |
10.1109/TDSC.2023.3247166
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The growing adoption of the Internet of Things (IoT) has brought a
significant increase in attacks targeting those devices. Machine learning (ML)
methods have shown promising results for intrusion detection; however, the
scarcity of IoT datasets remains a limiting factor in developing ML-based
security systems for IoT scenarios. Static datasets get outdated due to
evolving IoT architectures and threat landscape; meanwhile, the testbeds used
to generate them are rarely published. This paper presents the Gotham testbed,
a reproducible and flexible security testbed extendable to accommodate new
emulated devices, services or attackers. Gotham is used to build an IoT
scenario composed of 100 emulated devices communicating via MQTT, CoAP and RTSP
protocols, among others, in a topology composed of 30 switches and 10 routers.
The scenario presents three threat actors, including the entire Mirai botnet
lifecycle and additional red-teaming tools performing DoS, scanning, and
attacks targeting IoT protocols. The testbed has many purposes, including a
cyber range, testing security solutions, and capturing network and application
data to generate datasets. We hope that researchers can leverage and adapt
Gotham to include other devices, state-of-the-art attacks and topologies to
share scenarios and datasets that reflect the current IoT settings and threat
landscape.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 09:47:51 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2022 11:03:02 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jul 2023 11:58:00 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Sáez-de-Cámara",
"Xabier",
""
],
[
"Flores",
"Jose Luis",
""
],
[
"Arellano",
"Cristóbal",
""
],
[
"Urbieta",
"Aitor",
""
],
[
"Zurutuza",
"Urko",
""
]
] |
new_dataset
| 0.999768 |
2209.11405
|
Zhiyang He
|
Andrew Cross, Zhiyang He, Anand Natarajan, Mario Szegedy, Guanyu Zhu
|
Quantum Locally Testable Code with Constant Soundness
|
Updated presentation of the manuscript
| null | null | null |
cs.IT math.IT quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present two constructions of quantum locally testable codes
(QLTC) with constant soundness. In the first approach, we introduce an
operation called check product, and show how this operation gives rise to QLTCs
of constant soundness, constant rate, and distance scaling with locality. In
the second approach, we consider hypergraph product of a quantum code and a
classical repetition code, and observe a special case in which the soundness of
component codes is preserved. This insight leads us to construct QLTCs of
constant soundness, scalable rate and distance, and constant average locality.
Our work marks a step towards constructing QLTCs of high soundness and
distance, which would give a different construction to the No Low-Energy
Trivial States (NLTS) theorem.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 04:38:01 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 21:46:31 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Cross",
"Andrew",
""
],
[
"He",
"Zhiyang",
""
],
[
"Natarajan",
"Anand",
""
],
[
"Szegedy",
"Mario",
""
],
[
"Zhu",
"Guanyu",
""
]
] |
new_dataset
| 0.999565 |
2211.11220
|
Rongqin Liang
|
Rongqin Liang, Yuanman Li, Jiantao Zhou, and Xia Li
|
STGlow: A Flow-based Generative Framework with Dual Graphormer for
Pedestrian Trajectory Prediction
|
14 pages, 9 figures
| null |
10.1109/TNNLS.2023.3294998
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The pedestrian trajectory prediction task is an essential component of
intelligent systems. Its applications include but are not limited to autonomous
driving, robot navigation, and anomaly detection of monitoring systems. Due to
the diversity of motion behaviors and the complex social interactions among
pedestrians, accurately forecasting their future trajectory is challenging.
Existing approaches commonly adopt GANs or CVAEs to generate diverse
trajectories. However, GAN-based methods do not directly model data in a latent
space, which may make them fail to have full support over the underlying data
distribution; CVAE-based methods optimize a lower bound on the log-likelihood
of observations, which may cause the learned distribution to deviate from the
underlying distribution. The above limitations make existing approaches often
generate highly biased or inaccurate trajectories. In this paper, we propose a
novel generative flow based framework with dual graphormer for pedestrian
trajectory prediction (STGlow). Different from previous approaches, our method
can more precisely model the underlying data distribution by optimizing the
exact log-likelihood of motion behaviors. Besides, our method has clear
physical meanings for simulating the evolution of human motion behaviors. The
forward process of the flow gradually degrades complex motion behavior into
simple behavior, while its reverse process represents the evolution of simple
behavior into complex motion behavior. Further, we introduce a dual graphormer
combining with the graph structure to more adequately model the temporal
dependencies and the mutual spatial interactions. Experimental results on
several benchmarks demonstrate that our method achieves much better performance
compared to previous state-of-the-art approaches.
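  The "exact log-likelihood" that flow-based models optimize is the standard
change-of-variables identity (textbook background, not a formula quoted from the
paper); for an invertible flow $z = f(x)$ with base density $p_Z$:

```latex
\[
  \log p_X(x) \;=\; \log p_Z\bigl(f(x)\bigr)
              \;+\; \log \left| \det \frac{\partial f(x)}{\partial x} \right| .
\]
```

GAN and CVAE objectives optimize this quantity only indirectly or via a lower
bound, which is the gap the abstract highlights.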
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 07:29:24 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 02:16:24 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Jul 2023 08:19:00 GMT"
},
{
"version": "v4",
"created": "Thu, 27 Jul 2023 02:11:02 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Liang",
"Rongqin",
""
],
[
"Li",
"Yuanman",
""
],
[
"Zhou",
"Jiantao",
""
],
[
"Li",
"Xia",
""
]
] |
new_dataset
| 0.991671 |
2211.16762
|
Siyu Ren
|
Siyu Ren, Junhui Hou, Xiaodong Chen, Ying He, Wenping Wang
|
GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided
Distance Representation
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a learning-based method, namely GeoUDF, to tackle the long-standing
and challenging problem of reconstructing a discrete surface from a sparse
point cloud. To be specific, we propose a geometry-guided learning method for
UDF and its gradient estimation that explicitly formulates the unsigned
distance of a query point as the learnable affine averaging of its distances to
the tangent planes of neighboring points on the surface. Besides, we model the
local geometric structure of the input point clouds by explicitly learning a
quadratic polynomial for each point. This not only facilitates upsampling the
input sparse point cloud but also naturally induces unoriented normals, which
further augment UDF estimation. Finally, to extract triangle meshes from the
predicted UDF, we propose a customized edge-based marching cube module. We
conduct extensive experiments and ablation studies to demonstrate the
significant advantages of our method over state-of-the-art methods in terms of
reconstruction accuracy, efficiency, and generality. The source code is
publicly available at https://github.com/rsy6318/GeoUDF.
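  The "learnable affine averaging of distances to tangent planes" can be written
schematically as follows; the notation is ours and the exact parameterization of
the weights and neighborhoods is defined in the paper, so read this as a
paraphrase rather than the paper's equation. For a query $q$ with neighboring
surface points $p_i$ and unit normals $n_i$:

```latex
\[
  \widehat{\mathrm{UDF}}(q) \;=\; \sum_{i \in \mathcal{N}(q)}
      w_i(q)\,\bigl| (q - p_i) \cdot n_i \bigr| ,
  \qquad \sum_{i \in \mathcal{N}(q)} w_i(q) = 1 .
\]
```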
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 06:02:01 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 08:10:13 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Mar 2023 13:07:50 GMT"
},
{
"version": "v4",
"created": "Thu, 27 Jul 2023 10:52:42 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Ren",
"Siyu",
""
],
[
"Hou",
"Junhui",
""
],
[
"Chen",
"Xiaodong",
""
],
[
"He",
"Ying",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.997089 |
2301.09080
|
Bo Han
|
Bo Han, Yi Ren, Yuheng Li
|
Dance2MIDI: Dance-driven multi-instruments music generation
|
The reason for the withdrawal and retraction is due to recent
developments regarding the research presented in the manuscript. After
further investigation and reassessment, I have identified crucial issues with
the methodology and data used in the study. These concerns have raised doubts
about the accuracy and reliability of the findings presented in the
manuscript
| null | null | null |
cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dance-driven music generation aims to generate musical pieces conditioned on
dance videos. Previous works focus on monophonic or raw audio generation, while
the multi-instruments scenario is under-explored. The challenges of the
dance-driven multi-instruments music (MIDI) generation are two-fold: 1) no
publicly available multi-instruments MIDI and video paired dataset and 2) the
weak correlation between music and video. To tackle these challenges, we build
the first multi-instruments MIDI and dance paired dataset (D2MIDI). Based on
our proposed dataset, we introduce a multi-instruments MIDI generation
framework (Dance2MIDI) conditioned on dance video. Specifically, 1) to model
the correlation between music and dance, we encode the dance motion using the
GCN, and 2) to generate harmonious and coherent music, we employ Transformer to
decode the MIDI sequence. We evaluate the generated music of our framework
trained on D2MIDI dataset and demonstrate that our method outperforms existing
methods. The data and code are available on the GitHub website.
|
[
{
"version": "v1",
"created": "Sun, 22 Jan 2023 08:35:51 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 09:15:09 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 13:56:54 GMT"
},
{
"version": "v4",
"created": "Wed, 14 Jun 2023 14:17:42 GMT"
},
{
"version": "v5",
"created": "Fri, 16 Jun 2023 03:08:47 GMT"
},
{
"version": "v6",
"created": "Thu, 27 Jul 2023 07:50:46 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Han",
"Bo",
""
],
[
"Ren",
"Yi",
""
],
[
"Li",
"Yuheng",
""
]
] |
new_dataset
| 0.999813 |
2302.12806
|
Ruijie Xi
|
Ruijie Xi, Munindar P. Singh
|
Morality in the mundane: Categorizing moral reasoning in real-life
social situations
|
Accepted by THE 18TH INTERNATIONAL AAAI CONFERENCE ON WEB AND SOCIAL
MEDIA (ICWSM2024)
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Moral reasoning reflects how people acquire and apply moral rules in
particular situations. With increasingly social interactions happening online,
social media data provides an unprecedented opportunity to assess in-the-wild
moral reasoning. We investigate the commonsense aspects of morality in ordinary
matters empirically. To this end, we examine data from a Reddit subcommunity
(i.e., a subreddit) where an author may describe their behavior in a situation
to seek comments about whether that behavior was appropriate. Other users
comment to provide judgments and reasoning. We focus on the novel problem of
understanding the moral reasoning implicit in user comments about the propriety
of an author's behavior. Especially, we explore associations between the common
elements of the indicated reasoning and the extractable social factors. Our
results suggest the reasoning depends on the author's gender and the topic of a
post, such as when expressing anger and using swear words (e.g., f-ck, hell,
and damn) in work-related situations. Moreover, we find that the
commonly expressed semantics also depends on commenters' interests.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 18:35:38 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 21:36:15 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Xi",
"Ruijie",
""
],
[
"Singh",
"Munindar P.",
""
]
] |
new_dataset
| 0.999518 |
2304.01986
|
Ziming Wang
|
Ziming Wang, Yujiang Liu, Yifan Duan, Xingchen Li, Xinran Zhang,
Jianmin Ji, Erbao Dong and Yanyong Zhang
|
USTC FLICAR: A Sensors Fusion Dataset of LiDAR-Inertial-Camera for
Heavy-duty Autonomous Aerial Work Robots
|
23 pages, 34 figures
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we present the USTC FLICAR Dataset, which is dedicated to the
development of simultaneous localization and mapping and precise 3D
reconstruction of the workspace for heavy-duty autonomous aerial work robots.
In recent years, numerous public datasets have played significant roles in the
advancement of autonomous cars and unmanned aerial vehicles (UAVs). However,
these two platforms differ from aerial work robots: UAVs are limited in their
payload capacity, while cars are restricted to two-dimensional movements. To
fill this gap, we create the "Giraffe" mapping robot based on a bucket truck,
which is equipped with a variety of well-calibrated and synchronized sensors:
four 3D LiDARs, two stereo cameras, two monocular cameras, Inertial Measurement
Units (IMUs), and a GNSS/INS system. A laser tracker is used to record the
millimeter-level ground truth positions. We also make its ground twin, the
"Okapi" mapping robot, to gather data for comparison. The proposed dataset
extends the typical autonomous driving sensing suite to aerial scenes,
demonstrating the potential of combining autonomous driving perception systems
with bucket trucks to create a versatile autonomous aerial working platform.
Moreover, based on the Segment Anything Model (SAM), we produce the Semantic
FLICAR dataset, which provides fine-grained semantic segmentation annotations
for multimodal continuous data in both temporal and spatial dimensions. The
dataset is available for download at: https://ustc-flicar.github.io/.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 17:45:06 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jul 2023 09:37:19 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Wang",
"Ziming",
""
],
[
"Liu",
"Yujiang",
""
],
[
"Duan",
"Yifan",
""
],
[
"Li",
"Xingchen",
""
],
[
"Zhang",
"Xinran",
""
],
[
"Ji",
"Jianmin",
""
],
[
"Dong",
"Erbao",
""
],
[
"Zhang",
"Yanyong",
""
]
] |
new_dataset
| 0.999831 |
2304.13037
|
Van-Duc Le
|
Van-Duc Le, Cuong-Tien Bui, Wen-Syan Li
|
VeML: An End-to-End Machine Learning Lifecycle for Large-scale and
High-dimensional Data
|
The updated version of this paper, titled "Efficient ML Lifecycle
Transferring for Large-scale and High-dimensional Data via Core Set-based
Dataset Similarity," has been accepted for publication in IEEE Access
|
IEEE Access, vol. 11, pp. 73823-73838, 2023
|
10.1109/ACCESS.2023.3296136
| null |
cs.LG cs.DB cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An end-to-end machine learning (ML) lifecycle consists of many iterative
processes, from data preparation and ML model design to model training and then
deploying the trained model for inference. When building an end-to-end
lifecycle for an ML problem, many ML pipelines must be designed and executed
that produce a huge number of lifecycle versions. Therefore, this paper
introduces VeML, a Version management system dedicated to end-to-end ML
Lifecycle. Our system tackles several crucial problems that other systems have
not solved. First, we address the high cost of building an ML lifecycle,
especially for large-scale and high-dimensional dataset. We solve this problem
by proposing to transfer the lifecycle of similar datasets managed in our
system to the new training data. We design an algorithm based on the core set
to compute similarity for large-scale, high-dimensional data efficiently.
Another critical issue is the model accuracy degradation caused by the
difference between training data and testing data during the ML lifetime,
which forces the lifecycle to be rebuilt. Our system helps to detect this
mismatch without requiring labeled testing data and to rebuild the ML
lifecycle for a new data version. To demonstrate our contributions, we
conduct experiments on
real-world, large-scale datasets of driving images and spatiotemporal sensor
data and show promising results.
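  One plausible shape of a core-set-based similarity check, sketched under our
own assumptions (the paper's actual algorithm may differ): summarize each dataset
by a greedy k-center core set of feature vectors, then compare the core sets with
a symmetric Chamfer distance, so similarity is computed without touching every
pair of samples.

```python
import numpy as np

def k_center_coreset(X: np.ndarray, k: int) -> np.ndarray:
    """Greedy farthest-point sampling: X is (n, d), returns (k, d)."""
    chosen = [0]
    d = np.linalg.norm(X - X[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return X[chosen]

def chamfer(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Chamfer distance between two small point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
old = rng.normal(0.0, 1.0, size=(5000, 16))   # features of a managed dataset
new = rng.normal(0.2, 1.0, size=(4000, 16))   # features of incoming data
sim = chamfer(k_center_coreset(old, 64), k_center_coreset(new, 64))
print(f"core-set Chamfer distance: {sim:.3f}")  # small => transfer the lifecycle
```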
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 07:32:16 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jul 2023 06:09:18 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Le",
"Van-Duc",
""
],
[
"Bui",
"Cuong-Tien",
""
],
[
"Li",
"Wen-Syan",
""
]
] |
new_dataset
| 0.999739 |
2305.06716
|
Jenny Schmalfuss
|
Jenny Schmalfuss and Lukas Mehl and Andr\'es Bruhn
|
Distracting Downpour: Adversarial Weather Attacks for Motion Estimation
|
Accepted by ICCV 2023. This work is a direct extension of our extended
abstract from arXiv:2210.11242
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current adversarial attacks on motion estimation, or optical flow, optimize
small per-pixel perturbations, which are unlikely to appear in the real world.
In contrast, adverse weather conditions constitute a much more realistic threat
scenario. Hence, in this work, we present a novel attack on motion estimation
that exploits adversarially optimized particles to mimic weather effects like
snowflakes, rain streaks or fog clouds. At the core of our attack framework is
a differentiable particle rendering system that integrates particles (i)
consistently over multiple time steps, (ii) into the 3D space, and (iii) with a
photo-realistic appearance. Through optimization, we obtain adversarial weather
that significantly impacts the motion estimation. Surprisingly, methods that
previously showed good robustness towards small per-pixel perturbations are
particularly vulnerable to adversarial weather. At the same time, augmenting
the training with non-optimized weather increases a method's robustness towards
weather effects and improves generalizability at almost no additional cost. Our
code will be available at https://github.com/cv-stuttgart/DistractingDownpour.
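A simplified sketch of such an optimization loop, assuming a differentiable flow_model and additive 2D Gaussian splats in place of the paper's 3D photo-realistic particle renderer; the loop ascends the end-point error of the flow prediction.

import torch

def render_particles(img, pos, sigma=2.0, brightness=0.8):
    # Differentiably splat isotropic Gaussian "snowflakes" onto an image.
    H, W = img.shape[-2:]
    ys = torch.arange(H, dtype=torch.float32).view(H, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, W)
    out = img
    for p in pos:                                   # pos: (N, 2) particle centers (y, x)
        g = torch.exp(-((ys - p[0]) ** 2 + (xs - p[1]) ** 2) / (2 * sigma ** 2))
        out = out + brightness * g                  # additive splat keeps gradients intact
    return out.clamp(0.0, 1.0)

def attack(flow_model, frame1, frame2, flow_gt, n_particles=32, steps=50, lr=1.0):
    H, W = frame1.shape[-2:]
    init = torch.rand(n_particles, 2) * torch.tensor([H, W], dtype=torch.float32)
    pos = init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pos], lr=lr)
    for _ in range(steps):
        f1 = render_particles(frame1, pos)
        f2 = render_particles(frame2, pos)
        epe = (flow_model(f1, f2) - flow_gt).norm(dim=0).mean()
        loss = -epe                                 # maximize the end-point error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pos.detach()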
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 10:52:00 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jul 2023 11:14:53 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Schmalfuss",
"Jenny",
""
],
[
"Mehl",
"Lukas",
""
],
[
"Bruhn",
"Andrés",
""
]
] |
new_dataset
| 0.999101 |
2305.09160
|
Siyuan Huang
|
Siyuan Huang, Bo Zhang, Botian Shi, Peng Gao, Yikang Li, Hongsheng Li
|
SUG: Single-dataset Unified Generalization for 3D Point Cloud
Classification
|
Accepted by ACM MM-2023, and our code is available at
https://github.com/SiyuanHuang95/SUG
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Although the Domain Generalization (DG) problem has been fast-growing in 2D
image tasks, its exploration on 3D point cloud data is still insufficient and
challenged by more complex and uncertain cross-domain variances with uneven
inter-class modality distribution. In this paper, different from previous 2D DG
works, we focus on the 3D DG problem and propose a Single-dataset Unified
Generalization (SUG) framework that only leverages a single source dataset to
alleviate the unforeseen domain differences faced by a well-trained source
model. Specifically, we first design a Multi-grained Sub-domain Alignment (MSA)
method, which can constrain the learned representations to be domain-agnostic
and discriminative, by performing a multi-grained feature alignment process
between the split sub-domains of the single source dataset. Then, a
Sample-level Domain-aware Attention (SDA) strategy is presented, which can
selectively enhance easy-to-adapt samples from different sub-domains according
to the sample-level inter-domain distance to avoid the negative transfer.
Experiments demonstrate that our SUG can boost the generalization ability for
unseen target domains, even outperforming the existing unsupervised domain
adaptation methods that have to access extensive target domain data. Our code
is available at https://github.com/SiyuanHuang95/SUG.
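A minimal sketch of the alignment idea behind MSA, assuming features and sub-domain assignments are already computed; the RBF bandwidth and the 0.1 loss weight are illustrative assumptions, not the paper's values.

import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased RBF-kernel MMD^2 between two feature batches of shape (B, D)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def sug_style_loss(features, subdomain_ids, task_loss):
    """Task loss plus alignment between the two split sub-domains."""
    fa = features[subdomain_ids == 0]
    fb = features[subdomain_ids == 1]
    return task_loss + 0.1 * rbf_mmd(fa, fb)  # 0.1 is an assumed weight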
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 04:36:04 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jul 2023 04:36:15 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Huang",
"Siyuan",
""
],
[
"Zhang",
"Bo",
""
],
[
"Shi",
"Botian",
""
],
[
"Gao",
"Peng",
""
],
[
"Li",
"Yikang",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.994351 |
2305.10132
|
Kiwan Jeon Dr.
|
Hyoung Suk Park and Chang Min Hyun and Sang-Hwy Lee and Jin Keun Seo
and Kiwan Jeon
|
Automatic 3D Registration of Dental CBCT and Face Scan Data using 2D
Projection Images
|
8 pages, 6 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a fully automatic registration method of dental cone-beam
computed tomography (CBCT) and face scan data. It can be used for a digital
platform of 3D jaw-teeth-face models in a variety of applications, including 3D
digital treatment planning and orthognathic surgery. Difficulties in accurately
merging facial scans and CBCT images are due to the different image acquisition
methods and limited area of correspondence between the two facial surfaces. In
addition, it is difficult to apply machine learning techniques here because
they require face-related 3D medical data acquired with radiation exposure,
which is difficult to obtain for training. The proposed method addresses these
problems by reusing an
existing machine-learning-based 2D landmark detection algorithm in an
open-source library and developing a novel mathematical algorithm that
identifies paired 3D landmarks from knowledge of the corresponding 2D
landmarks. A main contribution of this study is that the proposed method does
not require annotated training data of facial landmarks because it uses a
pre-trained facial landmark detection algorithm that is known to be robust and
generalized to various 2D face image models. Note that this reduces a 3D
landmark detection problem to a 2D problem of identifying the corresponding
landmarks on two 2D projection images generated from two different projection
angles. Here, the 3D landmarks for registration were selected from the
sub-surfaces with the least geometric change under the CBCT and face scan
environments. For the final fine-tuning of the registration, the Iterative
Closest Point method was applied, which utilizes geometrical information around
the 3D landmarks. The experimental results show that the proposed method
achieved an averaged surface distance error of 0.74 mm for three pairs of CBCT
and face scan datasets.
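The key geometric step can be illustrated with standard linear triangulation (DLT): given one 2D landmark detected in each of two projection images with known projection matrices, the 3D landmark follows by least squares. The matrices below are synthetic, and the full pipeline's landmark selection and ICP refinement are omitted.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: 2D landmark coordinates."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Toy check with two synthetic views of the point (10, 20, 30):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])   # 90-degree yaw
P2 = np.hstack([R, np.array([[0.], [0.], [40.]])])
X = np.array([10., 20., 30., 1.])
uv1 = (P1 @ X)[:2] / (P1 @ X)[2]
uv2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, uv1, uv2))   # ~ [10, 20, 30]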
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 11:26:43 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jun 2023 15:57:55 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jul 2023 01:45:26 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Park",
"Hyoung Suk",
""
],
[
"Hyun",
"Chang Min",
""
],
[
"Lee",
"Sang-Hwy",
""
],
[
"Seo",
"Jin Keun",
""
],
[
"Jeon",
"Kiwan",
""
]
] |
new_dataset
| 0.993358 |
2307.12798
|
Andrea Bacciu
|
Andrea Bacciu, Florin Cuconasu, Federico Siciliano, Fabrizio
Silvestri, Nicola Tonellotto, Giovanni Trappolini
|
RRAML: Reinforced Retrieval Augmented Machine Learning
| null | null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The emergence of large language models (LLMs) has revolutionized machine
learning and related fields, showcasing remarkable abilities in comprehending,
generating, and manipulating human language. However, their conventional usage
through API-based text prompt submissions imposes certain limitations in terms
of context constraints and external source availability. To address these
challenges, we propose a novel framework called Reinforced Retrieval Augmented
Machine Learning (RRAML). RRAML integrates the reasoning capabilities of LLMs
with supporting information retrieved by a purpose-built retriever from a vast
user-provided database. By leveraging recent advancements in reinforcement
learning, our method effectively addresses several critical challenges.
Firstly, it circumvents the need for accessing LLM gradients. Secondly, our
method alleviates the burden of retraining LLMs for specific tasks, as it is
often impractical or impossible due to restricted access to the model and the
computational intensity involved. Additionally, we seamlessly link the
retriever's task with the reasoner, mitigating hallucinations and reducing
irrelevant and potentially damaging retrieved documents. We believe that the
research agenda outlined in this paper has the potential to profoundly impact
the field of AI, democratizing access to and utilization of LLMs for a wide
range of entities.
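A minimal sketch of how a retriever can be trained from a black-box reward alone, using REINFORCE so that no LLM gradients are needed; the toy linear scorer and reward_fn are placeholders, not RRAML's actual components.

import torch

class Retriever(torch.nn.Module):
    def __init__(self, dim: int, n_docs: int):
        super().__init__()
        self.doc_scores = torch.nn.Linear(dim, n_docs)  # toy scorer over a fixed corpus

    def forward(self, query_emb: torch.Tensor):
        return torch.distributions.Categorical(logits=self.doc_scores(query_emb))

def train_step(retriever, opt, query_emb, reward_fn):
    dist = retriever(query_emb)
    doc_id = dist.sample()
    reward = reward_fn(doc_id)               # e.g. black-box LLM answer quality in [0, 1]
    loss = -dist.log_prob(doc_id) * reward   # REINFORCE: no gradient through the LLM
    opt.zero_grad()
    loss.backward()
    opt.step()
    return reward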
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 13:51:19 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 05:42:34 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jul 2023 07:20:28 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Bacciu",
"Andrea",
""
],
[
"Cuconasu",
"Florin",
""
],
[
"Siciliano",
"Federico",
""
],
[
"Silvestri",
"Fabrizio",
""
],
[
"Tonellotto",
"Nicola",
""
],
[
"Trappolini",
"Giovanni",
""
]
] |
new_dataset
| 0.999332 |
2307.14343
|
Amarnath R
|
Amarnath R, Vinay Kumar V
|
Pruning Distorted Images in MNIST Handwritten Digits
|
26 pages, 10 figures, 14 tables, 54 references
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recognizing handwritten digits is a challenging task primarily due to the
diversity of writing styles and the presence of noisy images. The widely used
MNIST dataset, which is commonly employed as a benchmark for this task,
includes distorted digits with irregular shapes, incomplete strokes, and
varying skew in both the training and testing datasets. Consequently, these
factors contribute to reduced accuracy in digit recognition. To overcome this
challenge, we propose a two-stage deep learning approach. In the first stage,
we create a simple neural network to identify distorted digits within the
training set. This model serves to detect and filter out such distorted and
ambiguous images. In the second stage, we exclude these identified images from
the training dataset and proceed to retrain the model using the filtered
dataset. This process aims to improve the classification accuracy and
confidence levels while mitigating issues of underfitting and overfitting. Our
experimental results demonstrate the effectiveness of the proposed approach,
achieving an accuracy rate of over 99.5% on the testing dataset. This
significant improvement showcases the potential of our method in enhancing
digit classification accuracy. In our future work, we intend to explore the
scalability of this approach and investigate techniques to further enhance
accuracy by reducing the size of the training data.
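A minimal sketch of the two-stage procedure, assuming flattened image arrays; the 0.9 confidence threshold and the MLP sizes are illustrative, not the paper's settings.

import numpy as np
from sklearn.neural_network import MLPClassifier

def prune_and_retrain(X_train, y_train, threshold=0.9):
    # Stage 1: a simple first-pass classifier flags likely distorted digits.
    stage1 = MLPClassifier(hidden_layer_sizes=(128,), max_iter=20).fit(X_train, y_train)
    proba = stage1.predict_proba(X_train)
    cols = np.searchsorted(stage1.classes_, y_train)    # map labels to proba columns
    conf_on_label = proba[np.arange(len(y_train)), cols]
    keep = conf_on_label >= threshold                   # confidently consistent samples
    # Stage 2: retrain on the filtered training set only.
    stage2 = MLPClassifier(hidden_layer_sizes=(128,), max_iter=50)
    stage2.fit(X_train[keep], y_train[keep])
    return stage2, keep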
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 11:44:35 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"R",
"Amarnath",
""
],
[
"Kumar",
"Vinay",
"V"
]
] |
new_dataset
| 0.999509 |
2307.14387
|
Yuni Lai
|
Yuni Lai, Marcin Waniek, Yulin Zhu, Liying Li, Jingwen Wu, Tomasz P.
Michalak, Talal Rahwan, Kai Zhou
|
Dual-Space Attacks against Random-Walk-based Anomaly Detection
|
13 pages
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Random Walks-based Anomaly Detection (RWAD) is commonly used to identify
anomalous patterns in various applications. An intriguing characteristic of
RWAD is that the input graph can either be pre-existing or constructed from raw
features. Consequently, there are two potential attack surfaces against RWAD:
graph-space attacks and feature-space attacks. In this paper, we explore this
vulnerability by designing practical dual-space attacks, investigating the
interplay between graph-space and feature-space attacks. To this end, we
conduct a thorough complexity analysis, proving that attacking RWAD is NP-hard.
Then, we proceed to formulate the graph-space attack as a bi-level optimization
problem and propose two strategies to solve it: alternative iteration
(alterI-attack) or utilizing the closed-form solution of the random walk model
(cf-attack). Finally, we utilize the results from the graph-space attacks as
guidance to design more powerful feature-space attacks (i.e., graph-guided
attacks). Comprehensive experiments demonstrate that our proposed attacks are
effective in hiding the target nodes from RWAD with a limited attack budget.
In addition, we conduct transfer attack experiments in a black-box setting,
which show that our feature attack significantly decreases the anomaly scores
of target nodes. Our study opens the door to studying the dual-space attack
against graph anomaly detection in which the graph space relies on the feature
space.
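For context, a generic random walk-with-restart anomaly scorer of the kind targeted here can be written in closed form; this is an illustrative variant, not the paper's exact RWAD model.

import numpy as np

def rwad_scores(adj: np.ndarray, restart: float = 0.1) -> np.ndarray:
    """Anomaly scores from the closed-form stationary distribution of a
    random walk with restart; assumes no isolated nodes."""
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)            # row-stochastic transitions
    # pi = (1 - c) * P^T pi + (c / n) * 1  =>  solve the linear system directly.
    pi = restart / n * np.linalg.solve(np.eye(n) - (1 - restart) * P.T, np.ones(n))
    return -pi                                          # rarely visited => more anomalous

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(rwad_scores(A))  # the degree-1 node should score as most anomalous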
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 06:42:29 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Lai",
"Yuni",
""
],
[
"Waniek",
"Marcin",
""
],
[
"Zhu",
"Yulin",
""
],
[
"Li",
"Liying",
""
],
[
"Wu",
"Jingwen",
""
],
[
"Michalak",
"Tomasz P.",
""
],
[
"Rahwan",
"Talal",
""
],
[
"Zhou",
"Kai",
""
]
] |
new_dataset
| 0.988489 |
2307.14392
|
Yiteng Xu
|
Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge
Zhu, Xuming He, Jingyi Yu, Yuexin Ma
|
Human-centric Scene Understanding for 3D Large-scale Scenarios
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human-centric scene understanding is significant for real-world applications,
but it is extremely challenging due to the existence of diverse human poses and
actions, complex human-environment interactions, severe occlusions in crowds,
etc. In this paper, we present a large-scale multi-modal dataset for
human-centric scene understanding, dubbed HuCenLife, which is collected in
diverse daily-life scenarios with rich and fine-grained annotations. Our
HuCenLife can benefit many 3D perception tasks, such as segmentation,
detection, and action recognition, and we also provide benchmarks for these
tasks to facilitate related research. In addition, we design novel modules for
LiDAR-based segmentation and action recognition, which are more applicable for
large-scale human-centric scenarios and achieve state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 08:40:46 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Xu",
"Yiteng",
""
],
[
"Cong",
"Peishan",
""
],
[
"Yao",
"Yichen",
""
],
[
"Chen",
"Runnan",
""
],
[
"Hou",
"Yuenan",
""
],
[
"Zhu",
"Xinge",
""
],
[
"He",
"Xuming",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Ma",
"Yuexin",
""
]
] |
new_dataset
| 0.99916 |
2307.14460
|
Reiner Birkl
|
Reiner Birkl, Diana Wofk, Matthias M\"uller
|
MiDaS v3.1 -- A Model Zoo for Robust Monocular Relative Depth Estimation
|
14 pages, 2 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We release MiDaS v3.1 for monocular depth estimation, offering a variety of
new models based on different encoder backbones. This release is motivated by
the success of transformers in computer vision, with a large variety of
pretrained vision transformers now available. We explore how using the most
promising vision transformers as image encoders impacts depth estimation
quality and runtime of the MiDaS architecture. Our investigation also includes
recent convolutional approaches that achieve comparable quality to vision
transformers in image classification tasks. While the previous release MiDaS
v3.0 solely leverages the vanilla vision transformer ViT, MiDaS v3.1 offers
additional models based on BEiT, Swin, SwinV2, Next-ViT and LeViT. These models
offer different performance-runtime tradeoffs. The best model improves the
depth estimation quality by 28% while efficient models enable downstream tasks
requiring high frame rates. We also describe the general process for
integrating new backbones. A video summarizing the work can be found at
https://youtu.be/UjaeNNFf9sE and the code is available at
https://github.com/isl-org/MiDaS.
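A hedged sketch of loading one of the released models through torch.hub; the model and transform names below follow the MiDaS repository's hub entry points as we understand them, but consult the linked code for the current list.

import numpy as np
import torch

model_type = "DPT_BEiT_L_512"                        # assumed v3.1 hub entry point
midas = torch.hub.load("intel-isl/MiDaS", model_type)
midas.eval()

transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.beit512_transform             # assumed matching input pipeline

img = np.random.randint(0, 256, (384, 512, 3), dtype=np.uint8)  # stand-in RGB frame
with torch.no_grad():
    batch = transform(img)                           # HWC uint8 -> normalized NCHW
    depth = midas(batch)                             # relative inverse depth map
print(depth.shape)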
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 19:01:49 GMT"
}
] | 2023-07-28T00:00:00 |
[
[
"Birkl",
"Reiner",
""
],
[
"Wofk",
"Diana",
""
],
[
"Müller",
"Matthias",
""
]
] |
new_dataset
| 0.988489 |