id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
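The rows that follow use the schema above, with one pipe-delimited field per line. As an illustration only, the snippet below is a minimal sketch of how such records could be loaded and filtered for high-confidence `new_dataset` predictions; it assumes the table has been exported as a JSON Lines file with the field names from the header, and the filename `arxiv_predictions.jsonl` is hypothetical rather than part of this dataset.

```python
# Minimal sketch (assumption): records exported as JSON Lines, one dict per line,
# with the field names from the header row; "arxiv_predictions.jsonl" is a
# hypothetical filename, not one given by this dataset.
import json

def load_records(path="arxiv_predictions.jsonl"):
    """Yield one metadata record (dict) per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

def high_confidence(records, threshold=0.99):
    """Keep records labeled 'new_dataset' whose probability meets the threshold."""
    for rec in records:
        if rec.get("prediction") == "new_dataset" and float(rec.get("probability", 0)) >= threshold:
            yield rec["id"], float(rec["probability"]), rec["title"]

if __name__ == "__main__":
    for arxiv_id, prob, title in high_confidence(load_records()):
        print(f"{arxiv_id}  {prob:.6f}  {title}")
```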
2309.03827
|
Hrishav Bakul Barua
|
Hrishav Bakul Barua, Ganesh Krishnasamy, KokSheik Wong, Kalin
Stefanov, Abhinav Dhall
|
ArtHDR-Net: Perceptually Realistic and Accurate HDR Content Creation
|
Accepted in Asia Pacific Signal and Information Processing
Association Annual Summit and Conference (APSIPA ASC), Taipei, Taiwan
| null | null | null |
cs.CV cs.GR cs.LG cs.MM eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High Dynamic Range (HDR) content creation has become an important topic for
modern media and entertainment sectors, gaming and Augmented/Virtual Reality
industries. Many methods have been proposed to recreate the HDR counterparts of
input Low Dynamic Range (LDR) images/videos given a single exposure or
multi-exposure LDRs. The state-of-the-art methods focus primarily on the
preservation of the reconstruction's structural similarity and the pixel-wise
accuracy. However, these conventional approaches do not emphasize preserving
the artistic intent of the images in terms of human visual perception, which is
an essential element in media, entertainment and gaming. In this paper, we
attempt to study and fill this gap. We propose an architecture called
ArtHDR-Net based on a Convolutional Neural Network that uses multi-exposed LDR
features as input. Experimental results show that ArtHDR-Net can achieve
state-of-the-art performance in terms of the HDR-VDP-2 score (i.e., mean
opinion score index) while reaching competitive performance in terms of PSNR
and SSIM.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 16:40:49 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Barua",
"Hrishav Bakul",
""
],
[
"Krishnasamy",
"Ganesh",
""
],
[
"Wong",
"KokSheik",
""
],
[
"Stefanov",
"Kalin",
""
],
[
"Dhall",
"Abhinav",
""
]
] |
new_dataset
| 0.983646 |
2205.03515
|
Victor Yodaiken
|
Victor Yodaiken
|
Standard Automata Theory and Process Algebra
|
fixes a number of typographical errors and sub-optimal phrasings
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The concepts of machine homomorphism and machine products developed in the
automata theory literature in the 1960s are more relevant to concurrent systems
than is acknowledged in the process algebra literature and offer a
sophisticated mathematical basis for understanding concurrent systems.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 01:06:52 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Nov 2022 01:06:28 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2023 13:28:24 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Sep 2023 14:36:21 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Yodaiken",
"Victor",
""
]
] |
new_dataset
| 0.995618 |
2207.08100
|
Gregor Dumphart
|
Gregor Dumphart, Johannes Sager, Armin Wittneben
|
Load Modulation for Backscatter Communication: Channel Capacity and
Near-Capacity Schemes
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice. Included conference paper:
arXiv:2201.00249
| null |
10.1109/TWC.2023.3313110
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In backscatter communication (BC), a passive tag transmits information by
just affecting an external electromagnetic field through load modulation.
Thereby, the feed current of the excited tag antenna is modulated by adapting
the passive termination load. This paper studies the achievable information
rates with a freely adaptable passive load. As a prerequisite, we unify
monostatic, bistatic, and ambient BC with circuit-based system modeling. We
present the crucial insight that channel capacity is described by existing
results on peak-power-limited quadrature Gaussian channels, because the
steady-state tag current phasor lies on a disk. Consequently, we derive the
channel capacity for the case of an unmodulated external field, for general
passive, purely reactive, or purely resistive tag loads. We find that
modulating both resistance and reactance is important for very high rates. We
discuss the capacity-achieving load statistics, rate asymptotics, technical
conclusions, and rate losses from value-range-constrained loads (which are
found to be small for moderate constraints).
We then demonstrate that near-capacity rates can be attained by more
practical schemes: (i) amplitude-and-phase-shift keying on the reflection
coefficient and (ii) simple load circuits of a few switched resistors and
capacitors.
Finally, we draw conclusions for the ambient BC channel capacity in important
special cases.
|
[
{
"version": "v1",
"created": "Sun, 17 Jul 2022 07:46:19 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 15:27:07 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Jul 2023 13:05:55 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Sep 2023 11:13:36 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Dumphart",
"Gregor",
""
],
[
"Sager",
"Johannes",
""
],
[
"Wittneben",
"Armin",
""
]
] |
new_dataset
| 0.955217 |
2211.06326
|
Susannah Kate Devitt
|
Susannah Kate Devitt
|
Bad, mad, and cooked: Moral responsibility for civilian harms in
human-AI military teams
|
30 pages, accepted for publication in Jan Maarten Schraagen (Ed.)
'Responsible Use of AI in Military Systems', CRC Press [Forthcoming]
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This chapter explores moral responsibility for civilian harms by
human-artificial intelligence (AI) teams. Although militaries may have some bad
apples responsible for war crimes and some mad apples unable to be responsible
for their actions during a conflict, increasingly militaries may 'cook' their
good apples by putting them in untenable decision-making environments through
the processes of replacing human decision-making with AI determinations in war
making. Responsibility for civilian harm in human-AI military teams may be
contested, risking operators becoming detached, being extreme moral witnesses,
becoming moral crumple zones or suffering moral injury from being part of
larger human-AI systems authorised by the state. Acknowledging military ethics,
human factors and AI work to date as well as critical case studies, this
chapter offers new mechanisms to map out conditions for moral responsibility in
human-AI teams. These include: 1) new decision responsibility prompts for
critical decision method in a cognitive task analysis, and 2) applying an AI
workplace health and safety framework for identifying cognitive and
psychological risks relevant to attributions of moral responsibility in
targeting decisions. Mechanisms such as these enable militaries to design
human-centred AI systems for responsible deployment.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 10:18:20 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 06:12:21 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Sep 2023 11:13:14 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Devitt",
"Susannah Kate",
""
]
] |
new_dataset
| 0.999373 |
2211.15692
|
Yue Zhu
|
Yue Zhu, Nermin Samet, David Picard
|
H3WB: Human3.6M 3D WholeBody Dataset and Benchmark
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a benchmark for 3D human whole-body pose estimation, which
involves identifying accurate 3D keypoints on the entire human body, including
face, hands, body, and feet. Currently, the lack of a fully annotated and
accurate 3D whole-body dataset results in deep networks being trained
separately on specific body parts, which are combined during inference, or they
rely on pseudo-groundtruth provided by parametric body models, which are not as
accurate as detection-based methods. To overcome these issues, we introduce the
Human3.6M 3D WholeBody (H3WB) dataset, which provides whole-body annotations
for the Human3.6M dataset using the COCO Wholebody layout. H3WB comprises 133
whole-body keypoint annotations on 100K images, made possible by our new
multi-view pipeline. We also propose three tasks: i) 3D whole-body pose lifting
from 2D complete whole-body pose, ii) 3D whole-body pose lifting from 2D
incomplete whole-body pose, and iii) 3D whole-body pose estimation from a
single RGB image. Additionally, we report several baselines from popular
methods for these tasks. Furthermore, we also provide automated 3D whole-body
annotations of TotalCapture and experimentally show that when used with H3WB it
helps to improve the performance. Code and dataset are available at
https://github.com/wholebody3d/wholebody3d
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 19:00:02 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 12:22:24 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Zhu",
"Yue",
""
],
[
"Samet",
"Nermin",
""
],
[
"Picard",
"David",
""
]
] |
new_dataset
| 0.999873 |
2301.05323
|
Jarek Reynolds
|
Jarek Reynolds, Chandra Kanth Nagesh, Danna Gurari
|
Salient Object Detection for Images Taken by People With Vision
Impairments
|
Computer Vision and Pattern Recognition
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Salient object detection is the task of producing a binary mask for an image
that deciphers which pixels belong to the foreground object versus background.
We introduce a new salient object detection dataset using images taken by
people who are visually impaired who were seeking to better understand their
surroundings, which we call VizWiz-SalientObject. Compared to seven existing
datasets, VizWiz-SalientObject is the largest (i.e., 32,000 human-annotated
images) and contains unique characteristics including a higher prevalence of
text in the salient objects (i.e., in 68\% of images) and salient objects that
occupy a larger ratio of the images (i.e., on average, $\sim$50\% coverage). We
benchmarked seven modern salient object detection methods on our dataset and
found they struggle most with images featuring salient objects that are large,
have less complex boundaries, and lack text, as well as with lower-quality
images. We invite the broader community to work on our new dataset challenge by
publicly sharing the dataset at
https://vizwiz.org/tasks-and-datasets/salient-object .
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 22:33:01 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 18:03:51 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Reynolds",
"Jarek",
""
],
[
"Nagesh",
"Chandra Kanth",
""
],
[
"Gurari",
"Danna",
""
]
] |
new_dataset
| 0.999803 |
2301.07653
|
Leonardo Bonati
|
Leonardo Bonati, Michele Polese, Salvatore D'Oro, Stefano Basagni,
Tommaso Melodia
|
NeutRAN: An Open RAN Neutral Host Architecture for Zero-Touch RAN and
Spectrum Sharing
|
13 pages, 11 figures, 1 table. IEEE Transactions on Mobile Computing,
August 2023
| null |
10.1109/TMC.2023.3311728
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obtaining access to exclusive spectrum, cell sites, Radio Access Network
(RAN) equipment, and edge infrastructure imposes major capital expenses to
mobile network operators. A neutral host infrastructure, by which a third-party
company provides RAN services to mobile operators through network
virtualization and slicing techniques, is seen as a promising solution to
decrease these costs. Currently, however, neutral host providers lack automated
and virtualized pipelines for onboarding new tenants and for providing elastic,
on-demand allocation of resources matching operators' requirements. To address
this gap, this paper presents NeutRAN, a zero-touch framework based on the
O-RAN architecture to support applications on neutral hosts and automatic
operator onboarding. NeutRAN builds upon two key components: (i) an
optimization engine to guarantee coverage and to meet quality of service
requirements while accounting for the limited amount of shared spectrum and RAN
nodes, and (ii) a fully virtualized and automated infrastructure that converts
the output of the optimization engine into deployable micro-services to be
executed at RAN nodes and cell sites. NeutRAN was prototyped on an OpenShift
cluster and on a programmable testbed with 4 base stations and 10 users from 3
different tenants. We evaluate its benefits, comparing it to a traditional
license-based RAN where each tenant has dedicated physical and spectrum
resources. We show that NeutRAN can deploy a fully operational neutral
host-based cellular network in around 10 seconds. Experimental results show
that it increases the cumulative network throughput by 2.18x and the per-user
average throughput by 1.73x in networks with shared spectrum blocks of 30 MHz.
NeutRAN provides a 1.77x cumulative throughput gain even when it can only
operate on a shared spectrum block of 10 MHz (one third of the spectrum used in
license-based RANs).
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 16:57:16 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 20:40:27 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Bonati",
"Leonardo",
""
],
[
"Polese",
"Michele",
""
],
[
"D'Oro",
"Salvatore",
""
],
[
"Basagni",
"Stefano",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.999816 |
2303.00973
|
Scarlett Raine Ms
|
Scarlett Raine, Ross Marchant, Brano Kusy, Frederic Maire and Tobias
Fischer
|
Image Labels Are All You Need for Coarse Seagrass Segmentation
|
10 pages, 4 figures, additional 3 pages of supplementary material
|
2024 IEEE/CVF Winter Conference on Applications of Computer Vision
(WACV)
| null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Seagrass meadows serve as critical carbon sinks, but estimating the amount of
carbon they store requires knowledge of the seagrass species present.
Underwater and surface vehicles equipped with machine learning algorithms can
help to accurately estimate the composition and extent of seagrass meadows at
scale. However, previous approaches for seagrass detection and classification
have required supervision from patch-level labels. In this paper, we reframe
seagrass classification as a weakly supervised coarse segmentation problem
where image-level labels are used during training (25 times fewer labels
compared to patch-level labeling) and patch-level outputs are obtained at
inference time. To this end, we introduce SeaFeats, an architecture that uses
unsupervised contrastive pre-training and feature similarity, and SeaCLIP, a
model that showcases the effectiveness of large language models as a
supervisory signal in domain-specific applications. We demonstrate that an
ensemble of SeaFeats and SeaCLIP leads to highly robust performance. Our method
outperforms previous approaches that require patch-level labels on the
multi-species 'DeepSeagrass' dataset by 6.8% (absolute) for the class-weighted
F1 score, and by 12.1% (absolute) for the seagrass presence/absence F1 score on
the 'Global Wetlands' dataset. We also present two case studies for real-world
deployment: outlier detection on the Global Wetlands dataset, and application
of our method on imagery collected by the FloatyBoat autonomous surface
vehicle.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 05:10:57 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 01:48:56 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Raine",
"Scarlett",
""
],
[
"Marchant",
"Ross",
""
],
[
"Kusy",
"Brano",
""
],
[
"Maire",
"Frederic",
""
],
[
"Fischer",
"Tobias",
""
]
] |
new_dataset
| 0.982368 |
2303.05382
|
Dongdong Wang
|
Ou Zheng, Mohamed Abdel-Aty, Dongdong Wang, Zijin Wang, Shengxuan Ding
|
ChatGPT is on the Horizon: Could a Large Language Model be Suitable for
Intelligent Traffic Safety Research and Applications?
|
Submitted to Nature - Machine Intelligence (Revised and Extended)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ChatGPT embarks on a new era of artificial intelligence and will
revolutionize the way we approach intelligent traffic safety systems. This
paper begins with a brief introduction about the development of large language
models (LLMs). Next, we exemplify using ChatGPT to address key traffic safety
issues. Furthermore, we discuss the controversies surrounding LLMs, raise
critical questions for their deployment, and provide our solutions. Moreover,
we propose an idea of multi-modality representation learning for smarter
traffic safety decision-making and open more questions for application
improvement. We believe that LLM will both shape and potentially facilitate
components of traffic safety research.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 16:36:17 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 05:47:11 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 18:13:24 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Zheng",
"Ou",
""
],
[
"Abdel-Aty",
"Mohamed",
""
],
[
"Wang",
"Dongdong",
""
],
[
"Wang",
"Zijin",
""
],
[
"Ding",
"Shengxuan",
""
]
] |
new_dataset
| 0.981081 |
2304.13145
|
Zahra Tayebi
|
Zahra Tayebi, Sarwan Ali, Prakash Chourasia, Taslim Murad and Murray
Patterson
|
T Cell Receptor Protein Sequences and Sparse Coding: A Novel Approach to
Cancer Classification
|
Accepted at ICONIP 2023
| null | null | null |
cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Cancer is a complex disease characterized by uncontrolled cell growth and
proliferation. T cell receptors (TCRs) are essential proteins for the adaptive
immune system, and their specific recognition of antigens plays a crucial role
in the immune response against diseases, including cancer. The diversity and
specificity of TCRs make them ideal for targeting cancer cells, and recent
advancements in sequencing technologies have enabled the comprehensive
profiling of TCR repertoires. This has led to the discovery of TCRs with potent
anti-cancer activity and the development of TCR-based immunotherapies. In this
study, we investigate the use of sparse coding for the multi-class
classification of TCR protein sequences with cancer categories as target
labels. Sparse coding is a popular technique in machine learning that enables
the representation of data with a set of informative features and can capture
complex relationships between amino acids and identify subtle patterns in the
sequence that might be missed by low-dimensional methods. We first compute the
k-mers from the TCR sequences and then apply sparse coding to capture the
essential features of the data. To improve the predictive performance of the
final embeddings, we integrate domain knowledge regarding different types of
cancer properties. We then train different machine learning (linear and
non-linear) classifiers on the embeddings of TCR sequences for the purpose of
supervised analysis. Our proposed embedding method on a benchmark dataset of
TCR sequences significantly outperforms the baselines in terms of predictive
performance, achieving an accuracy of 99.8\%. Our study highlights the
potential of sparse coding for the analysis of TCR protein sequences in cancer
research and other related fields.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 20:43:41 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 21:08:04 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Tayebi",
"Zahra",
""
],
[
"Ali",
"Sarwan",
""
],
[
"Chourasia",
"Prakash",
""
],
[
"Murad",
"Taslim",
""
],
[
"Patterson",
"Murray",
""
]
] |
new_dataset
| 0.996568 |
2305.00189
|
Abdurrahman Gumus
|
Ayse Altay, Abdurrahman Gumus
|
Real-Time Superficial Vein Imaging System for Observing Abnormalities on
Vascular Structures
| null |
Multimedia Tools and Applications (2023)
|
10.1007/s11042-023-16251-7
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Circulatory system abnormalities might be an indicator of diseases or tissue
damage. Early detection of vascular abnormalities might have an important role
during treatment and also raise the patient's awareness. Current detection
methods for vascular imaging are high-cost, invasive, and mostly
radiation-based. In this study, a low-cost and portable microcomputer-based
tool has been developed as a near-infrared (NIR) superficial vascular imaging
device. The device uses NIR light-emitting diode (LED) light at 850 nm along
with other electronic and optical components. It operates as a non-contact and
safe infrared (IR) imaging method in real-time. Image and video analysis are
carried out using OpenCV (Open-Source Computer Vision), a library of
programming functions mainly used in computer vision. Various tests were
carried out to optimize the imaging system and set up a suitable external
environment. To test the performance of the device, the images taken from three
diabetic volunteers, who are expected to have abnormalities in the vascular
structure due to the possibility of deformation caused by high glucose levels
in the blood, were compared with the images taken from two non-diabetic
volunteers. As a result, tortuosity was observed successfully in the
superficial vascular structures, where the results need to be interpreted by
the medical experts in the field to understand the underlying reasons. Although
this study is an engineering study and does not have an intention to diagnose
any diseases, the developed system here might assist healthcare personnel in
early diagnosis and treatment follow-up for vascular structures and may enable
further opportunities.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 07:32:23 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Altay",
"Ayse",
""
],
[
"Gumus",
"Abdurrahman",
""
]
] |
new_dataset
| 0.998747 |
2305.16649
|
Yudian Li
|
Zhe Huang and Yudian Li
|
FSD: Fully-Specialized Detector via Neural Architecture Search
| null |
2023 5th International Conference on Computer Communication and
the Internet (ICCCI)
|
10.1109/ICCCI59363.2023.10210167
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most generic object detectors are mainly built for standard object detection
tasks such as COCO and PASCAL VOC. They might not work well and/or efficiently
on tasks of other domains consisting of images that are visually different from
standard datasets. To this end, many advances have been focused on adapting a
general-purpose object detector with limited domain-specific designs. However,
designing a successful task-specific detector requires extraneous manual
experiments and parameter tuning through trial and error. In this paper, we
first propose and examine a fully-automatic pipeline to design a
fully-specialized detector (FSD) which mainly incorporates a
neural-architectural-searched model by exploring ideal network structures over
the backbone and task-specific head. On the DeepLesion dataset, extensive
results show that FSD can achieve a 3.1 mAP gain while using approximately 40%
fewer parameters on the binary lesion detection task, and improves the mAP by around
10% on the multi-type lesion detection task via our region-aware graph modeling,
compared with existing general-purpose medical lesion detection networks.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 05:41:20 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 07:31:49 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jun 2023 02:27:54 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Jul 2023 05:46:30 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Huang",
"Zhe",
""
],
[
"Li",
"Yudian",
""
]
] |
new_dataset
| 0.999395 |
2306.01665
|
Pengcheng Lu
|
Pengcheng Lu, Liang Cai, and Keting Yin
|
SourceP: Detecting Ponzi Schemes on Ethereum with Source Code
|
12 pages
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As blockchain technology becomes more and more popular, a typical financial
scam, the Ponzi scheme, has also emerged on the blockchain platform Ethereum.
This Ponzi scheme, deployed through smart contracts and also known as the smart
Ponzi scheme, has caused substantial economic losses and negative impacts.
Existing methods for detecting smart Ponzi schemes on Ethereum mainly rely on
bytecode features, opcode features, account features, and transaction behavior
features of smart contracts, and their performance in identifying schemes is
insufficient. In this paper, we propose SourceP, a method to detect smart Ponzi
schemes on the Ethereum platform using pre-trained models and data flow, which
only requires using the source code of smart contracts as features to explore
the possibility of detecting smart Ponzi schemes from another direction.
SourceP reduces the difficulty of data acquisition and feature extraction of
existing detection methods while increasing the interpretability of the model.
Specifically, we first convert the source code of a smart contract into a data
flow graph and then introduce a pre-trained model based on learning code
representations to build a classification model to identify Ponzi schemes in
smart contracts. The experimental results show that SourceP achieves 87.2\%
recall and 90.7\% F-score for detecting smart Ponzi schemes within Ethereum's
smart contract dataset, outperforming state-of-the-art methods in terms of
performance and sustainability. We also demonstrate through additional
experiments that pre-trained models and data flow make an important
contribution to SourceP, and we show that SourceP has good
generalization ability.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 16:40:42 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 14:21:01 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Lu",
"Pengcheng",
""
],
[
"Cai",
"Liang",
""
],
[
"Yin",
"Keting",
""
]
] |
new_dataset
| 0.999123 |
2306.09682
|
Yinxuan Huang
|
Yinxuan Huang, Tonglin Chen, Zhimeng Shen, Jinghao Huang, Bin Li,
Xiangyang Xue
|
OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for
Object-Centric Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans possess the cognitive ability to comprehend scenes in a compositional
manner. To empower AI systems with similar capabilities, object-centric
learning aims to acquire representations of individual objects from visual
scenes without any supervision. Although recent advances in object-centric
learning have made remarkable progress on complex synthesis datasets, there is
a huge challenge for application to complex real-world scenes. One of the
essential reasons is the scarcity of real-world datasets specifically tailored
to object-centric learning. To address this problem, we propose a versatile
real-world dataset of tabletop scenes for object-centric learning called
OCTScenes, which is meticulously designed to serve as a benchmark for
comparing, evaluating, and analyzing object-centric learning methods. OCTScenes
contains 5000 tabletop scenes with a total of 15 objects. Each scene is
captured in 60 frames covering a 360-degree perspective. Consequently,
OCTScenes is a versatile benchmark dataset that can simultaneously satisfy the
evaluation of object-centric learning methods based on single-image, video, and
multi-view. Extensive experiments of representative object-centric learning
methods are conducted on OCTScenes. The results demonstrate the shortcomings of
state-of-the-art methods for learning meaningful representations from
real-world data, despite their impressive performance on complex synthesis
datasets. Furthermore, OCTScenes can serve as a catalyst for the advancement of
existing methods, inspiring them to adapt to real-world scenes. Dataset and
code are available at https://huggingface.co/datasets/Yinxuan/OCTScenes.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 08:26:57 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 06:06:55 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Sep 2023 06:53:43 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Huang",
"Yinxuan",
""
],
[
"Chen",
"Tonglin",
""
],
[
"Shen",
"Zhimeng",
""
],
[
"Huang",
"Jinghao",
""
],
[
"Li",
"Bin",
""
],
[
"Xue",
"Xiangyang",
""
]
] |
new_dataset
| 0.999875 |
2306.16000
|
Luka Miskovic
|
Luka Mi\v{s}kovi\'c, Tilen Brecelj, Miha De\v{z}man, Tadej Petri\v{c}
|
The JSI-KneExo: Active, Quasi-Passive, Pneumatic, Portable Knee Exo with
Bidirectional Energy Flow for Air Recovery in Sit-Stand Tasks
|
Preprint version
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While existing literature encompasses exoskeleton-assisted sit-stand tasks,
the integration of energy recovery mechanisms remains unexplored. To push these
boundaries further, this study introduces a portable pneumatic knee exoskeleton
that operates in both quasi-passive and active modes, where active mode is
utilized for aiding in standing up (power generation), thus the energy flows
from the exoskeleton to the user, and quasi-passive mode for aiding in sitting
down (power absorption), where the device absorbs and can store energy in the
form of compressed air, leading to energy savings in active mode. The absorbed
energy can be stored and later reused without compromising exoskeleton
transparency in the meantime. In active mode, a small air pump inflates the
pneumatic artificial muscle (PAM), which stores the compressed air, that can
then be released into a pneumatic cylinder to generate torque. All electronic
and pneumatic components are integrated into the system, and the exoskeleton
weighs 3.9 kg with a maximum torque of 20 Nm at the knee joint. The paper
describes the mechatronic design, mathematical model and includes a pilot study
with an able-bodied subject performing sit-to-stand tasks. The results show
that the exoskeleton can recover energy while assisting the subject and
reducing muscle activity. Furthermore, results underscore air regeneration's
impact on energy-saving in portable pneumatic exoskeletons.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 08:20:08 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 13:07:50 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Mišković",
"Luka",
""
],
[
"Brecelj",
"Tilen",
""
],
[
"Dežman",
"Miha",
""
],
[
"Petrič",
"Tadej",
""
]
] |
new_dataset
| 0.997528 |
2307.15778
|
Vasiliy Stanislavovich Usatyuk
|
Vasiliy Usatyuk, Sergey Egorov, Denis Sapozhnikov
|
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding
for Ising MRF Models: Classical and Quantum Topology Machine Learning
|
71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text
overlap with arXiv:2109.08184 by other authors
| null | null | null |
cs.IT cs.AI cs.CV cs.LG math.DS math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to
optimize error-correcting codes, leading to these embedding and sparse
factorization methods. The paper establishes a direct connection between DNN
architecture and error-correcting coding by demonstrating how state-of-the-art
architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range
arena can be equivalent to block and convolutional LDPC codes (Cage-graph,
Repeat Accumulate). QC codes correspond to certain types of chemical elements,
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a comparable problem with TS pseudo-codeword, resembling the
belief propagation method. Additionally, the layer depth in QAOA correlates to
the number of decoding belief propagation iterations in the Wiberg decoding
tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), efficient hardware design for Quantum and Classical DPU/TPU (graph,
quantize and shift register architect.) to Materials Science and beyond.
|
[
{
"version": "v1",
"created": "Fri, 28 Jul 2023 19:38:13 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 19:35:25 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Usatyuk",
"Vasiliy",
""
],
[
"Egorov",
"Sergey",
""
],
[
"Sapozhnikov",
"Denis",
""
]
] |
new_dataset
| 0.995338 |
2308.16741
|
Reuben Tan
|
Katherine Deng, Arijit Ray, Reuben Tan, Saadia Gabriel, Bryan A.
Plummer, Kate Saenko
|
Socratis: Are large multimodal models emotionally aware?
|
ICCV 2023 WECIA
| null | null | null |
cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing emotion prediction benchmarks contain coarse emotion labels which do
not consider the diversity of emotions that an image and text can elicit in
humans due to various reasons. Learning diverse reactions to multimodal content
is important as intelligent machines take a central role in generating and
delivering content to society. To address this gap, we propose Socratis, a
societal reactions benchmark, where each image-caption (IC) pair is annotated
with multiple emotions and the reasons for feeling them. Socratis contains 18K
free-form reactions for 980 emotions on 2075 image-caption pairs from 5
widely-read news and image-caption (IC) datasets. We benchmark the capability
of state-of-the-art multimodal large language models to generate the reasons
for feeling an emotion given an IC pair. Based on a preliminary human study, we
observe that humans prefer human-written reasons more than twice as often as
machine-generated ones. This shows our task is harder than standard generation
tasks because it starkly contrasts recent findings where humans cannot tell
apart machine vs human-written news articles, for instance. We further see that
current captioning metrics based on large vision-language models also fail to
correlate with human preferences. We hope that these findings and our benchmark
will inspire further research on training emotionally aware models.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 13:59:35 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 18:53:39 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Deng",
"Katherine",
""
],
[
"Ray",
"Arijit",
""
],
[
"Tan",
"Reuben",
""
],
[
"Gabriel",
"Saadia",
""
],
[
"Plummer",
"Bryan A.",
""
],
[
"Saenko",
"Kate",
""
]
] |
new_dataset
| 0.995397 |
2309.01539
|
Yuheng Shi
|
Yuheng Shi, Zehao Huang, Yan Yan, Naiyan Wang, Xiaojie Guo
|
TSTTC: A Large-Scale Dataset for Time-to-Contact Estimation in Driving
Scenarios
|
19 pages, 9 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Time-to-Contact (TTC) estimation is a critical task for assessing collision
risk and is widely used in various driver assistance and autonomous driving
systems. The past few decades have witnessed the development of related theories
and algorithms. The prevalent learning-based methods call for a large-scale TTC
dataset in real-world scenarios. In this work, we present a large-scale
object-oriented TTC dataset in the driving scene for promoting TTC estimation by a
monocular camera. To collect valuable samples and make data with different TTC
values relatively balanced, we go through thousands of hours of driving data
and select over 200K sequences with a preset data distribution. To augment the
quantity of small TTC cases, we also generate clips using the latest Neural
rendering methods. Additionally, we provide several simple yet effective TTC
estimation baselines and evaluate them extensively on the proposed dataset to
demonstrate their effectiveness. The proposed dataset is publicly available at
https://open-dataset.tusen.ai/TSTTC.
|
[
{
"version": "v1",
"created": "Mon, 4 Sep 2023 11:39:14 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 04:12:35 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Shi",
"Yuheng",
""
],
[
"Huang",
"Zehao",
""
],
[
"Yan",
"Yan",
""
],
[
"Wang",
"Naiyan",
""
],
[
"Guo",
"Xiaojie",
""
]
] |
new_dataset
| 0.999878 |
2309.02232
|
Yuankun Xie
|
Yuankun Xie, Jingjing Zhou, Xiaolin Lu, Zhenghao Jiang, Yuxin Yang,
Haonan Cheng, Long Ye
|
FSD: An Initial Chinese Dataset for Fake Song Detection
|
Submitted to ICASSP 2024
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Singing voice synthesis and singing voice conversion have significantly
advanced, revolutionizing musical experiences. However, the rise of "Deepfake
Songs" generated by these technologies raises concerns about authenticity.
Unlike Audio DeepFake Detection (ADD), the field of song deepfake detection
lacks specialized datasets or methods for song authenticity verification. In
this paper, we initially construct a Chinese Fake Song Detection (FSD) dataset
to investigate the field of song deepfake detection. The fake songs in the FSD
dataset are generated by five state-of-the-art singing voice synthesis and
singing voice conversion methods. Our initial experiments on FSD revealed the
ineffectiveness of existing speech-trained ADD models for the task of song
deepfake detection. Thus, we employ the FSD dataset for the training of ADD
models. We subsequently evaluate these models under two scenarios: one with the
original songs and another with separated vocal tracks. Experiment results show
that song-trained ADD models exhibit a 38.58% reduction in average equal error
rate compared to speech-trained ADD models on the FSD test set.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 13:37:30 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 11:13:00 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Xie",
"Yuankun",
""
],
[
"Zhou",
"Jingjing",
""
],
[
"Lu",
"Xiaolin",
""
],
[
"Jiang",
"Zhenghao",
""
],
[
"Yang",
"Yuxin",
""
],
[
"Cheng",
"Haonan",
""
],
[
"Ye",
"Long",
""
]
] |
new_dataset
| 0.99974 |
2309.02399
|
Patricia Hu
|
Patricia Hu and Gerhard Widmer
|
The Batik-plays-Mozart Corpus: Linking Performance to Score to
Musicological Annotations
|
To be published in the Proceedings of the 24th International Society
for Music Information Retrieval Conference (ISMIR 2023), Milan, Italy
| null | null | null |
cs.SD cs.DL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Batik-plays-Mozart Corpus, a piano performance dataset
combining professional Mozart piano sonata performances with expert-labelled
scores at a note-precise level. The performances originate from a recording by
Viennese pianist Roland Batik on a computer-monitored B\"osendorfer grand
piano, and are available both as MIDI files and audio recordings. They have
been precisely aligned, note by note, with a current standard edition of the
corresponding scores (the New Mozart Edition) in such a way that they can
further be connected to the musicological annotations (harmony, cadences,
phrases) on these scores that were recently published by Hentschel et al.
(2021).
The result is a high-quality, high-precision corpus mapping scores and
musical structure annotations to precise note-level professional performance
information. As the first of its kind, it can serve as a valuable resource for
studying various facets of expressive performance and their relationship with
structural aspects. In the paper, we outline the curation process of the
alignment and conduct two exploratory experiments to demonstrate its usefulness
in analyzing expressive performance.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 17:13:47 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 09:34:31 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Hu",
"Patricia",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
new_dataset
| 0.999757 |
2309.02455
|
Ahmad Sebaq
|
Ahmad Sebaq, Mohamed ElHelw
|
RSDiff: Remote Sensing Image Generation from Text Using Diffusion Model
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Satellite imagery generation and super-resolution are pivotal tasks in remote
sensing, demanding high-quality, detailed images for accurate analysis and
decision-making. In this paper, we propose an innovative and lightweight
approach that employs two-stage diffusion models to gradually generate
high-resolution Satellite images purely based on text prompts. Our innovative
pipeline comprises two interconnected diffusion models: a Low-Resolution
Generation Diffusion Model (LR-GDM) that generates low-resolution images from
text and a conditional Super-Resolution Diffusion Model (SRDM). The
LR-GDM effectively synthesizes low-resolution images by computing the correlations of
the text embedding and the image embedding in a shared latent space, capturing
the essential content and layout of the desired scenes. Subsequently, the SRDM
takes the generated low-resolution image and its corresponding text prompts and
efficiently produces the high-resolution counterparts, infusing fine-grained
spatial details and enhancing visual fidelity. Experiments are conducted on the
commonly used dataset, Remote Sensing Image Captioning Dataset (RSICD). Our
results demonstrate that our approach outperforms existing state-of-the-art
(SoTA) models in generating satellite images with realistic geographical
features, weather conditions, and land structures while achieving remarkable
super-resolution results for increased spatial precision.
|
[
{
"version": "v1",
"created": "Sun, 3 Sep 2023 09:34:49 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Sebaq",
"Ahmad",
""
],
[
"ElHelw",
"Mohamed",
""
]
] |
new_dataset
| 0.997963 |
2309.02524
|
Martin Briesch
|
Martin Huschens, Martin Briesch, Dominik Sobania, Franz Rothlauf
|
Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated
Content
| null | null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper examines how individuals perceive the credibility of content
originating from human authors versus content generated by large language
models, like the GPT language model family that powers ChatGPT, in different
user interface versions. Surprisingly, our results demonstrate that regardless
of the user interface presentation, participants tend to attribute similar
levels of credibility. While participants also do not report any different
perceptions of competence and trustworthiness between human and AI-generated
content, they rate AI-generated content as being clearer and more engaging. The
findings from this study serve as a call for a more discerning approach to
evaluating information sources, encouraging users to exercise caution and
critical thinking when engaging with content generated by AI systems.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 18:29:29 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Huschens",
"Martin",
""
],
[
"Briesch",
"Martin",
""
],
[
"Sobania",
"Dominik",
""
],
[
"Rothlauf",
"Franz",
""
]
] |
new_dataset
| 0.960473 |
2309.02604
|
Stephen Lu
|
Stephen Z. Lu
|
Screening of Pneumonia and Urinary Tract Infection at Triage using
TriNet
|
Index Terms: Downstream testing, Machine Learning, Medical
directives, Modelling, Modular network, Pneumonia, Positive predictive value,
Screening, Triage, Urinary tract infection
| null | null | null |
cs.LG cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the steady rise in population demographics and longevity, emergency
department visits are increasing across North America. As more patients visit
the emergency department, traditional clinical workflows become overloaded and
inefficient, leading to prolonged wait-times and reduced healthcare quality.
One such workflow is the triage medical directive, impeded by limited human
workload, inaccurate diagnoses and invasive over-testing. To address this
issue, we propose TriNet: a machine learning model for medical directives that
automates first-line screening at triage for conditions requiring downstream
testing for diagnosis confirmation. To verify screening potential, TriNet was
trained on hospital triage data and achieved high positive predictive values in
detecting pneumonia (0.86) and urinary tract infection (0.93). These models
outperform current clinical benchmarks, indicating that machine-learning
medical directives can offer cost-free, non-invasive screening with high
specificity for common conditions, reducing the risk of over-testing while
increasing emergency department efficiency.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 22:25:30 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Lu",
"Stephen Z.",
""
]
] |
new_dataset
| 0.956306 |
2309.02617
|
Sai Mitheran Jagadesh Kumar Mr.
|
Eric Youn, Sai Mitheran J, Sanjana Prabhu, Siyuan Chen
|
Compressing Vision Transformers for Low-Resource Visual Learning
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision transformer (ViT) and its variants have swept through visual learning
leaderboards and offer state-of-the-art accuracy in tasks such as image
classification, object detection, and semantic segmentation by attending to
different parts of the visual input and capturing long-range spatial
dependencies. However, these models are large and computation-heavy. For
instance, the recently proposed ViT-B model has 86M parameters making it
impractical for deployment on resource-constrained devices. As a result, their
deployment on mobile and edge scenarios is limited. In our work, we aim to take
a step toward bringing vision transformers to the edge by utilizing popular
model compression techniques such as distillation, pruning, and quantization.
Our chosen application environment is an unmanned aerial vehicle (UAV) that
is battery-powered and memory-constrained, carrying a single-board computer on
the scale of an NVIDIA Jetson Nano with 4GB of RAM. On the other hand, the UAV
requires high accuracy close to that of state-of-the-art ViTs to ensure safe
object avoidance in autonomous navigation, or correct localization of humans in
search-and-rescue. Inference latency should also be minimized given the
application requirements. Hence, our target is to enable rapid inference of a
vision transformer on an NVIDIA Jetson Nano (4GB) with minimal accuracy loss.
This allows us to deploy ViTs on resource-constrained devices, opening up new
possibilities in surveillance, environmental monitoring, etc. Our
implementation is made available at https://github.com/chensy7/efficient-vit.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 23:33:39 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Youn",
"Eric",
""
],
[
"J",
"Sai Mitheran",
""
],
[
"Prabhu",
"Sanjana",
""
],
[
"Chen",
"Siyuan",
""
]
] |
new_dataset
| 0.98585 |
2309.02637
|
Kaifeng Huang
|
Junan Zhang, Kaifeng Huang, Bihuan Chen, Chong Wang, Zhenhao Tian, Xin
Peng
|
Malicious Package Detection in NPM and PyPI using a Single Model of
Malicious Behavior Sequence
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The open-source software (OSS) supply chain enlarges the attack surface, which
makes package registries attractive targets for attacks. Recently, package
registries NPM and PyPI have been flooded with malicious packages. The
effectiveness of existing malicious NPM and PyPI package detection approaches
is hindered by two challenges. The first challenge is how to leverage the
knowledge of malicious packages from different ecosystems in a unified way such
that multi-lingual malicious package detection can be feasible. The second
challenge is how to model malicious behavior in a sequential way such that
maliciousness can be precisely captured. To address the two challenges, we
propose and implement Cerebro to detect malicious packages in NPM and PyPI. We
curate a feature set based on a high-level abstraction of malicious behavior to
enable multi-lingual knowledge fusing. We organize extracted features into a
behavior sequence to model sequential malicious behavior. We fine-tune the BERT
model to understand the semantics of malicious behavior. Extensive evaluation
has demonstrated the effectiveness of Cerebro over the state-of-the-art as well
as its practically acceptable efficiency. Cerebro has successfully detected 306
and 196 new malicious packages in PyPI and NPM, respectively, and received 385 thank-you letters
from the official PyPI and NPM teams.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 00:58:59 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Zhang",
"Junan",
""
],
[
"Huang",
"Kaifeng",
""
],
[
"Chen",
"Bihuan",
""
],
[
"Wang",
"Chong",
""
],
[
"Tian",
"Zhenhao",
""
],
[
"Peng",
"Xin",
""
]
] |
new_dataset
| 0.997007 |
2309.02713
|
You Rim Choi
|
You Rim Choi, Gyeongseon Eo, Wonhyuck Youn, Hyojin Lee, Haemin Jang,
Dongyoon Kim, Hyunwoo Shin, Hyung-Sin Kim
|
SlAction: Non-intrusive, Lightweight Obstructive Sleep Apnea Detection
using Infrared Video
|
Accepted to ICCV CVAMD 2023, poster
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obstructive sleep apnea (OSA) is a prevalent sleep disorder affecting
approximately one billion people world-wide. The current gold standard for
diagnosing OSA, Polysomnography (PSG), involves an overnight hospital stay with
multiple attached sensors, leading to potential inaccuracies due to the
first-night effect. To address this, we present SlAction, a non-intrusive OSA
detection system for daily sleep environments using infrared videos.
Recognizing that sleep videos exhibit minimal motion, this work investigates
the fundamental question: "Are respiratory events adequately reflected in human
motions during sleep?" Analyzing the largest sleep video dataset of 5,098
hours, we establish correlations between OSA events and human motions during
sleep. Our approach uses a low frame rate (2.5 FPS) and a large window size (60 seconds)
with a 30-second step for sliding-window analysis to capture slow and long-term
motions related to OSA. Furthermore, we utilize a lightweight deep neural
network for resource-constrained devices, ensuring all video streams are
processed locally without compromising privacy. Evaluations show that SlAction
achieves an average F1 score of 87.6% in detecting OSA across various
environments. Implementing SlAction on NVIDIA Jetson Nano enables real-time
inference (~3 seconds for a 60-second video clip), highlighting its potential
for early detection and personalized treatment of OSA.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 04:52:02 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Choi",
"You Rim",
""
],
[
"Eo",
"Gyeongseon",
""
],
[
"Youn",
"Wonhyuck",
""
],
[
"Lee",
"Hyojin",
""
],
[
"Jang",
"Haemin",
""
],
[
"Kim",
"Dongyoon",
""
],
[
"Shin",
"Hyunwoo",
""
],
[
"Kim",
"Hyung-Sin",
""
]
] |
new_dataset
| 0.996232 |
2309.02724
|
Nagham Hamad
|
Nagham Hamad, Mustafa Jarrar, Mohammad Khalilia, Nadim Nashif
|
Offensive Hebrew Corpus and Detection using BERT
|
8 pages, 1 figure, The 20th ACS/IEEE International Conference on
Computer Systems and Applications (AICCSA)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Offensive language detection has been well studied in many languages, but it
is lagging behind in low-resource languages, such as Hebrew. In this paper, we
present a new offensive language corpus in Hebrew. A total of 15,881 tweets
were retrieved from Twitter. Each was labeled with one or more of five classes
(abusive, hate, violence, pornographic, or not offensive) by Arabic-Hebrew
bilingual speakers. The annotation process was challenging as each annotator is
expected to be familiar with the Israeli culture, politics, and practices to
understand the context of each tweet. We fine-tuned two Hebrew BERT models,
HeBERT and AlephBERT, using our proposed dataset and another published dataset.
We observed that our data boosts HeBERT performance by 2% when combined with
D_OLaH. Fine-tuning AlephBERT on our data and testing on D_OLaH yields 69%
accuracy, while fine-tuning on D_OLaH and testing on our data yields 57%
accuracy, which may be an indication of the generalizability our data offers.
Our dataset and fine-tuned models are available on GitHub and Huggingface.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 05:18:43 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Hamad",
"Nagham",
""
],
[
"Jarrar",
"Mustafa",
""
],
[
"Khalilia",
"Mohammad",
""
],
[
"Nashif",
"Nadim",
""
]
] |
new_dataset
| 0.999037 |
2309.02755
|
EPTCS
|
Henning Fernau, Lakshmanan Kuppusamy, Indhumathi Raman
|
When Stars Control a Grammar's Work
|
In Proceedings AFL 2023, arXiv:2309.01126
|
EPTCS 386, 2023, pp. 96-111
|
10.4204/EPTCS.386.9
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Graph-controlled insertion-deletion (GCID) systems are regulated extensions
of insertion-deletion systems. Such a system has several components and each
component contains some insertion-deletion rules. The components are the
vertices of a directed control graph. A rule is applied to a string in a
component and the resultant string is moved to the target component specified
in the rule. The language of the system is the set of all terminal strings
collected in the final component. We restrict the structure of
the underlying graph to a star, where there is a central control
component which acts like a master and transmits a string (after applying one
of its rules) to one of the components specified in the (applied) rule. A
component which receives the string can process the obtained string with any
applicable rule available in it and sends back the resultant string only to the
center component. With this restriction, we obtain computational completeness
for some descriptional complexity measures.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 06:18:10 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Fernau",
"Henning",
""
],
[
"Kuppusamy",
"Lakshmanan",
""
],
[
"Raman",
"Indhumathi",
""
]
] |
new_dataset
| 0.999287 |
2309.02759
|
EPTCS
|
Benedek Nagy (Eastern Mediterranean University, Eszterhazy Karoly
Catholic University)
|
State-deterministic Finite Automata with Translucent Letters and Finite
Automata with Nondeterministically Translucent Letters
|
In Proceedings AFL 2023, arXiv:2309.01126
|
EPTCS 386, 2023, pp. 170-184
|
10.4204/EPTCS.386.14
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Deterministic and nondeterministic finite automata with translucent letters
were introduced by Nagy and Otto more than a decade ago as Cooperative
Distributed systems of a kind of stateless restarting automata with window size
one. These finite state machines have a surprisingly large expressive power:
all commutative semi-linear languages and all rational trace languages can be
accepted by them, including various non-context-free languages. While the
nondeterministic variant defines a language class with nice closure properties,
the deterministic variant is weaker; however, it contains all regular languages,
some non-regular context-free languages, such as the Dyck language, and also some
languages that are not even context-free. In all those models, for each state,
each letter of the alphabet falls into one of the following categories: the
automaton cannot see the letter (it is translucent), there is a transition
defined on the letter (possibly more than one transition in the nondeterministic
case), or neither of the above (the automaton gets stuck upon seeing this
letter in the given state and this computation is not accepting).
State-deterministic automata are recent models, where the next state of the
computation is determined by the structure of the automaton and is independent
of the processed letters. In this paper our aim is twofold. On the one hand, we
investigate state-deterministic finite automata with translucent letters. These
automata are specially restricted deterministic finite automata with
translucent letters.
In the other novel model we present, it is allowed that, for a state, the set
of translucent letters and the set of letters for which a transition is defined
are not disjoint. One can interpret this as the automaton having a
nondeterministic choice, for each occurrence of such letters, to see it (and
then erase it and make the transition) or not to see that occurrence at that time.
Based on these semi-translucent letters, the expressive power of the automata
increases, i.e., in this way a proper generalization of the previous models is
obtained.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 06:19:29 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Nagy",
"Benedek",
"",
"Eastern Mediterranean University, Eszterhazy Karoly\n Catholic University"
]
] |
new_dataset
| 0.998394 |
2309.02763
|
EPTCS
|
Giovanni Pighizzini, Luca Prigioniero
|
Once-Marking and Always-Marking 1-Limited Automata
|
In Proceedings AFL 2023, arXiv:2309.01126
|
EPTCS 386, 2023, pp. 215-227
|
10.4204/EPTCS.386.17
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Single-tape nondeterministic Turing machines that are allowed to replace the
symbol in each tape cell only when it is scanned for the first time are also
known as 1-limited automata. These devices characterize, exactly as finite
automata, the class of regular languages. However, they can be extremely more
succinct. Indeed, in the worst case the size gap from 1-limited automata to
one-way deterministic finite automata is double exponential.
Here we introduce two restricted versions of 1-limited automata, once-marking
1-limited automata and always-marking 1-limited automata, and study their
descriptional complexity. We prove that once-marking 1-limited automata still
exhibit a double exponential size gap to one-way deterministic finite automata.
However, their deterministic restriction is polynomially related in size to
two-way deterministic finite automata, in contrast to deterministic 1-limited
automata, whose equivalent two-way deterministic finite automata in the worst
case are exponentially larger. For always-marking 1-limited automata, we prove
that the size gap to one-way deterministic finite automata is only a single
exponential. The gap remains exponential even in the case the given machine is
deterministic.
We obtain other size relationships between different variants of these
machines and finite automata and we present some problems that deserve
investigation.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 06:20:24 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Pighizzini",
"Giovanni",
""
],
[
"Prigioniero",
"Luca",
""
]
] |
new_dataset
| 0.991297 |
2309.02768
|
EPTCS
|
Bianca Truthe
|
Strictly Locally Testable and Resources Restricted Control Languages in
Tree-Controlled Grammars
|
In Proceedings AFL 2023, arXiv:2309.01126
|
EPTCS 386, 2023, pp. 253-268
|
10.4204/EPTCS.386.20
| null |
cs.CC cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Tree-controlled grammars are context-free grammars where the derivation
process is controlled in such a way that every word on a level of the
derivation tree must belong to a certain control language. We investigate the
generative capacity of such tree-controlled grammars where the control
languages are special regular sets, especially strictly locally testable
languages or languages restricted by resources of the generation (number of
non-terminal symbols or production rules) or acceptance (number of states).
Furthermore, the set theoretic inclusion relations of these subregular language
families themselves are studied.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 06:21:15 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Truthe",
"Bianca",
""
]
] |
new_dataset
| 0.994165 |
2309.02777
|
V\'ictor M. Batlle
|
V\'ictor M. Batlle, Jos\'e M. M. Montiel, Pascal Fua and Juan D.
Tard\'os
|
LightNeuS: Neural Surface Reconstruction in Endoscopy using Illumination
Decline
|
12 pages, 7 figures, 1 table, submitted to MICCAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new approach to 3D reconstruction from sequences of images
acquired by monocular endoscopes. It is based on two key insights. First,
endoluminal cavities are watertight, a property naturally enforced by modeling
them in terms of a signed distance function. Second, the scene illumination is
variable. It comes from the endoscope's light sources and decays with the
inverse of the squared distance to the surface. To exploit these insights, we
build on NeuS, a neural implicit surface reconstruction technique with an
outstanding capability to learn appearance and an SDF surface model from
multiple views, but currently limited to scenes with static illumination. To
remove this limitation and exploit the relation between pixel brightness and
depth, we modify the NeuS architecture to explicitly account for it and
introduce a calibrated photometric model of the endoscope's camera and light
source. Our method is the first one to produce watertight reconstructions of
whole colon sections. We demonstrate excellent accuracy on phantom imagery.
Remarkably, the watertight prior combined with illumination decline allows us
to complete the reconstruction of unseen portions of the surface with
acceptable accuracy, paving the way to automatic quality assessment of cancer
screening explorations by measuring the global percentage of observed mucosa.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 06:41:40 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Batlle",
"Víctor M.",
""
],
[
"Montiel",
"José M. M.",
""
],
[
"Fua",
"Pascal",
""
],
[
"Tardós",
"Juan D.",
""
]
] |
new_dataset
| 0.995191 |
2309.02781
|
Xuan Liu
|
Xuan Liu, Cagdas D. Onal, and Jie Fu
|
Technical Report: A Contact-aware Feedback CPG System for Learning-based
Locomotion Control in a Soft Snake Robot
|
17 pages, 19 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Integrating contact-awareness into a soft snake robot and efficiently
controlling its locomotion in response to contact information present
significant challenges. This paper aims to solve the contact-aware locomotion
problem of a soft snake robot by developing bio-inspired contact-aware
locomotion controllers. To provide effective contact information for the
controllers, we develop a scale-covered sensor structure mimicking natural
snakes' \textit{scale sensilla}. In the design of the control framework, our
core contribution is the development of a novel sensory feedback mechanism for
the Matsuoka central pattern generator (CPG) network. This mechanism allows the
Matsuoka CPG system to work like a "spinal cord" in the whole contact-aware
control scheme, which simultaneously takes stimuli, including tonic input
signals from the "brain" (a goal-tracking locomotion controller) and sensory
feedback signals from the "reflex arc" (the contact reactive controller), and
generates rhythmic signals to effectively actuate the soft snake robot to
slither through densely allocated obstacles. In the design of the "reflex arc",
we develop two types of reactive controllers -- 1) a reinforcement learning
(RL) sensor regulator that learns to manipulate the sensory feedback inputs of
the CPG system, and 2) a local reflexive sensor-CPG network that directly
connects sensor readings and the CPG's feedback inputs in a special topology.
These two reactive controllers respectively facilitate two different
contact-aware locomotion control schemes. The two control schemes are tested
and evaluated in the soft snake robot, showing promising performance in the
contact-aware locomotion tasks. The experimental results also further verify
the benefit of the Matsuoka CPG system in bio-inspired robot controller design.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 06:46:52 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Liu",
"Xuan",
""
],
[
"Onal",
"Cagdas D.",
""
],
[
"Fu",
"Jie",
""
]
] |
new_dataset
| 0.998213 |
2309.02810
|
Andrea Bedin
|
Andrea Bedin, Dmitry Chizhik, Jinfeng Du, Martti Moisio, Karthik
Upadhya, Reinaldo Valenzuela and Mikko A. Uusitalo
|
28 GHz NLOS Channel Measurements Revealing Low Path Loss and High
Angular Spread in Container Ports
|
10 pages, 19 figures. Submitted to Transactions on Antennas and
Propagation
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents results from a comprehensive measurement campaign
conducted at 28 GHz inside a container canyon within a commercial port
environment. The measurements are performed at various points inside the
container canyon, considering two types of container stacking and two different
Transmitter (TX) locations, using a narrowband channel sounder equipped with a
rotating horn antenna. The measurements are used to evaluate the azimuthal
spectrum and spatial correlation, as well as the impact of a vehicle inside a
canyon on these parameters. Further, the measurement data is utilized to
validate a simulation setup from which the path loss and the elevation spectrum
inside the canyon are obtained. Lastly, a propagation model inside the canyon
is hypothesized and shown to be consistent with the measurements. The analysis
shows a low path loss compared to free space, as well as a high angular spread
and short spatial correlation.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 07:54:03 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Bedin",
"Andrea",
""
],
[
"Chizhik",
"Dmitry",
""
],
[
"Du",
"Jinfeng",
""
],
[
"Moisio",
"Martti",
""
],
[
"Upadhya",
"Karthik",
""
],
[
"Valenzuela",
"Reinaldo",
""
],
[
"Uusitalo",
"Mikko A.",
""
]
] |
new_dataset
| 0.999029 |
2309.02834
|
Johan Markdahl
|
Johan Markdahl and Mattias Vikgren
|
tinySLAM-based exploration with a swarm of nano-UAVs
|
Published at the Sixth International Symposium on Swarm Behavior and
Bio-Inspired Robotics 2023 (SWARM 6th 2023). Pages 899-904
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper concerns SLAM and exploration for a swarm of nano-UAVs. The laser
range finder-based tinySLAM algorithm is used to build maps of the environment.
The maps are synchronized using an iterative closest point algorithm. The UAVs
then explore the map by steering to points selected by a modified dynamic
coverage algorithm, for which we prove a stability result. Both algorithms
inform each other, allowing the UAVs to map out new areas of the environment
and move into them for exploration. Experimental findings using the nano-UAV
Crazyflie 2.1 platform are presented. A key challenge is to implement all
algorithms on the hardware-limited experimental platform.
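As a rough illustration of the map-synchronization step mentioned in the
abstract above, the following Python sketch aligns two 2D occupancy point
clouds with a basic iterative closest point loop (nearest-neighbour
correspondences plus a Kabsch/SVD rigid fit). The point-cloud extraction from
tinySLAM grids, the fixed iteration count and all other parameters are
assumptions, not details taken from the paper.

```python
# Minimal 2D ICP sketch for aligning two occupancy point clouds, as one
# might use to synchronize maps between UAVs.  Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Kabsch fit: rotation R and translation t with dst ~ src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Align point set src (N x 2) to dst (M x 2) with a fixed number of steps."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)               # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```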
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 08:40:30 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Markdahl",
"Johan",
""
],
[
"Vikgren",
"Mattias",
""
]
] |
new_dataset
| 0.981975 |
2309.02841
|
Bin Chen
|
Bin Chen, Zhenglin Liang, Shiqian Wu
|
Adjacency-hopping de Bruijn Sequences for Non-repetitive Coding
| null | null | null | null |
cs.IT cs.CV cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A special type of cyclic sequences named adjacency-hopping de Bruijn
sequences is introduced in this paper. The existence of such sequences is
proved theoretically, and the number of such sequences is derived. These
sequences guarantee that all neighboring codes are different while retaining
the uniqueness of subsequences, which is a significant characteristic of
original de Bruijn sequences in coding and matching. Finally, the
adjacency-hopping de Bruijn sequences are applied to structured light coding,
and a color fringe
pattern coded by such a sequence is presented. In summary, the proposed
sequences demonstrate significant advantages in structured light coding by
virtue of the uniqueness of subsequences and the adjacency-hopping
characteristic, and show potential for extension to other fields with similar
requirements of non-repetitive coding and efficient matching.
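As a small aid to the definition used in the abstract above, the following
Python sketch checks the two properties that characterise such a sequence of
order n when read cyclically: neighbouring symbols differ, and all length-n
windows are distinct. It only illustrates the definition; the paper's
existence proof and counting argument are not reproduced.

```python
def is_adjacency_hopping_de_bruijn(seq, n):
    """Check a cyclic sequence for distinct neighbours and unique length-n windows."""
    L = len(seq)
    # adjacency-hopping: neighbouring codes (cyclically) must differ
    if any(seq[i] == seq[(i + 1) % L] for i in range(L)):
        return False
    # de Bruijn-style uniqueness of cyclic length-n windows
    windows = [tuple(seq[(i + j) % L] for j in range(n)) for i in range(L)]
    return len(set(windows)) == L

# Example with a ternary alphabet and window length 2:
print(is_adjacency_hopping_de_bruijn([0, 1, 0, 2, 1, 2], 2))  # True
```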
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 08:59:15 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Chen",
"Bin",
""
],
[
"Liang",
"Zhenglin",
""
],
[
"Wu",
"Shiqian",
""
]
] |
new_dataset
| 0.997508 |
2309.02848
|
Xuanwen Huang
|
Xuanwen Huang, Kaiqiao Han, Dezheng Bao, Quanjin Tao, Zhisheng Zhang,
Yang Yang, Qi Zhu
|
Prompt-based Node Feature Extractor for Few-shot Learning on
Text-Attributed Graphs
|
Under review
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-attributed Graphs (TAGs) are commonly found in the real world, such as
social networks and citation networks, and consist of nodes represented by
textual descriptions. Currently, mainstream machine learning methods on TAGs
involve a two-stage modeling approach: (1) unsupervised node feature extraction
with pre-trained language models (PLMs); and (2) supervised learning using
Graph Neural Networks (GNNs). However, we observe that these representations,
which have undergone large-scale pre-training, do not significantly improve
performance with a limited amount of training samples. The main issue is that
existing methods have not effectively integrated information from the graph and
downstream tasks simultaneously. In this paper, we propose a novel framework
called G-Prompt, which combines a graph adapter and task-specific prompts to
extract node features. First, G-Prompt introduces a learnable GNN layer
(\emph{i.e.,} adapter) at the end of PLMs, which is fine-tuned to better
capture the masked tokens considering graph neighborhood information. After the
adapter is trained, G-Prompt incorporates task-specific prompts to obtain
\emph{interpretable} node representations for the downstream task. Our
experiment results demonstrate that our proposed method outperforms current
state-of-the-art (SOTA) methods on few-shot node classification. More
importantly, in zero-shot settings, the G-Prompt embeddings can not only
provide better task interpretability than vanilla PLMs but also achieve
comparable performance with fully-supervised baselines.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 09:12:52 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Huang",
"Xuanwen",
""
],
[
"Han",
"Kaiqiao",
""
],
[
"Bao",
"Dezheng",
""
],
[
"Tao",
"Quanjin",
""
],
[
"Zhang",
"Zhisheng",
""
],
[
"Yang",
"Yang",
""
],
[
"Zhu",
"Qi",
""
]
] |
new_dataset
| 0.967413 |
2309.02875
|
Vasiliki Sideri-Lampretsa
|
Vasiliki Sideri-Lampretsa, Veronika A. Zimmer, Huaqi Qiu, Georgios
Kaissis, and Daniel Rueckert
|
MAD: Modality Agnostic Distance Measure for Image Registration
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-modal image registration is a crucial pre-processing step in many
medical applications. However, it is a challenging task due to the complex
intensity relationships between different imaging modalities, which can result
in a large discrepancy in image appearance. The success of multi-modal image
registration, whether conventional or learning based, is predicated upon the
choice of an appropriate distance (or similarity) measure. In particular,
deep learning registration algorithms lack accuracy or even fail completely
when attempting to register data from an "unseen" modality. In this work, we
present Modality Agnostic Distance (MAD), a deep image distance measure that
utilises random convolutions to learn the inherent geometry of the images while
being robust to large appearance changes. Random convolutions are
geometry-preserving modules which we use to simulate an infinite number of
synthetic modalities alleviating the need for aligned paired data during
training. We can therefore train MAD on a mono-modal dataset and successfully
apply it to a multi-modal dataset. We demonstrate that not only can MAD
affinely register multi-modal images successfully, but it also has a larger
capture range than traditional measures such as Mutual Information and
Normalised Gradient Fields.
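To make the role of random convolutions in the abstract above concrete, here is
a hedged PyTorch sketch of a distance that passes both images through the same
randomly drawn filter and compares the responses. The kernel size, the number
of random draws and the L1 comparison are illustrative assumptions rather than
the paper's exact design.

```python
import torch
import torch.nn.functional as F

def random_conv_distance(img_a, img_b, n_draws=8, k=3):
    """img_a, img_b: (1, C, H, W) tensors of the two modalities."""
    dist = 0.0
    for _ in range(n_draws):
        w = torch.randn(1, img_a.shape[1], k, k)      # one random filter
        ra = F.conv2d(img_a, w, padding=k // 2)
        rb = F.conv2d(img_b, w, padding=k // 2)
        dist = dist + (ra - rb).abs().mean()          # compare responses, not intensities
    return dist / n_draws
```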
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 09:59:58 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Sideri-Lampretsa",
"Vasiliki",
""
],
[
"Zimmer",
"Veronika A.",
""
],
[
"Qiu",
"Huaqi",
""
],
[
"Kaissis",
"Georgios",
""
],
[
"Rueckert",
"Daniel",
""
]
] |
new_dataset
| 0.999523 |
2309.02902
|
Quoc-Nam Nguyen
|
Chau-Thang Phan, Quoc-Nam Nguyen, Chi-Thanh Dang, Trong-Hop Do, Kiet
Van Nguyen
|
ViCGCN: Graph Convolutional Network with Contextualized Language Models
for Social Media Mining in Vietnamese
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Social media processing is a fundamental task in natural language processing
with numerous applications. As Vietnamese social media and information science
have grown rapidly, the necessity of information-based mining on Vietnamese
social media has become crucial. However, state-of-the-art research faces
several significant drawbacks, including imbalanced data and noisy data on
social media platforms. Imbalanced and noisy data are two essential issues that
need to be addressed in Vietnamese social media texts. Graph Convolutional Networks
can address the problems of imbalanced and noisy data in text classification on
social media by taking advantage of the graph structure of the data. This study
presents a novel approach based on contextualized language model (PhoBERT) and
graph-based method (Graph Convolutional Networks). In particular, the proposed
approach, ViCGCN, jointly trains the power of contextualized embeddings with
the ability of Graph Convolutional Networks (GCN) to capture more syntactic and
semantic dependencies to address those drawbacks. Extensive experiments on
various Vietnamese benchmark datasets were conducted to verify our approach.
The observation shows that applying GCN to BERTology models as the final layer
significantly improves performance. Moreover, the experiments demonstrate that
ViCGCN outperforms 13 powerful baseline models, including BERTology models,
fusion BERTology and GCN models, other baselines, and SOTA on three benchmark
social media datasets. Our proposed ViCGCN approach demonstrates a significant
improvement of up to 6.21%, 4.61%, and 2.63% over the best Contextualized
Language Models, including multilingual and monolingual, on three benchmark
datasets, UIT-VSMEC, UIT-ViCTSD, and UIT-VSFC, respectively. Additionally, our
integrated model ViCGCN achieves the best performance compared to other
BERTology integrated with GCN models.
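For readers unfamiliar with the graph-convolution step combined with PhoBERT in
the abstract above, the following NumPy sketch shows one standard GCN
propagation layer (symmetric normalisation, then projection and ReLU). The
document-graph construction and the exact fusion with the language model are
the paper's contribution and are not reproduced here; shapes and the ReLU are
assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """A: (N, N) adjacency, H: (N, d) node features (e.g. PhoBERT embeddings), W: (d, d_out)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU(normalized A @ H @ W)
```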
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 10:51:34 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Phan",
"Chau-Thang",
""
],
[
"Nguyen",
"Quoc-Nam",
""
],
[
"Dang",
"Chi-Thanh",
""
],
[
"Do",
"Trong-Hop",
""
],
[
"Van Nguyen",
"Kiet",
""
]
] |
new_dataset
| 0.995211 |
2309.02923
|
Jiakun Xu
|
Jiakun Xu, Bowen Xu, Gui-Song Xia, Liang Dong, Nan Xue
|
Patched Line Segment Learning for Vector Road Mapping
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel approach to computing vector road maps from
satellite remotely sensed images, building upon a well-defined Patched Line
Segment (PaLiS) representation for road graphs that holds geometric
significance. Unlike prevailing methods that derive road vector representations
from satellite images using binary masks or keypoints, our method employs line
segments. These segments not only convey road locations but also capture their
orientations, making them a robust choice for representation. More precisely,
given an input image, we divide it into non-overlapping patches and predict a
suitable line segment within each patch. This strategy enables us to capture
spatial and structural cues from these patch-based line segments, simplifying
the process of constructing the road network graph without the necessity of
additional neural networks for connectivity. In our experiments, we demonstrate
how an effective representation of a road graph significantly enhances the
performance of vector road mapping on established benchmarks, without requiring
extensive modifications to the neural network architecture. Furthermore, our
method achieves state-of-the-art performance with just 6 GPU hours of training,
leading to a substantial 32-fold reduction in training costs in terms of GPU
hours.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 11:33:25 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Xu",
"Jiakun",
""
],
[
"Xu",
"Bowen",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Dong",
"Liang",
""
],
[
"Xue",
"Nan",
""
]
] |
new_dataset
| 0.963723 |
2309.02999
|
Sijin Chen
|
Sijin Chen, Hongyuan Zhu, Mingsheng Li, Xin Chen, Peng Guo, Yinjie
Lei, Gang Yu, Taihao Li, and Tao Chen
|
Vote2Cap-DETR++: Decoupling Localization and Describing for End-to-End
3D Dense Captioning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D dense captioning requires a model to translate its understanding of an
input 3D scene into several captions associated with different object regions.
Existing methods adopt a sophisticated "detect-then-describe" pipeline, which
builds explicit relation modules upon a 3D detector with numerous hand-crafted
components. While these methods have achieved initial success, the cascade
pipeline tends to accumulate errors because of duplicated and inaccurate box
estimations and messy 3D scenes. In this paper, we first propose Vote2Cap-DETR,
a simple-yet-effective transformer framework that decouples the decoding
process of caption generation and object localization through parallel
decoding. Moreover, we argue that object localization and description
generation require different levels of scene understanding, which could be
challenging for a shared set of queries to capture. To this end, we propose an
advanced version, Vote2Cap-DETR++, which decouples the queries into
localization and caption queries to capture task-specific features.
Additionally, we introduce the iterative spatial refinement strategy to vote
queries for faster convergence and better localization performance. We also
insert additional spatial information to the caption head for more accurate
descriptions. Without bells and whistles, extensive experiments on two commonly
used datasets, ScanRefer and Nr3D, demonstrate Vote2Cap-DETR and
Vote2Cap-DETR++ surpass conventional "detect-then-describe" methods by a large
margin. Codes will be made available at
https://github.com/ch3cook-fdu/Vote2Cap-DETR.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 13:43:27 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Chen",
"Sijin",
""
],
[
"Zhu",
"Hongyuan",
""
],
[
"Li",
"Mingsheng",
""
],
[
"Chen",
"Xin",
""
],
[
"Guo",
"Peng",
""
],
[
"Lei",
"Yinjie",
""
],
[
"Yu",
"Gang",
""
],
[
"Li",
"Taihao",
""
],
[
"Chen",
"Tao",
""
]
] |
new_dataset
| 0.960527 |
2309.03031
|
Zeyu Ling
|
Zeyu Ling, Bo Han, Yongkang Wong, Mohan Kangkanhalli, Weidong Geng
|
MCM: Multi-condition Motion Synthesis Framework for Multi-scenario
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of the multi-condition human motion synthesis task is to
incorporate diverse conditional inputs, encompassing various forms like text,
music, speech, and more. This endows the task with the capability to adapt
across multiple scenarios, ranging from text-to-motion and music-to-dance,
among others. While existing research has primarily focused on single
conditions, the multi-condition human motion generation remains underexplored.
In this paper, we address these challenges by introducing MCM, a novel paradigm
for motion synthesis that spans multiple scenarios under diverse conditions.
The MCM framework is able to integrate with any DDPM-like diffusion model to
accommodate multi-conditional information input while preserving its generative
capabilities. Specifically, MCM employs a two-branch architecture consisting of a
main branch and a control branch. The control branch shares the same structure
as the main branch and is initialized with the parameters of the main branch,
effectively maintaining the generation ability of the main branch and
supporting multi-condition input. We also introduce a Transformer-based
diffusion model MWNet (DDPM-like) as our main branch that can capture the
spatial complexity and inter-joint correlations in motion sequences through a
channel-dimension self-attention module. Quantitative comparisons demonstrate
that our approach achieves SoTA results in text-to-motion and competitive
results in music-to-dance tasks, comparable to task-specific methods.
Furthermore, the qualitative evaluation shows that MCM not only streamlines the
adaptation of methodologies originally designed for text-to-motion tasks to
domains like music-to-dance and speech-to-gesture, eliminating the need for
extensive network re-configurations but also enables effective multi-condition
modal control, realizing "once trained is motion need".
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 14:17:49 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Ling",
"Zeyu",
""
],
[
"Han",
"Bo",
""
],
[
"Wong",
"Yongkang",
""
],
[
"Kangkanhalli",
"Mohan",
""
],
[
"Geng",
"Weidong",
""
]
] |
new_dataset
| 0.988946 |
2309.03059
|
Xusheng Zhu
|
Xusheng Zhu, Wen Chen, Qingqing Wu, Zhendong Li, Jun Li, Shunqing
Zhang, and Ming Ding
|
Reconfigurable Intelligent Surface Aided Space Shift Keying With
Imperfect CSI
|
arXiv admin note: text overlap with arXiv:2307.01994
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the performance of reconfigurable intelligent
surface (RIS)-aided spatial shift keying (SSK) wireless communication systems
in the presence of imperfect channel state information (CSI). Specifically, we
analyze the average bit error probability (ABEP) of two RIS-SSK systems
respectively based on intelligent reflection and blind reflection of RIS. For
the intelligent RIS-SSK scheme, we first derive the conditional pairwise error
probability of the composite channel through maximum likelihood (ML) detection.
Subsequently, we derive the probability density function of the combined
channel. Due to the intricacies of the composite channel formulation, an exact
closed-form ABEP expression is unattainable through direct derivation. To this
end, we resort to employing the Gaussian-Chebyshev quadrature method to
estimate the results. In addition, we employ the Q-function approximation to
derive the non-exact closed-form expression when CSI imperfections are present.
For the blind RIS-SSK scheme, we derive both the closed-form ABEP expression and
the asymptotic ABEP expression with imperfect CSI by adopting the ML detector. To
offer deeper insights, we explore the impact of discrete reflection phase
shifts on the performance of the RIS-SSK system. Lastly, we extensively
validate all the analytical derivations using Monte Carlo simulations.
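The ABEP estimates above rely on Gaussian-Chebyshev quadrature; as a generic
reminder of that rule (not the paper's actual integrand), the Python sketch
below approximates integrals of the form \int_{-1}^{1} f(x)/\sqrt{1-x^2} dx
with equal weights at the Chebyshev nodes.

```python
import numpy as np

def gauss_chebyshev(f, n=50):
    """Chebyshev-Gauss rule: sum of f at the Chebyshev nodes with weight pi/n."""
    i = np.arange(1, n + 1)
    x = np.cos((2 * i - 1) * np.pi / (2 * n))      # Chebyshev nodes
    return np.pi / n * np.sum(f(x))

# Sanity check: integral of x^2 / sqrt(1 - x^2) over [-1, 1] equals pi/2
print(gauss_chebyshev(lambda x: x ** 2))           # ~1.5708
```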
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 14:59:27 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Zhu",
"Xusheng",
""
],
[
"Chen",
"Wen",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Li",
"Zhendong",
""
],
[
"Li",
"Jun",
""
],
[
"Zhang",
"Shunqing",
""
],
[
"Ding",
"Ming",
""
]
] |
new_dataset
| 0.995938 |
2309.03078
|
Giordano Paoletti
|
Giordano Paoletti, Lorenzo Dall'Amico, Kyriaki Kalimeri, Jacopo Lenti,
Yelena Mejova, Daniela Paolotti, Michele Starnini, Michele Tizzani
|
Political Issue or Public Health: the Vaccination Debate on Twitter in
Europe
|
15 pages, 11 figures
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
At the beginning of the COVID-19 pandemic, fears grew that making vaccination
a political (instead of public health) issue may impact the efficacy of this
life-saving intervention, spurring the spread of vaccine-hesitant content. In
this study, we examine whether there is a relationship between the political
interest of social media users and their exposure to vaccine-hesitant content
on Twitter. We focus on 17 European countries using a multilingual,
longitudinal dataset of tweets spanning the period before COVID, up to the
vaccine roll-out. We find that, in most countries, users' exposure to
vaccine-hesitant content is the highest in the early months of the pandemic,
around the time of greatest scientific uncertainty. Further, users who follow
politicians from right-wing parties, and those associated with authoritarian or
anti-EU stances are more likely to be exposed to vaccine-hesitant content,
whereas those following left-wing politicians, more pro-EU or liberal parties,
are less likely to encounter it. Somewhat surprisingly, politicians did not
play an outsized role in the vaccine debates of their countries, receiving a
similar number of retweets as other similarly popular users. This systematic,
multi-country, longitudinal investigation of the connection of politics with
vaccine hesitancy has important implications for public health policy and
communication.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 15:26:40 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Paoletti",
"Giordano",
""
],
[
"Dall'Amico",
"Lorenzo",
""
],
[
"Kalimeri",
"Kyriaki",
""
],
[
"Lenti",
"Jacopo",
""
],
[
"Mejova",
"Yelena",
""
],
[
"Paolotti",
"Daniela",
""
],
[
"Starnini",
"Michele",
""
],
[
"Tizzani",
"Michele",
""
]
] |
new_dataset
| 0.999546 |
2309.03103
|
Mohamad Elzohbi
|
Mohamad Elzohbi, Richard Zhao
|
ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation
Following the Metaphor Identification Procedure
|
10 pages, 2 figures
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents ContrastWSD, a RoBERTa-based metaphor detection model
that integrates the Metaphor Identification Procedure (MIP) and Word Sense
Disambiguation (WSD) to extract and contrast the contextual meaning with the
basic meaning of a word to determine whether it is used metaphorically in a
sentence. By utilizing the word senses derived from a WSD model, our model
enhances the metaphor detection process and outperforms other methods that rely
solely on contextual embeddings or integrate only the basic definitions and
other external knowledge. We evaluate our approach on various benchmark
datasets and compare it with strong baselines, demonstrating its effectiveness
in advancing metaphor detection.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 15:41:38 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Elzohbi",
"Mohamad",
""
],
[
"Zhao",
"Richard",
""
]
] |
new_dataset
| 0.99348 |
2309.03113
|
Jubilee Prasad Rao
|
Jubilee Prasad-Rao, Roohollah Heidary and Jesse Williams
|
Detecting Manufacturing Defects in PCBs via Data-Centric Machine
Learning on Solder Paste Inspection Features
| null | null | null | null |
cs.LG cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated detection of defects in Printed Circuit Board (PCB) manufacturing
using Solder Paste Inspection (SPI) and Automated Optical Inspection (AOI)
machines can help improve operational efficiency and significantly reduce the
need for manual intervention. In this paper, using SPI-extracted features of 6
million pins, we demonstrate a data-centric approach to train Machine Learning
(ML) models to detect PCB defects at three stages of PCB manufacturing. The 6
million PCB pins correspond to 2 million components that belong to 15,387 PCBs.
Using a base extreme gradient boosting (XGBoost) ML model, we iterate on the
data pre-processing step to improve detection performance. Combining pin-level
SPI features using component and PCB IDs, we also developed training instances
at the component and PCB levels. This allows the ML model to capture any
inter-pin, inter-component, or spatial effects that may not be apparent at the
pin level. Models are trained at the pin, component, and PCB levels, and the
detection results from the different models are combined to identify defective
components.
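A hypothetical sketch of the aggregation step described above: pin-level SPI
features are grouped by component ID into component-level training instances
before fitting a gradient-boosted classifier. Column names, the aggregation
statistics and the XGBoost settings are placeholders, not values from the
paper.

```python
import pandas as pd
from xgboost import XGBClassifier

def train_component_model(pins: pd.DataFrame):
    """pins: one row per pin with 'component_id', SPI feature columns and a 'defect' label."""
    feats = [c for c in pins.columns if c not in ("component_id", "defect")]
    comp = pins.groupby("component_id")[feats].agg(["mean", "min", "max"])
    comp.columns = ["_".join(c) for c in comp.columns]          # flatten MultiIndex
    labels = pins.groupby("component_id")["defect"].max()       # defective if any pin is
    model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    model.fit(comp, labels)
    return model
```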
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 15:52:55 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Prasad-Rao",
"Jubilee",
""
],
[
"Heidary",
"Roohollah",
""
],
[
"Williams",
"Jesse",
""
]
] |
new_dataset
| 0.994469 |
2309.03130
|
Sudeep Dasari
|
Vittorio Caggiano, Sudeep Dasari, Vikash Kumar
|
MyoDex: A Generalizable Prior for Dexterous Manipulation
|
Accepted to the 40th International Conference on Machine Learning
(2023)
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Human dexterity is a hallmark of motor control. Our hands can rapidly
synthesize new behaviors despite the complexity (multi-articular and
multi-joints, with 23 joints controlled by more than 40 muscles) of
musculoskeletal sensory-motor circuits. In this work, we take inspiration from
how human dexterity builds on a diversity of prior experiences, instead of
being acquired through a single task. Motivated by this observation, we set out
to develop agents that can build upon their previous experience to quickly
acquire new (previously unattainable) behaviors. Specifically, our approach
leverages multi-task learning to implicitly capture task-agnostic behavioral
priors (MyoDex) for human-like dexterity, using a physiologically realistic
human hand model - MyoHand. We demonstrate MyoDex's effectiveness in few-shot
generalization as well as positive transfer to a large repertoire of unseen
dexterous manipulation tasks. Agents leveraging MyoDex can solve approximately
3x more tasks, and 4x faster in comparison to a distillation baseline. While
prior work has synthesized single musculoskeletal control behaviors, MyoDex is
the first generalizable manipulation prior that catalyzes the learning of
dexterous physiological control across a large variety of contact-rich
behaviors. We also demonstrate the effectiveness of our paradigms beyond
musculoskeletal control towards the acquisition of dexterity in 24 DoF Adroit
Hand. Website: https://sites.google.com/view/myodex
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 16:10:49 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Caggiano",
"Vittorio",
""
],
[
"Dasari",
"Sudeep",
""
],
[
"Kumar",
"Vikash",
""
]
] |
new_dataset
| 0.998643 |
2309.03164
|
Tharindu Kumarage
|
Tharindu Kumarage, Amrita Bhattacharjee, Djordje Padejski, Kristy
Roschke, Dan Gillmor, Scott Ruston, Huan Liu, Joshua Garland
|
J-Guard: Journalism Guided Adversarially Robust Detection of
AI-generated News
|
This Paper is Accepted to The 13th International Joint Conference on
Natural Language Processing and the 3rd Conference of the Asia-Pacific
Chapter of the Association for Computational Linguistics (IJCNLP-AACL 2023)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid proliferation of AI-generated text online is profoundly reshaping
the information landscape. Among various types of AI-generated text,
AI-generated news presents a significant threat as it can be a prominent source
of misinformation online. While several recent efforts have focused on
detecting AI-generated text in general, these methods require enhanced
reliability, given concerns about their vulnerability to simple adversarial
attacks. Furthermore, due to the eccentricities of news writing, applying these
detection methods for AI-generated news can produce false positives,
potentially damaging the reputation of news organizations. To address these
challenges, we leverage the expertise of an interdisciplinary team to develop a
framework, J-Guard, capable of steering existing supervised AI text detectors
for detecting AI-generated news while boosting adversarial robustness. By
incorporating stylistic cues inspired by the unique journalistic attributes,
J-Guard effectively distinguishes between real-world journalism and
AI-generated news articles. Our experiments on news articles generated by a
vast array of AI models, including ChatGPT (GPT3.5), demonstrate the
effectiveness of J-Guard in enhancing detection capabilities while maintaining
an average performance decrease of as low as 7% when faced with adversarial
attacks.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 17:06:31 GMT"
}
] | 2023-09-07T00:00:00 |
[
[
"Kumarage",
"Tharindu",
""
],
[
"Bhattacharjee",
"Amrita",
""
],
[
"Padejski",
"Djordje",
""
],
[
"Roschke",
"Kristy",
""
],
[
"Gillmor",
"Dan",
""
],
[
"Ruston",
"Scott",
""
],
[
"Liu",
"Huan",
""
],
[
"Garland",
"Joshua",
""
]
] |
new_dataset
| 0.997386 |
2012.04174
|
Ninad Jadhav
|
Ninad Jadhav, Weiying Wang, Diana Zhang, Oussama Khatib, Swarun Kumar
and Stephanie Gil
|
A wireless signal-based sensing framework for robotics
|
27 pages, 27 figures, *co-primary authors
| null |
10.1177/02783649221097989
| null |
cs.RO cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper we develop the analytical framework for a novel Wireless
signal-based Sensing capability for Robotics (WSR) by leveraging robots'
mobility. It allows robots to primarily measure relative direction, or
Angle-of-Arrival (AOA), to other robots, while operating in non-line-of-sight
unmapped environments and without requiring external infrastructure. We do so
by capturing all of the paths that a wireless signal traverses as it travels
from a transmitting to a receiving robot in the team, which we term as an AOA
profile. The key intuition behind our approach is to enable a robot to emulate
antenna arrays as it moves freely in 2D and 3D space. The small differences in
the phase of the wireless signals are thus processed with knowledge of robots'
local displacement to obtain the profile, via a method akin to Synthetic
Aperture Radar (SAR). The main contribution of this work is the development of
i) a framework to accommodate arbitrary 2D and 3D motion, as well as continuous
mobility of both signal transmitting and receiving robots, while computing AOA
profiles between them and ii) a Cramer-Rao Bound analysis, based on antenna
array theory, that provides a lower bound on the variance in AOA estimation as
a function of the geometry of robot motion. We show that allowing robots to use
their full mobility in 3D space while performing SAR, results in more accurate
AOA profiles and thus better AOA estimation. All analytical developments are
substantiated by extensive simulation and hardware experiments on air/ground
robot platforms using 5 GHz WiFi. Our experimental results bolster our
analytical findings, demonstrating that 3D motion provides enhanced and
consistent accuracy, with a total AOA error of less than 10 degrees for 95% of
trials. We also analytically characterize the impact of displacement estimation
errors on the measured AOA.
|
[
{
"version": "v1",
"created": "Tue, 8 Dec 2020 02:31:06 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Jun 2021 21:56:08 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Nov 2021 03:15:51 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Feb 2022 19:47:24 GMT"
},
{
"version": "v5",
"created": "Mon, 4 Sep 2023 22:20:47 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Jadhav",
"Ninad",
""
],
[
"Wang",
"Weiying",
""
],
[
"Zhang",
"Diana",
""
],
[
"Khatib",
"Oussama",
""
],
[
"Kumar",
"Swarun",
""
],
[
"Gil",
"Stephanie",
""
]
] |
new_dataset
| 0.998111 |
2106.08777
|
Ronny Bergmann
|
Seth D. Axen and Mateusz Baran and Ronny Bergmann and Krzysztof Rzecki
|
Manifolds.jl: An Extensible Julia Framework for Data Analysis on
Manifolds
| null | null |
10.1145/3618296
| null |
cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Julia package Manifolds.jl, providing a fast and easy-to-use
library of Riemannian manifolds and Lie groups. This package enables working
with data defined on a Riemannian manifold, such as the circle, the sphere,
symmetric positive definite matrices, or one of the models for hyperbolic
spaces. We introduce a common interface, available in ManifoldsBase.jl, with
which new manifolds, applications, and algorithms can be implemented. We
demonstrate the utility of Manifolds.jl using B\'ezier splines, an optimization
task on manifolds, and principal component analysis on nonlinear data. In a
benchmark, Manifolds.jl outperforms all comparable packages for low-dimensional
manifolds in speed; over Python and Matlab packages, the improvement is often
several orders of magnitude, while over C/C++ packages, the improvement is
two-fold. For high-dimensional manifolds, it outperforms all packages except
for Tensorflow-Riemopt, which is specifically tailored for high-dimensional
manifolds.
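Manifolds.jl itself is written in Julia; as a language-agnostic illustration of
the kind of primitive such a library provides, the NumPy sketch below
implements the exponential and logarithmic maps on the unit sphere and uses
them in a Karcher-mean iteration. It does not reproduce the package's API.

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: walk from p along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    """Logarithmic map: tangent vector at p pointing towards q."""
    w = q - np.dot(p, q) * p                       # project q onto tangent space at p
    nw = np.linalg.norm(w)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nw < 1e-12 else theta * w / nw

def karcher_mean(points, iters=50):
    """Riemannian (Karcher) mean of points on the sphere by gradient steps."""
    m = points[0]
    for _ in range(iters):
        m = sphere_exp(m, np.mean([sphere_log(m, q) for q in points], axis=0))
    return m
```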
|
[
{
"version": "v1",
"created": "Wed, 16 Jun 2021 13:36:17 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 14:15:04 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jun 2023 07:55:16 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Axen",
"Seth D.",
""
],
[
"Baran",
"Mateusz",
""
],
[
"Bergmann",
"Ronny",
""
],
[
"Rzecki",
"Krzysztof",
""
]
] |
new_dataset
| 0.975703 |
2112.02333
|
Ishan Tarunesh
|
Ishan Tarunesh, Somak Aditya, Monojit Choudhury
|
LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning
Capabilities for NLI
|
arXiv admin note: substantial text overlap with arXiv:2107.07229
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Natural Language Inference (NLI) is considered a representative task to test
natural language understanding (NLU). In this work, we propose an extensible
framework to collectively yet categorically test diverse Logical reasoning
capabilities required for NLI (and, by extension, NLU). Motivated by behavioral
testing, we create a semi-synthetic large test bench (363 templates, 363k
examples) and an associated framework that offers the following utilities: 1)
individually test and analyze reasoning capabilities along 17 reasoning
dimensions (including pragmatic reasoning); 2) design experiments to study
cross-capability information content (leave one out or bring one in); and 3)
the synthetic nature enables us to control for artifacts and biases. We extend
a publicly available framework of automated test case instantiation from
free-form natural language templates (CheckList) and a well-defined taxonomy of
capabilities to cover a wide range of increasingly harder test cases while
varying the complexity of natural language. Through our analysis of
state-of-the-art NLI systems, we observe that our benchmark is indeed hard (and
non-trivial even with training on additional resources). Some capabilities
stand out as harder. Further, fine-grained analysis and fine-tuning experiments
reveal more insights about these capabilities and the models -- supporting and
extending previous observations; thus showing the utility of the proposed
testbench.
|
[
{
"version": "v1",
"created": "Sat, 4 Dec 2021 13:41:31 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 08:28:54 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Tarunesh",
"Ishan",
""
],
[
"Aditya",
"Somak",
""
],
[
"Choudhury",
"Monojit",
""
]
] |
new_dataset
| 0.999215 |
2202.13844
|
Tiziana Calamoneri
|
Tiziana Calamoneri, Angelo Monti, Fabrizio Petroni
|
All Graphs with at most 8 nodes are 2-interval-PCGs
|
9 pages, 3 figures, never published
| null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
A graph G is a multi-interval PCG if there exist an edge weighted tree T with
non-negative real values and disjoint intervals of the non-negative real
half-line such that each node of G is uniquely associated to a leaf of T and
there is an edge between two nodes in G if and only if the weighted distance
between their corresponding leaves in T lies within any such intervals. If the
number of intervals is k, then we call the graph a k-interval-PCG; in symbols,
G = k-interval-PCG (T, I1, . . . , Ik). It is known that 2-interval-PCGs do not
contain all graphs and the smallest known graph outside this class has 135
nodes. Here we prove that all graphs with at most 8 nodes are 2-interval-PCGs,
thus taking one step towards determining the smallest value of n such
that there exists an n-node graph that is not a 2-interval-PCG.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 15:00:44 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 09:55:08 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Calamoneri",
"Tiziana",
""
],
[
"Monti",
"Angelo",
""
],
[
"Petroni",
"Fabrizio",
""
]
] |
new_dataset
| 0.997283 |
2205.08529
|
Haoqian Zhang
|
Haoqian Zhang, Louis-Henri Merino, Ziyan Qu, Mahsa Bastankhah, Vero
Estrada-Galinanes, Bryan Ford
|
F3B: A Low-Overhead Blockchain Architecture with Per-Transaction
Front-Running Protection
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Front-running attacks, which benefit from advanced knowledge of pending
transactions, have proliferated in the blockchain space since the emergence of
decentralized finance. Front-running causes devastating losses to honest
participants and continues to endanger the fairness of the ecosystem. We
present Flash Freezing Flash Boys (F3B), a blockchain architecture that
addresses front-running attacks by using threshold cryptography. In F3B, a user
generates a symmetric key to encrypt their transaction, and once the underlying
consensus layer has finalized the transaction, a decentralized
secret-management committee reveals this key. F3B mitigates front-running
attacks because, before the consensus group finalizes it, an adversary can no
longer read the content of a transaction, thus preventing the adversary from
benefiting from advanced knowledge of pending transactions. Unlike other
mitigation systems, F3B properly ensures that all unfinalized transactions,
even with significant delays, remain private by adopting per-transaction
protection. Furthermore, F3B addresses front-running at the execution layer;
thus, our solution is agnostic to the underlying consensus algorithm and
compatible with existing smart contracts. We evaluated F3B on Ethereum with a
modified execution layer and found only a negligible (0.026%) increase in
transaction latency, specifically due to running threshold decryption with a
128-member secret-management committee after a transaction is finalized; this
indicates that F3B is both practical and low-cost.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 17:53:45 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 11:28:12 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 07:56:18 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zhang",
"Haoqian",
""
],
[
"Merino",
"Louis-Henri",
""
],
[
"Qu",
"Ziyan",
""
],
[
"Bastankhah",
"Mahsa",
""
],
[
"Estrada-Galinanes",
"Vero",
""
],
[
"Ford",
"Bryan",
""
]
] |
new_dataset
| 0.998768 |
2206.05601
|
Avishai Sintov
|
Avishai Sintov and Inbar Ben-David
|
Simple Kinesthetic Haptics for Object Recognition
| null | null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Object recognition is an essential capability when performing various tasks.
Humans naturally use either or both visual and tactile perception to extract
object class and properties. Typical approaches for robots, however, require
complex visual systems or multiple high-density tactile sensors which can be
highly expensive. In addition, they usually require actual collection of a
large dataset from real objects through direct interaction. In this paper, we
propose a kinesthetic-based object recognition method that can be performed
with any multi-fingered robotic hand in which the kinematics is known. The
method does not require tactile sensors and is based on observing grasps of the
objects. We utilize a unique and frame invariant parameterization of grasps to
learn instances of object shapes. To train a classifier, training data is
generated rapidly and solely in a computational process without interaction
with real objects. We then propose and compare between two iterative algorithms
that can integrate any trained classifier. The classifiers and algorithms are
independent of any particular robot hand and, therefore, can be exerted on
various ones. We show in experiments that, with a few grasps, the algorithms
achieve accurate classification. Furthermore, we show that the object
recognition approach is scalable to objects of various sizes. Similarly, a
global classifier is trained to identify general geometries (e.g., an ellipsoid
or a box) rather than particular ones and demonstrated on a large set of
objects. Full scale experiments and analysis are provided to show the
performance of the method.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 20:03:14 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Sintov",
"Avishai",
""
],
[
"Ben-David",
"Inbar",
""
]
] |
new_dataset
| 0.995862 |
2206.09410
|
Jiaming Zhang
|
Jiaming Zhang, Qi Yi, Dongyuan Lu, Jitao Sang
|
Low-Mid Adversarial Perturbation against Unauthorized Face Recognition
System
|
published in Information Sciences
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In light of the growing concerns regarding the unauthorized use of facial
recognition systems and its implications on individual privacy, the exploration
of adversarial perturbations as a potential countermeasure has gained traction.
However, challenges arise in effectively deploying this approach against
unauthorized facial recognition systems due to the effects of JPEG compression
on image distribution across the internet, which ultimately diminishes the
efficacy of adversarial perturbations. Existing JPEG compression-resistant
techniques struggle to strike a balance between resistance, transferability,
and attack potency. To address these limitations, we propose a novel solution
referred to as \emph{low frequency adversarial perturbation} (LFAP). This
method conditions the source model to leverage low-frequency characteristics
through adversarial training. To further enhance the performance, we introduce
an improved \emph{low-mid frequency adversarial perturbation} (LMFAP) that
incorporates mid-frequency components for an additive benefit. Our study
encompasses a range of settings to replicate genuine application scenarios,
including cross backbones, supervisory heads, training datasets, and testing
datasets. Moreover, we evaluated our approaches on a commercial black-box API,
\texttt{Face++}. The empirical results validate the cutting-edge performance
achieved by our proposed solutions.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 14:15:49 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Sep 2023 03:18:01 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zhang",
"Jiaming",
""
],
[
"Yi",
"Qi",
""
],
[
"Lu",
"Dongyuan",
""
],
[
"Sang",
"Jitao",
""
]
] |
new_dataset
| 0.973439 |
2207.08569
|
Kosmas Dimitropoulos
|
Dimitrios Konstantinidis, Ilias Papastratis, Kosmas Dimitropoulos,
Petros Daras
|
Multi-manifold Attention for Vision Transformers
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformers are very popular nowadays due to their state-of-the-art
performance in several computer vision tasks, such as image classification and
action recognition. Although their performance has been greatly enhanced
through highly descriptive patch embeddings and hierarchical structures, there
is still limited research on utilizing additional data representations so as to
refine the self-attention map of a Transformer. To address this problem, a novel
attention mechanism, called multi-manifold multihead attention, is proposed in
this work to substitute the vanilla self-attention of a Transformer. The
proposed mechanism models the input space in three distinct manifolds, namely
Euclidean, Symmetric Positive Definite and Grassmann, thus leveraging different
statistical and geometrical properties of the input for the computation of a
highly descriptive attention map. In this way, the proposed attention mechanism
can guide a Vision Transformer to become more attentive towards important
appearance, color and texture features of an image, leading to improved
classification and segmentation results, as shown by the experimental results
on well-known datasets.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 12:53:53 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2022 13:45:41 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 09:05:15 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Konstantinidis",
"Dimitrios",
""
],
[
"Papastratis",
"Ilias",
""
],
[
"Dimitropoulos",
"Kosmas",
""
],
[
"Daras",
"Petros",
""
]
] |
new_dataset
| 0.987215 |
2208.02484
|
Chaeyoon Jeong
|
Chaeyoon Jeong and Sundong Kim and Jaewoo Park and Yeonsoo Choi
|
Customs Import Declaration Datasets
|
Datasets: https://github.com/Seondong/Customs-Declaration-Datasets
| null | null | null |
cs.LG cs.AI stat.OT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Given the huge volume of cross-border flows, effective and efficient control
of trade becomes more crucial in protecting people and society from illicit
trade. However, limited accessibility of the transaction-level trade datasets
hinders the progress of open research, and lots of customs administrations have
not benefited from the recent progress in data-based risk management. In this
paper, we introduce an import declaration dataset to facilitate the
collaboration between domain experts in customs administrations and researchers
from diverse domains, such as data science and machine learning. The dataset
contains 54,000 artificially generated trades with 22 key attributes, and it is
synthesized with a conditional tabular GAN while maintaining correlated features.
Synthetic data has several advantages. First, releasing the dataset is free
from restrictions that do not allow disclosing the original import data. The
fabrication step minimizes the possible identity risk which may exist in trade
statistics. Second, the published data follow a similar distribution to the
source data so that it can be used in various downstream tasks. Hence, our
dataset can be used as a benchmark for testing the performance of any
classification algorithm. With the provision of data and its generation
process, we open baseline codes for fraud detection tasks, as we empirically
show that more advanced algorithms can better detect fraud.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 06:20:20 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 02:31:22 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Sep 2023 05:48:50 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Jeong",
"Chaeyoon",
""
],
[
"Kim",
"Sundong",
""
],
[
"Park",
"Jaewoo",
""
],
[
"Choi",
"Yeonsoo",
""
]
] |
new_dataset
| 0.999806 |
2208.09944
|
Alexander Kensert
|
Alexander Kensert, Gert Desmet, Deirdre Cabooter
|
MolGraph: a Python package for the implementation of molecular graphs
and graph neural networks with TensorFlow and Keras
|
14 pages, 4 figures, 4 tables
| null | null | null |
cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular machine learning (ML) has proven important for tackling various
molecular problems, such as predicting molecular properties based on molecular
descriptors or fingerprints. Since relatively recently, graph neural network
(GNN) algorithms have been implemented for molecular ML, showing comparable or
superior performance to descriptor or fingerprint-based approaches. Although
various tools and packages exist to apply GNNs in molecular ML, a new GNN
package, named MolGraph, was developed in this work with the motivation to
create GNN model pipelines highly compatible with the TensorFlow and Keras
application programming interface (API). MolGraph also implements a chemistry
module to accommodate the generation of small molecular graphs, which can be
passed to a GNN algorithm to solve a molecular ML problem. To validate the
GNNs, they were benchmarked against the datasets of MoleculeNet, as well as
three chromatographic retention time datasets. The results on these benchmarks
illustrate that the GNNs performed as expected. Additionally, the GNNs proved
useful for molecular identification and improved interpretability of
chromatographic retention time data. MolGraph is available at
https://github.com/akensert/molgraph. Installation, tutorials and
implementation details can be found at
https://molgraph.readthedocs.io/en/latest/.
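MolGraph's own chemistry API is not reproduced here; the RDKit-based sketch
below only illustrates the kind of small molecular graph (node feature matrix
plus edge list) that such a chemistry module hands to a GNN. The choice of atom
features is an arbitrary example.

```python
import numpy as np
from rdkit import Chem

def smiles_to_graph(smiles: str):
    """Turn a SMILES string into node features and an undirected edge list."""
    mol = Chem.MolFromSmiles(smiles)
    # simple node features: atomic number, degree, aromaticity flag
    nodes = np.array([[a.GetAtomicNum(), a.GetDegree(), int(a.GetIsAromatic())]
                      for a in mol.GetAtoms()], dtype=float)
    # bonds stored in both directions so the graph is symmetric
    edges = []
    for b in mol.GetBonds():
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        edges += [(i, j), (j, i)]
    return nodes, np.array(edges)

nodes, edges = smiles_to_graph("CCO")   # ethanol: 3 heavy atoms, 2 bonds
```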
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 18:37:41 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 16:33:20 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Sep 2022 19:45:49 GMT"
},
{
"version": "v4",
"created": "Mon, 4 Sep 2023 13:30:25 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Kensert",
"Alexander",
""
],
[
"Desmet",
"Gert",
""
],
[
"Cabooter",
"Deirdre",
""
]
] |
new_dataset
| 0.99906 |
2208.11718
|
Mocho Go
|
Mocho Go, Hideyuki Tachibana
|
gSwin: Gated MLP Vision Model with Hierarchical Structure of Shifted
Window
|
5 pages, 7 figures, IEEE ICASSP 2023
|
Proc. ICASSP (2023)
|
10.1109/ICASSP49357.2023.10096453
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following the success in language domain, the self-attention mechanism
(transformer) is adopted in the vision domain and achieving great success
recently. Additionally, as another stream, multi-layer perceptron (MLP) is also
explored in the vision domain. These architectures, other than traditional
CNNs, have been attracting attention recently, and many methods have been
proposed. As one that combines parameter efficiency and performance with
locality and hierarchy in image recognition, we propose gSwin, which merges the
two streams: Swin Transformer and (multi-head) gMLP. We showed that our gSwin
can achieve better accuracy on three vision tasks, image classification, object
detection and semantic segmentation, than Swin Transformer, with smaller model
size.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 18:00:46 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 08:14:57 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Go",
"Mocho",
""
],
[
"Tachibana",
"Hideyuki",
""
]
] |
new_dataset
| 0.997765 |
2209.04920
|
Thanh-Dat Truong
|
Thanh-Dat Truong, Chi Nhan Duong, Ngan Le, Marios Savvides, Khoa Luu
|
Vec2Face-v2: Unveil Human Faces from their Blackbox Features via
Attention-based Network in Face Recognition
|
arXiv admin note: substantial text overlap with arXiv:2003.06958
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we investigate the problem of face reconstruction given a
facial feature representation extracted from a blackbox face recognition
engine. Indeed, it is a very challenging problem in practice due to the
limitations of abstracted information from the engine. We, therefore, introduce
a new method named Attention-based Bijective Generative Adversarial Networks in
a Distillation framework (DAB-GAN) to synthesize the faces of a subject given
his/her extracted face recognition features. Given any unconstrained unseen
facial features of a subject, the DAB-GAN can reconstruct his/her facial images
in high definition. The DAB-GAN method includes a novel attention-based
generative structure with the newly defined Bijective Metrics Learning
approach. The framework starts by introducing a bijective metric so that the
distance measurement and metric learning process can be directly adopted in the
image domain for an image reconstruction task. The information from the
blackbox face recognition engine will be optimally exploited using the global
distillation process. Then, an attention-based generator is presented to
synthesize realistic, ID-preserving faces in a highly robust manner. We
have evaluated our method on challenging face recognition databases, i.e.,
CelebA, LFW, CFP-FP, CP-LFW, AgeDB, and CA-LFW, and consistently achieved
state-of-the-art results. The advantages of DAB-GAN are also demonstrated in
both image realism and ID preservation.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 19:14:21 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Sep 2023 20:51:48 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Truong",
"Thanh-Dat",
""
],
[
"Duong",
"Chi Nhan",
""
],
[
"Le",
"Ngan",
""
],
[
"Savvides",
"Marios",
""
],
[
"Luu",
"Khoa",
""
]
] |
new_dataset
| 0.99358 |
2210.13540
|
Apoorva Beedu
|
Apoorva Beedu, Huda Alamri, Irfan Essa
|
Video based Object 6D Pose Estimation using Transformers
|
arXiv admin note: text overlap with arXiv:2111.10677
| null | null | null |
cs.CV cs.AI cs.HC cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a Transformer-based 6D Object Pose Estimation framework
VideoPose, comprising an end-to-end attention-based modelling architecture
that attends to previous frames in order to estimate accurate 6D Object Poses
in videos. Our approach leverages the temporal information from a video
sequence for pose refinement, along with being computationally efficient and
robust. Compared to existing methods, our architecture is able to capture and
reason from long-range dependencies efficiently, thus iteratively refining over
video sequences. Experimental evaluation on the YCB-Video dataset shows that
our approach is on par with the state-of-the-art Transformer methods, and
performs significantly better relative to CNN based approaches. Further, with a
speed of 33 fps, it is also more efficient and therefore applicable to a
variety of applications that require real-time object pose estimation. Training
code and pretrained models are available at
https://github.com/ApoorvaBeedu/VideoPose
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 18:45:53 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 18:29:51 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Beedu",
"Apoorva",
""
],
[
"Alamri",
"Huda",
""
],
[
"Essa",
"Irfan",
""
]
] |
new_dataset
| 0.998556 |
2211.05222
|
Hehui Zheng
|
Hehui Zheng (1 and 2), Sebastian Pinzello (1), Barnabas Gavin Cangan
(1), Thomas Buchner (1) and Robert K. Katzschmann (1) ((1) Soft Robotics Lab
ETH Zurich, (2) ETH AI Center)
|
ViSE: Vision-Based 3D Online Shape Estimation of Continuously Deformable
Robots
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The precise control of soft and continuum robots requires knowledge of their
shape. The shape of these robots has, in contrast to classical rigid robots,
infinite degrees of freedom. To partially reconstruct the shape, proprioceptive
techniques use built-in sensors resulting in inaccurate results and increased
fabrication complexity. Exteroceptive methods so far rely on placing reflective
markers on all tracked components and triangulating their position using
multiple motion-tracking cameras. Tracking systems are expensive and infeasible
for deformable robots interacting with the environment due to marker occlusion
and damage. Here, we present a regression approach for 3D shape estimation
using a convolutional neural network. The proposed approach takes advantage of
data-driven supervised learning and is capable of real-time marker-less shape
estimation during inference. Two images of a robotic system are taken
simultaneously at 25 Hz from two different perspectives, and are fed to the
network, which returns for each pair the parameterized shape. The proposed
approach outperforms marker-less state-of-the-art methods by a maximum of 4.4%
in estimation accuracy while at the same time being more robust and requiring
no prior knowledge of the shape. The approach can be easily implemented due to
only requiring two color cameras without depth and not needing an explicit
calibration of the extrinsic parameters. Evaluations on two types of soft
robotic arms and a soft robotic fish demonstrate our method's accuracy and
versatility on highly deformable systems in real-time. The robust performance
of the approach against different scene modifications (camera alignment and
brightness) suggests its generalizability to a wider range of experimental
setups, which will benefit downstream tasks such as robotic grasping and
manipulation.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 22:08:23 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 14:47:23 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zheng",
"Hehui",
"",
"1 and 2"
],
[
"Pinzello",
"Sebastian",
""
],
[
"Cangan",
"Barnabas Gavin",
""
],
[
"Buchner",
"Thomas",
""
],
[
"Katzschmann",
"Robert K.",
""
]
] |
new_dataset
| 0.994285 |
2211.14305
|
Omri Avrahami
|
Omri Avrahami, Thomas Hayes, Oran Gafni, Sonal Gupta, Yaniv Taigman,
Devi Parikh, Dani Lischinski, Ohad Fried, Xi Yin
|
SpaText: Spatio-Textual Representation for Controllable Image Generation
|
CVPR 2023. Project page available at:
https://omriavrahami.com/spatext
| null |
10.1109/CVPR52729.2023.01762
| null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent text-to-image diffusion models are able to generate convincing results
of unprecedented quality. However, it is nearly impossible to control the
shapes of different regions/objects or their layout in a fine-grained fashion.
Previous attempts to provide such controls were hindered by their reliance on a
fixed set of labels. To this end, we present SpaText - a new method for
text-to-image generation using open-vocabulary scene control. In addition to a
global text prompt that describes the entire scene, the user provides a
segmentation map where each region of interest is annotated by a free-form
natural language description. Due to the lack of large-scale datasets that have a
detailed textual description for each region in the image, we choose to
leverage the current large-scale text-to-image datasets and base our approach
on a novel CLIP-based spatio-textual representation, and show its effectiveness
on two state-of-the-art diffusion models: pixel-based and latent-based. In
addition, we show how to extend the classifier-free guidance method in
diffusion models to the multi-conditional case and present an alternative
accelerated inference algorithm. Finally, we offer several automatic evaluation
metrics and use them, in addition to FID scores and a user study, to evaluate
our method and show that it achieves state-of-the-art results on image
generation with free-form textual scene control.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 18:59:10 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 16:25:10 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Avrahami",
"Omri",
""
],
[
"Hayes",
"Thomas",
""
],
[
"Gafni",
"Oran",
""
],
[
"Gupta",
"Sonal",
""
],
[
"Taigman",
"Yaniv",
""
],
[
"Parikh",
"Devi",
""
],
[
"Lischinski",
"Dani",
""
],
[
"Fried",
"Ohad",
""
],
[
"Yin",
"Xi",
""
]
] |
new_dataset
| 0.991525 |
2212.01381
|
Enis Simsar
|
Enis Simsar and Alessio Tonioni and Evin P{\i}nar \"Ornek and Federico
Tombari
|
LatentSwap3D: Semantic Edits on 3D Image GANs
|
The paper has been accepted by ICCV'23 AI3DCC
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D GANs have the ability to generate latent codes for entire 3D volumes
rather than only 2D images. These models offer desirable features like
high-quality geometry and multi-view consistency, but, unlike their 2D
counterparts, complex semantic image editing tasks for 3D GANs have only been
partially explored. To address this problem, we propose LatentSwap3D, a
semantic edit approach based on latent space discovery that can be used with
any off-the-shelf 3D or 2D GAN model and on any dataset. LatentSwap3D relies on
identifying the latent code dimensions corresponding to specific attributes by
feature ranking using a random forest classifier. It then performs the edit by
swapping the selected dimensions of the image being edited with the ones from
an automatically selected reference image. Compared to other latent space
control-based edit methods, which were mainly designed for 2D GANs, our method
on 3D GANs provides remarkably consistent semantic edits in a disentangled
manner and outperforms others both qualitatively and quantitatively. We show
results on seven 3D GANs (pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D, StyleNeRF,
and VolumeGAN) and on five datasets (FFHQ, AFHQ, Cats, MetFaces, and CompCars).
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 18:59:51 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 19:12:46 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Simsar",
"Enis",
""
],
[
"Tonioni",
"Alessio",
""
],
[
"Örnek",
"Evin Pınar",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.998637 |
2212.08059
|
Yanyu Li
|
Yanyu Li, Ju Hu, Yang Wen, Georgios Evangelidis, Kamyar Salahi, Yanzhi
Wang, Sergey Tulyakov, Jian Ren
|
Rethinking Vision Transformers for MobileNet Size and Speed
|
Code is available at:
https://github.com/snap-research/EfficientFormer
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the success of Vision Transformers (ViTs) in computer vision tasks,
recent works have tried to optimize the performance and complexity of ViTs to
enable efficient deployment on mobile devices. Multiple approaches have been
proposed to accelerate the attention mechanism, improve inefficient designs, or
incorporate mobile-friendly lightweight convolutions to form hybrid
architectures. However, ViT and its variants still have higher latency or
considerably more parameters than lightweight CNNs, even compared with the
years-old MobileNet. In practice, latency and size are both crucial for
efficient deployment on resource-constrained hardware. In this work, we
investigate a central question: can transformer models run as fast as MobileNet
and maintain a similar size? We
revisit the design choices of ViTs and propose a novel supernet with low
latency and high parameter efficiency. We further introduce a novel
fine-grained joint search strategy for transformer models that can find
efficient architectures by optimizing latency and number of parameters
simultaneously. The proposed models, EfficientFormerV2, achieve 3.5% higher
top-1 accuracy than MobileNetV2 on ImageNet-1K with similar latency and
parameters. This work demonstrates that properly designed and optimized vision
transformers can achieve high performance even with MobileNet-level size and
speed.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 18:59:12 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 12:47:28 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Li",
"Yanyu",
""
],
[
"Hu",
"Ju",
""
],
[
"Wen",
"Yang",
""
],
[
"Evangelidis",
"Georgios",
""
],
[
"Salahi",
"Kamyar",
""
],
[
"Wang",
"Yanzhi",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Ren",
"Jian",
""
]
] |
new_dataset
| 0.971013 |
2212.11777
|
\c{C}a\u{g}kan Yapar
|
\c{C}a\u{g}kan Yapar, Ron Levie, Gitta Kutyniok, Giuseppe Caire
|
Dataset of Pathloss and ToA Radio Maps With Localization Application
| null | null | null | null |
cs.NI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we present a collection of radio map datasets in dense urban
settings, which we generated and made publicly available. The datasets include
simulated pathloss/received signal strength (RSS) and time of arrival (ToA)
radio maps over a large collection of realistic dense urban settings in real
city maps. The two main applications of the presented dataset are 1) learning
methods that predict the pathloss from input city maps (namely, deep
learning-based simulations) and 2) wireless localization. The fact that the
RSS and ToA maps are computed by the same simulations over the same city maps
allows for a fair comparison of RSS- and ToA-based localization methods.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 20:39:51 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 20:15:54 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Yapar",
"Çağkan",
""
],
[
"Levie",
"Ron",
""
],
[
"Kutyniok",
"Gitta",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.999566 |
2212.12196
|
Ruoyu Xu
|
Ruoyu Xu, Chongfeng Liu, Zhongzhong Cao, Yuquan Wang and Huihuan Qian
|
A Manipulator-Assisted Multiple UAV Landing System for USV Subject to
Disturbance
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Marine waves significantly disturb the unmanned surface vehicle (USV) motion.
An unmanned aerial vehicle (UAV) can hardly land on a USV that undergoes
irregular motion. An oversized landing platform is usually necessary to
guarantee the landing safety, which limits the number of UAVs that can be
carried. We propose a landing system assisted by tether and robot manipulation.
The system can land multiple UAVs without increasing the USV's size. An MPC
controller stabilizes the end-effector and tracks the UAVs, and an adaptive
estimator addresses the disturbance caused by the base motion. The working
strategy of the system is designed to plan the motion of each device. We have
validated the manipulator controller through simulations and well-controlled
indoor experiments. During the field tests, the proposed system caught and
placed the UAVs when the disturbed USV roll range was approximately 12 degrees.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 08:26:23 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 20:21:27 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2023 13:26:23 GMT"
},
{
"version": "v4",
"created": "Sat, 2 Sep 2023 03:46:51 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Xu",
"Ruoyu",
""
],
[
"Liu",
"Chongfeng",
""
],
[
"Cao",
"Zhongzhong",
""
],
[
"Wang",
"Yuquan",
""
],
[
"Qian",
"Huihuan",
""
]
] |
new_dataset
| 0.998625 |
2301.01635
|
Yuliang Liu
|
Yuliang Liu, Jiaxin Zhang, Dezhi Peng, Mingxin Huang, Xinyu Wang,
Jingqun Tang, Can Huang, Dahua Lin, Chunhua Shen, Xiang Bai, Lianwen Jin
|
SPTS v2: Single-Point Scene Text Spotting
|
Accepted for publication in TPAMI 2023. arXiv admin note: text
overlap with arXiv:2112.07917
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
End-to-end scene text spotting has made significant progress due to its
intrinsic synergy between text detection and recognition. Previous methods
commonly regard manual annotations such as horizontal rectangles, rotated
rectangles, quadrangles, and polygons as a prerequisite, which are much more
expensive than using a single point. Our new framework, SPTS v2, allows us to
train high-performing text-spotting models using a single-point annotation.
SPTS v2 preserves the advantage of the auto-regressive Transformer with an
Instance Assignment Decoder (IAD) by sequentially predicting the center
points of all text instances inside the same prediction sequence, while using a
Parallel Recognition Decoder (PRD) for text recognition in parallel, which
significantly reduces the required sequence length. These two
decoders share the same parameters and are interactively connected with a
simple but effective information transmission process to pass the gradient and
information. Comprehensive experiments on various existing benchmark datasets
demonstrate the SPTS v2 can outperform previous state-of-the-art single-point
text spotters with fewer parameters while achieving 19$\times$ faster inference
speed. Within the context of our SPTS v2 framework, our experiments suggest a
potential preference for single-point representation in scene text spotting
when compared to other representations. Such an attempt provides a significant
opportunity for scene text spotting applications beyond the realms of existing
paradigms. Code is available at: https://github.com/Yuliang-Liu/SPTSv2.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 14:20:14 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 13:59:43 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Aug 2023 01:45:37 GMT"
},
{
"version": "v4",
"created": "Sat, 2 Sep 2023 05:01:23 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Liu",
"Yuliang",
""
],
[
"Zhang",
"Jiaxin",
""
],
[
"Peng",
"Dezhi",
""
],
[
"Huang",
"Mingxin",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Tang",
"Jingqun",
""
],
[
"Huang",
"Can",
""
],
[
"Lin",
"Dahua",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Bai",
"Xiang",
""
],
[
"Jin",
"Lianwen",
""
]
] |
new_dataset
| 0.985405 |
2302.09221
|
Jingzong Li
|
Jingzong Li, Yik Hong Cai, Libin Liu, Yu Mao, Chun Jason Xue, Hong Xu
|
Moby: Empowering 2D Models for Efficient Point Cloud Analytics on the
Edge
|
Accepted to ACM International Conference on Multimedia (MM) 2023
| null | null | null |
cs.NI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object detection plays a pivotal role in many applications, most notably
autonomous driving and robotics. These applications are commonly deployed on
edge devices to promptly interact with the environment, and often require near
real-time response. With limited computation power, it is challenging to
execute 3D detection on the edge using highly complex neural networks. Common
approaches such as offloading to the cloud induce significant latency overheads
due to the large amount of point cloud data during transmission. To resolve the
tension between wimpy edge devices and compute-intensive inference workloads,
we explore the possibility of empowering fast 2D detection to extrapolate 3D
bounding boxes. To this end, we present Moby, a novel system that demonstrates
the feasibility and potential of our approach. We design a transformation
pipeline for Moby that generates 3D bounding boxes efficiently and accurately
based on 2D detection results without running 3D detectors. Further, we devise
a frame offloading scheduler that decides when to launch the 3D detector
judiciously in the cloud to avoid the errors from accumulating. Extensive
evaluations on NVIDIA Jetson TX2 with real-world autonomous driving datasets
demonstrate that Moby offers up to 91.9% latency improvement with modest
accuracy loss compared to the state of the art.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 03:42:31 GMT"
},
{
"version": "v2",
"created": "Sun, 7 May 2023 04:57:09 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 02:17:19 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Li",
"Jingzong",
""
],
[
"Cai",
"Yik Hong",
""
],
[
"Liu",
"Libin",
""
],
[
"Mao",
"Yu",
""
],
[
"Xue",
"Chun Jason",
""
],
[
"Xu",
"Hong",
""
]
] |
new_dataset
| 0.995088 |
2302.12601
|
Christian Guckelsberger
|
Veera Vimpari, Annakaisa Kultima, Perttu H\"am\"al\"ainen, Christian
Guckelsberger
|
"An Adapt-or-Die Type of Situation": Perception, Adoption, and Use of
Text-To-Image-Generation AI by Game Industry Professionals
|
34 pages (incl. appendix), 3 figures, 4 tables. Coding template (31
pages, 10 tables), study invitations (email, social media) and pre-study
survey provided as supplementary (ancillary) material. Accepted at ACM CHI
Play 2023
|
Proc. ACM Hum.-Comput. Interact., Vol. 7, No. CHI PLAY, 2023,
Article 379
|
10.1145/3611025
| null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-image generation (TTIG) models, a recent addition to creative AI, can
generate images based on a text description. These models have begun to rival
the work of professional creatives, and sparked discussions on the future of
creative work, loss of jobs, and copyright issues, amongst other important
implications. To support the sustainable adoption of TTIG, we must provide
rich, reliable and transparent insights into how professionals perceive, adopt
and use TTIG. Crucially though, the public debate is shallow, narrow and
lacking transparency, while academic work has focused on studying the use of
TTIG in a general artist population, but not on the perceptions and attitudes
of professionals in a specific industry. In this paper, we contribute a
qualitative, exploratory interview study on TTIG in the Finnish videogame
industry. Through a Template Analysis on semi-structured interviews with 14
game professionals, we reveal 12 overarching themes, structured into 49
sub-themes on professionals' perception, adoption and use of TTIG systems in
games industry practice. As our participants experience (yet another) change of
roles and creative processes, their reflections can inform discussions within
the industry, be used by policymakers to inform urgently needed legislation,
and help researchers in games, HCI and AI support the sustainable,
professional use of TTIG to benefit people and games as cultural artefacts.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 12:38:27 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 15:29:22 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 00:27:03 GMT"
},
{
"version": "v4",
"created": "Mon, 5 Jun 2023 07:47:34 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Sep 2023 15:33:04 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Vimpari",
"Veera",
""
],
[
"Kultima",
"Annakaisa",
""
],
[
"Hämäläinen",
"Perttu",
""
],
[
"Guckelsberger",
"Christian",
""
]
] |
new_dataset
| 0.956461 |
2302.14686
|
Giannis Fikioris
|
Giannis Fikioris, \'Eva Tardos
|
Approximately Stationary Bandits with Knapsacks
| null |
Proceedings of Thirty Sixth Conference on Learning Theory, 195
(2023) 3758-3782
| null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Bandits with Knapsacks (BwK), the generalization of the Bandits problem under
global budget constraints, has received a lot of attention in recent years.
Previous work has focused on one of the two extremes: Stochastic BwK where the
rewards and consumptions of the resources of each round are sampled from an
i.i.d. distribution, and Adversarial BwK where these parameters are picked by
an adversary. Achievable guarantees in the two cases exhibit a massive gap:
No-regret learning is achievable in the stochastic case, but in the adversarial
case only competitive ratio style guarantees are achievable, where the
competitive ratio depends either on the budget or on both the time and the
number of resources. What makes this gap so vast is that in Adversarial BwK the
guarantees get worse in the typical case when the budget is more binding. While
``best-of-both-worlds'' type algorithms are known (single algorithms that
provide the best achievable guarantee in each extreme case), their bounds
degrade to the adversarial case as soon as the environment is not fully
stochastic.
Our work aims to bridge this gap, offering guarantees for a workload that is
not exactly stochastic but is also not worst-case. We define a condition,
Approximately Stationary BwK, that parameterizes how close to stochastic or
adversarial an instance is. Based on these parameters, we explore what is the
best competitive ratio attainable in BwK. We explore two algorithms that are
oblivious to the values of the parameters but guarantee competitive ratios that
smoothly transition between the best possible guarantees in the two extreme
cases, depending on the values of the parameters. Our guarantees offer great
improvement over the adversarial guarantee, especially when the available
budget is small. We also prove bounds on the achievable guarantee, showing that
our results are approximately tight when the budget is small.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 15:55:52 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jul 2023 22:16:18 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Fikioris",
"Giannis",
""
],
[
"Tardos",
"Éva",
""
]
] |
new_dataset
| 0.993794 |
2303.09623
|
Quentin Sti\'evenart
|
Alexander Nicholson, Quentin Sti\'evenart, Arash Mazidi, Mohammad
Ghafari
|
Wasmizer: Curating WebAssembly-driven Projects on GitHub
|
11 pages + 1 page of references Preprint of MSR'23 publication
| null |
10.1109/MSR59073.2023.00031
| null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
WebAssembly has attracted great attention as a portable compilation target
for programming languages. To facilitate in-depth studies about this
technology, we have deployed Wasmizer, a tool that regularly mines GitHub
projects and makes an up-to-date dataset of WebAssembly sources and their
binaries publicly available. Presently, we have collected 2 540 C and C++
projects that are highly-related to WebAssembly, and built a dataset of 8 915
binaries that are linked to their source projects. To demonstrate an
application of this dataset, we have investigated the presence of eight
WebAssembly compilation smells in the wild.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 19:55:47 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Nicholson",
"Alexander",
""
],
[
"Stiévenart",
"Quentin",
""
],
[
"Mazidi",
"Arash",
""
],
[
"Ghafari",
"Mohammad",
""
]
] |
new_dataset
| 0.998434 |
2303.12937
|
Thomas Manzini
|
Thomas Manzini, Robin Murphy, David Merrick, and Justin Adams
|
Wireless Network Demands of Data Products from Small Uncrewed Aerial
Systems at Hurricane Ian
|
6 pages, 8 figures
| null | null | null |
cs.RO cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data collected at Hurricane Ian (2022) quantifies the demands that small
uncrewed aerial systems (UAS), or drones, place on the network communication
infrastructure and identifies gaps in the field. Drones have been increasingly
used since Hurricane Katrina (2005) for disaster response; however, getting the
data from the drone to the appropriate decision makers throughout incident
command in a timely fashion has been problematic. These delays have persisted
even as countries such as the USA have made significant investments in wireless
infrastructure, rapidly deployable nodes, and an increase in commercial
satellite solutions. Hurricane Ian serves as a case study of the mismatch
between communications needs and capabilities. In the first four days of the
response, nine drone teams flew 34 missions under the direction of the State of
Florida FL-UAS1, generating 636GB of data. The teams had access to six
different wireless communications networks but had to resort to physically
transferring data to the nearest intact emergency operations center in order to
make the data available to the relevant agencies. The analysis of the mismatch
contributes a model of the drone data-to-decision workflow in a disaster and
quantifies wireless network communication requirements throughout the workflow
in five factors. Four of the factors (availability, bandwidth, burstiness, and
spatial distribution) were previously identified from analyses of Hurricanes
Harvey (2017) and Michael (2018). This work adds upload rate as a fifth
attribute. The analysis is expected to improve drone design and edge computing
schemes as well as inform wireless communication research and development.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 22:38:34 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 00:40:57 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 01:04:42 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Manzini",
"Thomas",
""
],
[
"Murphy",
"Robin",
""
],
[
"Merrick",
"David",
""
],
[
"Adams",
"Justin",
""
]
] |
new_dataset
| 0.959437 |
2303.15049
|
Zihao Wang
|
Zihao Wang, Nathan Keyes, Terry Crawford, Jinho D. Choi
|
InterviewBot: Real-Time End-to-End Dialogue System to Interview Students
for College Admission
| null |
Information 2023, 14, 460
|
10.3390/info14080460
| null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present the InterviewBot, which dynamically integrates conversation history
and customized topics into a coherent embedding space to conduct 10-minute
hybrid-domain (open and closed) conversations with foreign students applying to
U.S. colleges, assessing their academic and cultural readiness.
neural-based end-to-end dialogue model, 7,361 audio recordings of
human-to-human interviews are automatically transcribed, where 440 are manually
corrected for finetuning and evaluation. To overcome the input/output size
limit of a transformer-based encoder-decoder model, two new methods are
proposed, context attention and topic storing, allowing the model to make
relevant and consistent interactions. Our final model is tested both
statistically by comparing its responses to the interview data and dynamically
by inviting professional interviewers and various students to interact with it
in real-time, finding it highly satisfactory in fluency and context awareness.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 09:46:56 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 13:57:39 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Sep 2023 15:33:49 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Wang",
"Zihao",
""
],
[
"Keyes",
"Nathan",
""
],
[
"Crawford",
"Terry",
""
],
[
"Choi",
"Jinho D.",
""
]
] |
new_dataset
| 0.998278 |
2303.17597
|
Lingdong Kong
|
Lingdong Kong and Youquan Liu and Xin Li and Runnan Chen and Wenwei
Zhang and Jiawei Ren and Liang Pan and Kai Chen and Ziwei Liu
|
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions
|
ICCV 2023; 34 pages, 26 figures, 26 tables; Code at
https://github.com/ldkong1205/Robo3D
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The robustness of 3D perception systems under natural corruptions from
environments and sensors is pivotal for safety-critical applications. Existing
large-scale 3D perception datasets often contain data that are meticulously
cleaned. Such configurations, however, cannot reflect the reliability of
perception models during the deployment stage. In this work, we present Robo3D,
the first comprehensive benchmark heading toward probing the robustness of 3D
detectors and segmentors under out-of-distribution scenarios against natural
corruptions that occur in real-world environments. Specifically, we consider
eight corruption types stemming from severe weather conditions, external
disturbances, and internal sensor failure. We uncover that, although promising
results have been progressively achieved on standard benchmarks,
state-of-the-art 3D perception models are at risk of being vulnerable to
corruptions. We draw key observations on the use of data representations,
augmentation schemes, and training strategies, that could severely affect the
model's performance. To pursue better robustness, we propose a
density-insensitive training framework along with a simple flexible
voxelization strategy to enhance the model resiliency. We hope our benchmark
and approach could inspire future research in designing more robust and
reliable 3D perception models. Our robustness benchmark suite is publicly
available.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:59:17 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2023 13:03:55 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 12:38:27 GMT"
},
{
"version": "v4",
"created": "Sat, 2 Sep 2023 23:52:19 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Kong",
"Lingdong",
""
],
[
"Liu",
"Youquan",
""
],
[
"Li",
"Xin",
""
],
[
"Chen",
"Runnan",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Ren",
"Jiawei",
""
],
[
"Pan",
"Liang",
""
],
[
"Chen",
"Kai",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.999732 |
2304.08576
|
Eunhyek Joa
|
Eunhyek Joa, Hotae Lee, Eric Yongkeun Choi, Francesco Borrelli
|
Energy-Efficient Lane Changes Planning and Control for Connected
Autonomous Vehicles on Urban Roads
|
IEEE Intelligent Vehicle Symposium, Anchorage, Alaska, June 4-7, 2023
|
2023 IEEE Intelligent Vehicles Symposium (IV). 2023
|
10.1109/IV55152.2023.10186574
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel energy-efficient motion planning algorithm for
Connected Autonomous Vehicles (CAVs) on urban roads. The approach consists of
two components: a decision-making algorithm and an optimization-based
trajectory planner. The decision-making algorithm leverages Signal Phase and
Timing (SPaT) information from connected traffic lights to select a lane with
the aim of reducing energy consumption. The algorithm is based on a heuristic
rule which is learned from human driving data. The optimization-based
trajectory planner generates a safe, smooth, and energy-efficient trajectory
toward the selected lane. The proposed strategy is experimentally evaluated in
a Vehicle-in-the-Loop (VIL) setting, where a real test vehicle receives SPaT
information from both actual and virtual traffic lights and autonomously drives
on a testing site, while the surrounding vehicles are simulated. The results
demonstrate that the use of SPaT information in autonomous driving leads to
improved energy efficiency, with the proposed strategy saving 37.1% energy
consumption compared to a lane-keeping algorithm.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 19:34:51 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Joa",
"Eunhyek",
""
],
[
"Lee",
"Hotae",
""
],
[
"Choi",
"Eric Yongkeun",
""
],
[
"Borrelli",
"Francesco",
""
]
] |
new_dataset
| 0.980889 |
2304.14389
|
John Zhang
|
John Z. Zhang, Shuo Yang, Gengshan Yang, Arun L. Bishop, Deva Ramanan,
Zachary Manchester
|
SLoMo: A General System for Legged Robot Motion Imitation from Casual
Videos
|
accepted at RA-L 2023, with ICRA 2024 option
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SLoMo: a first-of-its-kind framework for transferring skilled
motions from casually captured "in the wild" video footage of humans and
animals to legged robots. SLoMo works in three stages: 1) synthesize a
physically plausible reconstructed key-point trajectory from monocular videos;
2) optimize a dynamically feasible reference trajectory for the robot offline
that includes body and foot motion, as well as contact sequences that closely
tracks the key points; 3) track the reference trajectory online using a
general-purpose model-predictive controller on robot hardware. Traditional
motion imitation for legged motor skills often requires expert animators,
collaborative demonstrations, and/or expensive motion capture equipment, all of
which limits scalability. Instead, SLoMo only relies on easy-to-obtain
monocular video footage, readily available in online repositories such as
YouTube. It converts videos into motion primitives that can be executed
reliably by real-world robots. We demonstrate our approach by transferring the
motions of cats, dogs, and humans to example robots including a quadruped (on
hardware) and a humanoid (in simulation). To the best knowledge of the authors,
this is the first attempt at a general-purpose motion transfer framework that
imitates animal and human motions on legged robots directly from casual videos
without artificial markers or labels.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 17:53:27 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 13:45:16 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zhang",
"John Z.",
""
],
[
"Yang",
"Shuo",
""
],
[
"Yang",
"Gengshan",
""
],
[
"Bishop",
"Arun L.",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Manchester",
"Zachary",
""
]
] |
new_dataset
| 0.999526 |
2305.04161
|
Kegang Wang
|
Kegang Wang, Yantao Wei, Mingwen Tong, Jie Gao, Yi Tian, YuJian Ma,
ZhongJin Zhao
|
PhysBench: A Benchmark Framework for rPPG with a New Dataset and
Baseline
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, due to the widespread use of internet videos, physiological
remote sensing has gained more and more attention in the fields of affective
computing and telemedicine. Recovering physiological signals from facial videos
is a challenging task that involves a series of preprocessing steps, image
algorithms, and post-processing to finally restore the waveforms. We propose a
complete and efficient end-to-end training and testing framework that provides
fair comparisons for different algorithms through unified preprocessing and
post-processing. In addition, we introduce a highly synchronized,
lossless-format dataset along with a lightweight algorithm. The dataset contains
over 32 hours (3.53M frames) of video from 58 subjects; by training on our
collected dataset, both our proposed algorithm and existing ones achieve
improvements.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 02:26:00 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Sep 2023 16:27:11 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Wang",
"Kegang",
""
],
[
"Wei",
"Yantao",
""
],
[
"Tong",
"Mingwen",
""
],
[
"Gao",
"Jie",
""
],
[
"Tian",
"Yi",
""
],
[
"Ma",
"YuJian",
""
],
[
"Zhao",
"ZhongJin",
""
]
] |
new_dataset
| 0.999655 |
2305.04334
|
Jonathan Kelly
|
Andrej Janda, Pierre Merriaux, Pierre Olivier, Jonathan Kelly
|
Living in a Material World: Learning Material Properties from
Full-Waveform Flash Lidar Data for Semantic Segmentation
|
In Proceedings of the Conference on Robots and Vision (CRV'23),
Montreal, Canada, Jun. 6-8, 2023
| null |
10.1109/CRV60082.2023.00033
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in lidar technology have made the collection of 3D point clouds fast
and easy. While most lidar sensors return per-point intensity (or reflectance)
values along with range measurements, flash lidar sensors are able to provide
information about the shape of the return pulse. The shape of the return
waveform is affected by many factors, including the distance that the light
pulse travels and the angle of incidence with a surface. Importantly, the shape
of the return waveform also depends on the material properties of the
reflecting surface. In this paper, we investigate whether the material type or
class can be determined from the full-waveform response. First, as a proof of
concept, we demonstrate that the extra information about material class, if
known accurately, can improve performance on scene understanding tasks such as
semantic segmentation. Next, we learn two different full-waveform material
classifiers: a random forest classifier and a temporal convolutional neural
network (TCN) classifier. We find that, in some cases, material types can be
distinguished, and that the TCN generally performs better across a wider range
of materials. However, factors such as angle of incidence, material colour, and
material similarity may hinder overall performance.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 17:07:11 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 18:34:43 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Janda",
"Andrej",
""
],
[
"Merriaux",
"Pierre",
""
],
[
"Olivier",
"Pierre",
""
],
[
"Kelly",
"Jonathan",
""
]
] |
new_dataset
| 0.995682 |
2305.08673
|
Jonathan Kelly
|
Sean Wu and Nicole Amenta and Jiachen Zhou and Sandro Papais and
Jonathan Kelly
|
aUToLights: A Robust Multi-Camera Traffic Light Detection and Tracking
System
|
In Proceedings of the Conference on Robots and Vision (CRV'23),
Montreal, Canada, Jun. 6-8, 2023
| null |
10.1109/CRV60082.2023.00019
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following four successful years in the SAE AutoDrive Challenge Series I, the
University of Toronto is participating in the Series II competition to develop
a Level 4 autonomous passenger vehicle capable of handling various urban
driving scenarios by 2025. Accurate detection of traffic lights and correct
identification of their states is essential for safe autonomous operation in
cities. Herein, we describe our recently-redesigned traffic light perception
system for autonomous vehicles like the University of Toronto's self-driving
car, Artemis. Similar to most traffic light perception systems, we rely
primarily on camera-based object detectors. We deploy the YOLOv5 detector for
bounding box regression and traffic light classification across multiple
cameras and fuse the observations. To improve robustness, we incorporate priors
from high-definition semantic maps and perform state filtering using hidden
Markov models. We demonstrate a multi-camera, real-time-capable traffic light
perception pipeline that handles complex situations including multiple visible
intersections, traffic light variations, temporary occlusion, and flashing
light states. To validate our system, we collected and annotated a varied
dataset incorporating flashing states and a range of occlusion types. Our
results show superior performance in challenging real-world scenarios compared
to single-frame, single-camera object detection.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 14:28:34 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 18:32:25 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Wu",
"Sean",
""
],
[
"Amenta",
"Nicole",
""
],
[
"Zhou",
"Jiachen",
""
],
[
"Papais",
"Sandro",
""
],
[
"Kelly",
"Jonathan",
""
]
] |
new_dataset
| 0.998663 |
2306.00936
|
Juri Opitz
|
Juri Opitz and Shira Wein and Julius Steen and Anette Frank and Nathan
Schneider
|
AMR4NLI: Interpretable and robust NLI measures from semantic graphs
|
International Conference on Computational Semantics (IWCS 2023); v2
fixes an imprecise sentence below Eq. 5
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of natural language inference (NLI) asks whether a given premise
(expressed in NL) entails a given NL hypothesis. NLI benchmarks contain human
ratings of entailment, but the meaning relationships driving these ratings are
not formalized. Can the underlying sentence pair relationships be made more
explicit in an interpretable yet robust fashion? We compare semantic structures
to represent premise and hypothesis, including sets of contextualized
embeddings and semantic graphs (Abstract Meaning Representations), and measure
whether the hypothesis is a semantic substructure of the premise, utilizing
interpretable metrics. Our evaluation on three English benchmarks finds value
in both contextualized embeddings and semantic graphs; moreover, they provide
complementary signals, and can be leveraged together in a hybrid model.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:39:40 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 13:36:27 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Opitz",
"Juri",
""
],
[
"Wein",
"Shira",
""
],
[
"Steen",
"Julius",
""
],
[
"Frank",
"Anette",
""
],
[
"Schneider",
"Nathan",
""
]
] |
new_dataset
| 0.986527 |
2306.10046
|
Alejandro Pe\~na Almansa
|
Alejandro Pe\~na, Aythami Morales, Julian Fierrez, Javier
Ortega-Garcia, Marcos Grande, I\~nigo Puente, Jorge Cordova, Gonzalo Cordova
|
Document Layout Annotation: Database and Benchmark in the Domain of
Public Affairs
|
Accepted in ICDAR 2023 Workshop on Machine Vision and NLP for
Document Analysis
|
Document Analysis and Recognition - ICDAR 2023 Workshops. ICDAR
2023. Lecture Notes in Computer Science, vol 14194
|
10.1007/978-3-031-41501-2_9
| null |
cs.IR cs.CV cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Every day, thousands of digital documents are generated with useful
information for companies, public organizations, and citizens. Given the
impossibility of processing them manually, the automatic processing of these
documents is becoming increasingly necessary in certain sectors. However, this
task remains challenging, since in most cases a text-only based parsing is not
enough to fully understand the information presented through different
components of varying significance. In this regard, Document Layout Analysis
(DLA) has been an interesting research field for many years, which aims to
detect and classify the basic components of a document. In this work, we used a
procedure to semi-automatically annotate digital documents with different
layout labels, including 4 basic layout blocks and 4 text categories. We apply
this procedure to collect a novel database for DLA in the public affairs
domain, using a set of 24 data sources from the Spanish Administration. The
database comprises 37.9K documents with more than 441K document pages, and more
than 8M labels associated to 8 layout block units. The results of our
experiments validate the proposed text labeling procedure with accuracy up to
99%.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 08:21:50 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2023 09:46:21 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Peña",
"Alejandro",
""
],
[
"Morales",
"Aythami",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Ortega-Garcia",
"Javier",
""
],
[
"Grande",
"Marcos",
""
],
[
"Puente",
"Iñigo",
""
],
[
"Cordova",
"Jorge",
""
],
[
"Cordova",
"Gonzalo",
""
]
] |
new_dataset
| 0.989115 |
2306.10404
|
Nishil Patel
|
Nishil Patel, Sebastian Lee, Stefano Sarao Mannelli, Sebastian Goldt,
Andrew Saxe
|
The RL Perceptron: Generalisation Dynamics of Policy Learning in High
Dimensions
|
10 pages, 7 figures, Preprint
| null | null | null |
cs.LG cond-mat.dis-nn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) algorithms have proven transformative in a range
of domains. To tackle real-world domains, these systems often use neural
networks to learn policies directly from pixels or other high-dimensional
sensory input. By contrast, much theory of RL has focused on discrete state
spaces or worst-case analysis, and fundamental questions remain about the
dynamics of policy learning in high-dimensional settings. Here, we propose a
solvable high-dimensional model of RL that can capture a variety of learning
protocols, and derive its typical dynamics as a set of closed-form ordinary
differential equations (ODEs). We derive optimal schedules for the learning
rates and task difficulty - analogous to annealing schemes and curricula during
training in RL - and show that the model exhibits rich behaviour, including
delayed learning under sparse rewards; a variety of learning regimes depending
on reward baselines; and a speed-accuracy trade-off driven by reward
stringency. Experiments on variants of the Procgen game "Bossfight" and Arcade
Learning Environment game "Pong" also show such a speed-accuracy trade-off in
practice. Together, these results take a step towards closing the gap between
theory and practice in high-dimensional RL.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 18:16:51 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 16:38:04 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Jun 2023 10:37:55 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Jul 2023 09:17:09 GMT"
},
{
"version": "v5",
"created": "Sat, 2 Sep 2023 14:24:52 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Patel",
"Nishil",
""
],
[
"Lee",
"Sebastian",
""
],
[
"Mannelli",
"Stefano Sarao",
""
],
[
"Goldt",
"Sebastian",
""
],
[
"Saxe",
"Andrew",
""
]
] |
new_dataset
| 0.973689 |
2306.11259
|
Baozhe Zhang
|
Baozhe Zhang, Xinwei Chen, Zhehan Li, Giovanni Beltrame, Chao Xu, Fei
Gao, and Yanjun Cao
|
CoNi-MPC: Cooperative Non-inertial Frame Based Model Predictive Control
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel solution for UAV control in cooperative
multi-robot systems, which can be used in various scenarios such as
leader-following, landing on a moving base, or specific relative motion with a
target. Unlike classical methods that tackle UAV control in the world frame, we
directly control the UAV in the target coordinate frame, without making motion
assumptions about the target. In detail, we formulate a non-linear model
predictive controller of a UAV, referred to as the agent, within a non-inertial
frame (i.e., the target frame). The system requires the relative states (pose
and velocity), the angular velocity and the accelerations of the target, which
can be obtained by relative localization methods and ubiquitous MEMS IMU
sensors, respectively. This framework eliminates dependencies that are vital in
classical solutions, such as accurate state estimation for both the agent and
target, prior knowledge of the target motion model, and continuous trajectory
re-planning for some complex tasks. We have performed extensive simulations to
investigate the control performance with varying motion characteristics of the
target. Furthermore, we conducted real robot experiments, employing either
simulated relative pose estimation from motion capture systems indoors or
directly from our previous relative pose estimation devices outdoors, to
validate the applicability and feasibility of the proposed approach.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 03:25:35 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 03:02:56 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zhang",
"Baozhe",
""
],
[
"Chen",
"Xinwei",
""
],
[
"Li",
"Zhehan",
""
],
[
"Beltrame",
"Giovanni",
""
],
[
"Xu",
"Chao",
""
],
[
"Gao",
"Fei",
""
],
[
"Cao",
"Yanjun",
""
]
] |
new_dataset
| 0.997533 |
2307.02192
|
Tamas Bisztray
|
Norbert Tihanyi, Tamas Bisztray, Ridhi Jain, Mohamed Amine Ferrag,
Lucas C. Cordeiro, Vasileios Mavroeidis
|
The FormAI Dataset: Generative AI in Software Security Through the Lens
of Formal Verification
|
https://github.com/FormAI-Dataset
| null | null | null |
cs.DB cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the FormAI dataset, a large collection of 112,000
AI-generated compilable and independent C programs with vulnerability
classification. We introduce a dynamic zero-shot prompting technique
constructed to spawn diverse programs utilizing Large Language Models (LLMs).
The dataset is generated by GPT-3.5-turbo and comprises programs with varying
levels of complexity. Some programs handle complicated tasks like network
management, table games, or encryption, while others deal with simpler tasks
like string manipulation. Every program is labeled with the vulnerabilities
found within the source code, indicating the type, line number, and vulnerable
function name. This is accomplished by employing a formal verification method
using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model
checking, abstract interpretation, constraint programming, and satisfiability
modulo theories to reason over safety/security properties in programs. This
approach definitively detects vulnerabilities and offers a formal model known
as a counterexample, thus eliminating the possibility of generating false
positive reports. We have associated the identified vulnerabilities with Common
Weakness Enumeration (CWE) numbers. We make the source code available for the
112,000 programs, accompanied by a separate file containing the
vulnerabilities detected in each program, making the dataset ideal for training
LLMs and machine learning algorithms. Our study unveiled that according to
ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities,
thereby presenting considerable risks to software safety and security.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 10:39:58 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 13:23:29 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Tihanyi",
"Norbert",
""
],
[
"Bisztray",
"Tamas",
""
],
[
"Jain",
"Ridhi",
""
],
[
"Ferrag",
"Mohamed Amine",
""
],
[
"Cordeiro",
"Lucas C.",
""
],
[
"Mavroeidis",
"Vasileios",
""
]
] |
new_dataset
| 0.999826 |
2307.11307
|
Xuelian Cheng
|
Ruyi Zha, Xuelian Cheng, Hongdong Li, Mehrtash Harandi, Zongyuan Ge
|
EndoSurf: Neural Surface Reconstruction of Deformable Tissues with
Stereo Endoscope Videos
|
MICCAI 2023 (Oral, Student Travel Award, Top 3%); Ruyi Zha and Xuelian
Cheng made equal contributions. Corresponding author: Ruyi Zha
([email protected])
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconstructing soft tissues from stereo endoscope videos is an essential
prerequisite for many medical applications. Previous methods struggle to
produce high-quality geometry and appearance due to their inadequate
representations of 3D scenes. To address this issue, we propose a novel
neural-field-based method, called EndoSurf, which effectively learns to
represent a deforming surface from an RGBD sequence. In EndoSurf, we model
surface dynamics, shape, and texture with three neural fields. First, 3D points
are transformed from the observed space to the canonical space using the
deformation field. The signed distance function (SDF) field and radiance field
then predict their SDFs and colors, respectively, with which RGBD images can be
synthesized via differentiable volume rendering. We constrain the learned shape
by tailoring multiple regularization strategies and disentangling geometry and
appearance. Experiments on public endoscope datasets demonstrate that EndoSurf
significantly outperforms existing solutions, particularly in reconstructing
high-fidelity shapes. Code is available at
https://github.com/Ruyi-Zha/endosurf.git.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 02:28:20 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 03:55:03 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zha",
"Ruyi",
""
],
[
"Cheng",
"Xuelian",
""
],
[
"Li",
"Hongdong",
""
],
[
"Harandi",
"Mehrtash",
""
],
[
"Ge",
"Zongyuan",
""
]
] |
new_dataset
| 0.980449 |
2307.11729
|
Ryuto Koike
|
Ryuto Koike, Masahiro Kaneko, Naoaki Okazaki
|
OUTFOX: LLM-generated Essay Detection through In-context Learning with
Adversarially Generated Examples
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) have achieved human-level fluency in text
generation, making it difficult to distinguish between human-written and
LLM-generated texts. This poses a growing risk of misuse of LLMs and demands
the development of detectors to identify LLM-generated texts. However, existing
detectors lack robustness against attacks: they degrade detection accuracy by
simply paraphrasing LLM-generated texts. Furthermore, a malicious user might
attempt to deliberately evade the detectors based on detection results, but
this has not been assumed in previous studies. In this paper, we propose
OUTFOX, a framework that improves the robustness of LLM-generated-text
detectors by allowing both the detector and the attacker to consider each
other's output. In this framework, the attacker uses the detector's prediction
labels as examples for in-context learning and adversarially generates essays
that are harder to detect, while the detector uses the adversarially generated
essays as examples for in-context learning to learn to detect essays from a
strong attacker. Experiments in the domain of student essays show that the
proposed detector improves the detection performance on the attacker-generated
texts by up to +41.3 points in F1-score. Furthermore, the proposed detector
shows a state-of-the-art detection performance: up to 96.9 points in F1-score,
beating existing detectors on non-attacked texts. Finally, the proposed
attacker drastically degrades the performance of detectors by up to -57.0
points F1-score, massively outperforming the baseline paraphrasing method for
evading detection.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 17:40:47 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 10:20:30 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Koike",
"Ryuto",
""
],
[
"Kaneko",
"Masahiro",
""
],
[
"Okazaki",
"Naoaki",
""
]
] |
new_dataset
| 0.987203 |
2307.11772
|
Yixin Su
|
Rui Zhang, Yixin Su, Bayu Distiawan Trisedya, Xiaoyan Zhao, Min Yang,
Hong Cheng, Jianzhong Qi
|
AutoAlign: Fully Automatic and Effective Knowledge Graph Alignment
enabled by Large Language Models
|
14 pages, 5 figures, 4 tables. arXiv admin note: substantial text
overlap with arXiv:2210.08540
| null | null | null |
cs.IR cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of entity alignment between knowledge graphs (KGs) aims to identify
every pair of entities from two different KGs that represent the same entity.
Many machine learning-based methods have been proposed for this task. However,
to the best of our knowledge, existing methods all require manually crafted seed
alignments, which are expensive to obtain. In this paper, we propose the first
fully automatic alignment method named AutoAlign, which does not require any
manually crafted seed alignments. Specifically, for predicate embeddings,
AutoAlign constructs a predicate-proximity-graph with the help of large
language models to automatically capture the similarity between predicates
across two KGs. For entity embeddings, AutoAlign first computes the entity
embeddings of each KG independently using TransE, and then shifts the two KGs'
entity embeddings into the same vector space by computing the similarity
between entities based on their attributes. Thus, both predicate alignment and
entity alignment can be done without manually crafted seed alignments.
AutoAlign is not only fully automatic, but also highly effective. Experiments
using real-world KGs show that AutoAlign improves the performance of entity
alignment significantly compared to state-of-the-art methods.
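For readers unfamiliar with TransE, which the abstract relies on for the entity embeddings, a minimal scoring and loss sketch is shown below; it is generic background material, not AutoAlign's code.

```python
import torch

def transe_score(head, relation, tail, p=1):
    # TransE models a true triple (h, r, t) as h + r ≈ t, so a smaller
    # translation distance means a more plausible triple.
    return torch.norm(head + relation - tail, p=p, dim=-1)

def transe_margin_loss(pos_score, neg_score, margin=1.0):
    # Margin-based ranking loss: push positive triples below negatives by `margin`.
    return torch.clamp(margin + pos_score - neg_score, min=0.0).mean()
```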
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 04:43:24 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 14:18:40 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zhang",
"Rui",
""
],
[
"Su",
"Yixin",
""
],
[
"Trisedya",
"Bayu Distiawan",
""
],
[
"Zhao",
"Xiaoyan",
""
],
[
"Yang",
"Min",
""
],
[
"Cheng",
"Hong",
""
],
[
"Qi",
"Jianzhong",
""
]
] |
new_dataset
| 0.991497 |
2307.12762
|
Yanhui Zhang
|
Yanhui Zhang, Li Liu, Xianhong Xie
|
Two types of narrow-sense negacyclic BCH codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Negacyclic BCH codes are an important subclass of negacyclic codes and are
the best linear codes in most cases, but their parameters are hard to
determine. In this paper, we mainly study two types of negacyclic BCH codes of
length $n=\frac{q^{m}-1}{4},\frac{q^{m}+1}{4}$, and give their dimensions and
the lower bound on their minimum distance. Furthermore, we provide the weight
distribution of narrow-sense negacyclic BCH codes of length $n=\frac{q^m-1}{4}$
for some special designed distances.
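For context (standard background, not a contribution of the paper), the setting can be summarized as follows:

```latex
\text{A $q$-ary negacyclic code of length $n$ is an ideal of the quotient ring } \mathbb{F}_q[x]/(x^{n}+1),
\quad\text{and the two families studied here have lengths } n=\frac{q^{m}-1}{4} \text{ and } n=\frac{q^{m}+1}{4}.
```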
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 13:06:09 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 01:35:48 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Zhang",
"Yanhui",
""
],
[
"Liu",
"Li",
""
],
[
"Xie",
"Xianhong",
""
]
] |
new_dataset
| 0.991401 |
2307.16176
|
Huayuan Ye
|
Huayuan Ye, Chenhui Li, Yang Li and Changbo Wang
|
InvVis: Large-Scale Data Embedding for Invertible Visualization
|
IEEE VIS 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present InvVis, a new approach for invertible visualization, i.e.,
reconstructing or further modifying a visualization from an image. InvVis
allows the embedding of a significant amount of data, such as chart data, chart
information, source code, etc., into visualization images. The encoded image is
perceptually indistinguishable from the original one. We propose a new method
to efficiently express chart data in the form of images, enabling
large-capacity data embedding. We also outline a model based on the invertible
neural network to achieve high-quality data concealing and revealing. We
explore and implement a variety of application scenarios of InvVis.
Additionally, we conduct a series of evaluation experiments to assess our
method from multiple perspectives, including data embedding quality, data
restoration accuracy, data encoding capacity, etc. The result of our
experiments demonstrates the great potential of InvVis in invertible
visualization.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 09:15:36 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Aug 2023 18:47:02 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Sep 2023 13:39:21 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Ye",
"Huayuan",
""
],
[
"Li",
"Chenhui",
""
],
[
"Li",
"Yang",
""
],
[
"Wang",
"Changbo",
""
]
] |
new_dataset
| 0.996737 |
2308.03826
|
Pingping Zhang Dr
|
Xinhao Deng and Pingping Zhang and Wei Liu and Huchuan Lu
|
Recurrent Multi-scale Transformer for High-Resolution Salient Object
Detection
|
This work is the camera-ready version of ACM MM2023
| null | null | null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient Object Detection (SOD) aims to identify and segment the most
conspicuous objects in an image or video. As an important pre-processing step,
it has many potential applications in multimedia and vision tasks. With the
advance of imaging devices, SOD on high-resolution images has recently been in
great demand. However, traditional SOD methods are largely limited to
low-resolution images, making them difficult to adapt to the development of
High-Resolution SOD (HRSOD). Although some HRSOD methods have emerged, there are
no sufficiently large datasets for training and evaluation. Besides, current HRSOD
methods generally produce incomplete object regions and irregular object
boundaries. To address the above issues, in this work, we first propose a new
HRS10K dataset, which contains 10,500 high-quality annotated images at 2K-8K
resolution. As far as we know, it is the largest dataset for the HRSOD task,
which will significantly help future works in training and evaluating models.
Furthermore, to improve the HRSOD performance, we propose a novel Recurrent
Multi-scale Transformer (RMFormer), which recurrently utilizes shared
Transformers and multi-scale refinement architectures. Thus, high-resolution
saliency maps can be generated with the guidance of lower-resolution
predictions. Extensive experiments on both high-resolution and low-resolution
benchmarks show the effectiveness and superiority of the proposed framework.
The source code and dataset are released at:
https://github.com/DrowsyMon/RMFormer.
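As an illustration of the coarse-to-fine recurrence described above (layer choices and interpolation modes are assumptions, not the released code), a refinement loop could look like this:

```python
import torch.nn.functional as F

def coarse_to_fine(predict, image, scales=(0.25, 0.5, 1.0)):
    # `predict` is any shared network mapping (image, prior saliency) -> saliency.
    # Each stage runs at a larger scale, guided by the upsampled coarser prediction.
    prior = None
    for s in scales:
        img_s = F.interpolate(image, scale_factor=s, mode="bilinear", align_corners=False)
        if prior is not None:
            prior = F.interpolate(prior, size=img_s.shape[-2:], mode="bilinear",
                                  align_corners=False)
        prior = predict(img_s, prior)
    return prior  # full-resolution saliency map
```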
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 17:49:04 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 06:03:58 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Deng",
"Xinhao",
""
],
[
"Zhang",
"Pingping",
""
],
[
"Liu",
"Wei",
""
],
[
"Lu",
"Huchuan",
""
]
] |
new_dataset
| 0.993768 |
2308.05967
|
Huikai Wu
|
Shenxiao Mei, Chenglong Ma, Feihong Shen, Huikai Wu
|
YOLOrtho -- A Unified Framework for Teeth Enumeration and Dental Disease
Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting dental diseases from panoramic X-ray images is a standard
procedure for dentists. Normally, a dentist needs to identify diseases and find
the infected teeth. While numerous machine learning models adopting this
two-step procedure have been developed, there has not been an end-to-end model
that can identify teeth and their associated diseases at the same time. To fill
the gap, we develop YOLOrtho, a unified framework for teeth enumeration and
dental disease detection. We develop our model on Dentex Challenge 2023 data,
which consists of three distinct types of annotated data. The first part is
labeled with quadrant, the second part is labeled with quadrant and
enumeration, and the third part is labeled with quadrant, enumeration, and
disease. To further improve detection, we make use of Tufts Dental public
dataset. To fully utilize the data and learn both teeth detection and disease
identification simultaneously, we formulate diseases as attributes attached to
their corresponding teeth. Due to the positional nature of teeth
enumeration, we replace the convolution layer with CoordConv in our model to
provide more position information for the model. We also adjust the model
architecture and insert one more upsampling layer in FPN in favor of large
object detection. Finally, we propose a post-process strategy for teeth layout
that corrects teeth enumeration based on linear sum assignment. Results from
experiments show that our model outperforms a large Diffusion-based model.
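Since the key architectural change above is swapping a convolution for CoordConv, a generic CoordConv layer is sketched below (the textbook formulation, not necessarily the authors' exact layer); the teeth-layout post-process can likewise be prototyped with `scipy.optimize.linear_sum_assignment`, which solves the same linear sum assignment problem.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    # CoordConv: concatenate normalized x/y coordinate channels before a standard
    # convolution so the filters receive explicit position information.
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```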
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 06:54:55 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 03:44:01 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Mei",
"Shenxiao",
""
],
[
"Ma",
"Chenglong",
""
],
[
"Shen",
"Feihong",
""
],
[
"Wu",
"Huikai",
""
]
] |
new_dataset
| 0.998846 |
2308.08479
|
Johan Edstedt
|
Johan Edstedt, Georg B\"okman, M{\aa}rten Wadenb\"ack, Michael
Felsberg
|
DeDoDe: Detect, Don't Describe -- Describe, Don't Detect for Local
Feature Matching
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Keypoint detection is a pivotal step in 3D reconstruction, whereby sets of
(up to) K points are detected in each view of a scene. Crucially, the detected
points need to be consistent between views, i.e., correspond to the same 3D
point in the scene. One of the main challenges with keypoint detection is the
formulation of the learning objective. Previous learning-based methods
typically jointly learn descriptors with keypoints, and treat the keypoint
detection as a binary classification task on mutual nearest neighbours.
However, basing keypoint detection on descriptor nearest neighbours is a proxy
task, which is not guaranteed to produce 3D-consistent keypoints. Furthermore,
this ties the keypoints to a specific descriptor, complicating downstream
usage. In this work, we instead learn keypoints directly from 3D consistency.
To this end, we train the detector to detect tracks from large-scale SfM. As
these points are often overly sparse, we derive a semi-supervised two-view
detection objective to expand this set to a desired number of detections. To
train a descriptor, we maximize the mutual nearest neighbour objective over the
keypoints with a separate network. Results show that our approach, DeDoDe,
achieves significant gains on multiple geometry benchmarks. Code is provided at
https://github.com/Parskatt/DeDoDe
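The mutual-nearest-neighbour criterion at the heart of the descriptor objective can be illustrated with a few lines of PyTorch (illustrative only, not the released training code):

```python
import torch

def mutual_nearest_neighbours(desc_a, desc_b):
    # Keep pairs (i, j) where j is i's nearest neighbour in B and i is j's
    # nearest neighbour in A (cycle consistency between the two descriptor sets).
    dist = torch.cdist(desc_a, desc_b)     # (Na, Nb) pairwise distances
    nn_ab = dist.argmin(dim=1)             # best match in B for each descriptor in A
    nn_ba = dist.argmin(dim=0)             # best match in A for each descriptor in B
    idx_a = torch.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a
    return idx_a[mutual], nn_ab[mutual]
```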
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 16:37:02 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Sep 2023 10:43:12 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Edstedt",
"Johan",
""
],
[
"Bökman",
"Georg",
""
],
[
"Wadenbäck",
"Mårten",
""
],
[
"Felsberg",
"Michael",
""
]
] |
new_dataset
| 0.990917 |
2308.08942
|
Chenxin Xu
|
Chenxin Xu, Robby T. Tan, Yuhong Tan, Siheng Chen, Xinchao Wang,
Yanfeng Wang
|
Auxiliary Tasks Benefit 3D Skeleton-based Human Motion Prediction
|
Accepted to ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Exploring spatial-temporal dependencies from observed motions is one of the
core challenges of human motion prediction. Previous methods mainly focus on
dedicated network structures to model the spatial and temporal dependencies.
This paper considers a new direction by introducing a model learning framework
with auxiliary tasks. In our auxiliary tasks, partial body joints' coordinates
are corrupted by either masking or adding noise, and the goal is to recover the
corrupted coordinates from the remaining ones. To work with auxiliary
tasks, we propose a novel auxiliary-adapted transformer, which can handle
incomplete, corrupted motion data and achieve coordinate recovery via capturing
spatial-temporal dependencies. Through auxiliary tasks, the auxiliary-adapted
transformer is promoted to capture more comprehensive spatial-temporal
dependencies among body joints' coordinates, leading to better feature
learning. Extensive experimental results have shown that our method outperforms
state-of-the-art methods by remarkable margins of 7.2%, 3.7%, and 9.4% in terms
of 3D mean per joint position error (MPJPE) on the Human3.6M, CMU Mocap, and
3DPW datasets, respectively. We also demonstrate that our method is more robust
under data missing cases and noisy data cases. Code is available at
https://github.com/MediaBrain-SJTU/AuxFormer.
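A minimal sketch of the masking/noise corruption used by the auxiliary tasks might look as follows; the corruption ratios and noise scale are assumptions, not the paper's settings.

```python
import torch

def corrupt_joints(motion, mask_ratio=0.2, noise_ratio=0.2, noise_std=0.05):
    # motion: (frames, joints, 3). Mask some joint coordinates and perturb others
    # with Gaussian noise; the auxiliary task is to recover the original values.
    corrupted = motion.clone()
    mask = torch.rand(motion.shape[:2]) < mask_ratio
    corrupted[mask] = 0.0                                   # masking corruption
    noisy = torch.rand(motion.shape[:2]) < noise_ratio
    corrupted[noisy] += noise_std * torch.randn_like(corrupted[noisy])  # noise corruption
    return corrupted, mask, noisy
```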
|
[
{
"version": "v1",
"created": "Thu, 17 Aug 2023 12:26:11 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 13:41:06 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Xu",
"Chenxin",
""
],
[
"Tan",
"Robby T.",
""
],
[
"Tan",
"Yuhong",
""
],
[
"Chen",
"Siheng",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Wang",
"Yanfeng",
""
]
] |
new_dataset
| 0.998396 |
2308.09481
|
Nicola Botta
|
Nicola Botta, Patrik Jansson, Guilherme Horta Alvares Da Silva
|
Types, equations, dimensions and the Pi theorem
|
Submitted for publication in the "Journal of Functional Programming"
in August 2023
| null | null | null |
cs.PL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
The languages of mathematical physics and modelling are endowed with a rich
"grammar of dimensions" that common abstractions of programming languages fail
to represent. We propose a dependently typed domain-specific language (embedded
in Idris) that captures this grammar. We apply it to explain basic notions of
dimensional analysis and Buckingham's Pi theorem. We hope that the language
makes mathematical physics more accessible to computer scientists and
functional programming more palatable to modelers and physicists.
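The actual DSL is embedded in Idris and checks dimensions statically with dependent types; as a loose, dynamically checked analogue (purely illustrative, not part of the paper), dimension bookkeeping can be sketched as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dim: tuple  # exponents of base dimensions, e.g. (length, mass, time)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dim, other.dim)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dim, other.dim)))

    def __add__(self, other):
        # Addition is only meaningful for quantities of the same dimension.
        if self.dim != other.dim:
            raise TypeError("cannot add quantities of different dimensions")
        return Quantity(self.value + other.value, self.dim)

metre = Quantity(1.0, (1, 0, 0))
second = Quantity(1.0, (0, 0, 1))
speed = Quantity(3.0, (1, 0, 0)) / Quantity(2.0, (0, 0, 1))   # dimension L·T^-1
```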
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 14:33:18 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 12:50:26 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Botta",
"Nicola",
""
],
[
"Jansson",
"Patrik",
""
],
[
"Da Silva",
"Guilherme Horta Alvares",
""
]
] |
new_dataset
| 0.994825 |
2308.11592
|
Hao Feng Mr.
|
Hao Feng, Zijian Wang, Jingqun Tang, Jinghui Lu, Wengang Zhou,
Houqiang Li, Can Huang
|
UniDoc: A Universal Large Multimodal Model for Simultaneous Text
Detection, Recognition, Spotting and Understanding
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the era of Large Language Models (LLMs), tremendous strides have been made
in the field of multimodal understanding. However, existing advanced algorithms
fall short of effectively utilizing the immense representation capabilities
and rich world knowledge inherent to these large pre-trained models, and the
beneficial connections among tasks within the context of text-rich scenarios
have not been sufficiently explored. In this work, we introduce UniDoc, a novel
multimodal model equipped with text detection and recognition capabilities,
which are deficient in existing approaches. Moreover, UniDoc capitalizes on the
beneficial interactions among tasks to enhance the performance of each
individual task. To implement UniDoc, we perform unified multimodal instruct
tuning on the contributed large-scale instruction following datasets.
Quantitative and qualitative experimental results show that UniDoc sets
state-of-the-art scores across multiple challenging benchmarks. To the best of
our knowledge, this is the first large multimodal model capable of simultaneous
text detection, recognition, spotting, and understanding.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 17:32:34 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Sep 2023 04:28:42 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Feng",
"Hao",
""
],
[
"Wang",
"Zijian",
""
],
[
"Tang",
"Jingqun",
""
],
[
"Lu",
"Jinghui",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Li",
"Houqiang",
""
],
[
"Huang",
"Can",
""
]
] |
new_dataset
| 0.993365 |
2308.13365
|
Xuyuan Li
|
Xuyuan Li, Zengqiang Shang, Jian Liu, Hua Hua, Peiyang Shi, Pengyuan
Zhang
|
Expressive paragraph text-to-speech synthesis with multi-step
variational autoencoder
|
5 pages, 1 figure, 2 tables
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Neural networks have been able to generate high-quality single-sentence
speech with substantial expressiveness. However, paragraph-level speech synthesis
remains challenging due to the need for coherent acoustic features while
delivering fluctuating speech styles. Meanwhile, training these models directly
on over-length speech leads to a deterioration in the quality of the synthesized
speech. To address these problems, we propose a
high-quality and expressive paragraph speech synthesis system with a multi-step
variational autoencoder. Specifically, we employ multi-step latent variables to
capture speech information at different grammatical levels before utilizing
these features in parallel to generate speech waveform. We also propose a
three-step training method to improve the decoupling ability. Our model was
trained on a single-speaker French audiobook corpus released at Blizzard
Challenge 2023. Experimental results underscore the significant superiority of
our system over baseline models.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 13:22:42 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Aug 2023 13:08:25 GMT"
},
{
"version": "v3",
"created": "Sat, 2 Sep 2023 06:45:47 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Li",
"Xuyuan",
""
],
[
"Shang",
"Zengqiang",
""
],
[
"Liu",
"Jian",
""
],
[
"Hua",
"Hua",
""
],
[
"Shi",
"Peiyang",
""
],
[
"Zhang",
"Pengyuan",
""
]
] |
new_dataset
| 0.966696 |
2308.13401
|
Carla Binucci
|
Carla Binucci, Aaron B\"ungener, Giuseppe Di Battista, Walter Didimo,
Vida Dujmovi\'c, Seok-Hee Hong, Michael Kaufmann, Giuseppe Liotta, Pat Morin,
Alessandra Tappini
|
Min-$k$-planar Drawings of Graphs
|
Appears in the Proceedings of the 31st International Symposium on
Graph Drawing and Network Visualization (GD 2023)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
The study of nonplanar drawings of graphs with restricted crossing
configurations is a well-established topic in graph drawing, often referred to
as beyond-planar graph drawing. One of the most studied types of drawings in
this area is that of $k$-planar drawings $(k \geq 1)$, where each edge cannot
cross more than $k$ times. We generalize $k$-planar drawings, by introducing
the new family of min-$k$-planar drawings. In a min-$k$-planar drawing edges
can cross an arbitrary number of times, but for any two crossing edges, one of
the two must have no more than $k$ crossings. We prove a general upper bound on
the number of edges of min-$k$-planar drawings, a finer upper bound for $k=3$,
and tight upper bounds for $k=1,2$. Also, we study the inclusion relations
between min-$k$-planar graphs (i.e., graphs admitting min-$k$-planar drawings)
and $k$-planar graphs.
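Restating the definition from the abstract in symbols (for orientation only):

```latex
\text{A drawing } \Gamma \text{ is min-}k\text{-planar if for every pair of crossing edges } e_1, e_2:
\quad \min\{\operatorname{cr}(e_1), \operatorname{cr}(e_2)\} \le k,
\quad \text{where } \operatorname{cr}(e) \text{ is the number of crossings on } e.
```

In particular, every $k$-planar drawing is min-$k$-planar, since there $\operatorname{cr}(e) \le k$ for every edge.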
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 14:24:14 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 13:38:27 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Binucci",
"Carla",
""
],
[
"Büngener",
"Aaron",
""
],
[
"Di Battista",
"Giuseppe",
""
],
[
"Didimo",
"Walter",
""
],
[
"Dujmović",
"Vida",
""
],
[
"Hong",
"Seok-Hee",
""
],
[
"Kaufmann",
"Michael",
""
],
[
"Liotta",
"Giuseppe",
""
],
[
"Morin",
"Pat",
""
],
[
"Tappini",
"Alessandra",
""
]
] |
new_dataset
| 0.995658 |
2308.13679
|
Jon Alvarez Justo
|
Jon A. Justo, Joseph Garrett, Dennis D. Langer, Marie B. Henriksen,
Radu T. Ionescu, and Tor A. Johansen
|
An Open Hyperspectral Dataset with Sea-Land-Cloud Ground-Truth from the
HYPSO-1 Satellite
|
Computer Vision, Artificial Intelligence, Remote Sensing, Earth
Observation, Hyperspectral Imaging, Classification, Labeled Data
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperspectral imaging, employed in satellites for space remote sensing such as
HYPSO-1, faces constraints due to the scarcity of labeled datasets, which hampers
the training of AI models that demand ground-truth annotations. In this work, we
introduce The HYPSO-1 Sea-Land-Cloud-Labeled Dataset, an open dataset with 200
diverse hyperspectral images from the HYPSO-1 mission, available in both raw
and calibrated forms for scientific research in Earth observation. Moreover, 38
of these images from different countries include ground-truth labels at
pixel-level totaling about 25 million spectral signatures labeled for
sea/land/cloud categories. To demonstrate the potential of the dataset and its
labeled subset, we have additionally optimized a deep learning model (1D Fully
Convolutional Network), achieving superior performance to the current state of
the art. The complete dataset, ground-truth labels, deep learning model, and
software code are openly accessible for download at the website
https://ntnu-smallsat-lab.github.io/hypso1_sea_land_clouds_dataset/ .
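As a rough idea of what a 1D fully convolutional per-pixel classifier over a spectral signature looks like (channel widths and kernel sizes are placeholders, not the configuration reported by the authors):

```python
import torch.nn as nn

def make_1d_fcn(num_classes=3):
    # Input: (batch, 1, num_bands) spectra; output: (batch, num_classes) logits
    # for the sea / land / cloud categories.
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv1d(32, num_classes, kernel_size=3, padding=1),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
    )
```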
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 21:35:22 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Sep 2023 18:31:20 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Justo",
"Jon A.",
""
],
[
"Garrett",
"Joseph",
""
],
[
"Langer",
"Dennis D.",
""
],
[
"Henriksen",
"Marie B.",
""
],
[
"Ionescu",
"Radu T.",
""
],
[
"Johansen",
"Tor A.",
""
]
] |
new_dataset
| 0.999826 |
2308.14334
|
Younggeol Cho
|
Youngrae Kim, Younggeol Cho, Thanh-Tung Nguyen, Dongman Lee
|
MetaWeather: Few-Shot Weather-Degraded Image Restoration via Degradation
Pattern Matching
|
12 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-world vision tasks frequently suffer from the appearance of adverse
weather conditions including rain, fog, snow, and raindrops in captured images.
Recently, several generic methods for restoring weather-degraded images have
been proposed, aiming to remove multiple types of adverse weather effects
present in the images. However, these methods have considered weather as
discrete and mutually exclusive variables, leading to failure in generalizing
to unforeseen weather conditions beyond the scope of the training data, such as
the co-occurrence of rain, fog, and raindrops. Therefore, weather-degraded
image restoration models should have flexible adaptability to the current
unknown weather condition to ensure reliable and optimal performance. The
adaptation method should also be able to cope with data scarcity for real-world
adaptation. This paper proposes MetaWeather, a few-shot weather-degraded image
restoration method for arbitrary weather conditions. For this, we devise the
core piece of MetaWeather, coined Degradation Pattern Matching Module (DPMM),
which leverages representations from a few-shot support set by matching
features between input and sample images under new weather conditions. In
addition, we build meta-knowledge with episodic meta-learning on top of our
MetaWeather architecture to provide flexible adaptability. In the meta-testing
phase, we adopt a parameter-efficient fine-tuning method to preserve the
prebuilt knowledge and avoid the overfitting problem. Experiments on the BID
Task II.A dataset show our method achieves the best performance on PSNR and
SSIM compared to state-of-the-art image restoration methods. Code is available
at (TBA).
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 06:25:40 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 07:35:58 GMT"
}
] | 2023-09-06T00:00:00 |
[
[
"Kim",
"Youngrae",
""
],
[
"Cho",
"Younggeol",
""
],
[
"Nguyen",
"Thanh-Tung",
""
],
[
"Lee",
"Dongman",
""
]
] |
new_dataset
| 0.997581 |