Column schema (⌀ marks a nullable column):

id (string, 9-10 chars) | submitter (string, 2-52 chars, ⌀) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, ⌀) | journal-ref (string, 4-345 chars, ⌀) | doi (string, 11-120 chars, ⌀) | report-no (string, 2-243 chars, ⌀) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
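The rows below follow this schema, one field per cell in source order. As a minimal sketch, assuming the dump is also available as JSON Lines with exactly these field names (the file name is hypothetical):

```python
import json

def iter_records(path):
    """Yield one metadata record (dict) per line of a JSON Lines dump."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Keep only rows the classifier tagged as new datasets with high confidence.
# "arxiv_rows.jsonl" is a hypothetical file name for this dump.
confident = [
    r for r in iter_records("arxiv_rows.jsonl")
    if r["prediction"] == "new_dataset" and r["probability"] >= 0.99
]
print(len(confident), "high-confidence rows")
```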
2308.03671
|
Michael Färber
|
Michael Färber, David Lamprecht, Johan Krause, Linn Aung, Peter
Haase
|
SemOpenAlex: The Scientific Landscape in 26 Billion RDF Triples
|
accepted at ISWC'23
| null | null | null |
cs.DL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present SemOpenAlex, an extensive RDF knowledge graph that contains over
26 billion triples about scientific publications and their associated entities,
such as authors, institutions, journals, and concepts. SemOpenAlex is licensed
under CC0, providing free and open access to the data. We offer the data
through multiple channels, including RDF dump files, a SPARQL endpoint, and as
a data source in the Linked Open Data cloud, complete with resolvable URIs and
links to other data sources. Moreover, we provide embeddings for knowledge
graph entities using high-performance computing. SemOpenAlex enables a broad
range of use-case scenarios, such as exploratory semantic search via our
website, large-scale scientific impact quantification, and other forms of
scholarly big data analytics within and across scientific disciplines.
Additionally, it enables academic recommender systems, such as recommending
collaborators, publications, and venues, including explainability capabilities.
Finally, SemOpenAlex can serve for RDF query optimization benchmarks, creating
scholarly knowledge-guided language models, and as a hub for semantic
scientific publishing.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 15:46:39 GMT"
}
] | 2023-08-08T00:00:00 |
[
[
"Färber",
"Michael",
""
],
[
"Lamprecht",
"David",
""
],
[
"Krause",
"Johan",
""
],
[
"Aung",
"Linn",
""
],
[
"Haase",
"Peter",
""
]
] |
new_dataset
| 0.999211 |
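The SemOpenAlex record above advertises a public SPARQL endpoint. A minimal access sketch with SPARQLWrapper; the endpoint URL is an assumption inferred from the project name, not something stated in the record:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# The endpoint URL below is an assumption based on the abstract's mention of
# a public SPARQL endpoint; substitute the address published by the project.
sparql = SPARQLWrapper("https://semopenalex.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")

for b in sparql.query().convert()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```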
2308.03688
|
Xiao Liu
|
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu
Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan
Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan
Sun, Minlie Huang, Yuxiao Dong, Jie Tang
|
AgentBench: Evaluating LLMs as Agents
|
38 pages
| null | null | null |
cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 25 LLMs (including APIs
and open-sourced models) shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and open-sourced competitors. It also
serves as a component of an ongoing project with wider coverage and deeper
consideration towards systematic LLM evaluation. Datasets, environments, and an
integrated evaluation package for AgentBench are released at
https://github.com/THUDM/AgentBench
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 16:08:11 GMT"
}
] | 2023-08-08T00:00:00 |
[
[
"Liu",
"Xiao",
""
],
[
"Yu",
"Hao",
""
],
[
"Zhang",
"Hanchen",
""
],
[
"Xu",
"Yifan",
""
],
[
"Lei",
"Xuanyu",
""
],
[
"Lai",
"Hanyu",
""
],
[
"Gu",
"Yu",
""
],
[
"Ding",
"Hangliang",
""
],
[
"Men",
"Kaiwen",
""
],
[
"Yang",
"Kejuan",
""
],
[
"Zhang",
"Shudan",
""
],
[
"Deng",
"Xiang",
""
],
[
"Zeng",
"Aohan",
""
],
[
"Du",
"Zhengxiao",
""
],
[
"Zhang",
"Chenhui",
""
],
[
"Shen",
"Sheng",
""
],
[
"Zhang",
"Tianjun",
""
],
[
"Su",
"Yu",
""
],
[
"Sun",
"Huan",
""
],
[
"Huang",
"Minlie",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.976954 |
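AgentBench, described above, evaluates LLMs through multi-turn interaction with text environments. A generic sketch of such an evaluation loop, with toy stand-ins rather than AgentBench's actual API (which lives in the linked repository):

```python
from dataclasses import dataclass, field

@dataclass
class TextEnv:
    """Toy stand-in for one of the benchmark's interactive environments."""
    steps_left: int = 3
    history: list = field(default_factory=list)

    def observe(self) -> str:
        return f"{self.steps_left} steps remaining. What do you do?"

    def act(self, action: str) -> bool:
        self.history.append(action)
        self.steps_left -= 1
        return self.steps_left == 0  # episode done?

def dummy_agent(observation: str) -> str:
    # A real harness would call an LLM API here with the dialogue history.
    return "inspect surroundings"

env = TextEnv()
done = False
while not done:
    done = env.act(dummy_agent(env.observe()))
print("episode trace:", env.history)
```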
2308.03690
|
Davide Ferrari
|
Davide Ferrari, Andrea Pupa, Alberto Signoretti, Cristian Secchi
|
Safe Multimodal Communication in Human-Robot Collaboration
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The new industrial settings are characterized by humans and robots that work in
close proximity, cooperating to perform the required job. Such a collaboration,
however, requires attention to many aspects. Firstly, it is crucial to enable
communication between these two actors that is natural and efficient. Secondly,
the robot's behavior must always comply with the safety regulations, ensuring a
safe collaboration at all times. In this paper, we propose a framework that
enables multi-channel communication between humans and robots by leveraging
multimodal fusion of voice and gesture commands while always respecting safety
regulations. The framework is validated through a comparative experiment
demonstrating that, thanks to multimodal communication, the robot can extract
valuable information for performing the required task and, additionally, with
the safety layer, can scale its speed to ensure the operator's safety.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 16:08:21 GMT"
}
] | 2023-08-08T00:00:00 |
[
[
"Ferrari",
"Davide",
""
],
[
"Pupa",
"Andrea",
""
],
[
"Signoretti",
"Alberto",
""
],
[
"Secchi",
"Cristian",
""
]
] |
new_dataset
| 0.99518 |
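The safety layer in the record above scales robot speed to protect the operator. An illustrative distance-based scaling rule; this is not the paper's control law, and the thresholds are invented for the sketch:

```python
def scaled_speed(v_max: float, distance: float,
                 d_stop: float = 0.5, d_full: float = 2.0) -> float:
    """Scale commanded speed with human-robot distance (meters).

    Below d_stop the robot halts; beyond d_full it may run at v_max;
    in between, speed ramps linearly. Thresholds are illustrative only.
    """
    if distance <= d_stop:
        return 0.0
    if distance >= d_full:
        return v_max
    return v_max * (distance - d_stop) / (d_full - d_stop)

print(scaled_speed(1.2, 1.0))  # partially scaled speed
```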
2308.03741
|
Muhammad Bilal Shaikh
|
Muhammad Bilal Shaikh, Douglas Chai, Syed Mohammed Shamsul Islam and
Naveed Akhtar
|
MAiVAR-T: Multimodal Audio-image and Video Action Recognizer using
Transformers
|
6 pages, 7 figures, 4 tables, Peer reviewed, Accepted @ The 11th
European Workshop on Visual Information Processing (EUVIP) will be held on
11th-14th September 2023, in Gjøvik, Norway. arXiv admin note: text
overlap with arXiv:2103.15691 by other authors
| null | null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In line with the human capacity to perceive the world by simultaneously
processing and integrating high-dimensional inputs from multiple modalities
like vision and audio, we propose a novel model, MAiVAR-T (Multimodal
Audio-Image to Video Action Recognition Transformer). This model employs an
intuitive approach for the combination of audio-image and video modalities,
with a primary aim to escalate the effectiveness of multimodal human action
recognition (MHAR). At the core of MAiVAR-T lies the significance of distilling
substantial representations from the audio modality and transmuting these into
the image domain. Subsequently, this audio-image depiction is fused with the
video modality to formulate a unified representation. This concerted approach
strives to exploit the contextual richness inherent in both audio and video
modalities, thereby promoting action recognition. In contrast to existing
state-of-the-art strategies that focus solely on audio or video modalities,
MAiVAR-T demonstrates superior performance. Our extensive empirical evaluations
conducted on a benchmark action recognition dataset corroborate the model's
remarkable performance. This underscores the potential enhancements derived
from integrating audio and video modalities for action recognition purposes.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 11:00:25 GMT"
}
] | 2023-08-08T00:00:00 |
[
[
"Shaikh",
"Muhammad Bilal",
""
],
[
"Chai",
"Douglas",
""
],
[
"Islam",
"Syed Mohammed Shamsul",
""
],
[
"Akhtar",
"Naveed",
""
]
] |
new_dataset
| 0.988825 |
2001.03426
|
Kumar Sankar Ray
|
Mandrita Mondal and Kumar S. Ray
|
DNA Linear Block Codes: Generation, Error-detection and Error-correction
of DNA Codeword
|
16 pages, 1 figure, 5 tables
|
International Journal of Bioinformatics Intelligent Computing.
2022;1(2):103-126
| null | null |
cs.IT math.IT q-bio.BM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the modern age, the increasing complexity of computation and communication
technology is leading us towards the necessity of a new paradigm. As a result,
unconventional approaches like DNA coding theory are gaining considerable
attention. The storage capacity, information processing, and transmission
properties of DNA molecules stimulate the notion of DNA coding theory as well
as DNA cryptography. In this paper we generate DNA codewords using DNA linear
block codes, which ensure the secure transmission of information. In the
proposed code design strategy, a DNA-based XOR operation (DNAX) is applied for
effective construction of DNA codewords, which are quadruples generated over
the alphabet consisting of the four DNA bases adenine, thymine, guanine, and
cytosine. Through worked-out examples we explain the use of the generator
matrix and parity check matrix in encryption and decryption of coded data in
the form of short single-stranded DNA sequences. The newly developed technique
can detect as well as correct errors in the transmission of DNA codewords
through biological channels from the sender to the intended receiver. Through
DNA coding theory we are expanding the paths towards data compression and error
correction in the form of DNA strands. This leads us towards a broader domain
of DNA cryptography.
|
[
{
"version": "v1",
"created": "Tue, 31 Dec 2019 13:19:47 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Aug 2023 03:45:35 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Mondal",
"Mandrita",
""
],
[
"Ray",
"Kumar S.",
""
]
] |
new_dataset
| 0.992408 |
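The DNA coding record above builds codewords with a DNA-based XOR (DNAX). A minimal sketch under an assumed 2-bit base encoding (A=00, C=01, G=10, T=11; the paper's own mapping may differ):

```python
BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}  # assumed encoding
BASES = {v: k for k, v in BITS.items()}

def dnax(x: str, y: str) -> str:
    """Base-wise XOR of two equal-length DNA strings via the 2-bit encoding."""
    assert len(x) == len(y)
    return "".join(BASES[BITS[a] ^ BITS[b]] for a, b in zip(x, y))

msg, key = "ACGT", "GGTA"
cipher = dnax(msg, key)
assert dnax(cipher, key) == msg  # XOR is self-inverse, so decoding recovers msg
print(cipher)
```

Because XOR is self-inverse, applying DNAX with the same key both encodes and decodes a codeword.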
2201.04581
|
Truong Hoang Van
|
Truong V. Hoang and Quang H. Nguyen and Cuong Q. Nguyen and Phong X.
Nguyen and Hoang D. Nguyen
|
Sound-Dr: Reliable Sound Dataset and Baseline Artificial Intelligence
System for Respiratory Illnesses
|
9 pages, PHMAP2023, PHM
|
IJPHM (2023)
| null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
As the burden of respiratory diseases continues to fall on society worldwide,
this paper proposes a high-quality and reliable dataset of human sounds for
studying respiratory illnesses, including pneumonia and COVID-19. It consists
of coughing, mouth breathing, and nose breathing sounds together with metadata
on related clinical characteristics. We also develop a proof-of-concept system
for establishing baselines and benchmarking against multiple datasets, such as
Coswara and COUGHVID. Our comprehensive experiments show that the Sound-Dr
dataset has richer features, better performance, and is more robust to dataset
shifts in various machine learning tasks. It is promising for a wide range of
real-time applications on mobile devices. The proposed dataset and system will
serve as practical tools to support healthcare professionals in diagnosing
respiratory disorders. The dataset and code are publicly available here:
https://github.com/ReML-AI/Sound-Dr/.
|
[
{
"version": "v1",
"created": "Wed, 12 Jan 2022 17:15:17 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 11:12:11 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 15:28:28 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Hoang",
"Truong V.",
""
],
[
"Nguyen",
"Quang H.",
""
],
[
"Nguyen",
"Cuong Q.",
""
],
[
"Nguyen",
"Phong X.",
""
],
[
"Nguyen",
"Hoang D.",
""
]
] |
new_dataset
| 0.999816 |
2205.07871
|
Martin Khannouz
|
Martin Khannouz, Tristan Glatard
|
Mondrian Forest for Data Stream Classification Under Memory Constraints
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Supervised learning algorithms generally assume the availability of enough
memory to store their data model during the training and test phases. However,
in the Internet of Things, this assumption is unrealistic when data comes in
the form of infinite data streams, or when learning algorithms are deployed on
devices with reduced amounts of memory. In this paper, we adapt the online
Mondrian forest classification algorithm to work with memory constraints on
data streams. In particular, we design five out-of-memory strategies to update
Mondrian trees with new data points when the memory limit is reached. Moreover,
we design trimming mechanisms to make Mondrian trees more robust to concept
drifts under memory constraints. We evaluate our algorithms on a variety of
real and simulated datasets, and we conclude with recommendations on their use
in different situations: the Extend Node strategy appears as the best
out-of-memory strategy in all configurations, whereas different trimming
mechanisms should be adopted depending on whether a concept drift is expected.
All our methods are implemented in the OrpailleCC open-source library and are
ready to be used on embedded systems and connected objects.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 15:35:03 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 15:27:06 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 12:54:36 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Khannouz",
"Martin",
""
],
[
"Glatard",
"Tristan",
""
]
] |
new_dataset
| 0.974273 |
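The Mondrian forest record above centers on what happens when a data point arrives after the memory budget is exhausted. A schematic sketch of that budget check, loosely in the spirit of the Extend Node strategy; the actual algorithms live in the OrpailleCC library:

```python
class BudgetedTree:
    """Toy tree that stops allocating nodes once a memory budget is reached."""

    def __init__(self, max_nodes: int):
        self.max_nodes = max_nodes
        self.nodes = [{"count": 0}]  # root only

    def update(self, x) -> None:
        if len(self.nodes) < self.max_nodes:
            self.nodes.append({"count": 1})  # normal growth: add a node
        else:
            # Out of memory: fold the point into an existing node's
            # statistics instead of allocating a new one.
            self.nodes[-1]["count"] += 1

tree = BudgetedTree(max_nodes=3)
for point in range(10):
    tree.update(point)
print(len(tree.nodes), "nodes,", tree.nodes[-1]["count"], "points absorbed")
```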
2205.12602
|
Ouhan Huang
|
Yuxing Chen, Renshu Gu, Ouhan Huang and Gangyong Jia
|
VTP: Volumetric Transformer for Multi-view Multi-person 3D Pose
Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Volumetric Transformer Pose estimator (VTP), the first 3D
volumetric transformer framework for multi-view multi-person 3D human pose
estimation. VTP aggregates features from 2D keypoints in all camera views and
directly learns the spatial relationships in the 3D voxel space in an
end-to-end fashion. The aggregated 3D features are passed through 3D
convolutions before being flattened into sequential embeddings and fed into a
transformer. A residual structure is designed to further improve the
performance. In addition, the sparse Sinkhorn attention is empowered to reduce
the memory cost, which is a major bottleneck for volumetric representations,
while also achieving excellent performance. The output of the transformer is
again concatenated with 3D convolutional features by a residual design. The
proposed VTP framework integrates the high performance of the transformer with
volumetric representations, which can be used as a good alternative to the
convolutional backbones. Experiments on the Shelf, Campus and CMU Panoptic
benchmarks show promising results in terms of both Mean Per Joint Position
Error (MPJPE) and Percentage of Correctly estimated Parts (PCP). Our code will
be available.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 09:26:42 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Chen",
"Yuxing",
""
],
[
"Gu",
"Renshu",
""
],
[
"Huang",
"Ouhan",
""
],
[
"Jia",
"Gangyong",
""
]
] |
new_dataset
| 0.989173 |
2207.12850
|
Toluwani Aremu
|
Toluwani Aremu, Li Zhiyuan, Reem Alameeri, Mustaqeem Khan,
Abdulmotaleb El Saddik
|
SSIVD-Net: A Novel Salient Super Image Classification & Detection
Technique for Weaponized Violence
|
5 tables, 3 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection of violence and weaponized violence in closed-circuit television
(CCTV) footage requires a comprehensive approach. In this work, we introduce
the \emph{Smart-City CCTV Violence Detection (SCVD)} dataset, specifically
designed to facilitate the learning of weapon distribution in surveillance
videos. To tackle the complexities of analyzing 3D surveillance video for
violence recognition tasks, we propose a novel technique called,
\emph{SSIVD-Net} (\textbf{S}alient-\textbf{S}uper-\textbf{I}mage for
\textbf{V}iolence \textbf{D}etection). Our method reduces 3D video data
complexity, dimensionality, and information loss while improving inference,
performance, and explainability through the use of Salient-Super-Image
representations. Considering the scalability and sustainability requirements of
futuristic smart cities, the authors introduce the \emph{Salient-Classifier}, a
novel architecture combining a kernelized approach with a residual learning
strategy. We evaluate variations of SSIVD-Net and Salient Classifier on our
SCVD dataset and benchmark against state-of-the-art (SOTA) models commonly
employed in violence detection. Our approach exhibits significant improvements
in detecting both weaponized and non-weaponized violence instances. By
advancing the SOTA in violence detection, our work offers a practical and
scalable solution suitable for real-world applications. The proposed
methodology not only addresses the challenges of violence detection in CCTV
footage but also contributes to the understanding of weapon distribution in
smart surveillance. Ultimately, our research findings should enable smarter and
more secure cities, as well as enhance public safety measures.
|
[
{
"version": "v1",
"created": "Tue, 26 Jul 2022 12:31:01 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 13:37:55 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Sep 2022 12:53:55 GMT"
},
{
"version": "v4",
"created": "Thu, 26 Jan 2023 12:29:11 GMT"
},
{
"version": "v5",
"created": "Wed, 22 Feb 2023 04:26:03 GMT"
},
{
"version": "v6",
"created": "Wed, 7 Jun 2023 09:26:49 GMT"
},
{
"version": "v7",
"created": "Fri, 4 Aug 2023 09:54:51 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Aremu",
"Toluwani",
""
],
[
"Zhiyuan",
"Li",
""
],
[
"Alameeri",
"Reem",
""
],
[
"Khan",
"Mustaqeem",
""
],
[
"Saddik",
"Abdulmotaleb El",
""
]
] |
new_dataset
| 0.999667 |
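SSIVD-Net, described above, reduces a 3D video clip to a 2D "super image". One simple realization of the general idea, sampling frames evenly and tiling them into a grid (the paper's salient-frame selection is more involved):

```python
import numpy as np

def super_image(video: np.ndarray, grid: int = 2) -> np.ndarray:
    """Tile grid*grid evenly sampled frames (T,H,W,C) into one 2D image."""
    t = video.shape[0]
    idx = np.linspace(0, t - 1, grid * grid).astype(int)  # even temporal sampling
    frames = video[idx]
    rows = [np.concatenate(frames[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

clip = np.random.rand(16, 64, 64, 3)  # dummy 16-frame clip
print(super_image(clip).shape)        # (128, 128, 3)
```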
2208.00919
|
Felix Ott
|
Felix Ott and Nisha Lakshmana Raichur and David Rügamer and Tobias
Feigl and Heiko Neumann and Bernd Bischl and Christopher Mutschler
|
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose
Regression and Odometry-aided Absolute Pose Regression
|
Under review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual-inertial localization is a key problem in computer vision and robotics
applications such as virtual reality, self-driving cars, and aerial vehicles.
The goal is to estimate an accurate pose of an object when either the
environment or the dynamics are known. Absolute pose regression (APR)
techniques directly regress the absolute pose from an image input in a known
scene using convolutional and spatio-temporal networks. Odometry methods
perform relative pose regression (RPR) that predicts the relative pose from a
known object dynamic (visual or inertial inputs). The localization task can be
improved by retrieving information from both data sources for a cross-modal
setup, which is a challenging problem due to contradictory tasks. In this work,
we conduct a benchmark to evaluate deep multimodal fusion based on pose graph
optimization and attention networks. Auxiliary and Bayesian learning are
utilized for the APR task. We show accuracy improvements for the APR-RPR task
and for the RPR-RPR task for aerial vehicles and hand-held devices. We conduct
experiments on the EuRoC MAV and PennCOSYVIO datasets and record and evaluate a
novel industry dataset.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 15:05:26 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 10:01:28 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 08:36:02 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Ott",
"Felix",
""
],
[
"Raichur",
"Nisha Lakshmana",
""
],
[
"Rügamer",
"David",
""
],
[
"Feigl",
"Tobias",
""
],
[
"Neumann",
"Heiko",
""
],
[
"Bischl",
"Bernd",
""
],
[
"Mutschler",
"Christopher",
""
]
] |
new_dataset
| 0.980199 |
2211.11248
|
Zhaokai Wang
|
Le Zhuo, Zhaokai Wang, Baisen Wang, Yue Liao, Chenxi Bao, Stanley
Peng, Songhao Han, Aixi Zhang, Fei Fang, Si Liu
|
Video Background Music Generation: Dataset, Method and Evaluation
|
Accepted by ICCV2023
| null | null | null |
cs.CV cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Music is essential when editing videos, but selecting music manually is
difficult and time-consuming. Thus, we seek to automatically generate
background music tracks given video input. This is a challenging task since it
requires music-video datasets, efficient architectures for video-to-music
generation, and reasonable metrics, none of which currently exist. To close
this gap, we introduce a complete recipe including dataset, benchmark model,
and evaluation metric for video background music generation. We present SymMV,
a video and symbolic music dataset with various musical annotations. To the
best of our knowledge, it is the first video-music dataset with rich musical
annotations. We also propose a benchmark video background music generation
framework named V-MusProd, which utilizes music priors of chords, melody, and
accompaniment along with video-music relations of semantic, color, and motion
features. To address the lack of objective metrics for video-music
correspondence, we design a retrieval-based metric VMCP built upon a powerful
video-music representation learning model. Experiments show that with our
dataset, V-MusProd outperforms the state-of-the-art method in both music
quality and correspondence with videos. We believe our dataset, benchmark
model, and evaluation metric will boost the development of video background
music generation. Our dataset and code are available at
https://github.com/zhuole1025/SymMV.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 08:39:48 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Aug 2023 15:57:36 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Zhuo",
"Le",
""
],
[
"Wang",
"Zhaokai",
""
],
[
"Wang",
"Baisen",
""
],
[
"Liao",
"Yue",
""
],
[
"Bao",
"Chenxi",
""
],
[
"Peng",
"Stanley",
""
],
[
"Han",
"Songhao",
""
],
[
"Zhang",
"Aixi",
""
],
[
"Fang",
"Fei",
""
],
[
"Liu",
"Si",
""
]
] |
new_dataset
| 0.999549 |
2212.07595
|
Kuan Xu
|
Kuan Xu, Yuefan Hao, Shenghai Yuan, Chen Wang, Lihua Xie
|
AirVO: An Illumination-Robust Point-Line Visual Odometry
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an illumination-robust visual odometry (VO) system that
incorporates both accelerated learning-based corner point algorithms and an
extended line feature algorithm. To be robust to dynamic illumination, the
proposed system employs the convolutional neural network (CNN) and graph neural
network (GNN) to detect and match reliable and informative corner points. Then
point feature matching results and the distribution of point and line features
are utilized to match and triangulate lines. By accelerating CNN and GNN parts
and optimizing the pipeline, the proposed system is able to run in real-time on
low-power embedded platforms. The proposed VO was evaluated on several datasets
with varying illumination conditions, and the results show that it outperforms
other state-of-the-art VO systems in terms of accuracy and robustness. The
open-source nature of the proposed system allows for easy implementation and
customization by the research community, enabling further development and
improvement of VO for various applications.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 02:55:12 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 01:52:31 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 08:11:33 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Xu",
"Kuan",
""
],
[
"Hao",
"Yuefan",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Wang",
"Chen",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.979233 |
2302.02969
|
Bruno Sa
|
Bruno Sá, Luca Valente, José Martins, Davide Rossi, Luca Benini
and Sandro Pinto
|
CVA6 RISC-V Virtualization: Architecture, Microarchitecture, and Design
Space Exploration
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtualization is a key technology used in a wide range of applications, from
cloud computing to embedded systems. Over the last few years, mainstream
computer architectures were extended with hardware virtualization support,
giving rise to a set of virtualization technologies (e.g., Intel VT, Arm VE)
that are now proliferating in modern processors and SoCs. In this article, we
describe our work on hardware virtualization support in the RISC-V CVA6 core.
Our contribution is multifold and encompasses architecture, microarchitecture,
and design space exploration. In particular, we highlight the design of a set
of microarchitectural enhancements (i.e., G-Stage Translation Lookaside Buffer
(GTLB), L2 TLB) to alleviate the virtualization performance overhead. We also
perform a Design Space Exploration (DSE) and accompanying post-layout
simulations (based on 22nm FDX technology) to assess Performance, Power, and
Area (PPA). Further, we map design variants on an FPGA platform (Genesys 2) to
assess the functional performance-area trade-off. Based on the DSE, we select
an optimal design point for the CVA6 with hardware virtualization support. For
this optimal hardware configuration, we collected functional performance
results by running the MiBench benchmark on Linux atop Bao hypervisor for a
single-core configuration. We observed a performance speedup of up to 16%
(approx. 12.5% on average) compared with virtualization-aware non-optimized
design at the minimal cost of 0.78% in area and 0.33% in power. Finally, all
work described in this article is publicly available and open-sourced for the
community to further evaluate additional design configurations and software
stacks.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 17:59:35 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 10:22:46 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 12:47:12 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Sá",
"Bruno",
""
],
[
"Valente",
"Luca",
""
],
[
"Martins",
"José",
""
],
[
"Rossi",
"Davide",
""
],
[
"Benini",
"Luca",
""
],
[
"Pinto",
"Sandro",
""
]
] |
new_dataset
| 0.988717 |
2303.10442
|
Lorenzo Galati Giordano
|
Lorenzo Galati Giordano, Giovanni Geraci, Marc Carrascosa, and Boris
Bellalta
|
What Will Wi-Fi 8 Be? A Primer on IEEE 802.11bn Ultra High Reliability
| null | null | null | null |
cs.NI cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
What will Wi-Fi 8 be? Driven by the strict requirements of emerging
applications, next-generation Wi-Fi is set to prioritize Ultra High Reliability
(UHR) above all. In this paper, we explore the journey towards IEEE 802.11bn
UHR, the amendment that will form the basis of Wi-Fi 8. We first present new
use cases calling for further Wi-Fi evolution and also outline current
standardization, certification, and spectrum allocation activities, sharing
updates from the newly formed UHR Study Group. We then introduce the disruptive
new features envisioned for Wi-Fi 8 and discuss the associated research
challenges. Among those, we focus on access point coordination and demonstrate
that it could build upon 802.11be multi-link operation to make Ultra High
Reliability a reality in Wi-Fi 8.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 15:51:48 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 15:05:26 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Giordano",
"Lorenzo Galati",
""
],
[
"Geraci",
"Giovanni",
""
],
[
"Carrascosa",
"Marc",
""
],
[
"Bellalta",
"Boris",
""
]
] |
new_dataset
| 0.995692 |
2303.12745
|
Xiaobao Guo
|
Xiaobao Guo, Nithish Muthuchamy Selvaraj, Zitong Yu, Adams Wai-Kin
Kong, Bingquan Shen, Alex Kot
|
Audio-Visual Deception Detection: DOLOS Dataset and Parameter-Efficient
Crossmodal Learning
|
11 pages, 6 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Deception detection in conversations is a challenging yet important task,
having pivotal applications in many fields such as credibility assessment in
business, multimedia anti-fraud, and customs security. Despite this, deception
detection research is hindered by the lack of high-quality deception datasets,
as well as the difficulties of learning multimodal features effectively. To
address this issue, we introduce DOLOS (the name "DOLOS" comes from Greek
mythology), the largest gameshow deception detection dataset with rich
deceptive conversations. DOLOS includes 1,675 video clips featuring 213
subjects, and it has been labeled with audio-visual feature annotations. We
provide train-test, duration, and gender protocols to investigate the impact of
different factors. We benchmark our dataset on previously proposed deception
detection approaches. To further improve the performance by fine-tuning fewer
parameters, we propose Parameter-Efficient Crossmodal Learning (PECL), where a
Uniform Temporal Adapter (UT-Adapter) explores temporal attention in
transformer-based architectures, and a crossmodal fusion module, Plug-in
Audio-Visual Fusion (PAVF), combines crossmodal information from audio-visual
features. Based on the rich fine-grained audio-visual annotations on DOLOS, we
also exploit multi-task learning to enhance performance by concurrently
predicting deception and audio-visual features. Experimental results
demonstrate the desired quality of the DOLOS dataset and the effectiveness of
the PECL. The DOLOS dataset and the source codes are available at
https://github.com/NMS05/Audio-Visual-Deception-Detection-DOLOS-Dataset-and-Parameter-Efficient-Crossmodal-Learning/tree/main.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 08:12:16 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Aug 2023 03:54:49 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Guo",
"Xiaobao",
""
],
[
"Selvaraj",
"Nithish Muthuchamy",
""
],
[
"Yu",
"Zitong",
""
],
[
"Kong",
"Adams Wai-Kin",
""
],
[
"Shen",
"Bingquan",
""
],
[
"Kot",
"Alex",
""
]
] |
new_dataset
| 0.999488 |
2304.11793
|
Craig Reynolds
|
Craig Reynolds
|
Coevolution of Camouflage
|
16 pages, 20 figures
| null |
10.1162/isal_a_00583
| null |
cs.GR cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Camouflage in nature seems to arise from competition between predator and
prey. To survive, predators must find prey, and prey must avoid being found.
This work simulates an abstract model of that adversarial relationship. It
looks at crypsis through evolving prey camouflage patterns (as color textures)
in competition with evolving predator vision. During their "lifetime" predators
learn to better locate camouflaged prey. The environment for this 2D simulation
is provided by a set of photographs, typically of natural scenes. This model is
based on two evolving populations, one of prey and another of predators. Mutual
conflict between these populations can produce both effective prey camouflage
and predators skilled at "breaking" camouflage. The result is an open source
artificial life model to help study camouflage in nature, and the perceptual
phenomenon of camouflage more generally.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 02:36:25 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 23:43:32 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Reynolds",
"Craig",
""
]
] |
new_dataset
| 0.982336 |
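The camouflage record above simulates an arms race between prey and predator populations. A toy coevolution loop with scalar stand-ins for camouflage quality and detection skill, replacing the paper's texture rendering and learned predator vision:

```python
import random

def mutate(x: float) -> float:
    return min(1.0, max(0.0, x + random.gauss(0, 0.05)))

prey = [random.random() for _ in range(20)]   # camouflage quality (toy)
preds = [random.random() for _ in range(20)]  # detection skill (toy)
for _ in range(100):
    # Each prey faces a random predator; it survives if camouflage wins.
    matchups = [(c, random.choice(preds)) for c in prey]
    survivors = [c for c, d in matchups if c > d] or [max(prey)]
    prey = [mutate(random.choice(survivors)) for _ in range(20)]
    # Predators reproduce in proportion to successful detections.
    winners = [d for c, d in matchups if d >= c] or [max(preds)]
    preds = [mutate(random.choice(winners)) for _ in range(20)]
print(f"mean camouflage {sum(prey)/20:.2f}, mean detection {sum(preds)/20:.2f}")
```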
2305.18340
|
Maya Karanouh
|
Maya Karanouh
|
Mapping ChatGPT in Mainstream Media to Unravel Jobs and Diversity
Challenges: Early Quantitative Insights through Sentiment Analysis and Word
Frequency Analysis
| null | null | null | null |
cs.CY cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The exponential growth in user acquisition and popularity of OpenAI's ChatGPT,
an artificial intelligence(AI) powered chatbot, was accompanied by widespread
mainstream media coverage. This article presents a quantitative data analysis
of the early trends and sentiments revealed by conducting text mining and NLP
methods onto a corpus of 10,902 mainstream news headlines related to the
subject of ChatGPT and artificial intelligence, from the launch of ChatGPT in
November 2022 to March 2023. Sentiment analysis revealed that ChatGPT and
artificial intelligence were perceived more positively than negatively in the
mainstream media. Regarding word frequency, over sixty-five percent of the
top-frequency words focused on Big Tech issues and actors, while topics such as
jobs, diversity, ethics, copyright, gender, and women were poorly represented
or completely absent, accounting for only six percent of the total corpus. This
article is a critical analysis of the power structures and collusions between
Big Tech and Big Media in their hegemonic exclusion of diversity and job
challenges from mainstream media.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 15:10:51 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 19:21:02 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Karanouh",
"Maya",
""
]
] |
new_dataset
| 0.977176 |
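The headline study above rests on word-frequency counting over a news corpus. A minimal sketch of that step with the standard library; the headlines and stopword list here are placeholders:

```python
from collections import Counter
import re

headlines = [
    "ChatGPT sparks new AI race among Big Tech firms",
    "OpenAI's ChatGPT raises questions about jobs and ethics",
]
stopwords = {"the", "a", "and", "about", "among", "new"}  # illustrative list

tokens = [
    w for h in headlines
    for w in re.findall(r"[a-z']+", h.lower())
    if w not in stopwords
]
print(Counter(tokens).most_common(5))
```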
2307.14850
|
Ahmet Yavuz Uluslu
|
Ahmet Yavuz Uluslu and Gerold Schneider
|
Turkish Native Language Identification
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present the first application of Native Language
Identification (NLI) for the Turkish language. NLI involves predicting the
writer's first language by analysing their writing in different languages.
While most NLI research has focused on English, our study extends its scope to
Turkish. We used the recently constructed Turkish Learner Corpus and employed a
combination of three syntactic features (CFG production rules, part-of-speech
n-grams, and function words) with L2 texts to demonstrate their effectiveness
in this task.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 13:28:31 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 13:27:14 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 11:11:32 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Uluslu",
"Ahmet Yavuz",
""
],
[
"Schneider",
"Gerold",
""
]
] |
new_dataset
| 0.997817 |
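The Turkish NLI record above classifies a writer's first language from syntactic features such as POS n-grams and function words. A generic sketch with scikit-learn on toy data (the paper's CFG production-rule features are omitted):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins: POS-tagged learner texts and their authors' L1 labels.
pos_sequences = ["DET NOUN VERB DET NOUN", "NOUN VERB ADJ NOUN",
                 "DET ADJ NOUN VERB", "NOUN NOUN VERB DET NOUN"]
l1_labels = ["German", "Russian", "German", "Russian"]

# Word-level n-grams over POS tags approximate the paper's POS n-gram features.
model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(pos_sequences, l1_labels)
print(model.predict(["DET NOUN VERB ADJ"]))
```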
2308.01404
|
Aidan O'Gara
|
Aidan O'Gara
|
Hoodwinked: Deception and Cooperation in a Text-Based Game for Language
Models
|
Added reference for McKenzie 2023; updated acknowledgements
| null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Are current language models capable of deception and lie detection? We study
this question by introducing a text-based game called $\textit{Hoodwinked}$,
inspired by Mafia and Among Us. Players are locked in a house and must find a
key to escape, but one player is tasked with killing the others. Each time a
murder is committed, the surviving players have a natural language discussion
then vote to banish one player from the game. We conduct experiments with
agents controlled by GPT-3, GPT-3.5, and GPT-4 and find evidence of deception
and lie detection capabilities. The killer often denies their crime and accuses
others, leading to measurable effects on voting outcomes. More advanced models
are more effective killers, outperforming smaller models in 18 of 24 pairwise
comparisons. Secondary metrics provide evidence that this improvement is not
mediated by different actions, but rather by stronger persuasive skills during
discussions. To evaluate the ability of AI agents to deceive humans, we make
this game publicly available at https://hoodwinked.ai/.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 17:22:09 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Aug 2023 00:57:06 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"O'Gara",
"Aidan",
""
]
] |
new_dataset
| 0.993398 |
2308.01925
|
Petar Radanliev
|
Dr Petar Radanliev, Professor David De Roure, Dr Peter Novitzky, Dr
Ivo Sluganovic
|
Accessibility and Inclusiveness of New Information and Communication
Technologies for Disabled Users and Content Creators in the Metaverse
| null | null | null | null |
cs.CY cs.CV cs.MM cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the proliferation of Blockchain Metaverse projects, the inclusion of
physically disabled individuals in the Metaverse remains distant, with limited
standards and regulations in place. However, the article proposes a concept of
the Metaverse that leverages emerging technologies, such as Virtual and
Augmented Reality, and the Internet of Things, to enable greater engagement of
disabled creatives. This approach aims to enhance inclusiveness in the
Metaverse landscape. Based on the findings, the paper concludes that the active
involvement of physically disabled individuals in the design and development of
Metaverse platforms is crucial for promoting inclusivity. The proposed
framework for accessibility and inclusiveness in Virtual, Augmented, and Mixed
realities of decentralised Metaverses provides a basis for the meaningful
participation of disabled creatives. The article emphasises the importance of
addressing the mechanisms for art production by individuals with disabilities
in the emerging Metaverse landscape. Additionally, it highlights the need for
further research and collaboration to establish standards and regulations that
facilitate the inclusion of physically disabled individuals in Metaverse
projects.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 18:39:12 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Radanliev",
"Dr Petar",
""
],
[
"De Roure",
"Professor David",
""
],
[
"Novitzky",
"Dr Peter",
""
],
[
"Sluganovic",
"Dr Ivo",
""
]
] |
new_dataset
| 0.976437 |
2308.01940
|
Qi Yang
|
Qi Yang, Joel Jung, Haiqiang Wang, Xiaozhong Xu, and Shan Liu
|
TSMD: A Database for Static Color Mesh Quality Assessment Study
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Static meshes with texture maps are widely used in modern industrial and
manufacturing sectors, attracting considerable attention in the mesh
compression community due to their huge amount of data. To facilitate the study
of static mesh compression algorithms and objective quality metrics, we create
the Tencent - Static Mesh Dataset (TSMD) containing 42 reference meshes with
rich visual characteristics. 210 distorted samples are generated by the lossy
compression scheme developed for the Call for Proposals on polygonal static
mesh coding, released on June 23 by the Alliance for Open Media Volumetric
Visual Media group. Using processed video sequences, a large-scale,
crowdsourcing-based, subjective experiment was conducted to collect subjective
scores from 74 viewers. The dataset undergoes analysis to validate its sample
diversity and Mean Opinion Scores (MOS) accuracy, establishing its
heterogeneous nature and reliability. State-of-the-art objective metrics are
evaluated on the new dataset. Pearson and Spearman correlations around 0.75 are
reported, deviating from results typically observed on less heterogeneous
datasets, demonstrating the need for further development of more robust
metrics. The TSMD, including meshes, PVSs, bitstreams, and MOS, is made
publicly available at the following location:
https://multimedia.tencent.com/resources/tsmd.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 02:19:20 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Yang",
"Qi",
""
],
[
"Jung",
"Joel",
""
],
[
"Wang",
"Haiqiang",
""
],
[
"Xu",
"Xiaozhong",
""
],
[
"Liu",
"Shan",
""
]
] |
new_dataset
| 0.999701 |
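The TSMD record above reports Pearson and Spearman correlations around 0.75 between objective metrics and MOS. A minimal sketch of that computation with SciPy, on toy score vectors:

```python
from scipy.stats import pearsonr, spearmanr

mos = [3.1, 4.0, 2.2, 4.5, 3.6]     # subjective mean opinion scores (toy)
metric = [2.8, 3.9, 2.6, 4.4, 3.2]  # objective metric scores (toy)

plcc, _ = pearsonr(metric, mos)
srocc, _ = spearmanr(metric, mos)
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```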
2308.01979
|
Saleem Ahmed
|
Saleem Ahmed, Bhavin Jawade, Shubham Pandey, Srirangaraj Setlur, Venu
Govindaraju
|
RealCQA: Scientific Chart Question Answering as a Test-bed for
First-Order Logic
|
This a pre-print version. Accepted at ICDAR '23
| null | null | null |
cs.CV cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a comprehensive study of the chart visual question-answering (QA) task,
to address the challenges faced in comprehending and extracting data from chart
visualizations within documents. Despite efforts to tackle this problem using
synthetic charts, solutions are limited by the shortage of annotated real-world
data. To fill this gap, we introduce a benchmark and dataset for chart visual
QA on real-world charts, offering a systematic analysis of the task and a novel
taxonomy for template-based chart question creation. Our contribution includes
the introduction of a new answer type, 'list', with both ranked and unranked
variations. Our study is conducted on a real-world chart dataset from
scientific literature, showcasing higher visual complexity compared to other
works. Our focus is on template-based QA and how it can serve as a standard for
evaluating the first-order logic capabilities of models. The results of our
experiments, conducted on a real-world out-of-distribution dataset, provide a
robust evaluation of large-scale pre-trained models and advance the field of
chart visual QA and formal logic verification for neural networks in general.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 18:21:38 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Ahmed",
"Saleem",
""
],
[
"Jawade",
"Bhavin",
""
],
[
"Pandey",
"Shubham",
""
],
[
"Setlur",
"Srirangaraj",
""
],
[
"Govindaraju",
"Venu",
""
]
] |
new_dataset
| 0.999731 |
2308.01987
|
Md. Tanvir Rouf Shawon
|
G. M. Shahariar, Md. Tanvir Rouf Shawon, Faisal Muhammad Shah,
Mohammad Shafiul Alam and Md. Shahriar Mahbub
|
Bengali Fake Reviews: A Benchmark Dataset and Detection System
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The proliferation of fake reviews on various online platforms has created a
major concern for both consumers and businesses. Such reviews can deceive
customers and cause damage to the reputation of products or services, making it
crucial to identify them. Although the detection of fake reviews has been
extensively studied in English language, detecting fake reviews in non-English
languages such as Bengali is still a relatively unexplored research area. This
paper introduces the Bengali Fake Review Detection (BFRD) dataset, the first
publicly available dataset for identifying fake reviews in Bengali. The dataset
consists of 7710 non-fake and 1339 fake food-related reviews collected from
social media posts. To convert non-Bengali words in a review, a unique pipeline
has been proposed that translates English words to their corresponding Bengali
meaning and also back transliterates Romanized Bengali to Bengali. We have
conducted rigorous experimentation using multiple deep learning and pre-trained
transformer language models to develop a reliable detection system. Finally, we
propose a weighted ensemble model that combines four pre-trained transformers:
BanglaBERT, BanglaBERT Base, BanglaBERT Large, and BanglaBERT Generator.
According to the experiment results, the proposed ensemble model obtained a
weighted F1-score of 0.9843 on 13390 reviews, including 1339 actual fake
reviews and 5356 augmented fake reviews generated with the nlpaug library. The
remaining 6695 reviews were randomly selected from the 7710 non-fake instances.
The model achieved a 0.9558 weighted F1-score when the fake reviews were
augmented using the bnaug library.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 18:49:45 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Shahariar",
"G. M.",
""
],
[
"Shawon",
"Md. Tanvir Rouf",
""
],
[
"Shah",
"Faisal Muhammad",
""
],
[
"Alam",
"Mohammad Shafiul",
""
],
[
"Mahbub",
"Md. Shahriar",
""
]
] |
new_dataset
| 0.99988 |
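The Bengali fake-review record above combines four transformers by weighted ensembling. A minimal sketch of weighted probability averaging with NumPy; the weights and probabilities are illustrative, not the paper's:

```python
import numpy as np

# Per-model class probabilities for 3 reviews x 2 classes (fake, non-fake).
probs = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]],  # model 1
    [[0.8, 0.2], [0.3, 0.7], [0.4, 0.6]],  # model 2
    [[0.7, 0.3], [0.1, 0.9], [0.7, 0.3]],  # model 3
    [[0.6, 0.4], [0.2, 0.8], [0.5, 0.5]],  # model 4
])
weights = np.array([0.3, 0.3, 0.2, 0.2])  # illustrative model weights

ensemble = np.tensordot(weights, probs, axes=1)  # weighted average, shape (3, 2)
print(ensemble.argmax(axis=1))  # 0 = fake, 1 = non-fake
```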
2308.02136
|
Yusuke Kato
|
Yusuke Kato, Ryo Okumura, Tadahiro Taniguchi
|
World-Model-Based Control for Industrial box-packing of Multiple Objects
using NewtonianVAE
|
7 pages, 8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The process of industrial box-packing, which involves the accurate placement
of multiple objects, requires high-accuracy positioning and sequential actions.
When a robot is tasked with placing an object at a specific location with high
accuracy, it is important not only to have information about the location of
the object to be placed, but also the posture of the object grasped by the
robotic hand. Often, industrial box-packing requires the sequential placement
of identically shaped objects into a single box. The robot's action should be
determined by the same learned model. In factories, new kinds of products often
appear and there is a need for a model that can easily adapt to them.
Therefore, it should be easy to collect data to train the model. In this study,
we designed a robotic system to automate real-world industrial tasks, employing
a vision-based learning control model. We propose in-hand-view-sensitive
Newtonian variational autoencoder (ihVS-NVAE), which employs an RGB camera to
obtain in-hand postures of objects. We demonstrate that our model, trained for
a single object-placement task, can handle sequential tasks without additional
training. To evaluate the efficacy of the proposed model, we employed a real robot
to perform sequential industrial box-packing of multiple objects. Results
showed that the proposed model achieved a 100% success rate in industrial
box-packing tasks, thereby outperforming the state-of-the-art and conventional
approaches, underscoring its superior effectiveness and potential in industrial
tasks.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 04:58:06 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Kato",
"Yusuke",
""
],
[
"Okumura",
"Ryo",
""
],
[
"Taniguchi",
"Tadahiro",
""
]
] |
new_dataset
| 0.979878 |
2308.02242
|
Nam Chu
|
Nam H. Chu, Nguyen Van Huynh, Diep N. Nguyen, Dinh Thai Hoang, Shimin
Gong, Tao Shu, Eryk Dutkiewicz, and Khoa T. Phan
|
Countering Eavesdroppers with Meta-learning-based Cooperative Ambient
Backscatter Communications
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article introduces a novel lightweight framework using ambient
backscattering communications to counter eavesdroppers. In particular, our
framework divides an original message into two parts: (i) the active-transmit
message transmitted by the transmitter using conventional RF signals and (ii)
the backscatter message transmitted by an ambient backscatter tag that
backscatters upon the active signals emitted by the transmitter. Notably, the
backscatter tag does not generate its own signal, making it difficult for an
eavesdropper to detect the backscattered signals unless they have prior
knowledge of the system. Here, we assume that without decoding/knowing the
backscatter message, the eavesdropper is unable to decode the original message.
Even in scenarios where the eavesdropper can capture both messages,
reconstructing the original message is a complex task without understanding the
intricacies of the message-splitting mechanism. A challenge in our proposed
framework is to effectively decode the backscattered signals at the receiver,
often accomplished using the maximum likelihood (MLK) approach. However, such a
method may require a complex mathematical model together with perfect channel
state information (CSI). To address this issue, we develop a novel deep
meta-learning-based signal detector that can not only effectively decode the
weak backscattered signals without requiring perfect CSI but also quickly adapt
to a new wireless environment with very little knowledge. Simulation results
show that our proposed learning approach, without requiring perfect CSI and
complex mathematical model, can achieve a bit error ratio close to that of the
MLK-based approach. They also clearly show the efficiency of the proposed
approach in dealing with eavesdropping attacks and the lack of training data
for deep learning models in practical scenarios.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 10:43:17 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Chu",
"Nam H.",
""
],
[
"Van Huynh",
"Nguyen",
""
],
[
"Nguyen",
"Diep N.",
""
],
[
"Hoang",
"Dinh Thai",
""
],
[
"Gong",
"Shimin",
""
],
[
"Shu",
"Tao",
""
],
[
"Dutkiewicz",
"Eryk",
""
],
[
"Phan",
"Khoa T.",
""
]
] |
new_dataset
| 0.958266 |
2308.02249
|
Dasaem Jeong PhD
|
Danbinaerin Han, Rafael Caro Repetto, Dasaem Jeong
|
Finding Tori: Self-supervised Learning for Analyzing Korean Folk Song
|
Accepted at 24th International Society for Music Information
Retrieval Conference (ISMIR 2023)
| null | null | null |
cs.SD cs.IR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a computational analysis of the field recording
dataset of approximately 700 hours of Korean folk songs, which were recorded
around 1980-90s. Because most of the songs were sung by non-expert musicians
without accompaniment, the dataset poses several challenges. To address these
challenges, we utilized self-supervised learning with a convolutional neural
network based on pitch contour, then analyzed how the musical concept of tori,
a classification system defined by a specific scale, ornamental notes, and an
idiomatic melodic contour, is captured by the model. The experimental result
shows that our approach can better capture the characteristics of tori compared
to traditional pitch histograms. Using our approaches, we have examined how
musical discussions proposed in existing academia manifest in the actual field
recordings of Korean folk songs.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 11:13:15 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Han",
"Danbinaerin",
""
],
[
"Repetto",
"Rafael Caro",
""
],
[
"Jeong",
"Dasaem",
""
]
] |
new_dataset
| 0.992696 |
2308.02299
|
Qiang Zhou
|
Qiang Zhou, Chaohui Yu, Shaofeng Zhang, Sitong Wu, Zhibing Wang, Fan
Wang
|
RegionBLIP: A Unified Multi-modal Pre-training Framework for Holistic
and Regional Comprehension
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we investigate extending the comprehension of Multi-modal Large
Language Models (MLLMs) to regional objects. To this end, we propose to extract
features corresponding to regional objects as soft prompts for LLM, which
provides a straightforward and scalable approach and eliminates the need for
LLM fine-tuning. To effectively extract regional features from regular image
features and irregular point cloud features, we present a novel and unified
position-assisted feature extraction module. Furthermore, training an MLLM from
scratch is highly time-consuming. Thus, we propose incrementally extending
existing pre-trained MLLMs to comprehend more modalities and the regional
objects of those modalities. Specifically, we freeze the Q-Former from BLIP-2,
an impressive MLLM, and optimize the modality-specific Lora parameters in
Q-Former and LLM for each newly introduced modality. The freezing of the
Q-Former eliminates the need for extensive pre-training on massive image-text
data. The frozen Q-Former pre-trained on massive image-text data is also
beneficial for the pre-training on image-region-text data. We name our
framework RegionBLIP. We pre-train RegionBLIP on image-region-text,
point-cloud-text, and point-cloud-region-text data. Experimental results verify
that RegionBLIP can preserve the image comprehension capability of BLIP-2 and
further gain a comprehension of the newly introduced point cloud modality and
regional objects. The Data, Code, and Pre-trained models will be available at
https://github.com/mightyzau/RegionBLIP.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 14:17:22 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Zhou",
"Qiang",
""
],
[
"Yu",
"Chaohui",
""
],
[
"Zhang",
"Shaofeng",
""
],
[
"Wu",
"Sitong",
""
],
[
"Wang",
"Zhibing",
""
],
[
"Wang",
"Fan",
""
]
] |
new_dataset
| 0.993545 |
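A key design choice in the RegionBLIP record above is freezing the pre-trained Q-Former while training only the new modality-specific parameters. In PyTorch the pattern looks like this, with generic stand-in modules (RegionBLIP's actual classes are in its repository):

```python
import torch
from torch import nn

qformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)  # stand-in for the pre-trained Q-Former
lora_params = nn.Linear(64, 64)  # stand-in for modality-specific LoRA weights

for p in qformer.parameters():
    p.requires_grad = False  # freeze: no gradients, no optimizer updates

optimizer = torch.optim.AdamW(
    [p for p in lora_params.parameters() if p.requires_grad], lr=1e-4
)
trainable = sum(p.numel() for p in lora_params.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in qformer.parameters())
print(f"training {trainable} params, keeping {frozen} frozen")
```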
2308.02317
|
Rohan Agarwal
|
Rohan Agarwal, Zhiyu Lin, Mark Riedl
|
A Controllable Co-Creative Agent for Game System Design
|
Thesis
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many advancements have been made in procedural content generation for games,
and with mixed-initiative co-creativity, have the potential for great benefits
to human designers. However, co-creative systems for game generation are
typically limited to specific genres, rules, or games, limiting the creativity
of the designer. We seek to model games abstractly enough to apply to any
genre, focusing on designing game systems and mechanics, and create a
controllable, co-creative agent that can collaborate on these designs. We
present a model of games using state-machine-like components and resource
flows, a set of controllable metrics, a design evaluator simulating
playthroughs with these metrics, and an evolutionary design balancer and
generator. We find this system to be both able to express a wide range of games
and able to be human-controllable for future co-creative applications.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 13:34:51 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Agarwal",
"Rohan",
""
],
[
"Lin",
"Zhiyu",
""
],
[
"Riedl",
"Mark",
""
]
] |
new_dataset
| 0.966198 |
2308.02356
|
Huan Zhong
|
Huan Zhong and Chen Wu
|
T-UNet: Triplet UNet for Change Detection in High-Resolution Remote
Sensing Images
|
21 pages, 11 figures, 6 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote sensing image change detection aims to identify the differences
between images acquired at different times in the same area. It is widely used
in land management, environmental monitoring, disaster assessment and other
fields. Currently, most change detection methods are based on Siamese network
structure or early fusion structure. Siamese structure focuses on extracting
object features at different times but lacks attention to change information,
which leads to false alarms and missed detections. Early fusion (EF) structure
focuses on extracting features after the fusion of images of different phases
but ignores the significance of object features at different times for
detecting change details, making it difficult to accurately discern the edges
of changed objects. To address these issues and obtain more accurate results,
we propose a novel network, Triplet UNet (T-UNet), based on a three-branch
encoder, which is capable of simultaneously extracting the object features and the
change features between the pre- and post-time-phase images through triplet
encoder. To effectively interact and fuse the features extracted from the three
branches of triplet encoder, we propose a multi-branch spatial-spectral
cross-attention module (MBSSCA). In the decoder stage, we introduce the channel
attention mechanism (CAM) and spatial attention mechanism (SAM) to fully mine
and integrate detailed textures information at the shallow layer and semantic
localization information at the deep layer.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 14:44:11 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Zhong",
"Huan",
""
],
[
"Wu",
"Chen",
""
]
] |
new_dataset
| 0.984088 |
2308.02357
|
Sanju Tiwari Dr
|
Nandana Mihindukulasooriya, Sanju Tiwari, Carlos F. Enguix, Kusum Lata
|
Text2KGBench: A Benchmark for Ontology-Driven Knowledge Graph Generation
from Text
|
15 pages, 3 figures, 4 tables. Accepted at ISWC 2023 (Resources
Track)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The recent advances in large language models (LLM) and foundation models with
emergent capabilities have been shown to improve the performance of many NLP
tasks. LLMs and Knowledge Graphs (KG) can complement each other such that LLMs
can be used for KG construction or completion while existing KGs can be used
for different tasks such as making LLM outputs explainable or fact-checking in
Neuro-Symbolic manner. In this paper, we present Text2KGBench, a benchmark to
evaluate the capabilities of language models to generate KGs from natural
language text guided by an ontology. Given an input ontology and a set of
sentences, the task is to extract facts from the text while complying with the
given ontology (concepts, relations, domain/range constraints) and being
faithful to the input sentences. We provide two datasets (i) Wikidata-TekGen
with 10 ontologies and 13,474 sentences and (ii) DBpedia-WebNLG with 19
ontologies and 4,860 sentences. We define seven evaluation metrics to measure
fact extraction performance, ontology conformance, and hallucinations by LLMs.
Furthermore, we provide results for two baseline models, Vicuna-13B and
Alpaca-LoRA-13B using automatic prompt generation from test cases. The baseline
results show that there is room for improvement using both Semantic Web and
Natural Language Processing techniques.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 14:47:15 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Mihindukulasooriya",
"Nandana",
""
],
[
"Tiwari",
"Sanju",
""
],
[
"Enguix",
"Carlos F.",
""
],
[
"Lata",
"Kusum",
""
]
] |
new_dataset
| 0.999629 |
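Text2KGBench, described above, scores extracted triples against gold facts and an input ontology. A minimal sketch of precision/recall plus a crude ontology-conformance check, using a relation vocabulary as a stand-in for full domain/range constraints:

```python
gold = {("Marie_Curie", "birthPlace", "Warsaw"),
        ("Marie_Curie", "field", "Physics")}
pred = {("Marie_Curie", "birthPlace", "Warsaw"),
        ("Marie_Curie", "spouse", "Pierre_Curie")}
ontology_relations = {"birthPlace", "field"}  # toy ontology vocabulary

tp = len(gold & pred)                      # true positives
precision = tp / len(pred)
recall = tp / len(gold)
conformance = sum(r in ontology_relations for _, r, _ in pred) / len(pred)
print(f"P={precision:.2f} R={recall:.2f} ontology conformance={conformance:.2f}")
```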
2308.02369
|
Diqun Yan
|
JiaCheng Deng, Li Dong, Jiahao Chen, Diqun Yan, Rangding Wang, Dengpan
Ye, Lingchen Zhao, and Jinyu Tian
|
Universal Defensive Underpainting Patch: Making Your Text Invisible to
Optical Character Recognition
| null | null |
10.1145/3581783.3613768
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical Character Recognition (OCR) enables automatic text extraction from
scanned or digitized text images, but it also makes it easy to pirate valuable
or sensitive text from these images. Previous methods to prevent OCR piracy by
distorting characters in text images are impractical in real-world scenarios,
as pirates can capture arbitrary portions of the text images, rendering the
defenses ineffective. In this work, we propose a novel and effective defense
mechanism termed the Universal Defensive Underpainting Patch (UDUP) that
modifies the underpainting of text images instead of the characters. UDUP is
created through an iterative optimization process to craft a small, fixed-size
defensive patch that can generate non-overlapping underpainting for text images
of any size. Experimental results show that UDUP effectively defends against
unauthorized OCR under the setting of any screenshot range or complex image
background. It is agnostic to the content, size, colors, and languages of
characters, and is robust to typical image operations such as scaling and
compression. In addition, the transferability of UDUP is demonstrated by
evading several off-the-shelf OCRs. The code is available at
https://github.com/QRICKDD/UDUP.
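A schematic of the iterative patch-optimization loop described above; the tiny CNN "recognizer", image sizes, and loss are stand-ins we chose for illustration, since the actual attack optimizes against real OCR engines:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in differentiable "recognizer"; the real attack targets off-the-shelf
# OCR engines, which this tiny CNN only approximates.
recognizer = nn.Sequential(nn.Conv2d(3, 8, 5, stride=4), nn.Flatten(), nn.LazyLinear(10))
text_img = (torch.rand(1, 3, 64, 64) > 0.95).float()    # stand-in for rendered text

patch = torch.rand(1, 3, 16, 16, requires_grad=True)    # small, fixed-size patch

def tile(p, h, w):
    """Tile the patch into a non-overlapping underpainting covering (h, w)."""
    return p.repeat(1, 1, h // 16 + 1, w // 16 + 1)[:, :, :h, :w]

with torch.no_grad():
    target = recognizer(text_img).argmax(-1)             # clean "OCR" prediction

opt = torch.optim.Adam([patch], lr=0.05)
for step in range(50):
    bg = tile(patch.clamp(0, 1), 64, 64)
    image = torch.maximum(bg, text_img)                  # text drawn over underpainting
    loss = -F.cross_entropy(recognizer(image), target)   # maximize recognition error
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that the gradient only flows through the background pixels here, matching the idea of modifying the underpainting rather than the characters.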
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 15:07:20 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Deng",
"JiaCheng",
""
],
[
"Dong",
"Li",
""
],
[
"Chen",
"Jiahao",
""
],
[
"Yan",
"Diqun",
""
],
[
"Wang",
"Rangding",
""
],
[
"Ye",
"Dengpan",
""
],
[
"Zhao",
"Lingchen",
""
],
[
"Tian",
"Jinyu",
""
]
] |
new_dataset
| 0.987741 |
2308.02435
|
Sebastian Benthall
|
Sebastian Benthall and David Shekman
|
Designing Fiduciary Artificial Intelligence
| null | null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A fiduciary is a trusted agent that has the legal duty to act with loyalty
and care towards a principal that employs them. When fiduciary organizations
interact with users through a digital interface, or otherwise automate their
operations with artificial intelligence, they will need to design these AI
systems to be compliant with their duties. This article synthesizes recent work
in computer science and law to develop a procedure for designing and auditing
Fiduciary AI. The designer of a Fiduciary AI should understand the context of
the system, identify its principals, and assess the best interests of those
principals. Then the designer must be loyal with respect to those interests,
and careful in a contextually appropriate way. We connect the steps in this
procedure to dimensions of Trustworthy AI, such as privacy and alignment.
Fiduciary AI is a promising means to address the incompleteness of data
subjects' consent when interacting with complex technical systems.
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 15:35:32 GMT"
}
] | 2023-08-07T00:00:00 |
[
[
"Benthall",
"Sebastian",
""
],
[
"Shekman",
"David",
""
]
] |
new_dataset
| 0.992442 |
2104.11589
|
Sang Hun Lee
|
Sangrok Lee, Taekang Woo, Sang Hun Lee
|
SBNet: Segmentation-based Network for Natural Language-based Vehicle
Search
|
7 pages, 4 figures, CVPR Workshop Paper
|
2021 IEEE/CVF Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW), pp. 4049-4055
|
10.1109/CVPRW53098.2021.00457
| null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Natural language-based vehicle retrieval is a task to find a target vehicle
within a given image based on a natural language description as a query. This
technology can be applied to various areas including police searching for a
suspect vehicle. However, it is challenging due to the ambiguity of language
descriptions and the difficulty of processing multi-modal data. To tackle this
problem, we propose a deep neural network called SBNet that performs natural
language-based segmentation for vehicle retrieval. We also propose two
task-specific modules to improve performance: a substitution module that helps
features from different domains to be embedded in the same space and a future
prediction module that learns temporal information. SBNet was trained on
the CityFlow-NL dataset, which contains 2,498 vehicle tracks with three
unique natural language descriptions each, and tested on 530 unique vehicle
tracks and their corresponding query sets. SBNet achieved a significant improvement
over the baseline in the natural language-based vehicle tracking track in the
AI City Challenge 2021.
|
[
{
"version": "v1",
"created": "Thu, 22 Apr 2021 08:06:17 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Lee",
"Sangrok",
""
],
[
"Woo",
"Taekang",
""
],
[
"Lee",
"Sang Hun",
""
]
] |
new_dataset
| 0.998667 |
2108.02226
|
Willy Kuo
|
Willy Kuo, Diego Rossinelli, Georg Schulz, Roland H. Wenger, Simone
Hieber, Bert M\"uller, Vartan Kurtcuoglu
|
Terabyte-scale supervised 3D training and benchmarking dataset of the
mouse kidney
| null |
Scientific Data 10, 510 (2023)
|
10.1038/s41597-023-02407-5
| null |
cs.CV physics.med-ph q-bio.TO
|
http://creativecommons.org/licenses/by/4.0/
|
The performance of machine learning algorithms, when used for segmenting 3D
biomedical images, does not reach the level expected based on results achieved
with 2D photos. This may be explained by the comparative lack of high-volume,
high-quality training datasets, which require state-of-the-art imaging
facilities, domain experts for annotation, and large computational and personnel
resources. The HR-Kidney dataset presented in this work bridges this gap by
providing 1.7 TB of artefact-corrected synchrotron radiation-based X-ray
phase-contrast microtomography images of whole mouse kidneys and validated
segmentations of 33 729 glomeruli, which corresponds to a one to two orders of
magnitude increase over currently available biomedical datasets. The image sets
also contain the underlying raw data, threshold- and morphology-based
semi-automatic segmentations of renal vasculature and uriniferous tubules, as
well as true 3D manual annotations. We therewith provide a broad basis for the
scientific community to build upon and expand in the fields of image
processing, data augmentation and machine learning, in particular unsupervised
and semi-supervised learning investigations, as well as transfer learning and
generative adversarial networks.
|
[
{
"version": "v1",
"created": "Wed, 4 Aug 2021 18:08:28 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 17:27:10 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jul 2023 22:49:56 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Kuo",
"Willy",
""
],
[
"Rossinelli",
"Diego",
""
],
[
"Schulz",
"Georg",
""
],
[
"Wenger",
"Roland H.",
""
],
[
"Hieber",
"Simone",
""
],
[
"Müller",
"Bert",
""
],
[
"Kurtcuoglu",
"Vartan",
""
]
] |
new_dataset
| 0.999839 |
2205.12332
|
Kumar Vijay Mishra
|
Anders M. Buvarp, Robert M. Taylor Jr., Kumar Vijay Mishra, Lamine M.
Mili and Amir I. Zaghloul
|
Constant Curvature Curve Tube Codes for Low-Latency Analog Error
Correction
|
15 pages, 4 tables, 11 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research in ultra-reliable and low latency communications (URLLC) for
future wireless systems has spurred interest in short block-length codes. In
this context, we analyze arbitrary harmonic bandwidth (BW) expansions for a
class of high-dimension constant curvature curve codes for analog error
correction of independent continuous-alphabet uniform sources. In particular,
we employ the circumradius function from knot theory to prescribe insulating
tubes about the centerline of constant curvature curves. We then use tube
packing density within a hypersphere to optimize the curve parameters. The
resulting constant curvature curve tube (C3T) codes possess the smallest
possible latency, i.e., block-length is unity under BW expansion mapping.
Further, the codes perform within $5$ dB signal-to-distortion ratio of the
optimal performance theoretically achievable at a signal-to-noise ratio (SNR)
$< -5$ dB for BW expansion factor $n \leq 10$. Furthermore, we propose a
neural-network-based method to decode C3T codes. We show that, at low SNR, the
neural-network-based C3T decoder outperforms the maximum likelihood and minimum
mean-squared error decoders for all $n$. The best possible digital codes
require two to three orders of magnitude higher latency compared to C3T codes,
thereby demonstrating the latter's utility for URLLC.
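As a toy numpy sketch of the underlying idea, the following maps a scalar source onto a curve on the unit hypersphere (bandwidth expansion) and decodes by grid-search maximum likelihood; the winding numbers and grid density are illustrative, not the paper's optimized C3T parameters:

```python
import numpy as np

n = 3                                    # bandwidth expansion factor
freqs = np.array([1, 3, 5])              # illustrative curve winding numbers

def encode(u):
    # Point on a curve lying on the unit sphere in R^(2n).
    return np.concatenate([np.cos(freqs * u), np.sin(freqs * u)]) / np.sqrt(n)

grid = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
codebook = np.stack([encode(t) for t in grid])          # dense curve samples

u = 1.234
y = encode(u) + 0.05 * np.random.randn(2 * n)           # AWGN channel
u_hat = grid[np.argmin(((codebook - y) ** 2).sum(1))]   # ML decoding (nearest point)
print(u, u_hat)
```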
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 19:21:29 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 03:06:38 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Buvarp",
"Anders M.",
""
],
[
"Taylor",
"Robert M.",
"Jr."
],
[
"Mishra",
"Kumar Vijay",
""
],
[
"Mili",
"Lamine M.",
""
],
[
"Zaghloul",
"Amir I.",
""
]
] |
new_dataset
| 0.99866 |
2211.13061
|
Federico Cunico
|
Federico Cunico, Andrea Toaiari and Marco Cristani
|
A Masked Face Classification Benchmark on Low-Resolution Surveillance
Images
|
15 pages, 7 figures. Accepted at T-CAP workshop @ ICPR 2022
| null |
10.1007/978-3-031-37660-3_4
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel image dataset focused on tiny faces wearing face masks for
mask classification purposes, dubbed Small Face MASK (SF-MASK), composed of a
collection made from 20k low-resolution images exported from diverse and
heterogeneous datasets, ranging from 7 x 7 to 64 x 64 pixel resolution. An
accurate visualization of this collection, through counting grids, made it
possible to highlight gaps in the variety of poses assumed by the heads of the
pedestrians. In particular, faces filmed by cameras mounted very high, in which
the facial features appear strongly skewed, are absent. To address this structural
deficiency, we produced a set of synthetic images which resulted in a
satisfactory covering of the intra-class variance. Furthermore, a small
subsample of 1701 images contains badly worn face masks, opening up multi-class
classification challenges. Experiments on SF-MASK focus on face mask
classification using several classifiers. Results show that the richness of
SF-MASK (real + synthetic images) leads all of the tested classifiers to
perform better than exploiting comparable face mask datasets, on a fixed
1077-image test set. Dataset and evaluation code are publicly available here:
https://github.com/HumaticsLAB/sf-mask
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 15:57:16 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 12:05:49 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Cunico",
"Federico",
""
],
[
"Toaiari",
"Andrea",
""
],
[
"Cristani",
"Marco",
""
]
] |
new_dataset
| 0.99976 |
2211.13543
|
Roman Kuznets
|
Rojo Randrianomentsoa, Hans van Ditmarsch, Roman Kuznets
|
Impure Simplicial Complexes: Complete Axiomatization
| null | null | null | null |
cs.LO cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Combinatorial topology is used in distributed computing to model concurrency
and asynchrony. The basic structure in combinatorial topology is the simplicial
complex, a collection of subsets called simplices of a set of vertices, closed
under containment. Pure simplicial complexes describe message passing in
asynchronous systems where all processes (agents) are alive, whereas impure
simplicial complexes describe message passing in synchronous systems where
processes may be dead (have crashed). Properties of impure simplicial complexes
can be described in a three-valued multi-agent epistemic logic where the third
value represents formulae that are undefined, e.g., the knowledge and local
propositions of dead agents. In this work we present an axiomatization for the
logic of the class of impure complexes and show soundness and completeness. The
completeness proof involves the novel construction of the canonical simplicial
model and requires a careful manipulation of undefined formulae.
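As a concrete illustration of the basic structure, a small Python check that a set of simplices is closed under containment (our own example, independent of the paper's logic):

```python
from itertools import combinations

# A simplicial complex as a set of simplices (frozensets of vertices) that is
# closed under containment. In an impure complex, facets may have different
# dimensions, e.g. because a dead agent's vertex is missing from a facet.
def is_complex(simplices):
    s = set(simplices)
    return all(frozenset(sub) in s
               for simplex in s
               for k in range(1, len(simplex))
               for sub in combinations(simplex, k))

C = {frozenset("abc"), frozenset("ab"), frozenset("ac"), frozenset("bc"),
     frozenset("a"), frozenset("b"), frozenset("c")}
print(is_complex(C))                     # True
print(is_complex({frozenset("ab")}))     # False: faces {a} and {b} are missing
```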
|
[
{
"version": "v1",
"created": "Thu, 24 Nov 2022 11:32:36 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Apr 2023 21:21:47 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Aug 2023 09:21:40 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Randrianomentsoa",
"Rojo",
""
],
[
"van Ditmarsch",
"Hans",
""
],
[
"Kuznets",
"Roman",
""
]
] |
new_dataset
| 0.967689 |
2302.14674
|
Xingyu Chen
|
Xingyu Chen, Peixi Wu, Ge Li and Thomas H. Li
|
LIO-PPF: Fast LiDAR-Inertial Odometry via Incremental Plane Pre-Fitting
and Skeleton Tracking
|
IROS 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a crucial infrastructure of intelligent mobile robots, LiDAR-Inertial
odometry (LIO) provides the basic capability of state estimation by tracking
LiDAR scans. The high-accuracy tracking generally involves the kNN search,
which is used with minimizing the point-to-plane distance. The cost for this,
however, is maintaining a large local map and performing kNN plane fit for each
point. In this work, we reduce both time and space complexity of LIO by saving
these unnecessary costs. Technically, we design a plane pre-fitting (PPF)
pipeline to track the basic skeleton of the 3D scene. In PPF, planes are not
fitted individually for each scan, let alone for each point, but are updated
incrementally as the scene 'flows'. Unlike kNN, PPF is more robust to noisy
and non-strict planes thanks to our iterative Principal Component Analysis (iPCA)
refinement. Moreover, a simple yet effective sandwich layer is introduced to
eliminate false point-to-plane matches. Our method was extensively tested on a
total of 22 sequences across 5 open datasets, and evaluated in 3
existing state-of-the-art LIO systems. By contrast, LIO-PPF can consume only
36% of the original local map size to achieve up to 4x faster residual
computing and 1.92x overall FPS, while maintaining the same level of accuracy.
We fully open source our implementation at
https://github.com/xingyuuchen/LIO-PPF.
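A minimal numpy illustration of the PCA-based plane fitting such a pipeline builds on (our own sketch, not the LIO-PPF source; refitting on appended points stands in for the incremental update):

```python
import numpy as np

# The plane normal is the eigenvector of the point covariance with the smallest
# eigenvalue; a small smallest-eigenvalue ratio indicates a near-strict plane.
def fit_plane(points):
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    normal = eigvecs[:, 0]
    flatness = eigvals[0] / eigvals.sum()       # ~0 for strict planes
    return centroid, normal, flatness

pts = np.random.randn(200, 3) * [5.0, 5.0, 0.02]   # noisy, nearly-planar cloud
c, n, f = fit_plane(pts)
print(n, f)   # normal close to +/- [0, 0, 1], flatness near 0

# As the scene "flows", fold new scan points into the same plane and refit,
# instead of running a kNN plane fit per query point.
new_scan = np.random.randn(50, 3) * [5.0, 5.0, 0.02]
c, n, f = fit_plane(np.vstack([pts, new_scan]))
```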
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 15:37:06 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 03:31:59 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Aug 2023 14:56:43 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Chen",
"Xingyu",
""
],
[
"Wu",
"Peixi",
""
],
[
"Li",
"Ge",
""
],
[
"Li",
"Thomas H.",
""
]
] |
new_dataset
| 0.999442 |
2304.01577
|
Yihao Ding
|
Yihao Ding, Siqu Long, Jiabin Huang, Kaixuan Ren, Xingxiang Luo,
Hyunsuk Chung, Soyeon Caren Han
|
Form-NLU: Dataset for the Form Natural Language Understanding
|
Accepted by SIGIR 2023
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compared to general document analysis tasks, form document structure
understanding and retrieval are challenging. Form documents are typically made
by two types of authors: a form designer, who develops the form structure and
keys, and a form user, who fills out form values based on the provided keys.
Hence, the form values may not be aligned with the form designer's intention
(structure and keys) if a form user gets confused. In this paper, we introduce
Form-NLU, the first novel dataset for form structure understanding and its key
and value information extraction, interpreting the form designer's intent and
the alignment of user-written value on it. It consists of 857 form images, 6k
form keys and values, and 4k table keys and values. Our dataset also includes
three form types: digital, printed, and handwritten, which cover diverse form
appearances and layouts. We propose a robust positional and logical
relation-based form key-value information extraction framework. Using this
dataset, Form-NLU, we first examine strong object detection models for the form
layout understanding, then evaluate the key information extraction task on the
dataset, providing fine-grained results for different types of forms and keys.
Furthermore, we examine it with an off-the-shelf PDF layout extraction tool
and demonstrate its feasibility in real-world cases.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 07:06:54 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 03:55:53 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Aug 2023 02:30:02 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Ding",
"Yihao",
""
],
[
"Long",
"Siqu",
""
],
[
"Huang",
"Jiabin",
""
],
[
"Ren",
"Kaixuan",
""
],
[
"Luo",
"Xingxiang",
""
],
[
"Chung",
"Hyunsuk",
""
],
[
"Han",
"Soyeon Caren",
""
]
] |
new_dataset
| 0.999823 |
2305.13501
|
Marcella Cornia
|
Davide Morelli, Alberto Baldrati, Giuseppe Cartella, Marcella Cornia,
Marco Bertini, Rita Cucchiara
|
LaDI-VTON: Latent Diffusion Textual-Inversion Enhanced Virtual Try-On
|
ACM Multimedia 2023
| null | null | null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapidly evolving fields of e-commerce and metaverse continue to seek
innovative approaches to enhance the consumer experience. At the same time,
recent advancements in the development of diffusion models have enabled
generative networks to create remarkably realistic images. In this context,
image-based virtual try-on, which consists in generating a novel image of a
target model wearing a given in-shop garment, has yet to capitalize on the
potential of these powerful generative solutions. This work introduces
LaDI-VTON, the first Latent Diffusion textual Inversion-enhanced model for the
Virtual Try-ON task. The proposed architecture relies on a latent diffusion
model extended with a novel additional autoencoder module that exploits
learnable skip connections to enhance the generation process preserving the
model's characteristics. To effectively maintain the texture and details of the
in-shop garment, we propose a textual inversion component that can map the
visual features of the garment to the CLIP token embedding space and thus
generate a set of pseudo-word token embeddings capable of conditioning the
generation process. Experimental results on Dress Code and VITON-HD datasets
demonstrate that our approach outperforms the competitors by a consistent
margin, achieving a significant milestone for the task. Source code and trained
models are publicly available at: https://github.com/miccunifi/ladi-vton.
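A hedged sketch of the textual-inversion component: the dimensions, the two-layer projection head, and the 77-token prompt length below are assumptions patterned on CLIP, not the released model:

```python
import torch
import torch.nn as nn

# Garment image features are projected to a few pseudo-word token embeddings
# living in the text encoder's token space.
clip_visual_dim, token_dim, num_pseudo_tokens = 768, 768, 3

inversion_head = nn.Sequential(
    nn.Linear(clip_visual_dim, 1024), nn.GELU(),
    nn.Linear(1024, num_pseudo_tokens * token_dim),
)

garment_feats = torch.randn(1, clip_visual_dim)        # from a CLIP image encoder
pseudo_tokens = inversion_head(garment_feats).view(1, num_pseudo_tokens, token_dim)

# The pseudo-tokens are spliced into the prompt embedding sequence, e.g.
# "a photo of a model wearing <*> <*> <*>", and condition the diffusion U-Net.
prompt_embeds = torch.randn(1, 77 - num_pseudo_tokens, token_dim)
conditioning = torch.cat([prompt_embeds, pseudo_tokens], dim=1)
print(conditioning.shape)  # torch.Size([1, 77, 768])
```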
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 21:38:06 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 14:02:00 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Aug 2023 13:51:22 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Morelli",
"Davide",
""
],
[
"Baldrati",
"Alberto",
""
],
[
"Cartella",
"Giuseppe",
""
],
[
"Cornia",
"Marcella",
""
],
[
"Bertini",
"Marco",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
new_dataset
| 0.998387 |
2307.04577
|
Yuzhe Qin
|
Yuzhe Qin, Wei Yang, Binghao Huang, Karl Van Wyk, Hao Su, Xiaolong
Wang, Yu-Wei Chao, Dieter Fox
|
AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation
System
|
http://anyteleop.com/ Robotics: Science and Systems 2023
| null | null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-based teleoperation offers the possibility to endow robots with
human-level intelligence to physically interact with the environment, while
only requiring low-cost camera sensors. However, current vision-based
teleoperation systems are designed and engineered towards a particular robot
model and deployment environment, which scales poorly as the pool of robot
models expands and the variety of operating environments increases. In this
paper, we propose AnyTeleop, a unified and general teleoperation system to
support multiple different arms, hands, realities, and camera configurations
within a single system. Although designed to provide great flexibility in the
choice of simulators and real hardware, our system still achieves strong
performance. In real-world experiments, AnyTeleop outperforms a previous
system that was designed for specific robot hardware, achieving a higher
success rate with the same robot. For teleoperation in simulation, AnyTeleop leads to
better imitation learning performance, compared with a previous system that is
particularly designed for that simulator. Project page: http://anyteleop.com/.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 14:11:07 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 22:14:06 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Qin",
"Yuzhe",
""
],
[
"Yang",
"Wei",
""
],
[
"Huang",
"Binghao",
""
],
[
"Van Wyk",
"Karl",
""
],
[
"Su",
"Hao",
""
],
[
"Wang",
"Xiaolong",
""
],
[
"Chao",
"Yu-Wei",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.999103 |
2307.14073
|
Zhihao Hu
|
Zhihao Hu, Dong Xu
|
VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by
Using Diffusion Model with ControlNet
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, diffusion models like StableDiffusion have achieved impressive
image generation results. However, the generation process of such diffusion
models is uncontrollable, which makes it hard to generate videos with
continuous and consistent content. In this work, by using the diffusion model
with ControlNet, we propose a new motion-guided video-to-video translation
framework called VideoControlNet to generate various videos based on the given
prompts and the condition from the input video. Inspired by the video codecs
that use motion information for reducing temporal redundancy, our framework
uses motion information to prevent the regeneration of the redundant areas for
content consistency. Specifically, we generate the first frame (i.e., the
I-frame) by using the diffusion model with ControlNet. Then we generate other
key frames (i.e., the P-frame) based on the previous I/P-frame by using our
newly proposed motion-guided P-frame generation (MgPG) method, in which the
P-frames are generated based on the motion information and the occlusion areas
are inpainted by using the diffusion model. Finally, the remaining frames (i.e., the
B-frames) are generated by using our motion-guided B-frame interpolation (MgBI)
module. Our experiments demonstrate that our proposed VideoControlNet inherits
the generation capability of the pre-trained large diffusion model and extends
the image diffusion model to the video diffusion model by using motion
information. More results are provided at our project page.
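A schematic of the motion-guided P-frame step under stated assumptions: the flow field and occlusion mask are mocked, and the diffusion inpainting call is left as a placeholder:

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` with a dense flow field (in pixels), via grid_sample."""
    h, w = frame.shape[-2:]
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float() + flow.permute(0, 2, 3, 1)
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1      # normalize x to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1      # normalize y to [-1, 1]
    return F.grid_sample(frame, grid, align_corners=True)

prev_frame = torch.rand(1, 3, 64, 64)                  # previous I/P-frame
flow = torch.zeros(1, 2, 64, 64); flow[:, 0] = 2.0     # mock global 2-px motion
occluded = torch.rand(1, 1, 64, 64) > 0.9              # mock occlusion mask

p_frame = warp(prev_frame, flow)
p_frame = torch.where(occluded, torch.zeros_like(p_frame), p_frame)
# The zeroed (occluded) regions would then be inpainted by the
# ControlNet-conditioned diffusion model; that call is omitted here.
```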
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 09:50:44 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 09:34:24 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Hu",
"Zhihao",
""
],
[
"Xu",
"Dong",
""
]
] |
new_dataset
| 0.990755 |
2307.14551
|
Siqi Wu
|
Alexander Liu, Siqi Wu, Paul Resnick
|
How to Train Your YouTube Recommender to Avoid Unwanted Videos
|
Accepted into ICWSM 2024, the code is publicly available at
https://github.com/avliu-um/youtube-disinterest
| null | null | null |
cs.CY cs.HC cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
YouTube provides features for users to indicate disinterest when presented
with unwanted recommendations, such as the "Not interested" and "Don't
recommend channel" buttons. These buttons are purported to allow the user to
correct "mistakes" made by the recommendation system. Yet, relatively little is
known about the empirical efficacy of these buttons. Neither is much known
about users' awareness of and confidence in them. To address these gaps, we
simulated YouTube users with sock puppet agents. Each agent first executed a
"stain phase", where it watched many videos of one assigned topic; it then
executed a "scrub phase", where it tried to remove recommendations of the
assigned topic. Each agent repeatedly applied a single scrubbing strategy,
either indicating disinterest in one of the videos visited in the stain phase
(disliking it or deleting it from the watch history), or indicating disinterest
in a video recommended on the homepage (clicking the "not interested" or "don't
recommend channel" button or opening the video and clicking the dislike
button). We found that the stain phase significantly increased the fraction of
the recommended videos dedicated to the assigned topic on the user's homepage.
For the scrub phase, using the "Not interested" button worked best,
significantly reducing such recommendations in all topics tested, on average
removing 88% of them. Neither the stain phase nor the scrub phase, however, had
much effect on videopage recommendations. We also ran a survey (N = 300) asking
adult YouTube users in the US whether they were aware of and used these buttons
before, as well as how effective they found these buttons to be. We found that
44% of participants were not aware that the "Not interested" button existed.
However, those who were aware of this button often used it to remove unwanted
recommendations (82.8%) and found it to be modestly effective (3.42 out of 5).
|
[
{
"version": "v1",
"created": "Thu, 27 Jul 2023 00:21:29 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 19:36:19 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Liu",
"Alexander",
""
],
[
"Wu",
"Siqi",
""
],
[
"Resnick",
"Paul",
""
]
] |
new_dataset
| 0.971387 |
2308.00692
|
Xin Lai
|
Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu,
Jiaya Jia
|
LISA: Reasoning Segmentation via Large Language Model
|
Code, models, and demo are available at
https://github.com/dvlab-research/LISA
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Although perception systems have made remarkable advancements in recent
years, they still rely on explicit human instruction to identify the target
objects or categories before executing visual recognition tasks. Such systems
lack the ability to actively reason and comprehend implicit user intentions. In
this work, we propose a new segmentation task -- reasoning segmentation. The
task is designed to output a segmentation mask given a complex and implicit
query text. Furthermore, we establish a benchmark comprising over one thousand
image-instruction pairs, incorporating intricate reasoning and world knowledge
for evaluation purposes. Finally, we present LISA: large Language Instructed
Segmentation Assistant, which inherits the language generation capabilities of
the multi-modal Large Language Model (LLM) while also possessing the ability to
produce segmentation masks. We expand the original vocabulary with a <SEG>
token and propose the embedding-as-mask paradigm to unlock the segmentation
capability. Remarkably, LISA can handle cases involving: 1) complex reasoning;
2) world knowledge; 3) explanatory answers; 4) multi-turn conversation. Also,
it demonstrates robust zero-shot capability when trained exclusively on
reasoning-free datasets. In addition, fine-tuning the model with merely 239
reasoning segmentation image-instruction pairs results in further performance
enhancement. Experiments show our method not only unlocks new reasoning
segmentation capabilities but also proves effective in both complex reasoning
segmentation and standard referring segmentation tasks. Code, models, and demo
are at https://github.com/dvlab-research/LISA.
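A minimal sketch of the embedding-as-mask paradigm: the shapes, the linear projection, and the einsum correlation below are illustrative assumptions, not LISA's released code:

```python
import torch
import torch.nn as nn

# The LLM's hidden state at the <SEG> token is projected into mask space and
# correlated with dense image features to produce a segmentation mask.
hidden_dim, mask_dim, h, w = 4096, 256, 64, 64

proj = nn.Linear(hidden_dim, mask_dim)                  # <SEG> embedding -> mask space
seg_hidden = torch.randn(1, hidden_dim)                 # LLM state at the <SEG> token
image_feats = torch.randn(1, mask_dim, h, w)            # from a SAM-style vision backbone

query = proj(seg_hidden)                                # (1, mask_dim)
mask_logits = torch.einsum("bc,bchw->bhw", query, image_feats)
mask = mask_logits.sigmoid() > 0.5                      # binary segmentation mask
print(mask.shape)  # torch.Size([1, 64, 64])
```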
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 17:50:17 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 17:38:21 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Lai",
"Xin",
""
],
[
"Tian",
"Zhuotao",
""
],
[
"Chen",
"Yukang",
""
],
[
"Li",
"Yanwei",
""
],
[
"Yuan",
"Yuhui",
""
],
[
"Liu",
"Shu",
""
],
[
"Jia",
"Jiaya",
""
]
] |
new_dataset
| 0.999721 |
2308.00840
|
Sariel Har-Peled
|
Sariel Har-Peled
|
Approximately: Independence Implies Vertex Cover
| null | null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
$\newcommand{\eps}{\varepsilon}$
We observe that a $(1-\eps)$-approximation algorithm to Independent Set, that
works for any induced subgraph of the input graph, can be used, via a
polynomial time reduction, to provide a $(1+\eps)$-approximation to Vertex
Cover. This basic observation was made before, see [BHR11].
As a consequence, we get a PTAS for VC for unweighted pseudo-disks, a QPTAS
for VC for unweighted axis-aligned rectangles in the plane, and a QPTAS for MWVC
for weighted polygons in the plane. To the best of our knowledge all these
results are new.
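As a hedged restatement of the arithmetic behind this observation (ours, reusing the abstract's $\eps$ macro):

```latex
% Ours: why naive complementation alone does not suffice, motivating the reduction.
Given an independent set $I$ with $|I| \ge (1-\eps)\,\alpha(G)$, the complement
$V \setminus I$ is a vertex cover of size
\[
  |V| - |I| \;\le\; |V| - (1-\eps)\,\alpha(G) \;=\; \tau(G) + \eps\,\alpha(G),
\]
by Gallai's identity $\alpha(G) + \tau(G) = |V|$. This is only a
$(1+\eps)$-approximation when $\alpha(G) \le \tau(G)$; running the
independent-set algorithm on suitable induced subgraphs is what handles the
general case.
```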
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 21:07:51 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Aug 2023 16:04:05 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Har-Peled",
"Sariel",
""
]
] |
new_dataset
| 0.999016 |
2308.01379
|
Eric Tabellion
|
Eric Tabellion, Nikhil Karnad, Noa Glaser, Ben Weiss, David E. Jacobs,
Yael Pritch
|
Computational Long Exposure Mobile Photography
|
15 pages, 17 figures
|
ACM Trans. Graph. 42, 4, Article 48 (August 2023)
|
10.1145/3592124
| null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Long exposure photography produces stunning imagery, representing moving
elements in a scene with motion-blur. It is generally employed in two
modalities, producing either a foreground or a background blur effect.
Foreground blur images are traditionally captured on a tripod-mounted camera
and portray blurred moving foreground elements, such as silky water or light
trails, over a perfectly sharp background landscape. Background blur images,
also called panning photography, are captured while the camera is tracking a
moving subject, to produce an image of a sharp subject over a background
blurred by relative motion. Both techniques are notoriously challenging and
require additional equipment and advanced skills. In this paper, we describe a
computational burst photography system that operates in a hand-held smartphone
camera app, and achieves these effects fully automatically, at the tap of the
shutter button. Our approach first detects and segments the salient subject. We
track the scene motion over multiple frames and align the images in order to
preserve desired sharpness and to produce aesthetically pleasing motion
streaks. We capture an under-exposed burst and select the subset of input
frames that will produce blur trails of controlled length, regardless of scene
or camera motion velocity. We predict inter-frame motion and synthesize
motion-blur to fill the temporal gaps between the input frames. Finally, we
composite the blurred image with the sharp regular exposure to protect the
sharpness of faces or areas of the scene that are barely moving, and produce a
final high resolution and high dynamic range (HDR) photograph. Our system
democratizes a capability previously reserved to professionals, and makes this
creative style accessible to most casual photographers.
More information and supplementary material can be found on our project
webpage: https://motion-mode.github.io/
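A toy sketch of the gap-filling and compositing steps: linear interpolation stands in for the learned inter-frame motion synthesis, and the array shapes are arbitrary:

```python
import numpy as np

# Average aligned burst frames, inserting interpolated frames to fill temporal
# gaps, then composite the sharp subject back over the blurred background.
def synthesize_motion_blur(frames, interp_per_gap=4):
    expanded = []
    for a, b in zip(frames[:-1], frames[1:]):
        for t in np.linspace(0.0, 1.0, interp_per_gap, endpoint=False):
            expanded.append((1 - t) * a + t * b)     # crude inter-frame fill
    expanded.append(frames[-1])
    return np.mean(expanded, axis=0)

burst = [np.random.rand(32, 32, 3) for _ in range(8)]   # aligned under-exposed burst
blurred = synthesize_motion_blur(burst)

subject_mask = np.zeros((32, 32, 1)); subject_mask[12:20, 12:20] = 1.0
final = subject_mask * burst[-1] + (1 - subject_mask) * blurred  # protect the subject
```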
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 18:36:54 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Tabellion",
"Eric",
""
],
[
"Karnad",
"Nikhil",
""
],
[
"Glaser",
"Noa",
""
],
[
"Weiss",
"Ben",
""
],
[
"Jacobs",
"David E.",
""
],
[
"Pritch",
"Yael",
""
]
] |
new_dataset
| 0.996761 |
2308.01385
|
Suryansh Sharma
|
Suryansh Sharma, Ashutosh Simha, R. Venkatesha Prasad, Shubham
Deshmukh, Kavin B. Saravanan, Ravi Ramesh, Luca Mottola
|
BEAVIS: Balloon Enabled Aerial Vehicle for IoT and Sensing
|
To be published in the 29th Annual International Conference on Mobile
Computing and Networking (ACM MobiCom 23), October 2-6, 2023, Madrid, Spain.
ACM, New York, NY, USA, 15 pages
| null |
10.1145/3570361.3592498
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
UAVs are becoming versatile and valuable platforms for various applications.
However, the main limitation is their flying time. We present BEAVIS, a novel
aerial robotic platform striking an unparalleled trade-off between the
manoeuvrability of drones and the long lasting capacity of blimps. BEAVIS
scores highly in applications where drones enjoy unconstrained mobility yet
suffer from limited lifetime. We propose a nonlinear flight controller that
exploits novel, previously unexplored aerodynamic phenomena to regulate the
ambient pressure and enable all translational and yaw degrees of freedom
without direct actuation in the vertical direction. BEAVIS has built-in rotor
fault detection and tolerance. We explain the design and the necessary
background in detail. We verify the dynamics of BEAVIS and demonstrate its
distinct advantages over existing platforms, such as agility and degrees of
freedom akin to a drone, with an 11.36x increase in lifetime. We exemplify the
potential of BEAVIS to become an invaluable platform for many applications.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 19:01:03 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Sharma",
"Suryansh",
""
],
[
"Simha",
"Ashutosh",
""
],
[
"Prasad",
"R. Venkatesha",
""
],
[
"Deshmukh",
"Shubham",
""
],
[
"Saravanan",
"Kavin B.",
""
],
[
"Ramesh",
"Ravi",
""
],
[
"Mottola",
"Luca",
""
]
] |
new_dataset
| 0.999031 |
2308.01386
|
Elvys Soares
|
Elvys Soares, Manoel Aranda, Naelson Oliveira, M\'arcio Ribeiro, Rohit
Gheyi, Emerson Souza, Ivan Machado, Andr\'e Santos, Baldoino Fonseca, Rodrigo
Bonif\'acio
|
Manual Tests Do Smell! Cataloging and Identifying Natural Language Test
Smells
|
The 17th ACM/IEEE International Symposium on Empirical Software
Engineering and Measurement (ESEM), 2023
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Background: Test smells indicate potential problems in the design and
implementation of automated software tests that may negatively impact test code
maintainability, coverage, and reliability. When poorly described, manual tests
written in natural language may suffer from related problems, which enable
their analysis from the point of view of test smells. Despite the potential
harm to manually tested software products, little is known about test
smells in manual tests, which results in many open questions regarding their
types, frequency, and harm to tests written in natural language. Aims:
Therefore, this study aims to contribute to a catalog of test smells for manual
tests. Method: We perform a two-fold empirical strategy. First, an exploratory
study in manual tests of three systems: the Ubuntu operating system, the
Brazilian Electronic Voting Machine, and the User Interface of a large
smartphone manufacturer. We use our findings to propose a catalog of eight test
smells and identification rules based on syntactical and morphological text
analysis, validating our catalog with 24 in-company test engineers. Second,
using our proposals, we create a tool based on Natural Language Processing
(NLP) to analyze the subject systems' tests, validating the results. Results:
We observed the occurrence of eight test smells. A survey of 24 in-company test
professionals showed that 80.7% agreed with our catalog definitions and
examples. Our NLP-based tool achieved a precision of 92%, recall of 95%, and
f-measure of 93.5%, and its execution evidenced 13,169 occurrences of our
cataloged test smells in the analyzed systems. Conclusion: We contribute with a
catalog of natural language test smells and novel detection strategies that
better explore the capabilities of current NLP mechanisms with promising
results and reduced effort to analyze tests written in different idioms.
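For flavor, a rule-based sketch in the spirit of such a detector; the two smell names and regex rules below are our own illustrations, not the paper's eight cataloged smells:

```python
import re

# Toy rules: an action step with no verification, and vague wording.
SMELL_RULES = {
    "unverified_action": re.compile(
        r"\b(click|open|select)\b(?!.*\b(verify|check|confirm)\b)", re.I),
    "vague_step": re.compile(
        r"\b(some|appropriate|properly|as needed)\b", re.I),
}

def detect_smells(test_step: str):
    return [name for name, rule in SMELL_RULES.items() if rule.search(test_step)]

print(detect_smells("Click the menu and select some option as needed"))
# ['unverified_action', 'vague_step']
```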
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 19:05:36 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Soares",
"Elvys",
""
],
[
"Aranda",
"Manoel",
""
],
[
"Oliveira",
"Naelson",
""
],
[
"Ribeiro",
"Márcio",
""
],
[
"Gheyi",
"Rohit",
""
],
[
"Souza",
"Emerson",
""
],
[
"Machado",
"Ivan",
""
],
[
"Santos",
"André",
""
],
[
"Fonseca",
"Baldoino",
""
],
[
"Bonifácio",
"Rodrigo",
""
]
] |
new_dataset
| 0.998661 |
2308.01398
|
Cora Dimmig
|
Cora A. Dimmig, Anna Goodridge, Gabriel Baraban, Pupei Zhu, Joyraj
Bhowmick, Marin Kobilarov
|
A Small Form Factor Aerial Research Vehicle for Pick-and-Place Tasks
with Onboard Real-Time Object Detection and Visual Odometry
|
\copyright 2023 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel, small form-factor, aerial vehicle research
platform for agile object detection, classification, tracking, and interaction
tasks. General-purpose hardware components were designed to augment a given
aerial vehicle and enable it to perform safe and reliable grasping. These
components include a custom collision tolerant cage and low-cost Gripper
Extension Package, which we call GREP, for object grasping. Small vehicles
enable applications in highly constrained environments, but are often limited
by computational resources. This work evaluates the challenges of
pick-and-place tasks, with entirely onboard computation of object pose and
visual odometry based state estimation on a small platform, and demonstrates
experiments with enough accuracy to reliably grasp objects. In a total of 70
trials across challenging cases such as cluttered environments, obstructed
targets, and multiple instances of the same target, we demonstrated
successfully grasping the target in 93% of trials. Both the hardware component
designs and software framework are released as open-source, since our intention
is to enable easy reproduction and application on a wide range of small
vehicles.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 19:40:58 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Dimmig",
"Cora A.",
""
],
[
"Goodridge",
"Anna",
""
],
[
"Baraban",
"Gabriel",
""
],
[
"Zhu",
"Pupei",
""
],
[
"Bhowmick",
"Joyraj",
""
],
[
"Kobilarov",
"Marin",
""
]
] |
new_dataset
| 0.998896 |
2308.01408
|
Andrei Preda
|
Andrei-Alexandru Preda, Dumitru-Clementin Cercel, Traian Rebedea,
Costin-Gabriel Chiru
|
UPB at IberLEF-2023 AuTexTification: Detection of Machine-Generated Text
using Transformer Ensembles
|
10 pages. Accepted for publication in the IberLEF 2023 Proceedings,
at https://ceur-ws.org/
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the solutions submitted by the UPB team to the
AuTexTification shared task, featured as part of IberLEF-2023. Our team
participated in the first subtask, identifying text documents produced by large
language models instead of humans. The organizers provided a bilingual dataset
for this subtask, comprising English and Spanish texts covering multiple
domains, such as legal texts, social media posts, and how-to articles. We
experimented mostly with deep learning models based on Transformers, as well as
training techniques such as multi-task learning and virtual adversarial
training to obtain better results. We submitted three runs, two of which
consisted of ensemble models. Our best-performing model achieved macro
F1-scores of 66.63% on the English dataset and 67.10% on the Spanish dataset.
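A minimal sketch of soft-voting over transformer classifiers, in the spirit of the ensemble runs; the member models and binary label convention are placeholders, not the exact submitted systems:

```python
import torch

def soft_vote(logits_list):
    # Average per-class probabilities across ensemble members, then argmax.
    probs = torch.stack([l.softmax(dim=-1) for l in logits_list]).mean(dim=0)
    return probs.argmax(dim=-1)   # e.g. 0 = human-written, 1 = machine-generated

member_logits = [torch.randn(4, 2) for _ in range(3)]   # 3 models, 4 documents
print(soft_vote(member_logits))
```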
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 20:08:59 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Preda",
"Andrei-Alexandru",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
],
[
"Rebedea",
"Traian",
""
],
[
"Chiru",
"Costin-Gabriel",
""
]
] |
new_dataset
| 0.998727 |
2308.01414
|
Mingliang Bai
|
Mingliang Bai, Zhihao Zhou, Ruidong Wang, Yusheng Yang, Zizhen Qin,
Yunxiao Chen, Chunjin Mu, Jinfu Liu, Daren Yu
|
HouYi: An open-source large language model specially designed for
renewable energy and carbon neutrality field
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Renewable energy is important for achieving the carbon neutrality goal. With the
great success of Large Language Models (LLMs) like ChatGPT in automatic content
generation, LLMs are playing an increasingly important role. However, there has
not been a specially designed LLM for renewable energy. Meanwhile, there has
not been any dataset of renewable energy for training LLMs. Therefore, this
paper publishes the first open-source Renewable Energy Academic Paper (REAP)
dataset for non-commercial LLM research on renewable energy. The REAP dataset is
collected by searching the titles and abstracts of 1,168,970 academic
publications from Web of Science. Based on the REAP dataset, HouYi, the first
LLM for renewable energy, is developed by finetuning general LLMs. HouYi
demonstrates powerful academic paper paragraph generation ability in the
renewable energy field. Experiments show that its ability to generate academic
papers on renewable energy is comparable to ChatGPT, slightly outperforms
Claude, ERNIE Bot and SparkDesk, and significantly outperforms the open-source
LLaMA-13B model.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 06:59:36 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Bai",
"Mingliang",
""
],
[
"Zhou",
"Zhihao",
""
],
[
"Wang",
"Ruidong",
""
],
[
"Yang",
"Yusheng",
""
],
[
"Qin",
"Zizhen",
""
],
[
"Chen",
"Yunxiao",
""
],
[
"Mu",
"Chunjin",
""
],
[
"Liu",
"Jinfu",
""
],
[
"Yu",
"Daren",
""
]
] |
new_dataset
| 0.999666 |
2308.01430
|
Ziao Wang
|
Ziao Wang, Yuhang Li, Junda Wu, Jaehyeon Soon, Xiaofeng Zhang
|
FinVis-GPT: A Multimodal Large Language Model for Financial Chart
Analysis
|
(FinLLM 2023)@IJCAI 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose FinVis-GPT, a novel multimodal large language model
(LLM) specifically designed for financial chart analysis. By leveraging the
power of LLMs and incorporating instruction tuning and multimodal capabilities,
FinVis-GPT is capable of interpreting financial charts and providing valuable
analysis. To train FinVis-GPT, a financial task oriented dataset was generated
for pre-training alignment and instruction tuning, comprising various types of
financial charts and their corresponding descriptions. Due to time limits, we
evaluate the model performance via several case studies, and the promising
results demonstrate that FinVis-GPT is superior in various financial chart
related tasks, including generating descriptions, answering questions and
predicting future market trends, surpassing existing state-of-the-art
multimodal LLMs. The proposed FinVis-GPT serves as a pioneering effort in
utilizing multimodal LLMs in the finance domain, and our generated dataset will
be released for public use in the near future to speed up related research.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 07:44:15 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Wang",
"Ziao",
""
],
[
"Li",
"Yuhang",
""
],
[
"Wu",
"Junda",
""
],
[
"Soon",
"Jaehyeon",
""
],
[
"Zhang",
"Xiaofeng",
""
]
] |
new_dataset
| 0.999779 |
2308.01463
|
Zian Liu
|
Zian Liu, Zhi Zhang, Siqi Ma, Dongxi Liu, Jun Zhang, Chao Chen,
Shigang Liu, Muhammad Ejaz Ahmed, Yang Xiang
|
SemDiff: Binary Similarity Detection by Diffing Key-Semantics Graphs
|
12 pages, conference paper
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Binary similarity detection is a critical technique that has been applied in
many real-world scenarios where source code is not available, e.g., bug search,
malware analysis, and code plagiarism detection. Existing works are ineffective
in detecting similar binaries in cases where different compiling optimizations,
compilers, source code versions, or obfuscation are deployed.
We observe that all the cases do not change a binary's key code behaviors
although they significantly modify its syntax and structure. With this key
observation, we extract a set of key instructions from a binary to capture its
key code behaviors. By detecting the similarity between two binaries' key
instructions, we can address well the ineffectiveness limitation of existing
works. Specifically, we translate each extracted key instruction into a
self-defined key expression, generating a key-semantics graph based on the
binary's control flow. Each node in the key-semantics graph denotes a key
instruction, and the node attribute is the key expression. To quantify the
similarity between two given key-semantics graphs, we first serialize each
graph into a sequence of key expressions by topological sort. Then, we tokenize
and concatenate key expressions to generate token lists. We calculate the
locality-sensitive hash value for all token lists and quantify their
similarity. We implement a prototype, called SemDiff, consisting of two
modules: graph generation and graph diffing. The first module generates a pair
of key-semantics graphs and the second module diffs the graphs. Our evaluation
results show that overall, SemDiff outperforms state-of-the-art tools when
detecting the similarity of binaries generated from different optimization
levels, compilers, and obfuscations. SemDiff is also effective for library
version search and finding similar vulnerabilities in firmware.
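A hedged sketch of the final similarity step, using MinHash over key-expression tokens as the locality-sensitive hash; the token granularity and hash choice are our assumptions:

```python
import hashlib

# MinHash signatures approximate the Jaccard similarity of two token sets,
# here the serialized (topologically sorted) key expressions of each graph.
def minhash(tokens, num_hashes=64):
    return [min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
                for t in tokens)
            for i in range(num_hashes)]

def similarity(a, b):
    sa, sb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

g1 = ["load r1 mem", "add r1 r2", "cmp r1 0", "jz L1"]
g2 = ["load r1 mem", "add r1 r2", "cmp r1 0", "jnz L2"]
print(similarity(g1, g2))   # high: the key-expression sequences mostly overlap
```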
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 22:48:48 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Liu",
"Zian",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Ma",
"Siqi",
""
],
[
"Liu",
"Dongxi",
""
],
[
"Zhang",
"Jun",
""
],
[
"Chen",
"Chao",
""
],
[
"Liu",
"Shigang",
""
],
[
"Ahmed",
"Muhammad Ejaz",
""
],
[
"Xiang",
"Yang",
""
]
] |
new_dataset
| 0.964896 |
2308.01469
|
Ruyi Ding
|
Ruyi Ding, Shijin Duan, Xiaolin Xu, Yunsi Fei
|
VertexSerum: Poisoning Graph Neural Networks for Link Inference
| null | null | null | null |
cs.LG cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph neural networks (GNNs) have brought superb performance to various
applications utilizing graph structural data, such as social analysis and fraud
detection. The graph links, e.g., social relationships and transaction history,
are sensitive and valuable information, which raises privacy concerns when
using GNNs. To exploit these vulnerabilities, we propose VertexSerum, a novel
graph poisoning attack that increases the effectiveness of graph link stealing
by amplifying the link connectivity leakage. To infer node adjacency more
accurately, we propose an attention mechanism that can be embedded into the
link detection network. Our experiments demonstrate that VertexSerum
significantly outperforms the SOTA link inference attack, improving the AUC
scores by an average of $9.8\%$ across four real-world datasets and three
different GNN structures. Furthermore, our experiments reveal the effectiveness
of VertexSerum in both black-box and online learning settings, further
validating its applicability in real-world scenarios.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 23:13:49 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Ding",
"Ruyi",
""
],
[
"Duan",
"Shijin",
""
],
[
"Xu",
"Xiaolin",
""
],
[
"Fei",
"Yunsi",
""
]
] |
new_dataset
| 0.981271 |
2308.01477
|
Stan Birchfield
|
Andrew Guo, Bowen Wen, Jianhe Yuan, Jonathan Tremblay, Stephen Tyree,
Jeffrey Smith, Stan Birchfield
|
HANDAL: A Dataset of Real-World Manipulable Object Categories with Pose
Annotations, Affordances, and Reconstructions
|
IROS 2023. Project page: https://nvlabs.github.io/HANDAL/
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the HANDAL dataset for category-level object pose estimation and
affordance prediction. Unlike previous datasets, ours is focused on
robotics-ready manipulable objects that are of the proper size and shape for
functional grasping by robot manipulators, such as pliers, utensils, and
screwdrivers. Our annotation process is streamlined, requiring only a single
off-the-shelf camera and semi-automated processing, allowing us to produce
high-quality 3D annotations without crowd-sourcing. The dataset consists of
308k annotated image frames from 2.2k videos of 212 real-world objects in 17
categories. We focus on hardware and kitchen tool objects to facilitate
research in practical scenarios in which a robot manipulator needs to interact
with the environment beyond simple pushing or indiscriminate grasping. We
outline the usefulness of our dataset for 6-DoF category-level pose+scale
estimation and related tasks. We also provide 3D reconstructed meshes of all
objects, and we outline some of the bottlenecks to be addressed for
democratizing the collection of datasets like this one.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 23:59:59 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Guo",
"Andrew",
""
],
[
"Wen",
"Bowen",
""
],
[
"Yuan",
"Jianhe",
""
],
[
"Tremblay",
"Jonathan",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Smith",
"Jeffrey",
""
],
[
"Birchfield",
"Stan",
""
]
] |
new_dataset
| 0.999757 |
2308.01483
|
Guillaume Berger
|
Antoine Mercier and Ruan Erasmus and Yashesh Savani and Manik Dhingra
and Fatih Porikli and Guillaume Berger
|
Efficient neural supersampling on a novel gaming dataset
|
ICCV'23
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time rendering for video games has become increasingly challenging due
to the need for higher resolutions, framerates and photorealism. Supersampling
has emerged as an effective solution to address this challenge. Our work
introduces a novel neural algorithm for supersampling rendered content that is
4 times more efficient than existing methods while maintaining the same level
of accuracy. Additionally, we introduce a new dataset which provides auxiliary
modalities such as motion vectors and depth generated using graphics rendering
features like viewport jittering and mipmap biasing at different resolutions.
We believe that this dataset fills a gap in the current dataset landscape and
can serve as a valuable resource to help measure progress in the field and
advance the state-of-the-art in super-resolution techniques for gaming content.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 00:42:30 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Mercier",
"Antoine",
""
],
[
"Erasmus",
"Ruan",
""
],
[
"Savani",
"Yashesh",
""
],
[
"Dhingra",
"Manik",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Berger",
"Guillaume",
""
]
] |
new_dataset
| 0.994241 |
2308.01492
|
Ramanathan Subramanian
|
Blooma John, Ramanathan Subramanian, Jayan Chirayath Kurian
|
A Virtual Reality Game to Improve Physical and Cognitive Acuity
|
5 Figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Virtual Human Benchmark (VHB) game to evaluate and improve
physical and cognitive acuity. VHB simulates in 3D the BATAK lightboard game,
which is designed to improve physical reaction and hand-eye coordination, on
the \textit{Oculus Rift} and \textit{Quest} headsets. The game comprises the
\textit{reaction}, \textit{accumulator} and \textit{sequence} modes; along
with the \textit{reaction} and \textit{accumulator} modes which mimic BATAK
functionalities, the \textit{sequence} mode involves the user repeating a
sequence of illuminated targets with increasing complexity to train visual
memory and cognitive processing. A first version of the game (VHB v1) was
evaluated against the real-world BATAK by 20 users, and their feedback was
utilized to improve game design and obtain a second version (VHB v2). Another
study to evaluate VHB v2 was conducted with 20 users, whose results confirmed
that the design improvements enhanced game usability and user experience in
multiple respects. Also, logging and visualization of performance data such as
\textit{reaction time}, \textit{speed between targets} and \textit{completed
sequence patterns} provides useful data for coaches/therapists monitoring
sports/rehabilitation regimens.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 01:26:18 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"John",
"Blooma",
""
],
[
"Subramanian",
"Ramanathan",
""
],
[
"Kurian",
"Jayan Chirayath",
""
]
] |
new_dataset
| 0.996634 |
2308.01499
|
Qi Yang
|
Qi Yang, Joel Jung, Timon Deschamps, Xiaozhong Xu, and Shan Liu
|
TDMD: A Database for Dynamic Color Mesh Subjective and Objective Quality
Explorations
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic colored meshes (DCM) are widely used in various applications;
however, these meshes may undergo different processes, such as compression or
transmission, which can distort them and degrade their quality. To facilitate
the development of objective metrics for DCMs and study the influence of
typical distortions on their perception, we create the Tencent - dynamic
colored mesh database (TDMD) containing eight reference DCM objects with six
typical distortions. Using processed video sequences (PVS) derived from the
DCM, we have conducted a large-scale subjective experiment that resulted in 303
distorted DCM samples with mean opinion scores, making the TDMD the largest
available DCM database to our knowledge. This database enabled us to study the
impact of different types of distortion on human perception and offer
recommendations for DCM compression and related tasks. Additionally, we have
evaluated three types of state-of-the-art objective metrics on the TDMD:
image-based, point-based, and video-based. Our
experimental results highlight the strengths and weaknesses of each metric, and
we provide suggestions about the selection of metrics in practical DCM
applications. The TDMD will be made publicly available at the following
location: https://multimedia.tencent.com/resources/tdmd.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 01:50:48 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Yang",
"Qi",
""
],
[
"Jung",
"Joel",
""
],
[
"Deschamps",
"Timon",
""
],
[
"Xu",
"Xiaozhong",
""
],
[
"Liu",
"Shan",
""
]
] |
new_dataset
| 0.999481 |
2308.01521
|
Liang Wang
|
Liang Wang and Xiaogang Wang
|
PPI-NET: End-to-End Parametric Primitive Inference
|
arXiv admin note: text overlap with arXiv:2203.01305 by other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In engineering applications, line, circle, arc, and point are collectively
referred to as primitives, and they play a crucial role in path planning,
simulation analysis, and manufacturing. When designing CAD models, engineers
typically start by sketching the model's orthographic view on paper or a
whiteboard and then translate the design intent into a CAD program. Although
this design method is powerful, it often involves challenging and repetitive
tasks, requiring engineers to perform numerous similar operations in each
design. To address this conversion process, we propose an efficient and
accurate end-to-end method that avoids the inefficiency and error accumulation
issues associated with using auto-regressive models to infer parametric
primitives from hand-drawn sketch images. Since our model samples match the
representation format of standard CAD software, they can be imported into CAD
software for solving, editing, and applied to downstream design tasks.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 03:50:49 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Wang",
"Liang",
""
],
[
"Wang",
"Xiaogang",
""
]
] |
new_dataset
| 0.999166 |
2308.01536
|
Sanghyeon Na
|
Sanghyeon Na
|
MFIM: Megapixel Facial Identity Manipulation
|
ECCV 2022 accepted
| null |
10.1007/978-3-031-19778-9_9
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face swapping is a task that changes a facial identity of a given image to
that of another person. In this work, we propose a novel face-swapping
framework called Megapixel Facial Identity Manipulation (MFIM). The
face-swapping model should achieve two goals. First, it should be able to
generate a high-quality image. We argue that a model which is proficient in
generating a megapixel image can achieve this goal. However, generating a
megapixel image is generally difficult without careful model design. Therefore,
our model exploits pretrained StyleGAN in the manner of GAN-inversion to
effectively generate a megapixel image. Second, it should be able to
effectively transform the identity of a given image. Specifically, it should be
able to actively transform ID attributes (e.g., face shape and eyes) of a given
image into those of another person, while preserving ID-irrelevant attributes
(e.g., pose and expression). To achieve this goal, we exploit a 3DMM, which can
capture various facial attributes. Specifically, we explicitly supervise our
model to generate a face-swapped image with the desirable attributes using
3DMM. We show that our model achieves state-of-the-art performance through
extensive experiments. Furthermore, we propose a new operation called ID
mixing, which creates a new identity by semantically mixing the identities of
several people. It allows the user to customize the new identity.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 04:36:48 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Na",
"Sanghyeon",
""
]
] |
new_dataset
| 0.997126 |
2308.01539
|
Rahma Mukta
|
Rahma Mukta, Rue C. Teh, Hye-young Paik, Qinghua Lu and Salil S.
Kanhere
|
VCTP: A Verifiable Credential-based Trust Propagation Protocol for
Personal Issuers in Self-Sovereign Identity Platforms
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self Sovereign Identity (SSI) is an emerging identity system that facilitates
secure credential issuance and verification without placing trust in any
centralised authority. To bypass central trust, most SSI implementations use a
blockchain as a trusted mediator by recording credential transactions on-chain.
Yet, existing SSI platforms face trust issues because not all credential issuers
in SSI are supported with adequate trust. Current SSI solutions provide trust
support to the officiated issuers (e.g., government agencies), who must follow
a precise process to assess their credentials. However, there is no structured
trust support for individuals of SSI who may attempt to issue a credential
(e.g., letter of consent) in the context of business processes. Therefore, some
risk-averse verifiers in the system may not accept the credentials from
individual issuers to avoid carrying the cost of mishaps from potentially
inadmissible credentials without reliance on a trusted agency. This paper
proposes a trust propagation protocol that supports individual users to be
trusted as verifiable issuers in the SSI platform by establishing a trust
propagation credential template in the blockchain. Our approach utilises (i)
the sanitizable signature scheme to propagate the required trust to an
individual issuer, and (ii) a voting mechanism to minimise the possibility of
collusion. Our implementation demonstrates that the solution is practical and
performs well under varying system loads.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 05:01:51 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Mukta",
"Rahma",
""
],
[
"Teh",
"Rue C.",
""
],
[
"Paik",
"Hye-young",
""
],
[
"Lu",
"Qinghua",
""
],
[
"Kanhere",
"Salil S.",
""
]
] |
new_dataset
| 0.997465 |
2308.01597
|
Stefano Borgo
|
Stefano Borgo, Roberta Ferrario, Aldo Gangemi, Nicola Guarino, Claudio
Masolo, Daniele Porello, Emilio M. Sanfilippo, Laure Vieu
|
DOLCE: A Descriptive Ontology for Linguistic and Cognitive Engineering
|
25 pages, 7 figures
|
Applied Ontology 17 (2022):45-69
|
10.3233/AO-210259
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
DOLCE, the first top-level (foundational) ontology to be axiomatized, has
remained stable for twenty years and today is broadly used in a variety of
domains. DOLCE is inspired by cognitive and linguistic considerations and aims
to model a commonsense view of reality, like the one human beings exploit in
everyday life in areas as diverse as socio-technical systems, manufacturing,
financial transactions and cultural heritage. DOLCE clearly lists the
ontological choices it is based upon, relies on philosophical principles, is
richly formalized, and is built according to well-established ontological
methodologies, e.g. OntoClean. Because of these features, it has inspired most
of the existing top-level ontologies and has been used to develop or improve
standards and public domain resources (e.g. CIDOC CRM, DBpedia and WordNet).
Being a foundational ontology, DOLCE is not directly concerned with domain
knowledge. Its purpose is to provide the general categories and relations
needed to give a coherent view of reality, to integrate domain knowledge, and
to mediate across domains. In these 20 years DOLCE has shown that applied
ontologies can be stable and that interoperability across reference and domain
ontologies is a reality. This paper briefly introduces the ontology and shows
how to use it on a few modeling cases.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 08:03:19 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Borgo",
"Stefano",
""
],
[
"Ferrario",
"Roberta",
""
],
[
"Gangemi",
"Aldo",
""
],
[
"Guarino",
"Nicola",
""
],
[
"Masolo",
"Claudio",
""
],
[
"Porello",
"Daniele",
""
],
[
"Sanfilippo",
"Emilio M.",
""
],
[
"Vieu",
"Laure",
""
]
] |
new_dataset
| 0.998589 |
2308.01604
|
Muhammad Salman Ikrar Musyaffa
|
Muhammad Salman Ikrar Musyaffa, Novanto Yudistira, Muhammad Arif
Rahman
|
IndoHerb: Indonesia Medicinal Plants Recognition using Transfer Learning
and Deep Learning
|
25 pages, 18 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Herbal plants are nutritious plants that can be used as an alternative form of
traditional disease healing. Indonesia is home to a wide variety of herbal
plants, but over time their role as traditional medicine has gradually been
forgotten, so that not everyone can recognize them. The ability to identify
herbal plants can have many positive impacts. However, identifying plants can
take a long time because it requires in-depth knowledge and careful examination
of plant criteria, which is where computer vision can help. Research on
recognizing herbal plants from Vietnam has previously been conducted using
several algorithms, but the resulting accuracy was not high enough. Therefore,
this study applies transfer learning with the Convolutional Neural Network
(CNN) algorithm to classify types of herbal plants from Indonesia. We conducted
this research by independently collecting image data of Indonesian herbal
plants through the Google Images search engine, followed by data preprocessing,
classification using transfer learning with CNNs, and analysis. The CNN
transfer learning models used are ResNet34, DenseNet121, and VGG11_bn. Based on
the test results of the three models, DenseNet121 achieved the highest
accuracy, at 87.4%. In addition, a model trained from scratch was also tested
and obtained an accuracy of 43.53%. The hyperparameter configuration used in
these tests is an ExponentialLR scheduler with a gamma value of 0.9, a learning
rate of 0.001, the Cross Entropy loss function, the Adam optimizer, and 50
epochs. The Indonesia Medicinal Plant Dataset can be accessed at the following
link: https://github.com/Salmanim20/indo_medicinal_plant
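
The reported configuration maps directly onto a few lines of PyTorch. The
sketch below is a minimal illustration of that setup under stated assumptions,
not the authors' code: the class count, the synthetic tensors standing in for
the herb images, and the DataLoader are placeholders.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision import models  # torchvision >= 0.13 for the weights enum

    NUM_CLASSES = 10  # placeholder: actual number of Indonesian herb classes

    # Transfer learning: reuse ImageNet weights, swap in a new classifier head.
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

    # dummy stand-in for the real herb-image DataLoader
    train_loader = DataLoader(
        TensorDataset(torch.randn(8, 3, 224, 224),
                      torch.randint(0, NUM_CLASSES, (8,))),
        batch_size=4)

    for epoch in range(50):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()  # multiply the learning rate by 0.9 once per epoch
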
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 08:16:55 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Musyaffa",
"Muhammad Salman Ikrar",
""
],
[
"Yudistira",
"Novanto",
""
],
[
"Rahman",
"Muhammad Arif",
""
]
] |
new_dataset
| 0.999709 |
2308.01607
|
Ahmed Eleliemy
|
Ahmed Eleliemy and Florina M. Ciorba
|
DaphneSched: A Scheduler for Integrated Data Analysis Pipelines
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DAPHNE is a new open-source software infrastructure designed to address the
increasing demands of integrated data analysis (IDA) pipelines, comprising data
management (DM), high performance computing (HPC), and machine learning (ML)
systems. Efficiently executing IDA pipelines is challenging due to their
diverse computing characteristics and demands. Therefore, IDA pipelines
executed with the DAPHNE infrastructure require an efficient and versatile
scheduler to support these demands. This work introduces DaphneSched, the
task-based scheduler at the core of DAPHNE. DaphneSched is versatile by
incorporating eleven task partitioning and three task assignment techniques,
bringing state-of-the-art task scheduling closer to the state of practice. To
showcase DaphneSched's effectiveness in scheduling IDA
pipelines, we evaluate its performance on two applications: a product
recommendation system and a linear regression model training. We conduct
performance experiments on multicore platforms with 20 and 56 cores. The
results show that the versatility of DaphneSched enabled combinations of
scheduling strategies that outperform commonly used scheduling techniques by up
to 13%. This work confirms the benefits of employing DaphneSched for the
efficient execution of applications with IDA pipelines.
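
For a concrete feel for task partitioning, the sketch below implements guided
self-scheduling (GSS), a classic technique in which each successive chunk is
the ceiling of the remaining tasks divided by the number of workers. Whether
GSS is among DaphneSched's eleven techniques is not stated here; it is used
purely to illustrate what a partitioning technique computes.

    def guided_self_scheduling(n_tasks, n_workers):
        """Yield successive chunk sizes: each chunk is ceil(remaining / workers)."""
        remaining = n_tasks
        while remaining > 0:
            chunk = -(-remaining // n_workers)  # ceiling division
            yield chunk
            remaining -= chunk

    # 100 tasks on 4 cores: large chunks first, shrinking to single tasks
    print(list(guided_self_scheduling(100, 4)))
    # [25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 1, 1, 1, 1]
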
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 08:26:23 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Eleliemy",
"Ahmed",
""
],
[
"Ciorba",
"Florina M.",
""
]
] |
new_dataset
| 0.99959 |
2308.01622
|
Kaer Huang Carl
|
Kaer Huang, Bingchuan Sun, Feng Chen, Tao Zhang, Jun Xie, Jian Li,
Christopher Walter Twombly, Zhepeng Wang
|
ReIDTrack: Multi-Object Track and Segmentation Without Motion
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, dominant Multi-object tracking (MOT) and segmentation (MOTS)
methods mainly follow the tracking-by-detection paradigm. Transformer-based
end-to-end (E2E) solutions bring some ideas to MOT and MOTS, but they cannot
achieve a new state-of-the-art (SOTA) performance in major MOT and MOTS
benchmarks. Detection and association are two main modules of the
tracking-by-detection paradigm. Association techniques mainly depend on the
combination of motion and appearance information. With recent advances in deep
learning, the performance of detection and appearance models has improved
rapidly. These trends led us to consider whether SOTA results can be achieved
using only a high-performance detection and appearance model. Our paper mainly
focuses on exploring this direction based on CBNetV2 with Swin-B as the
detection model and MoCo-v2 as a self-supervised appearance model. Motion
information and IoU mapping were removed during association. Our method won 1st
place on the MOTS track and 2nd place on the MOT track at the CVPR 2023 WAD
workshop. We hope our simple and effective method can give some insights to the
MOT and MOTS research community. Source code will be released in this Git
repository.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 08:53:23 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Huang",
"Kaer",
""
],
[
"Sun",
"Bingchuan",
""
],
[
"Chen",
"Feng",
""
],
[
"Zhang",
"Tao",
""
],
[
"Xie",
"Jun",
""
],
[
"Li",
"Jian",
""
],
[
"Twombly",
"Christopher Walter",
""
],
[
"Wang",
"Zhepeng",
""
]
] |
new_dataset
| 0.995449 |
2308.01630
|
Qishun Wang
|
Zhengzheng Tu, Qishun Wang, Hongshun Wang, Kunpeng Wang, Chenglong Li
|
Erasure-based Interaction Network for RGBT Video Object Detection and A
Unified Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, many breakthroughs have been made in the field of Video Object
Detection (VOD), but performance is still limited due to the imaging
limitations of RGB sensors in adverse illumination conditions. To alleviate
this issue, this work proposes a new computer vision task called RGB-thermal
(RGBT) VOD, which adds the thermal modality that is insensitive to adverse
illumination conditions. To promote the research and development of RGBT VOD,
we design a
novel Erasure-based Interaction Network (EINet) and establish a comprehensive
benchmark dataset (VT-VOD50) for this task. Traditional VOD methods often
leverage temporal information by using many auxiliary frames, and thus incur a
large computational burden. Considering that thermal images exhibit less noise
than RGB ones, we develop a negative activation function that is used to erase
the noise of RGB features with the help of thermal image features. Furthermore,
with the benefits from thermal images, we rely only on a small temporal window
to model the spatio-temporal information to greatly improve efficiency while
maintaining detection accuracy.
The VT-VOD50 dataset consists of 50 pairs of challenging RGBT video sequences
with complex backgrounds, various objects and different illuminations, which
are collected in real traffic scenarios. Extensive experiments on VT-VOD50
dataset demonstrate the effectiveness and efficiency of our proposed method
against existing mainstream VOD methods. The code of EINet and the dataset will
be released to the public for free academic usage.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 09:04:48 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Tu",
"Zhengzheng",
""
],
[
"Wang",
"Qishun",
""
],
[
"Wang",
"Hongshun",
""
],
[
"Wang",
"Kunpeng",
""
],
[
"Li",
"Chenglong",
""
]
] |
new_dataset
| 0.994002 |
2308.01648
|
Yu Ishihara
|
Yu Ishihara, Yuichi Hazama, Kousuke Suzuki, Jerry Jun Yokono, Kohtaro
Sabe, Kenta Kawamoto
|
Improving Wind Resistance Performance of Cascaded PID Controlled
Quadcopters using Residual Reinforcement Learning
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wind resistance control is an essential feature for quadcopters to maintain
their position, avoid deviation from the target position, and prevent
collisions with obstacles. Conventionally, a cascaded PID controller is used to
control quadcopters because of its simplicity and the ease of tuning its
parameters. However, it is weak against wind disturbances, and the quadcopter
can easily deviate from the target position. In this work, we propose a
residual reinforcement learning based approach to build a wind resistance
controller for a quadcopter. By learning only the residual that compensates for
the disturbance, we can continue using the cascaded PID controller as the base
controller of the quadcopter while improving its performance against wind
disturbances. To avoid unexpected crashes and destruction of quadcopters, our
method does not require real hardware for data collection and training. The
controller is trained only in a simulator and directly applied to the target
hardware without an extra fine-tuning process. We demonstrate the effectiveness
of our approach through various experiments, including an experiment in an
outdoor scene with wind speeds greater than 13 m/s. Despite its simplicity, our
controller reduces the position deviation by approximately 50% compared to the
quadcopter controlled with the conventional cascaded PID controller.
Furthermore, the trained controller is robust and preserves its performance
even when the quadcopter's mass and the propeller's lift coefficient are
changed to between 50% and 150% of their values at training time.
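
The core idea, adding a learned residual on top of the PID command, fits in a
few lines. The sketch below is schematic, with placeholder gains and a stubbed
policy network rather than the trained controller:

    class PID:
        """Single PID loop; the paper cascades several (gains are placeholders)."""
        def __init__(self, kp=1.0, ki=0.1, kd=0.05):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_err = 0.0

        def command(self, err, dt=0.01):
            self.integral += err * dt
            deriv = (err - self.prev_err) / dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    def residual_policy(obs):
        # stub for the network trained with RL in simulation; it outputs only
        # the correction that the PID cascade fails to provide under wind
        return 0.0

    pid = PID()
    err = 0.3  # position error, e.g. induced by a wind gust
    u = pid.command(err) + residual_policy(err)  # base command + learned residual
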
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 09:29:19 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Ishihara",
"Yu",
""
],
[
"Hazama",
"Yuichi",
""
],
[
"Suzuki",
"Kousuke",
""
],
[
"Yokono",
"Jerry Jun",
""
],
[
"Sabe",
"Kohtaro",
""
],
[
"Kawamoto",
"Kenta",
""
]
] |
new_dataset
| 0.986681 |
2308.01650
|
Siyang Leng
|
Minhao Zou, Zhongxue Gan, Yutong Wang, Junheng Zhang, Dongyan Sui,
Chun Guan, Siyang Leng
|
UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node
Classification
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph and hypergraph representation learning has attracted increasing
attention from various research fields. Despite the decent performance and
fruitful applications of Graph Neural Networks (GNNs), Hypergraph Neural
Networks (HGNNs), and their well-designed variants, on some commonly used
benchmark graphs and hypergraphs, they are outperformed by even a simple
Multi-Layer Perceptron. This observation motivates a reexamination of the
design paradigm of current GNNs and HGNNs and poses the challenge of
extracting graph features effectively. In this work, a universal feature
encoder for both graph and hypergraph representation learning is designed,
called UniG-Encoder. The architecture starts with a forward transformation of
the topological relationships of connected nodes into edge or hyperedge
features via a normalized projection matrix. The resulting edge/hyperedge
features, together with the original node features, are fed into a neural
network. The encoded node embeddings are then derived from the reversed
transformation, described by the transpose of the projection matrix, of the
network's output, which can be further used for tasks such as node
classification. The proposed architecture, in contrast to the traditional
spectral-based and/or message passing approaches, simultaneously and
comprehensively exploits the node features and graph/hypergraph topologies in
an efficient and unified manner, covering both heterophilic and homophilic
graphs. The designed projection matrix, encoding the graph features, is
intuitive and interpretable. Extensive experiments are conducted and
demonstrate the superior performance of the proposed framework on twelve
representative hypergraph datasets and six real-world graph datasets, compared
to the state-of-the-art methods. Our implementation is available online at
https://github.com/MinhZou/UniG-Encoder.
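
The projection idea can be sketched in NumPy. The incidence matrix, its column
normalization, the stand-in single-layer network, and the final combination
below are illustrative assumptions; the exact formulation is given in the paper
and repository.

    import numpy as np

    rng = np.random.default_rng(0)

    # toy hypergraph: 4 nodes, 2 hyperedges; H[i, e] = 1 iff node i is in hyperedge e
    H = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
    P = H / H.sum(axis=0, keepdims=True)  # assumed normalization: columns sum to 1

    X = rng.standard_normal((4, 8))       # node features
    E = P.T @ X                           # forward transformation: hyperedge features

    # stand-in for the neural network: one weight matrix over stacked node+edge rows
    W = rng.standard_normal((8, 8))
    Y = np.tanh(np.vstack([X, E]) @ W)

    # reversed transformation: the projection transpose maps edge rows back to nodes
    Y_nodes, Y_edges = Y[:4], Y[4:]
    node_embeddings = Y_nodes + P @ Y_edges  # assumed combination for illustration
    print(node_embeddings.shape)             # (4, 8)
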
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 09:32:50 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Zou",
"Minhao",
""
],
[
"Gan",
"Zhongxue",
""
],
[
"Wang",
"Yutong",
""
],
[
"Zhang",
"Junheng",
""
],
[
"Sui",
"Dongyan",
""
],
[
"Guan",
"Chun",
""
],
[
"Leng",
"Siyang",
""
]
] |
new_dataset
| 0.992863 |
2308.01672
|
Shixin Chen
|
Shixin Chen, Shanyi Li, Zhen Zhuang, Su Zheng, Zheng Liang, Tsung-Yi
Ho, Bei Yu, Alberto L. Sangiovanni-Vincentelli
|
Floorplet: Performance-aware Floorplan Framework for Chiplet Integration
|
9 pages, 10 figures
| null | null | null |
cs.AR cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A chiplet is an integrated circuit that encompasses a well-defined subset of
an overall system's functionality. In contrast to traditional monolithic
system-on-chips (SoCs), chiplet-based architecture can reduce costs and
increase reusability, representing a promising avenue for continuing Moore's
Law. Despite the advantages of multi-chiplet architectures, floorplan design in
a chiplet-based architecture has received limited attention. Conflicts between
cost and performance necessitate a trade-off in chiplet floorplan design since
additional latency introduced by advanced packaging can decrease performance.
Consequently, balancing power, performance, cost, area, and reliability is of
paramount importance. To address this challenge, we propose Floorplet, a
framework comprising simulation tools for performance reporting and
comprehensive models for cost and reliability optimization. Our framework
employs the open-source Gem5 simulator to establish the relationship between
performance and floorplan for the first time, guiding the floorplan
optimization of multi-chiplet architecture. The experimental results show that
our framework decreases inter-chiplet communication costs by 24.81%.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 10:20:58 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Chen",
"Shixin",
""
],
[
"Li",
"Shanyi",
""
],
[
"Zhuang",
"Zhen",
""
],
[
"Zheng",
"Su",
""
],
[
"Liang",
"Zheng",
""
],
[
"Ho",
"Tsung-Yi",
""
],
[
"Yu",
"Bei",
""
],
[
"Sangiovanni-Vincentelli",
"Alberto L.",
""
]
] |
new_dataset
| 0.99893 |
2308.01725
|
Iana Zhura
|
Iana Zhura, Denis Davletshin, Nipun Dhananjaya Weerakkodi Mudalige,
Aleksey Fedoseev, Robinroy Peter and Dzmitry Tsetserukou
|
NeuroSwarm: Multi-Agent Neural 3D Scene Reconstruction and Segmentation
with UAV for Optimal Navigation of Quadruped Robot
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quadruped robots have the distinct ability to adapt their body and step
height to navigate through cluttered environments. Nonetheless, for these
robots to utilize their full potential in real-world scenarios, they require
awareness of their environment and obstacle geometry. We propose a novel
multi-agent robotic system that incorporates cutting-edge technologies. The
proposed solution features a 3D neural reconstruction algorithm that enables
navigation of a quadruped robot in both static and semi-static environments.
The prior areas of the environment are also segmented according to the
quadruped robots' abilities to pass them. Moreover, we have developed an
adaptive neural field optimal motion planner (ANFOMP) that considers both
collision probability and obstacle height in 2D space. Our new navigation and
mapping approach enables quadruped robots to adjust their height and behavior
to navigate under arches and push through obstacles with smaller dimensions.
The multi-agent mapping operation has proven to be highly accurate, with an
obstacle reconstruction precision of 82%. Moreover, the quadruped robot can
navigate with 3D obstacle information and the ANFOMP system, resulting in a
33.3% reduction in path length and a 70% reduction in navigation time.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 12:33:17 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Zhura",
"Iana",
""
],
[
"Davletshin",
"Denis",
""
],
[
"Mudalige",
"Nipun Dhananjaya Weerakkodi",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Peter",
"Robinroy",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.996701 |
2308.01734
|
Xiangyu Peng
|
Zexin Chen, Eric Zhou, Kenneth Eaton, Xiangyu Peng, Mark Riedl
|
Ambient Adventures: Teaching ChatGPT on Developing Complex Stories
| null | null | null | null |
cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Imaginative play is an area of creativity that could allow robots to engage
with the world around them in a much more personified way. Imaginary play can
be seen as taking real objects and locations and using them as imaginary
objects and locations in virtual scenarios. We adopted the story generation
capability of large language models (LLMs) to obtain the stories used for
imaginary play with human-written prompts. Those generated stories will be
simplified and mapped into action sequences that can guide the agent in
imaginary play. To evaluate whether the agent can successfully finish the
imaginary play, we also designed a text adventure game to simulate a house as
the playground for the agent to interact with.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 12:52:49 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Chen",
"Zexin",
""
],
[
"Zhou",
"Eric",
""
],
[
"Eaton",
"Kenneth",
""
],
[
"Peng",
"Xiangyu",
""
],
[
"Riedl",
"Mark",
""
]
] |
new_dataset
| 0.99963 |
2308.01751
|
Alexander Vieth
|
Alexander Vieth, Thomas Kroes, Julian Thijssen, Baldur van Lew, Jeroen
Eggermont, Soumyadeep Basu, Elmar Eisemann, Anna Vilanova, Thomas H\"ollt,
Boudewijn Lelieveldt
|
ManiVault: A Flexible and Extensible Visual Analytics Framework for
High-Dimensional Data
|
11 pages paper (incl. 2 pages references and acknowledgements), 2
pages supplement
| null | null | null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Exploration and analysis of high-dimensional data are important tasks in many
fields that produce large and complex data, like the financial sector, systems
biology, or cultural heritage. Tailor-made visual analytics software is
developed for each specific application, limiting their applicability in other
fields. However, as diverse as these fields are, their characteristics and
requirements for data analysis are conceptually similar. Many applications
share abstract tasks and data types and are often constructed with similar
building blocks. Developing such applications, even when based mostly on
existing building blocks, requires significant engineering efforts. We
developed ManiVault, a flexible and extensible open-source visual analytics
framework for analyzing high-dimensional data. The primary objective of
ManiVault is to facilitate rapid prototyping of visual analytics workflows for
visualization software developers and practitioners alike. ManiVault is built
using a plugin-based architecture that offers easy extensibility. While our
architecture deliberately keeps plugins self-contained, to guarantee maximum
flexibility and re-usability, we have designed and implemented a messaging API
for tight integration and linking of modules to support common visual analytics
design patterns. We provide several visualization and analytics plugins, and
ManiVault's API makes the integration of new plugins easy for developers.
ManiVault facilitates the distribution of visualization and analysis pipelines
and results for practitioners through saving and reproducing complete
application states. As such, ManiVault can be used as a communication tool
among researchers to discuss workflows and results. A copy of this paper and
all supplemental material is available at https://osf.io/9k6jw and source code
at https://github.com/ManiVaultStudio.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 13:22:05 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Vieth",
"Alexander",
""
],
[
"Kroes",
"Thomas",
""
],
[
"Thijssen",
"Julian",
""
],
[
"van Lew",
"Baldur",
""
],
[
"Eggermont",
"Jeroen",
""
],
[
"Basu",
"Soumyadeep",
""
],
[
"Eisemann",
"Elmar",
""
],
[
"Vilanova",
"Anna",
""
],
[
"Höllt",
"Thomas",
""
],
[
"Lelieveldt",
"Boudewijn",
""
]
] |
new_dataset
| 0.984398 |
2308.01779
|
Wentong Li
|
Wentong Li, Yuqian Yuan, Song Wang, Jianke Zhu, Jianshu Li, Jian Liu,
Lei Zhang
|
Point2Mask: Point-supervised Panoptic Segmentation via Optimal Transport
|
14 pages, 8 figures, ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly-supervised image segmentation has recently attracted increasing
research attention, aiming to avoid expensive pixel-wise labeling. In this
paper, we present an effective method, namely Point2Mask, to achieve
high-quality panoptic prediction using only a single random point annotation
per target for training. Specifically, we formulate the panoptic pseudo-mask
generation as an Optimal Transport (OT) problem, where each ground-truth (gt)
point label and pixel sample are defined as the label supplier and consumer,
respectively. The transportation cost is calculated by the introduced
task-oriented maps, which focus on the category-wise and instance-wise
differences among the various thing and stuff targets. Furthermore, a
centroid-based scheme is proposed to set the accurate unit number for each gt
point supplier. Hence, the pseudo-mask generation is converted into finding the
optimal transport plan at a globally minimal transportation cost, which can be
solved via the Sinkhorn-Knopp Iteration. Experimental results on Pascal VOC and
COCO demonstrate the promising performance of our proposed Point2Mask approach
to point-supervised panoptic segmentation. Source code is available at:
https://github.com/LiWentomng/Point2Mask.
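
The transport plan itself is computed with the standard Sinkhorn-Knopp
iteration. Below is a generic NumPy sketch of that iteration; the task-oriented
cost maps and per-supplier unit numbers of the paper are replaced by random toy
values.

    import numpy as np

    def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
        """Entropy-regularized OT: plan P with row sums a (suppliers), col sums b."""
        K = np.exp(-cost / eps)  # Gibbs kernel
        u, v = np.ones_like(a), np.ones_like(b)
        for _ in range(n_iters):
            u = a / (K @ v)      # rescale rows toward supplier masses
            v = b / (K.T @ u)    # rescale columns toward consumer masses
        return u[:, None] * K * v[None, :]

    rng = np.random.default_rng(0)
    cost = rng.random((3, 5))       # toy cost: 3 gt-point suppliers vs 5 pixel consumers
    a, b = np.full(3, 1 / 3), np.full(5, 1 / 5)
    P = sinkhorn(cost, a, b)
    print(P.sum(axis=1), P.sum(axis=0))  # marginals match a and b (up to convergence)
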
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 14:11:56 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Li",
"Wentong",
""
],
[
"Yuan",
"Yuqian",
""
],
[
"Wang",
"Song",
""
],
[
"Zhu",
"Jianke",
""
],
[
"Li",
"Jianshu",
""
],
[
"Liu",
"Jian",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.994676 |
2308.01783
|
Tanjila Mawla
|
Tanjila Mawla, Maanak Gupta, Safwa Ameer, Ravi Sandhu
|
The ACAC_D Model for Mutable Activity Control and Chain of Dependencies
in Smart and Collaborative Systems
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the integration of connected devices, artificial intelligence, and
heterogeneous networks in IoT-driven cyber-physical systems, our society is
evolving as a smart, automated, and connected community. In such dynamic and
distributed environments, various operations are carried out considering
different contextual factors to support the automation of collaborative devices
and systems. These devices often perform long-lived operations or tasks
(referred to as activities) to fulfill larger goals in the collaborative
environment. These activities are usually mutable (change states) and
interdependent. They can influence the execution of other activities in the
ecosystem, requiring active and real-time monitoring of the entire connected
environment.
Recently, a vision for activity-centric access control (ACAC) was proposed to
enable security modeling and enforcement from the perspective and abstraction
of interdependent activities. The proposed ACAC incorporates four decision
parameters: Authorizations (A), oBligations (B), Conditions (C), and activity
Dependencies (D) for object-agnostic access control in smart systems. In this
paper, we take a step further towards maturing ACAC by focusing on activity
dependencies(D) and developing a family of formal mathematically grounded
models, referred to as ACAC_D. These formal models consider the real-time
mutability of activities in resolving active dependencies among various
activities in the ecosystem. Activity dependencies can form a chain where it is
possible to have dependencies of dependencies. In ACAC, we also consider the
chain of dependencies while handling the mutability of an activity. We
highlight the challenges while dealing with chain of dependencies, and provide
solutions to resolve these challenges. We also present a proof-of-concept
implementation with performance analysis for a smart farming use case.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 14:20:50 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Mawla",
"Tanjila",
""
],
[
"Gupta",
"Maanak",
""
],
[
"Ameer",
"Safwa",
""
],
[
"Sandhu",
"Ravi",
""
]
] |
new_dataset
| 0.993677 |
2308.01802
|
Hai Lin
|
Hai Lin, Jinhong Yuan, Wei Yu, Jingxian Wu, Lajos Hanzo
|
Multi-Carrier Modulation: An Evolution from Time-Frequency Domain to
Delay-Doppler Domain
|
This paper has been submitted to the IEEE for possible publication.
The supplementary material of this work will be posted at
https://www.omu.ac.jp/eng/ees-sic/oddm/
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The recently proposed orthogonal delay-Doppler division multiplexing (ODDM)
modulation, which is based on the new delay-Doppler (DD) domain orthogonal
pulse (DDOP), is studied. A substantial benefit of the DDOP-based ODDM or
general delay-Doppler domain multi-carrier (DDMC) modulation is that it
achieves orthogonality with respect to the fine time and frequency resolutions
of the DD domain. We first revisit the family of wireless channel models
conceived for linear time-varying (LTV) channels, and then review the
conventional multi-carrier (MC) modulation schemes and their design guidelines
for both linear time-invariant (LTI) and LTV channels. Then we discuss the
time-varying property of the LTV channels' DD domain impulse response and
propose an impulse function based transmission strategy for equivalent sampled
DD domain (ESDD) channels. Next, we take an in-depth look into the DDOP and the
corresponding ODDM modulation to unveil its unique input-output relation for
transmission over ESDD channels. Then, we point out that the conventional MC
modulation design guidelines based on the Weyl-Heisenberg (WH) frame theory can
be relaxed without compromising its orthogonality or without violating the WH
frame theory. More specifically, for a communication system having given
bandwidth and duration, MC modulation signals can be designed based on a WH
subset associated with sufficient (bi)orthogonality, which governs the
(bi)orthogonality of the MC signal within the bandwidth and duration. This
novel design guideline could potentially open up opportunities for developing
future waveforms required by new applications such as communication systems
associated with high delay and/or Doppler shifts, as well as integrated sensing
and communications, etc.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 15:03:40 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Lin",
"Hai",
""
],
[
"Yuan",
"Jinhong",
""
],
[
"Yu",
"Wei",
""
],
[
"Wu",
"Jingxian",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.995039 |
2308.01857
|
Wenxing Zhu
|
Xingquan Li, Simin Tao, Zengrong Huang, Shijian Chen, Zhisheng Zeng,
Liwei Ni, Zhipeng Huang, Chunan Zhuang, Hongxi Wu, Weiguo Li, Xueyan Zhao,
He Liu, Shuaiying Long, Wei He, Bojun Liu, Sifeng Gan, Zihao Yu, Tong Liu,
Yuchi Miao, Zhiyuan Yan, Hao Wang, Jie Zhao, Yifan Li, Ruizhi Liu, Xiaoze
Lin, Bo Yang, Zhen Xue, Fuxing Huang, Zonglin Yang, Zhenggang Wu, Jiangkao
Li, Yuezuo Liu, Ming Peng, Yihang Qiu, Wenrui Wu, Zheqing Shao, Kai Mo,
Jikang Liu, Yuyao Liang, Mingzhe Zhang, Zhuang Ma, Xiang Cong, Daxiang Huang,
Guojie Luo, Huawei Li, Haihua Shen, Mingyu Chen, Dongbo Bu, Wenxing Zhu, Ye
Cai, Xiaoming Xiong, Ying Jiang, Yi Heng, Peng Zhang, Biwei Xie, Yungang Bao
|
iEDA: An Open-Source Intelligent Physical Implementation Toolkit and
Library
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Open-source EDA shows promising potential in unleashing EDA innovation and
lowering the cost of chip design. This paper presents an open-source EDA
project, iEDA, aiming to build a basic infrastructure for EDA technology
evolution and closing the industrial-academic gap in the EDA area. iEDA now
covers the whole flow of physical design (including Floorplan, Placement, CTS,
Routing, Timing Optimization etc.), and part of the analysis tools (Static
Timing Analysis and Power Analysis). To demonstrate the effectiveness of iEDA,
we implement and tape out three chips of different scales (from 700k to 1.5M
gates) on different process nodes (110nm and 28nm) with iEDA. iEDA is publicly
available from the project home page http://ieda.oscc.cc.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 16:24:04 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Li",
"Xingquan",
""
],
[
"Tao",
"Simin",
""
],
[
"Huang",
"Zengrong",
""
],
[
"Chen",
"Shijian",
""
],
[
"Zeng",
"Zhisheng",
""
],
[
"Ni",
"Liwei",
""
],
[
"Huang",
"Zhipeng",
""
],
[
"Zhuang",
"Chunan",
""
],
[
"Wu",
"Hongxi",
""
],
[
"Li1",
"Weiguo",
""
],
[
"Zhao",
"Xueyan",
""
],
[
"Liu",
"He",
""
],
[
"Long",
"Shuaiying",
""
],
[
"He",
"Wei",
""
],
[
"Liu",
"Bojun",
""
],
[
"Gan",
"Sifeng",
""
],
[
"Yu",
"Zihao",
""
],
[
"Liu",
"Tong",
""
],
[
"Miao",
"Yuchi",
""
],
[
"Yan",
"Zhiyuan",
""
],
[
"Wang",
"Hao",
""
],
[
"Zhao",
"Jie",
""
],
[
"Li",
"Yifan",
""
],
[
"Liu",
"Ruizhi",
""
],
[
"Lin",
"Xiaoze",
""
],
[
"Yang",
"Bo",
""
],
[
"Xue",
"Zhen",
""
],
[
"Huang",
"Fuxing",
""
],
[
"Yang",
"Zonglin",
""
],
[
"Wu",
"Zhenggang",
""
],
[
"Li",
"Jiangkao",
""
],
[
"Liu",
"Yuezuo",
""
],
[
"Peng",
"Ming",
""
],
[
"Qiu",
"Yihang",
""
],
[
"Wu",
"Wenrui",
""
],
[
"Shao",
"Zheqing",
""
],
[
"Mo",
"Kai",
""
],
[
"Liu",
"Jikang",
""
],
[
"Liang",
"Yuyao",
""
],
[
"Zhang",
"Mingzhe",
""
],
[
"Ma",
"Zhuang",
""
],
[
"Cong",
"Xiang",
""
],
[
"Huang",
"Daxiang",
""
],
[
"Luo",
"Guojie",
""
],
[
"Li",
"Huawei",
""
],
[
"Shen",
"Haihua",
""
],
[
"Chen",
"Mingyu",
""
],
[
"Bu",
"Dongbo",
""
],
[
"Zhu",
"Wenxing",
""
],
[
"Cai",
"Ye",
""
],
[
"Xiong",
"Xiaoming",
""
],
[
"Jiang",
"Ying",
""
],
[
"Heng",
"Yi",
""
],
[
"Zhang",
"Peng",
""
],
[
"Xie",
"Biwei",
""
],
[
"Bao",
"Yungang",
""
]
] |
new_dataset
| 0.991997 |
2308.01872
|
Mark Riedl
|
Christopher Cui, Xiangyu Peng, Mark Riedl
|
Thespian: Multi-Character Text Role-Playing Game Agents
|
11 pages
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-adventure games and text role-playing games are grand challenges for
reinforcement learning game playing agents. Text role-playing games are
open-ended environments where an agent must faithfully play a particular
character. We consider the distinction between characters and actors, where an
actor agent has the ability to play multiple characters. We present a framework
we call a thespian agent that can learn to emulate multiple characters along
with a soft prompt that can be used to direct it as to which character to play
at any time. We further describe an attention mechanism that allows the agent
to learn new characters that are based on previously learned characters in a
few-shot fashion. We show that our agent outperforms the state-of-the-art agent
framework in multi-character learning and few-shot learning.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 16:53:53 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Cui",
"Christopher",
""
],
[
"Peng",
"Xiangyu",
""
],
[
"Riedl",
"Mark",
""
]
] |
new_dataset
| 0.998581 |
2308.01887
|
Marilyn Walker
|
Omkar Patil, Lena Reed, Kevin K. Bowden, Juraj Juraska, Wen Cui,
Vrindavan Harrison, Rishi Rajasekaran, Angela Ramirez, Cecilia Li, Eduardo
Zamora, Phillip Lee, Jeshwanth Bheemanpally, Rohan Pandey, Adwait
Ratnaparkhi, and Marilyn Walker
|
Athena 2.0: Discourse and User Modeling in Open Domain Dialogue
|
Alexa Prize Proceedings, 2021. Socialbot Grand Challenge 4
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Conversational agents are consistently growing in popularity and many people
interact with them every day. While many conversational agents act as personal
assistants, they can have many different goals. Some are task-oriented, such as
providing customer support for a bank or making a reservation. Others are
designed to be empathetic and to form emotional connections with the user. The
Alexa Prize Challenge aims to create a socialbot, which allows the user to
engage in coherent conversations, on a range of popular topics that will
interest the user. Here we describe Athena 2.0, UCSC's conversational agent for
Amazon's Socialbot Grand Challenge 4. Athena 2.0 utilizes a novel
knowledge-grounded discourse model that tracks the entity links that Athena
introduces into the dialogue, and uses them to constrain named-entity
recognition and linking, and coreference resolution. Athena 2.0 also relies on
a user model to personalize topic selection and other aspects of the
conversation to individual users.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 17:30:39 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Patil",
"Omkar",
""
],
[
"Reed",
"Lena",
""
],
[
"Bowden",
"Kevin K.",
""
],
[
"Juraska",
"Juraj",
""
],
[
"Cui",
"Wen",
""
],
[
"Harrison",
"Vrindavan",
""
],
[
"Rajasekaran",
"Rishi",
""
],
[
"Ramirez",
"Angela",
""
],
[
"Li",
"Cecilia",
""
],
[
"Zamora",
"Eduardo",
""
],
[
"Lee",
"Phillip",
""
],
[
"Bheemanpally",
"Jeshwanth",
""
],
[
"Pandey",
"Rohan",
""
],
[
"Ratnaparkhi",
"Adwait",
""
],
[
"Walker",
"Marilyn",
""
]
] |
new_dataset
| 0.999548 |
2308.01898
|
Ze Yang
|
Ze Yang, Yun Chen, Jingkang Wang, Sivabalan Manivasagam, Wei-Chiu Ma,
Anqi Joyce Yang, Raquel Urtasun
|
UniSim: A Neural Closed-Loop Sensor Simulator
|
CVPR 2023 Highlight. Project page: https://waabi.ai/research/unisim/
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rigorously testing autonomy systems is essential for making safe self-driving
vehicles (SDV) a reality. It requires one to generate safety-critical scenarios
beyond what can be collected safely in the world, as many scenarios happen
rarely on public roads. To accurately evaluate performance, we need to test the
SDV on these scenarios in closed-loop, where the SDV and other actors interact
with each other at each timestep. Previously recorded driving logs provide a
rich resource to build these new scenarios from, but for closed loop
evaluation, we need to modify the sensor data based on the new scene
configuration and the SDV's decisions, as actors might be added or removed and
the trajectories of existing actors and the SDV will differ from the original
log. In this paper, we present UniSim, a neural sensor simulator that takes a
single recorded log captured by a sensor-equipped vehicle and converts it into
a realistic closed-loop multi-sensor simulation. UniSim builds neural feature
grids to reconstruct both the static background and dynamic actors in the
scene, and composites them together to simulate LiDAR and camera data at new
viewpoints, with actors added or removed and at new placements. To better
handle extrapolated views, we incorporate learnable priors for dynamic objects,
and leverage a convolutional network to complete unseen regions. Our
experiments show UniSim can simulate realistic sensor data with small domain
gap on downstream tasks. With UniSim, we demonstrate closed-loop evaluation of
an autonomy system on safety-critical scenarios as if it were in the real
world.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 17:56:06 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Yang",
"Ze",
""
],
[
"Chen",
"Yun",
""
],
[
"Wang",
"Jingkang",
""
],
[
"Manivasagam",
"Sivabalan",
""
],
[
"Ma",
"Wei-Chiu",
""
],
[
"Yang",
"Anqi Joyce",
""
],
[
"Urtasun",
"Raquel",
""
]
] |
new_dataset
| 0.978355 |
2308.01904
|
Yutong Lin
|
Yutong Lin, Yuhui Yuan, Zheng Zhang, Chen Li, Nanning Zheng, Han Hu
|
DETR Doesn't Need Multi-Scale or Locality Design
|
To be published in ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an improved DETR detector that maintains a "plain"
nature: using a single-scale feature map and global cross-attention
calculations without specific locality constraints, in contrast to previous
leading DETR-based detectors that reintroduce architectural inductive biases of
multi-scale and locality into the decoder. We show that two simple technologies
are surprisingly effective within a plain design to compensate for the lack of
multi-scale feature maps and locality constraints. The first is a box-to-pixel
relative position bias (BoxRPB) term added to the cross-attention formulation,
which well guides each query to attend to the corresponding object region while
also providing encoding flexibility. The second is masked image modeling
(MIM)-based backbone pre-training which helps learn representation with
fine-grained localization ability and proves crucial for remedying dependencies
on the multi-scale feature maps. By incorporating these technologies and recent
advancements in training and problem formulation, the improved "plain" DETR
showed exceptional improvements over the original DETR detector. By leveraging
the Object365 dataset for pre-training, it achieved 63.9 mAP accuracy using a
Swin-L backbone, which is highly competitive with state-of-the-art detectors
which all heavily rely on multi-scale feature maps and region-based feature
extraction. Code is available at https://github.com/impiga/Plain-DETR .
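
To make the BoxRPB term concrete, the sketch below builds a per-head bias from
each pixel's offsets to a query box's corners; such a tensor would be added to
the cross-attention logits. The coordinate encoding and the small MLP are
illustrative assumptions rather than the paper's exact parameterization.

    import torch
    import torch.nn as nn

    def box_to_pixel_bias(boxes, pixel_xy, mlp):
        """Schematic BoxRPB: boxes (Q, 4) as (x1, y1, x2, y2), pixel_xy (N, 2).
        Returns a (Q, N, heads) bias to add to the cross-attention logits."""
        rel = torch.cat([
            pixel_xy[None, :, :] - boxes[:, None, :2],  # offsets to top-left corner
            boxes[:, None, 2:] - pixel_xy[None, :, :],  # offsets to bottom-right corner
        ], dim=-1)                                      # (Q, N, 4)
        return mlp(rel)

    Q, N, heads = 3, 16, 8
    mlp = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, heads))
    bias = box_to_pixel_bias(torch.rand(Q, 4), torch.rand(N, 2), mlp)
    print(bias.shape)  # torch.Size([3, 16, 8])
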
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 17:59:04 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Lin",
"Yutong",
""
],
[
"Yuan",
"Yuhui",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Li",
"Chen",
""
],
[
"Zheng",
"Nanning",
""
],
[
"Hu",
"Han",
""
]
] |
new_dataset
| 0.997696 |
2308.01906
|
Nikunj Saunshi
|
Vedant Gaur, Nikunj Saunshi
|
Reasoning in Large Language Models Through Symbolic Math Word Problems
|
Accepted at the Findings of ACL 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have revolutionized NLP by solving downstream
tasks with little to no labeled data. Despite their versatile abilities, the
larger question of their ability to reason remains ill-understood. This paper
addresses reasoning in math word problems (MWPs) by studying symbolic versions
of the numeric problems, since a symbolic expression is a "concise explanation"
of the numeric answer. We create and use a symbolic version of the SVAMP
dataset and find that GPT-3's davinci-002 model also has good zero-shot
accuracy on symbolic MWPs. To evaluate the faithfulness of the model's
reasoning, we go beyond accuracy and additionally evaluate the alignment
between the final answer and the outputted reasoning, which correspond to
numeric and symbolic answers respectively for MWPs. We explore a self-prompting
approach to encourage the symbolic reasoning to align with the numeric answer,
thus equipping the LLM with the ability to provide a concise and verifiable
reasoning and making it more interpretable. Surprisingly, self-prompting also
improves the symbolic accuracy to be higher than both the numeric and symbolic
accuracies, thus providing an ensembling effect. The SVAMP_Sym dataset will be
released for future research on symbolic math problems.
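
The alignment check between symbolic and numeric answers reduces to
substituting the instance values into the symbolic expression. A minimal SymPy
sketch with an invented toy problem:

    import sympy as sp

    # toy MWP: numeric version "5 apples plus 3 more" has numeric answer 8;
    # symbolic version "p apples plus q more" should yield the expression p + q
    p, q = sp.symbols('p q')
    symbolic_answer = p + q   # what the model outputs on the symbolic problem
    numeric_answer = 8        # what the model outputs on the numeric problem

    # aligned iff substituting the instance values recovers the numeric answer
    aligned = sp.simplify(symbolic_answer.subs({p: 5, q: 3}) - numeric_answer) == 0
    print(aligned)  # True
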
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 17:59:27 GMT"
}
] | 2023-08-04T00:00:00 |
[
[
"Gaur",
"Vedant",
""
],
[
"Saunshi",
"Nikunj",
""
]
] |
new_dataset
| 0.997836 |
2209.01072
|
Yibo Liu
|
Yibo Liu, Jinjun Shan, Hunter Schofield
|
Occlusion-Resistant LiDAR Fiducial Marker Detection
|
7 pages, 11 figures
| null | null | null |
cs.CV cs.RO eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The LiDAR fiducial marker, akin to the well-known AprilTag used in camera
applications, serves as a convenient resource to impart artificial features to
the LiDAR sensor, facilitating robotics applications. Unfortunately, current
LiDAR fiducial marker detection methods are limited to occlusion-free point
clouds. In this work, we present a novel approach for occlusion-resistant LiDAR
fiducial marker detection. We first extract 3D points potentially corresponding
to the markers, leveraging the 3D intensity gradients. Afterward, we analyze
the 3D spatial distribution of the extracted points through clustering.
Subsequently, we determine the potential marker locations by examining the
geometric characteristics of these clusters. We then successively transfer the
3D points that fall within the candidate locations from the raw point cloud
onto a designed intermediate plane. Finally, using the intermediate plane, we
validate each location for the presence of a fiducial marker and compute the
marker's pose if found. We conduct both qualitative and quantitative
experiments to demonstrate that our approach is the first LiDAR fiducial marker
detection method applicable to point clouds with occlusion while achieving
better accuracy.
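
The clustering stage of this pipeline can be pictured with an off-the-shelf
density-based clusterer. The sketch below is a toy stand-in, with random points
and generic DBSCAN parameters, rather than the paper's actual analysis of
intensity-gradient points:

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)
    # toy candidate points: two tight 3D clusters, e.g. around two marker locations
    pts = np.vstack([rng.normal(0.0, 0.05, (50, 3)),
                     rng.normal(5.0, 0.05, (50, 3))])
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(pts)
    print(sorted(set(labels)))  # [0, 1]; a label of -1 would flag outlier points
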
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 14:07:25 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Aug 2023 22:44:35 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Liu",
"Yibo",
""
],
[
"Shan",
"Jinjun",
""
],
[
"Schofield",
"Hunter",
""
]
] |
new_dataset
| 0.961062 |
2211.08239
|
Victor Lutfalla
|
Thomas Fernique and Victor Lutfalla
|
Geometrical Penrose Tilings are characterized by their 1-atlas
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rhombus Penrose tilings are tilings of the plane by two decorated rhombi such
that the decoration match at the junction between two tiles (like in a jigsaw
puzzle). In dynamical terms, they form a tiling space of finite type. If we
remove the decorations, we get, by definition, a sofic tiling space that we
here call geometrical Penrose tilings. Here, we show how to compute the
patterns of a given size which appear in these tilings by two different method:
one based on the substitutive structure of the Penrose tilings and the other on
their definition by the cut and projection method. We use this to prove that
the geometrical Penrose tilings are characterized by a small set of patterns
called vertex-atlas, i.e., they form a tiling space of finite type. Though
considered folklore, no complete proof of this result has been published, to our
knowledge.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 15:54:18 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 14:59:08 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Fernique",
"Thomas",
""
],
[
"Lutfalla",
"Victor",
""
]
] |
new_dataset
| 0.974129 |
2302.12086
|
Victor Lutfalla
|
Benjamin Hellouin de Menibus, Victor H. Lutfalla, Camille No\^us
|
The Domino problem is undecidable on every rhombus subshift
|
12 pages + 1 page of appendix
| null |
10.1007/978-3-031-33264-7_9
| null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We extend the classical Domino problem to any tiling of rhombus-shaped tiles.
For any subshift X of edge-to-edge rhombus tilings, such as the Penrose
subshift, we prove that the associated X-Domino problem is $\Pi^0_1$ -hard and
therefore undecidable. It is $\Pi^0_1$ -complete when the subshift X is given
by a computable sequence of forbidden patterns.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 15:20:01 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"de Menibus",
"Benjamin Hellouin",
""
],
[
"Lutfalla",
"Victor H.",
""
],
[
"Noûs",
"Camille",
""
]
] |
new_dataset
| 0.994333 |
2303.05264
|
Kees Middelburg
|
C. A. Middelburg
|
Belnap-Dunn logic and query answering in inconsistent databases with
null values
|
26 pages; revision of v1, presentation improved at several places and
DOIs added to the papers in the references. arXiv admin note: text overlap
with arXiv:2301.10555
| null | null | null |
cs.DB cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper concerns an expansion of first-order Belnap-Dunn logic, named
$\mathrm{BD}^{\supset,\mathsf{F}}$, and an application of this logic in the
area of relational database theory. The notion of a relational database, the
notion of a query applicable to a relational database, and several notions of
an answer to a query with respect to a relational database are considered from
the perspective of this logic, taking into account that a database may be an
inconsistent database or a database with null values. The chosen perspective
enables among other things the definition of a notion of a consistent answer to
a query with respect to a possibly inconsistent database without resort to
database repairs. For each of the notions of an answer considered, being an
answer to a query with respect to a database of the kind considered is
decidable.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 13:59:27 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Jul 2023 09:41:07 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Middelburg",
"C. A.",
""
]
] |
new_dataset
| 0.990857 |
2303.12653
|
Fenghao Zhu
|
Fenghao Zhu, Bohao Wang, Zhaohui Yang, Chongwen Huang, Zhaoyang Zhang,
George C.Alexandropoulos, Chau Yuen and Merouane Debbah
|
Robust mmWave Beamforming by Self-Supervised Hybrid Deep Learning
| null | null | null | null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Beamforming with large-scale antenna arrays has been widely used in recent
years, and is acknowledged as an important part of 5G and the upcoming 6G. Thus,
various techniques are leveraged to improve its performance, e.g., deep
learning, advanced optimization algorithms, etc. Although its performance in
many previous research scenarios with deep learning is quite attractive, it
usually drops rapidly when the environment or dataset is changed. Therefore,
designing an effective beamforming network with strong robustness is an open
issue for intelligent wireless communications. In this paper, we propose a
robust self-supervised beamforming network and verify it on two different kinds
of datasets with various scenarios. Simulation results show that the proposed
self-supervised network with hybrid learning performs well on both the classic
DeepMIMO and the new WAIR-D dataset, with strong robustness under various
environments. We also present the principle underlying this kind of hybrid
learning, which offers guidance for applying it to further kinds of datasets.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 05:30:53 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 12:20:40 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Zhu",
"Fenghao",
""
],
[
"Wang",
"Bohao",
""
],
[
"Yang",
"Zhaohui",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Yuen",
"Chau",
""
],
[
"Debbah",
"Merouane",
""
]
] |
new_dataset
| 0.988285 |
2304.01463
|
Mohsen Moradi
|
Mohsen Moradi
|
Polarization-Adjusted Convolutional (PAC) Codes as a Concatenation of
Inner Cyclic and Outer Polar- and Reed-Muller-like Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polarization-adjusted convolutional (PAC) codes are a new family of linear
block codes that can perform close to the theoretical bounds in the short
block-length regime. These codes combine polar coding and convolutional coding.
In this study, we show that PAC codes are equivalent to a new class of codes
consisting of inner cyclic codes and outer polar- and Reed-Muller-like codes.
We leverage the properties of cyclic codes to establish that PAC codes
outperform polar- and Reed-Muller-like codes in terms of minimum distance.
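
A PAC encoder is a rate-1 convolutional precoder followed by the polar
transform. The sketch below shows that pipeline; the impulse response and input
word are illustrative toys, not the rate profile and convolution polynomial
used in practice:

    import numpy as np

    def conv_precode(v, c=(1, 0, 1, 1)):
        """Rate-1 convolution over GF(2) with an illustrative impulse response c."""
        u = np.zeros(len(v), dtype=int)
        for i in range(len(v)):
            for j, cj in enumerate(c):
                if cj and i - j >= 0:
                    u[i] ^= v[i - j]
        return u

    def polar_transform(u):
        """Kronecker power of F = [[1, 0], [1, 1]] over GF(2), in butterfly form."""
        x = u.copy()
        half = 1
        while half < len(x):
            for i in range(0, len(x), 2 * half):
                x[i:i + half] ^= x[i + half:i + 2 * half]
            half *= 2
        return x

    v = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # rate-profiled data word (toy)
    codeword = polar_transform(conv_precode(v))
    print(codeword)
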
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 02:05:30 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 17:09:08 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Moradi",
"Mohsen",
""
]
] |
new_dataset
| 0.998088 |
2305.13425
|
Aidan Barbieux
|
Aidan Barbieux, Rodrigo Canaan
|
EINCASM: Emergent Intelligence in Neural Cellular Automaton Slime Molds
|
Extended Abstract for the 2023 ALife conference. 2 Pages, 1 Figure
| null | null | null |
cs.NE cs.AI cs.CY cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents EINCASM, a prototype system employing a novel framework
for studying emergent intelligence in organisms resembling slime molds. EINCASM
evolves neural cellular automata with NEAT to maximize cell growth constrained
by nutrient and energy costs. These organisms capitalize on physically simulated
fluid to transport nutrients and chemical-like signals to orchestrate growth
and adaptation to complex, changing environments. Our framework builds the
foundation for studying how the presence of puzzles, physics, communication,
competition and dynamic open-ended environments contribute to the emergence of
intelligent behavior. We propose preliminary tests for intelligence in such
organisms and suggest future work for more powerful systems employing EINCASM
to better understand intelligence in distributed dynamical systems.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 19:15:55 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Barbieux",
"Aidan",
""
],
[
"Canaan",
"Rodrigo",
""
]
] |
new_dataset
| 0.997186 |
2305.15386
|
Kaushal Bhogale
|
Kaushal Santosh Bhogale, Sai Sundaresan, Abhigyan Raman, Tahir Javed,
Mitesh M. Khapra, Pratyush Kumar
|
Vistaar: Diverse Benchmarks and Training Sets for Indian Language ASR
|
Accepted in INTERSPEECH 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Improving ASR systems is necessary to make new LLM-based use-cases accessible
to people across the globe. In this paper, we focus on Indian languages, and
make the case that diverse benchmarks are required to evaluate and improve ASR
systems for Indian languages. To address this, we collate Vistaar as a set of
59 benchmarks across various language and domain combinations, on which we
evaluate 3 publicly available ASR systems and 2 commercial systems. We also
train IndicWhisper models by fine-tuning the Whisper models on publicly
available training datasets across 12 Indian languages, totalling 10.7K
hours. We show that IndicWhisper significantly improves on considered ASR
systems on the Vistaar benchmark. Indeed, IndicWhisper has the lowest WER in 39
out of the 59 benchmarks, with an average reduction of 4.1 WER. We open-source
all datasets, code and models.
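
Benchmarks such as Vistaar compare systems by word error rate (WER). Computing
WER for a hypothesis against a reference is a one-liner with the jiwer package;
the strings below are toy examples:

    import jiwer

    reference = "the cat sat on the mat"
    hypothesis = "the cat sat on mat"        # one deleted word
    print(jiwer.wer(reference, hypothesis))  # 1 deletion / 6 words ~ 0.167
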
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 17:46:03 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 13:29:31 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Bhogale",
"Kaushal Santosh",
""
],
[
"Sundaresan",
"Sai",
""
],
[
"Raman",
"Abhigyan",
""
],
[
"Javed",
"Tahir",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Kumar",
"Pratyush",
""
]
] |
new_dataset
| 0.999542 |
2306.00642
|
Sascha Rechenberger
|
Sascha Rechenberger and Thom Fr\"uhwirth
|
FreeCHR: An Algebraic Framework for CHR-Embeddings
|
This is the extended version of a paper presented at the 7th
International Joint Conference on Rules and Reasoning (RuleML+RR 2023); minor
revision of section 5; additional examples, added acknowledgments, minor
changes in section 1 and 5 as well as proofreading
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the framework FreeCHR, which formalizes the embedding of
Constraint Handling Rules (CHR) into a host-language, using the concept of
initial algebra semantics from category theory, to establish a high-level
implementation scheme for CHR, as well as a common formalization for both
theory and practice. We propose a lifting of the syntax of CHR via an
endofunctor in the category Set and a lifting of the operational semantics,
using the free algebra, generated by the endofunctor. We then lift the very
abstract operational semantics of CHR into FreeCHR, and give proofs for
soundness and completeness w.r.t. their original definition.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 13:08:50 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 07:07:30 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Aug 2023 13:35:15 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Rechenberger",
"Sascha",
""
],
[
"Frühwirth",
"Thom",
""
]
] |
new_dataset
| 0.979061 |
2306.10940
|
Ioannis Prapas
|
Ioannis Prapas, Nikolaos Ioannis Bountos, Spyros Kondylatos, Dimitrios
Michail, Gustau Camps-Valls, Ioannis Papoutsis
|
TeleViT: Teleconnection-driven Transformers Improve Subseasonal to
Seasonal Wildfire Forecasting
|
Accepted at the ICCV 2023 workshop on Artificial Intelligence for
Humanitarian Assistance and Disaster Response
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Wildfires are increasingly exacerbated as a result of climate change,
necessitating advanced proactive measures for effective mitigation. It is
important to forecast wildfires weeks and months in advance to plan forest fuel
management, resource procurement and allocation. To achieve such accurate
long-term forecasts at a global scale, it is crucial to employ models that
account for the Earth system's inherent spatio-temporal interactions, such as
memory effects and teleconnections. We propose a teleconnection-driven vision
transformer (TeleViT), capable of treating the Earth as one interconnected
system, integrating fine-grained local-scale inputs with global-scale inputs,
such as climate indices and coarse-grained global variables. Through
comprehensive experimentation, we demonstrate the superiority of TeleViT in
accurately predicting global burned area patterns for various forecasting
windows, up to four months in advance. The gain is especially pronounced in
larger forecasting windows, demonstrating the improved ability of deep learning
models that exploit teleconnections to capture Earth system dynamics. Code
available at https://github.com/Orion-Ai-Lab/TeleViT.
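
A minimal sketch of the core idea, fusing fine local tokens with coarse global
tokens and climate-index tokens in one transformer, follows; shapes, channel
counts, and layer sizes are illustrative, not the paper's configuration.

    import torch
    import torch.nn as nn

    d = 128
    local_proj  = nn.Linear(16 * 16 * 10, d)   # 16x16 patches, 10 local channels
    global_proj = nn.Linear(8 * 8 * 4, d)      # coarse 8x8 patches, 4 channels
    index_proj  = nn.Linear(1, d)              # one token per climate index
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
        num_layers=2)
    head = nn.Linear(d, 1)                     # burned-area logit per local patch

    B, n_local, n_global, n_idx = 2, 64, 16, 6
    tokens = torch.cat([
        local_proj(torch.randn(B, n_local, 16 * 16 * 10)),
        global_proj(torch.randn(B, n_global, 8 * 8 * 4)),
        index_proj(torch.randn(B, n_idx, 1)),
    ], dim=1)                                  # (B, 64 + 16 + 6, d)

    out = encoder(tokens)                      # joint attention across scales
    pred = head(out[:, :n_local])              # predictions for local patches only
    print(pred.shape)                          # torch.Size([2, 64, 1])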
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 14:00:34 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 13:04:50 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Prapas",
"Ioannis",
""
],
[
"Bountos",
"Nikolaos Ioannis",
""
],
[
"Kondylatos",
"Spyros",
""
],
[
"Michail",
"Dimitrios",
""
],
[
"Camps-Valls",
"Gustau",
""
],
[
"Papoutsis",
"Ioannis",
""
]
] |
new_dataset
| 0.996389 |
2306.15550
|
Rian Touchent
|
Rian Touchent, Laurent Romary, Eric de la Clergerie
|
CamemBERT-bio: a Tasty French Language Model Better for your Health
|
refined the terminology used for methodologies, providing more
explicit and descriptive labels; expanded the arguments about methodology in
the paper, offering a more comprehensive discussion and exploration of the
topic; results unchanged
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical data in hospitals are increasingly accessible for research through
clinical data warehouses; however, these documents are unstructured. It is
therefore necessary to extract information from medical reports to conduct
clinical studies. Transfer learning with BERT-like models such as CamemBERT has
allowed major advances, especially for named entity recognition. However, these
models are trained for plain language and are less efficient on biomedical
data. This is why we propose a new French public biomedical dataset on which we
have continued the pre-training of CamemBERT. Thus, we introduce a first
version of CamemBERT-bio, a specialized public model for the French biomedical
domain that shows 2.54 points of F1 score improvement on average on different
biomedical named entity recognition tasks. Our findings demonstrate the success
of continual pre-training from a French model and contrast with recent
proposals on the same domain and language. One of our key contributions
highlights the importance of using a standard evaluation protocol that enables
a clear view of the current state-of-the-art for French biomedical models.
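
A minimal sketch of adapting such an encoder for NER fine-tuning with Hugging
Face transformers follows; the hub id is assumed (verify against the authors'
release) and the label set is a made-up example.

    from transformers import AutoTokenizer, AutoModelForTokenClassification

    labels = ["O", "B-DISO", "I-DISO", "B-CHEM", "I-CHEM"]  # hypothetical tags
    name = "almanach/camembert-bio-base"                    # assumed hub id

    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForTokenClassification.from_pretrained(
        name, num_labels=len(labels))   # classification head is newly initialized

    enc = tok("Le patient présente une pneumopathie sévère.", return_tensors="pt")
    logits = model(**enc).logits        # (1, seq_len, num_labels); the head is
                                        # untrained, so fine-tune on a labelled
                                        # corpus before using the predictions
    print(logits.shape)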
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 15:23:14 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 17:53:45 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Touchent",
"Rian",
""
],
[
"Romary",
"Laurent",
""
],
[
"de la Clergerie",
"Eric",
""
]
] |
new_dataset
| 0.996327 |
2307.07607
|
Shibo Zhao
|
Shibo Zhao, Tianhao Wu, YuanJun Gao, Damanpreet Singh, Rushan Jiang,
Haoxiang Sun, Jay Karhade, Ian Higgins, Chuck Whittaker, Lucas Nogueira,
Tingting Da, Mansi Sarawata, Can Xu, Jiahe Xu, He Yao, Sourojit Saha, Yuheng
Qiu, Chen Wang, Wenshan Wang, Sebastian Scherer
|
SubT-MRS: A Subterranean, Multi-Robot, Multi-Spectral and Multi-Degraded
Dataset for Robust SLAM
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, significant progress has been made in the field of
simultaneous localization and mapping (SLAM) research. However, current
state-of-the-art solutions still struggle with limited accuracy and robustness
in real-world applications. One major reason is the lack of datasets that fully
capture the conditions faced by robots in the wild. To address this problem, we
present SubT-MRS, an extremely challenging real-world dataset designed to push
the limits of SLAM and perception algorithms.
SubT-MRS is a multi-modal, multi-robot dataset collected mainly from
subterranean environments with multi-degraded conditions, including
structureless corridors, varying lighting conditions, and perceptual obscurants
such as smoke and dust. Furthermore, the dataset packages information from a
diverse range of time-synchronized sensors, including LiDAR, visual cameras,
thermal cameras, and IMUs captured using varied vehicular motions like aerial,
legged, and wheeled, to support research in sensor fusion, which is essential
for achieving accurate and robust robotic perception in complex environments.
To evaluate the accuracy of SLAM systems, we also provide a dense 3D model with
sub-centimeter-level accuracy, as well as accurate 6DoF ground truth. Our
benchmarking approach includes several state-of-the-art methods to demonstrate
the challenges our datasets introduce, particularly in the case of
multi-degraded environments.
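
A minimal sketch of the standard accuracy metric such ground truth enables,
absolute trajectory error (ATE), follows; for brevity it assumes the
trajectories are already time-synchronized and expressed in the same frame
(full pipelines also perform SE(3)/Umeyama alignment first), and the
trajectories themselves are synthetic placeholders.

    import numpy as np

    gt  = np.random.rand(100, 3)               # ground-truth positions (placeholder)
    est = gt + 0.01 * np.random.randn(100, 3)  # estimated positions (placeholder)

    errors = np.linalg.norm(est - gt, axis=1)  # per-pose translation error
    ate_rmse = np.sqrt(np.mean(errors ** 2))
    print(f"ATE RMSE: {ate_rmse:.4f} m")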
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 20:05:14 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 15:52:24 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Zhao",
"Shibo",
""
],
[
"Wu",
"Tianhao",
""
],
[
"Gao",
"YuanJun",
""
],
[
"Singh",
"Damanpreet",
""
],
[
"Jiang",
"Rushan",
""
],
[
"Sun",
"Haoxiang",
""
],
[
"Karhade",
"Jay",
""
],
[
"Higgins",
"Ian",
""
],
[
"Whittaker",
"Chuck",
""
],
[
"Nogueira",
"Lucas",
""
],
[
"Da",
"Tingting",
""
],
[
"Sarawata",
"Mansi",
""
],
[
"Xu",
"Can",
""
],
[
"Xu",
"Jiahe",
""
],
[
"Yao",
"He",
""
],
[
"Saha",
"Sourojit",
""
],
[
"Qiu",
"Yuheng",
""
],
[
"Wang",
"Chen",
""
],
[
"Wang",
"Wenshan",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.999574 |
2307.11884
|
Kaki Ryan
|
Kaki Ryan, Matthew Gregoire and Cynthia Sturton
|
Augmented Symbolic Execution for Information Flow in Hardware Designs
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We present SEIF, a methodology that combines static analysis with symbolic
execution to verify and explicate information flow paths in a hardware design.
SEIF begins with a statically built model of the information flow through a
design and uses guided symbolic execution to recognize and eliminate non-flows
with high precision or to find corresponding paths through the design state for
true flows. We evaluate SEIF on two open-source CPUs, an AES core, and the AKER
access control module. SEIF can exhaustively explore 10-12 clock cycles deep in
4-6 seconds on average, and can automatically account for 86-90% of the paths
in the statically built model. Additionally, SEIF can be used to find multiple
violating paths for security properties, providing a new angle for security
verification.
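
A toy sketch of the first half of this style of analysis follows: enumerate
candidate paths through a statically built signal-flow graph, each of which
symbolic execution would then confirm as a true flow or eliminate as a
non-flow. The graph is a made-up example, not a real design.

    from collections import deque

    flows = {   # signal -> signals it may drive (static over-approximation)
        "key": ["rnd"], "rnd": ["bus"], "bus": ["dbg", "out"], "clk": ["bus"],
    }

    def candidate_paths(src, sink):
        paths, q = [], deque([[src]])
        while q:
            path = q.popleft()
            for nxt in flows.get(path[-1], []):
                if nxt in path:
                    continue               # avoid cycles
                if nxt == sink:
                    paths.append(path + [nxt])
                else:
                    q.append(path + [nxt])
        return paths

    # Each path is a hypothesis for symbolic execution to check or refute.
    print(candidate_paths("key", "dbg"))   # [['key', 'rnd', 'bus', 'dbg']]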
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 19:58:59 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 19:44:52 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Ryan",
"Kaki",
""
],
[
"Gregoire",
"Matthew",
""
],
[
"Sturton",
"Cynthia",
""
]
] |
new_dataset
| 0.989951 |
2307.12213
|
Quan Li
|
Yuchen Wu, Yuansong Xu, Shenghan Gao, Xingbo Wang, Wenkai Song,
Zhiheng Nie, Xiaomeng Fan, and Quan Li
|
LiveRetro: Visual Analytics for Strategic Retrospect in Livestream
E-Commerce
|
Accepted by IEEE VIS 2023
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Livestream e-commerce integrates live streaming and online shopping, allowing
viewers to make purchases while watching. However, effective marketing
strategies remain a challenge due to limited empirical research and subjective
biases from the absence of quantitative data. Current tools fail to capture the
interdependence between live performances and feedback. This study identified
computational features, formulated design requirements, and developed
LiveRetro, an interactive visual analytics system. It enables comprehensive
retrospective analysis of livestream e-commerce for streamers, viewers, and
merchandise. LiveRetro employs enhanced visualization and time-series
forecasting models to align performance features and feedback, identifying
influences at channel, merchandise, feature, and segment levels. Through case
studies and expert interviews, the system provides deep insights into the
relationship between live performance and streaming statistics, enabling
efficient strategic analysis from multiple perspectives.
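
A minimal sketch of one alignment primitive such a retrospect system relies on
follows: lagged correlation between a live-performance feature series (e.g.,
speech rate per minute) and a feedback series (e.g., orders per minute). The
series here are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    feature = rng.normal(size=200)
    orders = np.roll(feature, 3) + 0.5 * rng.normal(size=200)  # reacts ~3 steps later

    def lagged_corr(x, y, lag):
        # Correlate x[t] with y[t + lag].
        return np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]

    best = max(range(10), key=lambda k: lagged_corr(feature, orders, k))
    print("feedback lags feature by", best, "steps")   # expected: 3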
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 03:10:05 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 15:22:47 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Wu",
"Yuchen",
""
],
[
"Xu",
"Yuansong",
""
],
[
"Gao",
"Shenghan",
""
],
[
"Wang",
"Xingbo",
""
],
[
"Song",
"Wenkai",
""
],
[
"Nie",
"Zhiheng",
""
],
[
"Fan",
"Xiaomeng",
""
],
[
"Li",
"Quan",
""
]
] |
new_dataset
| 0.987205 |
2307.12730
|
Xiaofeng Mao
|
Xiaofeng Mao, Yuefeng Chen, Yao Zhu, Da Chen, Hang Su, Rong Zhang, Hui
Xue
|
COCO-O: A Benchmark for Object Detectors under Natural Distribution
Shifts
|
Accepted in ICCV2023,
https://github.com/alibaba/easyrobust/tree/main/benchmarks/coco_o
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Practical object detection applications can lose their effectiveness on image
inputs with natural distribution shifts. This problem leads the research
community to pay more attention to the robustness of detectors under
Out-Of-Distribution (OOD) inputs. Existing works construct datasets to
benchmark the detector's OOD robustness for a specific application scenario,
e.g., Autonomous Driving. However, these datasets lack universality and are
hard to benchmark general detectors built on common tasks such as COCO. To give
a more comprehensive robustness assessment, we introduce
COCO-O(ut-of-distribution), a test dataset based on COCO with 6 types of
natural distribution shifts. COCO-O has a large distribution gap with training
data and results in a significant 55.7% relative performance drop on a Faster
R-CNN detector. We leverage COCO-O to conduct experiments on more than 100
modern object detectors to investigate if their improvements are credible or
just over-fitting to the COCO test set. Unfortunately, most classic detectors
from earlier years do not exhibit strong OOD generalization. We further study
how recent breakthroughs in detector architecture design, augmentation, and
pre-training techniques affect robustness. Several empirical findings emerge:
1) Compared with detection head or neck, backbone is the most important part
for robustness; 2) An end-to-end detection transformer design brings no
enhancement, and may even reduce robustness; 3) Large-scale foundation models
have made a great leap on robust object detection. We hope our COCO-O could
provide a rich testbed for robustness study of object detection. The dataset
will be available at
https://github.com/alibaba/easyrobust/tree/main/benchmarks/coco_o.
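
A minimal sketch of the evaluation protocol such a benchmark implies follows:
score one detection-results file against COCO-style annotations (here,
hypothetically, a COCO-O split) with pycocotools, then compare the mAP with the
in-distribution COCO score to get the relative drop. File paths and the
in-distribution mAP value are placeholders.

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    gt = COCO("coco_o_weather_annotations.json")   # placeholder annotation file
    dt = gt.loadRes("detector_results.json")       # placeholder detections

    ev = COCOeval(gt, dt, iouType="bbox")
    ev.evaluate(); ev.accumulate(); ev.summarize()
    map_ood = ev.stats[0]                          # AP@[.50:.95] on the OOD split

    map_id = 0.40                                  # e.g., the detector's COCO val mAP
    print(f"relative drop: {100 * (map_id - map_ood) / map_id:.1f}%")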
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 12:22:19 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 12:10:55 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Mao",
"Xiaofeng",
""
],
[
"Chen",
"Yuefeng",
""
],
[
"Zhu",
"Yao",
""
],
[
"Chen",
"Da",
""
],
[
"Su",
"Hang",
""
],
[
"Zhang",
"Rong",
""
],
[
"Xue",
"Hui",
""
]
] |
new_dataset
| 0.99968 |
2307.14510
|
Yijiong Lin
|
Yijiong Lin, Mauro Comi, Alex Church, Dandan Zhang, Nathan F. Lepora
|
Attention for Robot Touch: Tactile Saliency Prediction for Robust
Sim-to-Real Tactile Control
|
Accepted by IROS 2023
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
High-resolution tactile sensing can provide accurate information about local
contact in contact-rich robotic tasks. However, the deployment of such tasks in
unstructured environments remains under-investigated. To improve the robustness
of tactile robot control in unstructured environments, we propose and study a
new concept: \textit{tactile saliency} for robot touch, inspired by the human
touch attention mechanism from neuroscience and the visual saliency prediction
problem from computer vision. In analogy to visual saliency, this concept
involves identifying key information in tactile images captured by a tactile
sensor. While visual saliency datasets are commonly annotated by humans,
manually labelling tactile images is challenging due to their counterintuitive
patterns. To address this challenge, we propose a novel approach comprising
three interrelated networks: 1) a Contact Depth Network (ConDepNet), which
generates a contact depth map to localize deformation in a real tactile image
that contains target and noise features; 2) a Tactile Saliency Network
(TacSalNet), which predicts a tactile saliency map to describe the target areas
for an input contact depth map; and 3) a Tactile Noise Generator (TacNGen),
which generates noise features to train the TacSalNet. Experimental results in
contact pose estimation and edge-following in the presence of distractors
showcase the accurate prediction of target features from real tactile images.
Overall, our tactile saliency prediction approach gives robust sim-to-real
tactile control in environments with unknown distractors. Project page:
https://sites.google.com/view/tactile-saliency/.
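
A minimal sketch of the pipeline's inference path follows: a contact depth
network feeding a saliency network, each stood in for here by a tiny conv
stack. Architectures and sizes are illustrative only; TacNGen is used solely at
training time to synthesize noise, so it is omitted.

    import torch
    import torch.nn as nn

    def tiny_net(in_ch):
        # Stand-in for ConDepNet / TacSalNet: image in, single-channel map out.
        return nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    con_dep_net = tiny_net(1)    # tactile image -> contact depth map
    tac_sal_net = tiny_net(1)    # contact depth map -> tactile saliency map

    tactile_img = torch.rand(1, 1, 128, 128)    # placeholder sensor frame
    depth = con_dep_net(tactile_img)            # localizes deformation
    saliency = tac_sal_net(depth)               # highlights target features
    print(saliency.shape)                       # torch.Size([1, 1, 128, 128])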
|
[
{
"version": "v1",
"created": "Wed, 26 Jul 2023 21:19:45 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 09:42:58 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Lin",
"Yijiong",
""
],
[
"Comi",
"Mauro",
""
],
[
"Church",
"Alex",
""
],
[
"Zhang",
"Dandan",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.998574 |
2307.16039
|
Viet Lai
|
Viet Dac Lai, Chien Van Nguyen, Nghia Trung Ngo, Thuat Nguyen, Franck
Dernoncourt, Ryan A. Rossi, Thien Huu Nguyen
|
Okapi: Instruction-tuned Large Language Models in Multiple Languages
with Reinforcement Learning from Human Feedback
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A key technology for the development of large language models (LLMs) involves
instruction tuning that helps align the models' responses with human
expectations to realize impressive learning abilities. Two major approaches to
instruction tuning are supervised fine-tuning (SFT) and reinforcement
learning from human feedback (RLHF), which are currently applied to produce the
best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for
research and development efforts, various instruction-tuned open-source LLMs
have also been introduced recently, e.g., Alpaca, Vicuna, to name a few.
However, existing open-source LLMs have only been instruction-tuned for English
and a few popular languages, thus hindering their impacts and accessibility to
many other languages in the world. Among the few very recent works exploring
instruction tuning for LLMs in multiple languages, SFT has been used as the
only approach to instruction-tune LLMs for multiple languages. This has left a
significant gap for fine-tuned LLMs based on RLHF in diverse languages and
raised important questions on how RLHF can boost the performance of
multilingual instruction tuning. To overcome this issue, we present Okapi, the
first system with instruction-tuned LLMs based on RLHF for multiple languages.
Okapi introduces instruction and response-ranked data in 26 diverse languages
to facilitate the experiments and development of future multilingual LLM
research. We also present benchmark datasets to enable the evaluation of
generative LLMs in multiple languages. Our experiments demonstrate the
advantages of RLHF for multilingual instruction over SFT for different base
models and datasets. Our framework and resources are released at
https://github.com/nlp-uoregon/Okapi.
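
A minimal sketch of how response-ranked data is typically consumed in RLHF
follows: train a reward model with a pairwise (Bradley-Terry) loss so preferred
responses score higher than rejected ones. The scorer here is a stand-in linear
head over precomputed features, not an LLM.

    import torch
    import torch.nn as nn

    reward = nn.Linear(768, 1)                       # stand-in reward head
    opt = torch.optim.Adam(reward.parameters(), lr=1e-3)

    chosen = torch.randn(32, 768)    # features of higher-ranked responses
    rejected = torch.randn(32, 768)  # features of lower-ranked responses

    for _ in range(100):
        loss = -torch.nn.functional.logsigmoid(
            reward(chosen) - reward(rejected)).mean()  # pairwise preference loss
        opt.zero_grad(); loss.backward(); opt.step()

    print(float(loss))   # decreases as preferred responses score higher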
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 18:01:46 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 00:39:25 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Lai",
"Viet Dac",
""
],
[
"Van Nguyen",
"Chien",
""
],
[
"Ngo",
"Nghia Trung",
""
],
[
"Nguyen",
"Thuat",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
new_dataset
| 0.999345 |
2307.16125
|
Bohao Li
|
Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan
|
SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension
|
Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K
multiple-choice questions with accurate human annotations (6x larger than
existing benchmarks), spanning 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with ground-truth options derived from
human annotation enable an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability.
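
A minimal sketch of likelihood-based multiple-choice scoring, the kind of
protocol that avoids human or GPT judging, follows: the model's answer is the
option whose text receives the highest length-normalized log probability given
the question. option_logprob below is a placeholder for a real model call.

    import random

    def option_logprob(question, option):
        # Placeholder: replace with the sum of token log-probs from an MLLM.
        random.seed(hash((question, option)) % 10**6)
        return -random.uniform(1.0, 3.0) * len(option.split())

    def answer(question, options):
        scores = {o: option_logprob(question, o) / len(o.split()) for o in options}
        return max(scores, key=scores.get)   # accuracy = fraction matching ground truth

    q = "What is shown in the image?"
    print(answer(q, ["a red apple", "a blue car", "a wooden chair"]))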
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 04:25:16 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 08:02:35 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Li",
"Bohao",
""
],
[
"Wang",
"Rui",
""
],
[
"Wang",
"Guangzhi",
""
],
[
"Ge",
"Yuying",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Shan",
"Ying",
""
]
] |
new_dataset
| 0.999182 |
2307.16773
|
Tianxing Wu
|
Tianxing Wu, Xudong Cao, Yipeng Zhu, Feiyue Wu, Tianling Gong, Yuxiang
Wang, Shenqi Jing
|
AsdKB: A Chinese Knowledge Base for the Early Screening and Diagnosis of
Autism Spectrum Disorder
|
17 pages, Accepted by the Resource Track of ISWC 2023
| null | null | null |
cs.AI cs.CL cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To make knowledge about autism spectrum disorder easy to obtain and to support
its early screening and diagnosis, we create AsdKB, a Chinese knowledge base on
autism spectrum disorder. The knowledge base is built on top of various
sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical
descriptions on mental and behavioural disorders, 2) the diagnostic knowledge
from DSM-5 and different screening tools recommended by social organizations
and medical institutes, and 3) the expert knowledge on professional physicians
and hospitals from the Web. AsdKB contains both ontological and factual
knowledge, and is accessible as Linked Data at https://w3id.org/asdkb/. The
potential applications of AsdKB are question answering, auxiliary diagnosis,
and expert recommendation, and we illustrate them with a prototype which can be
accessed at http://asdkb.org.cn/.
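
A minimal sketch of consuming such Linked Data with rdflib follows; whether the
listed URI dereferences to RDF via content negotiation is an assumption here,
so adjust the URI and format to the published data.

    from rdflib import Graph

    g = Graph()
    g.parse("https://w3id.org/asdkb/")   # assumed to dereference to RDF
    for s, p, o in list(g)[:10]:         # inspect a few triples
        print(s, p, o)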
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 15:40:45 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 08:04:29 GMT"
}
] | 2023-08-03T00:00:00 |
[
[
"Wu",
"Tianxing",
""
],
[
"Cao",
"Xudong",
""
],
[
"Zhu",
"Yipeng",
""
],
[
"Wu",
"Feiyue",
""
],
[
"Gong",
"Tianling",
""
],
[
"Wang",
"Yuxiang",
""
],
[
"Jing",
"Shenqi",
""
]
] |
new_dataset
| 0.999672 |