id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2302.00988
|
Xiaozheng Zheng
|
Xiaozheng Zheng, Chao Wen, Zhou Xue, Pengfei Ren, Jingyu Wang
|
HaMuCo: Hand Pose Estimation via Multiview Collaborative Self-Supervised
Learning
|
Accepted to ICCV 2023. Won first place in the HANDS22 Challenge Task
2. Project page: https://zxz267.github.io/HaMuCo
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in 3D hand pose estimation have shown promising results,
but their effectiveness has primarily relied on the availability of large-scale
annotated datasets, the creation of which is a laborious and costly process. To
alleviate the label-hungry limitation, we propose a self-supervised learning
framework, HaMuCo, that learns a single-view hand pose estimator from
multi-view pseudo 2D labels. However, one of the main challenges of
self-supervised learning is the presence of noisy labels and the ``groupthink''
effect from multiple views. To overcome these issues, we introduce a cross-view
interaction network that distills the single-view estimator by utilizing the
cross-view correlated features and enforcing multi-view consistency to achieve
collaborative learning. Both the single-view estimator and the cross-view
interaction network are trained jointly in an end-to-end manner. Extensive
experiments show that our method can achieve state-of-the-art performance on
multi-view self-supervised hand pose estimation. Furthermore, the proposed
cross-view interaction network can also be applied to hand pose estimation from
multi-view input and outperforms previous methods under the same settings.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 10:13:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 04:51:27 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Zheng",
"Xiaozheng",
""
],
[
"Wen",
"Chao",
""
],
[
"Xue",
"Zhou",
""
],
[
"Ren",
"Pengfei",
""
],
[
"Wang",
"Jingyu",
""
]
] |
new_dataset
| 0.997485 |
2302.12449
|
Yun Zhu
|
Yun Zhu and Jianhao Guo and Siliang Tang
|
SGL-PT: A Strong Graph Learner with Graph Prompt Tuning
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, much effort has been devoted to designing graph self-supervised methods
that yield generalized pre-trained models, which are then adapted to downstream
tasks through fine-tuning. However, there exists an inherent gap between pretext
and downstream graph tasks, which prevents pre-trained models from fully exerting
their ability and can even lead to negative transfer. Meanwhile, prompt tuning has
seen emerging success in natural language processing by aligning pre-training and
fine-tuning with consistent training objectives. In this paper, we identify the
challenges for graph prompt tuning: the first is the lack of a strong and universal
pre-training task across the sundry pre-training methods in the graph domain; the
second lies in the difficulty of designing a consistent training objective for both
pre-training and downstream tasks. To overcome the above obstacles, we propose a
novel framework named SGL-PT which follows the learning strategy ``Pre-train,
Prompt, and Predict''. Specifically, we propose a strong and universal pre-training
task, coined SGL, that acquires the complementary merits of generative and
contrastive self-supervised graph learning. Aiming at the graph classification
task, we unify pre-training and fine-tuning by designing a novel verbalizer-free
prompting function, which reformulates the downstream task in a format similar to
the pretext task. Empirical results show that our method surpasses other baselines
under the unsupervised setting, and our prompt tuning method greatly facilitates
models on biological datasets compared with fine-tuning methods.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 04:31:18 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 08:11:16 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Zhu",
"Yun",
""
],
[
"Guo",
"Jianhao",
""
],
[
"Tang",
"Siliang",
""
]
] |
new_dataset
| 0.992402 |
2302.14325
|
Lun Luo
|
Lun Luo, Shuhang Zheng, Yixuan Li, Yongzhi Fan, Beinan Yu, Siyuan Cao,
Huiliang Shen
|
BEVPlace: Learning LiDAR-based Place Recognition using Bird's Eye View
Images
|
Accepted by ICCV 2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition is a key module for long-term SLAM systems. Current
LiDAR-based place recognition methods usually use representations of point
clouds such as unordered points or range images. These methods achieve high
recall rates of retrieval, but their performance may degrade in the case of
view variation or scene changes. In this work, we explore the potential of a
different representation in place recognition, i.e. bird's eye view (BEV)
images. We observe that the structural contents of BEV images are less
influenced by rotations and translations of point clouds. We validate that,
without any delicate design, a simple VGGNet trained on BEV images achieves
performance comparable to state-of-the-art place recognition methods in
scenes with slight viewpoint changes. For more robust place recognition, we
design a rotation-invariant network called BEVPlace. We use group convolution
to extract rotation-equivariant local features from the images and NetVLAD for
global feature aggregation. In addition, we observe that the distance between
BEV features is correlated with the geometric distance of point clouds. Based on
the observation, we develop a method to estimate the position of the query
cloud, extending the usage of place recognition. The experiments conducted on
large-scale public datasets show that our method 1) achieves state-of-the-art
performance in terms of recall rates, 2) is robust to view changes, 3) shows
strong generalization ability, and 4) can estimate the positions of query point
clouds. Source codes are publicly available at
https://github.com/zjuluolun/BEVPlace.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 05:37:45 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 02:38:54 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Aug 2023 03:44:00 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Luo",
"Lun",
""
],
[
"Zheng",
"Shuhang",
""
],
[
"Li",
"Yixuan",
""
],
[
"Fan",
"Yongzhi",
""
],
[
"Yu",
"Beinan",
""
],
[
"Cao",
"Siyuan",
""
],
[
"Shen",
"Huiliang",
""
]
] |
new_dataset
| 0.980352 |
2303.05234
|
Yang Fu
|
Yang Fu, Shibei Meng, Saihui Hou, Xuecai Hu and Yongzhen Huang
|
GPGait: Generalized Pose-based Gait Recognition
|
ICCV Camera Ready
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent works on pose-based gait recognition have demonstrated the potential
of using such simple information to achieve results comparable to
silhouette-based methods. However, the generalization ability of pose-based
methods on different datasets is undesirably inferior to that of
silhouette-based ones, which has received little attention but hinders the
application of these methods in real-world scenarios. To improve the
generalization ability of pose-based methods across datasets, we propose a
\textbf{G}eneralized \textbf{P}ose-based \textbf{Gait} recognition
(\textbf{GPGait}) framework. First, a Human-Oriented Transformation (HOT) and a
series of Human-Oriented Descriptors (HOD) are proposed to obtain a unified
pose representation with discriminative multi-features. Then, given the slight
variations in the unified representation after HOT and HOD, it becomes crucial
for the network to extract local-global relationships between the keypoints. To
this end, a Part-Aware Graph Convolutional Network (PAGCN) is proposed to
enable efficient graph partition and local-global spatial feature extraction.
Experiments on four public gait recognition datasets, CASIA-B, OUMVLP-Pose,
Gait3D and GREW, show that our model demonstrates better and more stable
cross-domain capabilities compared to existing skeleton-based methods,
achieving comparable recognition results to silhouette-based ones. Code is
available at https://github.com/BNU-IVC/FastPoseGait.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 13:17:13 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 07:32:29 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Fu",
"Yang",
""
],
[
"Meng",
"Shibei",
""
],
[
"Hou",
"Saihui",
""
],
[
"Hu",
"Xuecai",
""
],
[
"Huang",
"Yongzhen",
""
]
] |
new_dataset
| 0.953517 |
2303.05648
|
Qingming Li
|
Qingming Li and H. Vicky Zhao
|
Pacos: Modeling Users' Interpretable and Context-Dependent Choices in
Preference Reversals
|
29 pages, 12 figures
| null |
10.1016/j.knosys.2023.110835
| null |
cs.IR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Choice problems refer to selecting the best choices from several items, and
learning users' preferences in choice problems is of great significance in
understanding the decision making mechanisms and providing personalized
services. Existing works typically assume that people evaluate items
independently. In practice, however, users' preferences depend on the market in
which items are placed, which is known as context effects; and the order of
users' preferences for two items may even be reversed, which is referred to as
preference reversals. In this work, we identify three factors contributing to
context effects: users' adaptive weights, the inter-item comparison, and
display positions. We propose a context-dependent preference model named Pacos
as a unified framework for addressing three factors simultaneously, and
consider two design methods including an additive method with high
interpretability and an ANN-based method with high accuracy. We study the
conditions for preference reversals to occur and provide a theoretical proof
of the effectiveness of Pacos in addressing preference reversals. Experimental
results show that the proposed method has better performance than prior works
in predicting users' choices, and has great interpretability to help understand
the cause of preference reversals.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 01:49:56 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2023 03:40:40 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Li",
"Qingming",
""
],
[
"Zhao",
"H. Vicky",
""
]
] |
new_dataset
| 0.985973 |
2303.06445
|
Mojtaba Esfandiari
|
Soroush Sadeghnejad, Mojtaba Esfandiari and Farshad Khadivar
|
A Virtual-Based Haptic Endoscopic Sinus Surgery (ESS) Training System:
from Development to Validation
| null | null |
10.1016/B978-0-443-18460-4.00002-0
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Simulated training platforms offer a suitable avenue for surgical students
and professionals to build and improve upon their skills, without the hassle of
traditional training methods. To enhance the degree of realistic interaction
paradigms of training simulators, great work has been done both to model
simulated anatomy in a more realistic fashion and to provide appropriate
haptic feedback to the trainee. As such, this chapter seeks to discuss the
ongoing research being conducted on haptic feedback-incorporated simulators
specifically for Endoscopic Sinus Surgery (ESS). This chapter offers a brief
comparative analysis of some ESS simulators, in addition to a deeper
quantitative and qualitative look into our approach to designing and
prototyping a complete virtual-based haptic ESS training platform.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 16:46:57 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Sadeghnejad",
"Soroush",
""
],
[
"Esfandiari",
"Mojtaba",
""
],
[
"Khadivar",
"Farshad",
""
]
] |
new_dataset
| 0.980296 |
2303.18232
|
Ximeng Sun
|
Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko,
Xide Xia
|
DIME-FM: DIstilling Multimodal and Efficient Foundation Models
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Large Vision-Language Foundation Models (VLFM), such as CLIP, ALIGN and
Florence, are trained on large-scale datasets of image-caption pairs and
achieve superior transferability and robustness on downstream tasks, but they
are difficult to use in many practical applications due to their large size,
high latency and fixed architectures. Unfortunately, recent work shows that training
a small custom VLFM for resource-limited applications is currently very
difficult using public and smaller-scale data. In this paper, we introduce a
new distillation mechanism (DIME-FM) that allows us to transfer the knowledge
contained in large VLFMs to smaller, customized foundation models using a
relatively small amount of inexpensive, unpaired images and sentences. We
transfer the knowledge from the pre-trained CLIP-ViTL/14 model to a ViT-B/32
model, with only 40M public images and 28.4M unpaired public sentences. The
resulting model "Distill-ViT-B/32" rivals the CLIP-ViT-B/32 model pre-trained
on its private WiT dataset (400M image-text pairs): Distill-ViT-B/32 achieves
similar results in terms of zero-shot and linear-probing performance on both
ImageNet and the ELEVATER (20 image classification tasks) benchmarks. It also
displays comparable robustness when evaluated on five datasets with natural
distribution shifts from ImageNet.
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 17:47:23 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 18:30:40 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Sun",
"Ximeng",
""
],
[
"Zhang",
"Pengchuan",
""
],
[
"Zhang",
"Peizhao",
""
],
[
"Shah",
"Hardik",
""
],
[
"Saenko",
"Kate",
""
],
[
"Xia",
"Xide",
""
]
] |
new_dataset
| 0.996518 |
2304.03251
|
Bjoern Michele
|
Bjoern Michele, Alexandre Boulch, Gilles Puy, Tuan-Hung Vu, Renaud
Marlet, Nicolas Courty
|
SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation
|
Project repository: github.com/valeoai/SALUDA
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning models on one labeled dataset that generalize well on another domain
is a difficult task, as several shifts might happen between the data domains.
This is notably the case for lidar data, for which models can exhibit large
performance discrepancies due for instance to different lidar patterns or
changes in acquisition conditions. This paper addresses the corresponding
Unsupervised Domain Adaptation (UDA) task for semantic segmentation. To
mitigate this problem, we introduce an unsupervised auxiliary task of learning
an implicit underlying surface representation simultaneously on source and
target data. As both domains share the same latent representation, the model is
forced to accommodate discrepancies between the two sources of data. This novel
strategy differs from classical minimization of statistical divergences or
lidar-specific domain adaptation techniques. Our experiments demonstrate that
our method achieves a better performance than the current state of the art,
both in real-to-real and synthetic-to-real scenarios.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 17:36:23 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 12:31:33 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Michele",
"Bjoern",
""
],
[
"Boulch",
"Alexandre",
""
],
[
"Puy",
"Gilles",
""
],
[
"Vu",
"Tuan-Hung",
""
],
[
"Marlet",
"Renaud",
""
],
[
"Courty",
"Nicolas",
""
]
] |
new_dataset
| 0.997558 |
2304.11463
|
Samuel Schulter
|
Samuel Schulter, Vijay Kumar B G, Yumin Suh, Konstantinos M. Dafnis,
Zhixing Zhang, Shiyu Zhao, Dimitris Metaxas
|
OmniLabel: A Challenging Benchmark for Language-Based Object Detection
|
ICCV 2023 Oral - Visit our project website at
https://www.omnilabel.org
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language-based object detection is a promising direction towards building a
natural interface to describe objects in images that goes far beyond plain
category names. While recent methods show great progress in that direction,
proper evaluation is lacking. With OmniLabel, we propose a novel task
definition, dataset, and evaluation metric. The task subsumes standard- and
open-vocabulary detection as well as referring expressions. With more than 28K
unique object descriptions on over 25K images, OmniLabel provides a challenging
benchmark with diverse and complex object descriptions in a naturally
open-vocabulary setting. Moreover, a key differentiation to existing benchmarks
is that our object descriptions can refer to one, multiple or even no object,
hence providing negative examples in free-form text. The proposed evaluation
handles the large label space and judges performance via a modified average
precision metric, which we validate by evaluating strong language-based
baselines. OmniLabel indeed provides a challenging test bed for future research
on language-based detection.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 18:35:50 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 21:43:42 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Schulter",
"Samuel",
""
],
[
"G",
"Vijay Kumar B",
""
],
[
"Suh",
"Yumin",
""
],
[
"Dafnis",
"Konstantinos M.",
""
],
[
"Zhang",
"Zhixing",
""
],
[
"Zhao",
"Shiyu",
""
],
[
"Metaxas",
"Dimitris",
""
]
] |
new_dataset
| 0.999299 |
2305.06794
|
Zhiheng Li
|
Zhiheng Li, Yubo Cui, Yu Lin, Zheng Fang
|
MMF-Track: Multi-modal Multi-level Fusion for 3D Single Object Tracking
|
11 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D single object tracking plays a crucial role in computer vision. Mainstream
methods mainly rely on point clouds to achieve geometry matching between target
template and search area. However, textureless and incomplete point clouds make
it difficult for single-modal trackers to distinguish objects with similar
structures. To overcome the limitations of geometry matching, we propose a
Multi-modal Multi-level Fusion Tracker (MMF-Track), which exploits image
texture and the geometric characteristics of point clouds to track the 3D target.
Specifically, we first propose a Space Alignment Module (SAM) to align RGB
images with point clouds in 3D space, which is the prerequisite for
constructing inter-modal associations. Then, at the feature interaction level, we
design a Feature Interaction Module (FIM) based on a dual-stream structure, which
enhances intra-modal features in parallel and constructs inter-modal semantic
associations. Meanwhile, in order to refine each modal feature, we introduce a
Coarse-to-Fine Interaction Module (CFIM) to realize hierarchical feature
interaction at different scales. Finally, at the similarity fusion level, we
propose a Similarity Fusion Module (SFM) to aggregate geometry and texture
clues from the target. Experiments show that our method achieves
state-of-the-art performance on KITTI (39% Success and 42% Precision gains
over the previous multi-modal method) and is also competitive on NuScenes.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 13:34:02 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 03:24:57 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Li",
"Zhiheng",
""
],
[
"Cui",
"Yubo",
""
],
[
"Lin",
"Yu",
""
],
[
"Fang",
"Zheng",
""
]
] |
new_dataset
| 0.998392 |
2307.00360
|
Zuchao Li
|
Zuchao Li, Shitou Zhang, Hai Zhao, Yifei Yang, Dongjie Yang
|
BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained
Transformer
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
BatGPT is a large-scale language model designed and trained jointly by Wuhan
University and Shanghai Jiao Tong University. It is capable of generating
highly natural and fluent text in response to various types of input, including
text prompts, images, and audio. At the modeling level, we employ a
bidirectional autoregressive architecture that allows the model to efficiently
capture the complex dependencies of natural language, making it highly
effective in tasks such as language generation, dialog systems, and question
answering. Moreover, the bidirectional autoregressive modeling not only
operates from left to right but also from right to left, effectively reducing
fixed memory effects and alleviating model hallucinations.
On the training side, we propose a novel parameter expansion method for
leveraging the pre-training of smaller models and employ reinforcement learning
from both AI and human feedback, aimed at improving the model's alignment
performance. Overall, these approaches significantly improve the effectiveness
of BatGPT, and the model can be utilized for a wide range of natural language
applications.
|
[
{
"version": "v1",
"created": "Sat, 1 Jul 2023 15:10:01 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 13:59:42 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Li",
"Zuchao",
""
],
[
"Zhang",
"Shitou",
""
],
[
"Zhao",
"Hai",
""
],
[
"Yang",
"Yifei",
""
],
[
"Yang",
"Dongjie",
""
]
] |
new_dataset
| 0.998523 |
2307.15958
|
Maksym Bekuzarov
|
Maksym Bekuzarov, Ariana Bermudez, Joon-Young Lee, Hao Li
|
XMem++: Production-level Video Segmentation From Few Annotated Frames
|
Accepted to ICCV 2023. 18 pages, 16 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite advancements in user-guided video segmentation, extracting complex
objects consistently for highly complex scenes is still a labor-intensive task,
especially for production. It is not uncommon that a majority of frames need to
be annotated. We introduce a novel semi-supervised video object segmentation
(SSVOS) model, XMem++, that improves existing memory-based models, with a
permanent memory module. Most existing methods focus on single frame
annotations, while our approach can effectively handle multiple user-selected
frames with varying appearances of the same object or region. Our method can
extract highly consistent results while keeping the required number of frame
annotations low. We further introduce an iterative and attention-based frame
suggestion mechanism, which computes the next best frame for annotation. Our
method is real-time and does not require retraining after each user input. We
also introduce a new dataset, PUMaVOS, which covers new challenging use cases
not found in previous benchmarks. We demonstrate SOTA performance on
challenging (partial and multi-class) segmentation scenarios as well as long
videos, while ensuring significantly fewer frame annotations than any existing
method. Project page: https://max810.github.io/xmem2-project-page/
|
[
{
"version": "v1",
"created": "Sat, 29 Jul 2023 11:18:23 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 11:26:36 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Bekuzarov",
"Maksym",
""
],
[
"Bermudez",
"Ariana",
""
],
[
"Lee",
"Joon-Young",
""
],
[
"Li",
"Hao",
""
]
] |
new_dataset
| 0.995366 |
2308.01246
|
Jyotirmaya Shivottam Mr.
|
Jyotirmaya Shivottam and Subhankar Mishra
|
Tirtha -- An Automated Platform to Crowdsource Images and Create 3D
Models of Heritage Sites
|
Accepted at The 28th International ACM Conference on 3D Web
Technology (Web3D 2023)
| null |
10.1145/3611314.3615904
| null |
cs.CV cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Digital preservation of Cultural Heritage (CH) sites is crucial to protect
them against damage from natural disasters or human activities. Creating 3D
models of CH sites has become a popular method of digital preservation thanks
to advancements in computer vision and photogrammetry. However, the process is
time-consuming, expensive, and typically requires specialized equipment and
expertise, posing challenges in resource-limited developing countries.
Additionally, the lack of an open repository for 3D models hinders research and
public engagement with their heritage. To address these issues, we propose
Tirtha, a web platform for crowdsourcing images of CH sites and creating their
3D models. Tirtha utilizes state-of-the-art Structure from Motion (SfM) and
Multi-View Stereo (MVS) techniques. It is modular, extensible and
cost-effective, allowing for the incorporation of new techniques as
photogrammetry advances. Tirtha is accessible through a web interface at
https://tirtha.niser.ac.in and can be deployed on-premise or in a cloud
environment. In our case studies, we demonstrate the pipeline's effectiveness
by creating 3D models of temples in Odisha, India, using crowdsourced images.
These models are available for viewing, interaction, and download on the Tirtha
website. Our work aims to provide a dataset of crowdsourced images and 3D
reconstructions for research in computer vision, heritage conservation, and
related domains. Overall, Tirtha is a step towards democratizing digital
preservation, primarily in resource-limited developing countries.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 16:00:39 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 17:39:05 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Shivottam",
"Jyotirmaya",
""
],
[
"Mishra",
"Subhankar",
""
]
] |
new_dataset
| 0.998849 |
2308.01413
|
Tiezhu Sun
|
Tiezhu Sun, Weiguo Pian, Nadia Daoudi, Kevin Allix, Tegawendé F.
Bissyandé, Jacques Klein
|
LaFiCMIL: Rethinking Large File Classification from the Perspective of
Correlated Multiple Instance Learning
|
12 pages; update results; manuscript revision
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based models, such as BERT, have revolutionized various language
tasks, but still struggle with large file classification due to their input
limit (e.g., 512 tokens). Despite several attempts to alleviate this
limitation, no method consistently excels across all benchmark datasets,
primarily because they can only extract partial essential information from the
input file. Additionally, they fail to adapt to the varied properties of
different types of large files. In this work, we tackle this problem from the
perspective of correlated multiple instance learning. The proposed approach,
LaFiCMIL, serves as a versatile framework applicable to various large file
classification tasks, covering binary, multi-class, and multi-label
classification and spanning various domains including Natural Language
Processing, Programming Language Processing, and Android Analysis. To evaluate
its effectiveness, we employ eight benchmark datasets pertaining to Long
Document Classification, Code Defect Detection, and Android Malware Detection.
Leveraging BERT-family models as feature extractors, our experimental results
demonstrate that LaFiCMIL achieves new state-of-the-art performance across all
benchmark datasets. This is largely attributable to its capability of scaling
BERT up to nearly 20K tokens, running on a single Tesla V100 GPU with 32 GB of
memory.
|
[
{
"version": "v1",
"created": "Sun, 30 Jul 2023 18:47:54 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 12:19:56 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Sun",
"Tiezhu",
""
],
[
"Pian",
"Weiguo",
""
],
[
"Daoudi",
"Nadia",
""
],
[
"Allix",
"Kevin",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Klein",
"Jacques",
""
]
] |
new_dataset
| 0.977641 |
2308.02158
|
Jiaxin Chen
|
Xin Liao and Siliang Chen and Jiaxin Chen and Tianyi Wang and Xiehua
Li
|
CTP-Net: Character Texture Perception Network for Document Image Forgery
Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the progression of information technology in recent years, document
images have been widely disseminated on social networks. With the help of
powerful image editing tools, document images are easily forged without leaving
visible manipulation traces, which leads to severe issues if significant
information is falsified for malicious use. Therefore, document image forensics
is worth exploring further. In this paper, we propose a
Character Texture Perception Network (CTP-Net) to localize the forged regions
in document images. Specifically, considering the characters with semantics in
a document image are highly vulnerable, capturing the forgery traces is the key
to localizing the forged regions. We design a Character Texture Stream (CTS)
based on optical character recognition to capture features of text areas that
are essential components of a document image. Meanwhile, texture features of
the whole document image are exploited by an Image Texture Stream (ITS).
Combining the features extracted from the CTS and the ITS, the CTP-Net can
reveal more subtle forgery traces from document images. Moreover, to overcome
the challenge caused by the lack of fake document images, we design a data
generation strategy that is utilized to construct a Fake Chinese Trademark
dataset (FCTM). Experimental results on different datasets demonstrate that the
proposed CTP-Net is able to localize multi-scale forged areas in document
images and outperforms state-of-the-art forgery localization methods, even
when post-processing operations are applied.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 06:37:28 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 03:45:50 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Liao",
"Xin",
""
],
[
"Chen",
"Siliang",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Li",
"Xiehua",
""
]
] |
new_dataset
| 0.989883 |
2308.07207
|
Mufeng Yao
|
Mufeng Yao, Jiaqi Wang, Jinlong Peng, Mingmin Chi, Chao Liu
|
FOLT: Fast Multiple Object Tracking from UAV-captured Videos Based on
Optical Flow
|
Accepted by ACM Multi-Media 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple object tracking (MOT) has been successfully investigated in computer
vision.
However, MOT for the videos captured by unmanned aerial vehicles (UAV) is
still challenging due to small object size, blurred object appearance, and very
large and/or irregular motion in both ground objects and UAV platforms.
In this paper, we propose FOLT to mitigate these problems and achieve fast and
accurate MOT in the UAV view.
To balance speed and accuracy, FOLT adopts a modern detector and a
lightweight optical flow extractor to extract object detection features and
motion features at a minimum cost.
Given the extracted flow, the flow-guided feature augmentation is designed to
augment the object detection feature based on its optical flow, which improves
the detection of small objects.
Then the flow-guided motion prediction is also proposed to predict the
object's position in the next frame, which improves the tracking performance of
objects with very large displacements between adjacent frames.
Finally, the tracker matches the detected objects and predicted objects using
a spatially matching scheme to generate tracks for every object.
Experiments on Visdrone and UAVDT datasets show that our proposed model can
successfully track small objects with large and irregular motion and outperform
existing state-of-the-art methods in UAV-MOT tasks.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 15:24:44 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Aug 2023 02:59:04 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Yao",
"Mufeng",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Peng",
"Jinlong",
""
],
[
"Chi",
"Mingmin",
""
],
[
"Liu",
"Chao",
""
]
] |
new_dataset
| 0.958429 |
2308.07340
|
Naresh Goud Boddu
|
Rishabh Batra, Naresh Goud Boddu, Rahul Jain
|
Quantum secure non-malleable randomness encoder and its applications
|
arXiv admin note: text overlap with arXiv:2308.06466
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
"Non-Malleable Randomness Encoder"(NMRE) was introduced by Kanukurthi,
Obbattu, and Sekar~[KOS18] as a useful cryptographic primitive helpful in the
construction of non-malleable codes. To the best of our knowledge, their
construction is not known to be quantum secure.
We provide a construction of a first rate-$1/2$, $2$-split, quantum secure
NMRE and use this in a black-box manner, to construct for the first time the
following:
1) rate $1/11$, $3$-split, quantum non-malleable code,
2) rate $1/3$, $3$-split, quantum secure non-malleable code,
3) rate $1/5$, $2$-split, average case quantum secure non-malleable code.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 05:23:44 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Batra",
"Rishabh",
""
],
[
"Boddu",
"Naresh Goud",
""
],
[
"Jain",
"Rahul",
""
]
] |
new_dataset
| 0.992057 |
2308.07346
|
Joseph Ramsey
|
Joseph D. Ramsey, Bryan Andrews
|
Py-Tetrad and RPy-Tetrad: A New Python Interface with R Support for
Tetrad Causal Search
|
Causal Analysis Workshop Series (CAWS) 2023, 12 pages, 4 Figures, 2
Tables
| null | null | null |
cs.MS cs.AI cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We give novel Python and R interfaces for the (Java) Tetrad project for
causal modeling, search, and estimation. The Tetrad project is a mainstay in
the literature, having been under consistent development for over 30 years.
Some of its algorithms are now classics, like PC and FCI; others are recent
developments. It is increasingly the case, however, that researchers need to
access the underlying Java code from Python or R. Existing methods for doing
this are inadequate. We provide new, up-to-date methods using the JPype
Python-Java interface and the Reticulate Python-R interface, directly solving
these issues. With the addition of some simple tools and the provision of
working examples for both Python and R, using JPype and Reticulate to interface
Python and R with Tetrad is straightforward and intuitive.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 16:29:05 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Ramsey",
"Joseph D.",
""
],
[
"Andrews",
"Bryan",
""
]
] |
new_dataset
| 0.965682 |
2308.07391
|
Jiayi Liu
|
Jiayi Liu, Ali Mahdavi-Amiri, Manolis Savva
|
PARIS: Part-level Reconstruction and Motion Analysis for Articulated
Objects
|
Presented at ICCV 2023. Project website:
https://3dlg-hcvc.github.io/paris/
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the task of simultaneous part-level reconstruction and motion
parameter estimation for articulated objects. Given two sets of multi-view
images of an object in two static articulation states, we decouple the movable
part from the static part and reconstruct shape and appearance while predicting
the motion parameters. To tackle this problem, we present PARIS: a
self-supervised, end-to-end architecture that learns part-level implicit shape
and appearance models and optimizes motion parameters jointly without any 3D
supervision, motion, or semantic annotation. Our experiments show that our
method generalizes better across object categories, and outperforms baselines
and prior work that are given 3D point clouds as input. Our approach improves
reconstruction relative to state-of-the-art baselines with a Chamfer-L1
distance reduction of 3.94 (45.2%) for objects and 26.79 (84.5%) for parts, and
achieves 5% error rate for motion estimation across 10 object categories.
Video summary at: https://youtu.be/tDSrROPCgUc
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 18:18:00 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Liu",
"Jiayi",
""
],
[
"Mahdavi-Amiri",
"Ali",
""
],
[
"Savva",
"Manolis",
""
]
] |
new_dataset
| 0.999326 |
2308.07427
|
Jane Hsieh
|
Jane Hsieh, Joselyn Kim, Laura Dabbish, Haiyi Zhu
|
Nip it in the Bud: Moderation Strategies in Open Source Software
Projects and the Role of Bots
| null | null |
10.1145/3610092
| null |
cs.HC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Much of our modern digital infrastructure relies critically upon open-source
software. The communities responsible for building this cyberinfrastructure
require maintenance and moderation, which is often supported by volunteer
efforts. Moderation, as a non-technical form of labor, is a necessary but often
overlooked task that maintainers undertake to sustain the community around an
OSS project. This study examines the various structures and norms that support
community moderation, describes the strategies moderators use to mitigate
conflicts, and assesses how bots can play a role in assisting these processes.
We interviewed 14 practitioners to uncover existing moderation practices and
ways that automation can provide assistance. Our main contributions include a
characterization of moderated content in OSS projects, moderation techniques,
as well as perceptions of and recommendations for improving the automation of
moderation tasks. We hope that these findings will inform the implementation of
more effective moderation practices in open source communities.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 19:42:51 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Hsieh",
"Jane",
""
],
[
"Kim",
"Joselyn",
""
],
[
"Dabbish",
"Laura",
""
],
[
"Zhu",
"Haiyi",
""
]
] |
new_dataset
| 0.967903 |
2308.07449
|
Aiman Soliman
|
Aiman Soliman, Priyam Mazumdar, Aaron Hoyle-Katz, Brian Allan, and
Allison Gardner
|
Integrated dataset for air travel and reported Zika virus cases in
Colombia (Data and Resources Paper)
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This open-access dataset provides consistent records of air travel volumes
between 205 airport catchments in Colombia and the associated number of
reported human cases of Zika virus within these catchments during the arbovirus
outbreak between October 2015 and September 2016. In this dataset, we associated
the monthly air travel volumes provided by the Colombian Civil Aviation
Authority (AEROCIVIL) with the reported human cases of Zika virus published by
the Pan American Health Organization (PAHO). Our methodology consists of
geocoding all the reported airports and identifying the catchment of each
airport using the municipalities' boundaries since reported human cases of Zika
Virus are available at the municipal level. In addition, we calculated the
total population at risk in each airport catchment by combining the total
population count in a catchment with the environmental suitability of the Aedes
aegypti mosquito, the vector for the Zika virus. We separated the monthly air
travel volumes into domestic and international based on the location of the
origin airport. The current dataset includes the total air travel volumes of
23,539,364 passengers on domestic flights and 11,592,197 on international ones.
We validated our dataset by comparing the monthly aggregated air travel volumes
between airport catchments to those predicted by the gravity model. We hope the
novel dataset will provide a resource to researchers studying the role of human
mobility in the spread of mosquito-borne diseases and modeling disease spread
in realistic networks.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 20:38:58 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Soliman",
"Aiman",
""
],
[
"Mazumdar",
"Priyam",
""
],
[
"Hoyle-Katz",
"Aaron",
""
],
[
"Allan",
"Brian",
""
],
[
"Gardner",
"Allison",
""
]
] |
new_dataset
| 0.999501 |
2308.07472
|
Chinmay Chinara
|
Thomas B Talbot and Chinmay Chinara
|
Open Medical Gesture: An Open-Source Experiment in Naturalistic Physical
Interactions for Mixed and Virtual Reality Simulations
|
AHFE 2022
|
Human Factors in Virtual Environments and Game Design. AHFE (2022)
International Conference. AHFE Open Access, vol 50, 1-7. AHFE International,
USA
|
10.54941/ahfe1002054
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Mixed Reality (MR) and Virtual Reality (VR) simulations are hampered by the
requirement for hand controllers or by the persistence of two-dimensional
computer interface paradigms from the 1980s. From our efforts
to produce more naturalistic interactions for combat medic training for the
military, USC has developed an open-source toolkit that enables direct,
hand-controlled, responsive interactions; it is sensor independent and can function
with depth-sensing cameras, webcams, or sensory gloves. Natural approaches we
have examined include the ability to manipulate virtual smart objects in a
similar manner to how they are used in the real world. From this research and
review of current literature, we have discerned several best approaches for
hand-based human computer interactions which provide intuitive, responsive,
useful, and low frustration experiences for VR users.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 21:56:41 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Talbot",
"Thomas B",
""
],
[
"Chinara",
"Chinmay",
""
]
] |
new_dataset
| 0.994346 |
2308.07498
|
Wenguan Wang
|
Hanqing Wang, Wei Liang, Luc Van Gool, Wenguan Wang
|
DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation
|
Accepted at ICCV 2023; Project page:
https://github.com/hanqingwangai/Dreamwalker
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
VLN-CE is a recently released embodied task, where AI agents need to navigate
a freely traversable environment to reach a distant target location, given
language instructions. It poses great challenges due to the huge space of
possible strategies. Driven by the belief that the ability to anticipate the
consequences of future actions is crucial for the emergence of intelligent and
interpretable planning behavior, we propose DREAMWALKER -- a world model based
VLN-CE agent. The world model is built to summarize the visual, topological,
and dynamic properties of the complicated continuous environment into a
discrete, structured, and compact representation. DREAMWALKER can simulate and
evaluate possible plans entirely in such internal abstract world, before
executing costly actions. As opposed to existing model-free VLN-CE agents
simply making greedy decisions in the real world, which easily results in
shortsighted behaviors, DREAMWALKER is able to make strategic planning through
large amounts of ``mental experiments.'' Moreover, the imagined future
scenarios reflect our agent's intention, making its decision-making process
more transparent. Extensive experiments and ablation studies on VLN-CE dataset
confirm the effectiveness of the proposed approach and outline fruitful
directions for future work.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 23:45:01 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Wang",
"Hanqing",
""
],
[
"Liang",
"Wei",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Wang",
"Wenguan",
""
]
] |
new_dataset
| 0.997301 |
2308.07502
|
Varun Viswanath
|
Yinan Xuan, Varun Viswanath, Sunny Chu, Owen Bartolf, Jessica
Echterhoff, and Edward Wang
|
SpecTracle: Wearable Facial Motion Tracking from Unobtrusive Peripheral
Cameras
| null | null | null | null |
cs.HC cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Facial motion tracking in head-mounted displays (HMD) has the potential to
enable immersive "face-to-face" interaction in a virtual environment. However,
current works on facial tracking are not suitable for unobtrusive augmented
reality (AR) glasses or do not have the ability to track arbitrary facial
movements. In this work, we demonstrate a novel system called SpecTracle that
tracks a user's facial motions using two wide-angle cameras mounted right next
to the visor of a Hololens. Avoiding the usage of cameras extended in front of
the face, our system greatly improves the feasibility of integrating full-face
tracking into a low-profile form factor. We also demonstrate that a neural
network-based model processing the wide-angle cameras can run in real-time at
24 frames per second (fps) on a mobile GPU and track independent facial
movement for different parts of the face with a user-independent model. Using a
short personalized calibration, the system improves its tracking performance by
42.3% compared to the user-independent model.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 23:52:19 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Xuan",
"Yinan",
""
],
[
"Viswanath",
"Varun",
""
],
[
"Chu",
"Sunny",
""
],
[
"Bartolf",
"Owen",
""
],
[
"Echterhoff",
"Jessica",
""
],
[
"Wang",
"Edward",
""
]
] |
new_dataset
| 0.999397 |
2308.07512
|
Ans Qureshi
|
Ans Qureshi, David Smith, Trevor Gee, Mahla Nejati, Jalil Shahabi,
JongYoon Lim, Ho Seok Ahn, Ben McGuinness, Catherine Downes, Rahul Jangali,
Kale Black, Hin Lim, Mike Duke, Bruce MacDonald, Henry Williams
|
Seeing the Fruit for the Leaves: Robotically Mapping Apple Fruitlets in
a Commercial Orchard
|
Accepted at the International Conference on Intelligent Robots and
Systems (IROS 2023)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Aotearoa New Zealand has a strong and growing apple industry but struggles to
access workers to complete skilled, seasonal tasks such as thinning. To ensure
effective thinning and make informed decisions on a per-tree basis, it is
crucial to accurately measure the crop load of individual apple trees. However,
this task poses challenges due to the dense foliage that hides the fruitlets
within the tree structure. In this paper, we introduce the vision system of an
automated apple fruitlet thinning robot, developed to tackle the labor shortage
issue. This paper presents the initial design, implementation, and evaluation
specifics of the system. The platform straddles the 3.4 m tall 2D apple canopy
structures to create an accurate map of the fruitlets on each tree. We show
that this platform can measure the fruitlet load on an apple tree by scanning
through both sides of the branch. The requirement of an overarching platform
was justified since two-sided scans had a higher counting accuracy (81.17%)
than one-sided scans (73.7%). The system was also demonstrated to produce
size estimates within 5.9% RMSE of their true size.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 00:33:26 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Qureshi",
"Ans",
""
],
[
"Smith",
"David",
""
],
[
"Gee",
"Trevor",
""
],
[
"Nejati",
"Mahla",
""
],
[
"Shahabi",
"Jalil",
""
],
[
"Lim",
"JongYoon",
""
],
[
"Ahn",
"Ho Seok",
""
],
[
"McGuinness",
"Ben",
""
],
[
"Downes",
"Catherine",
""
],
[
"Jangali",
"Rahul",
""
],
[
"Black",
"Kale",
""
],
[
"Lim",
"Hin",
""
],
[
"Duke",
"Mike",
""
],
[
"MacDonald",
"Bruce",
""
],
[
"Williams",
"Henry",
""
]
] |
new_dataset
| 0.999228 |
2308.07540
|
Andrew Zhu
|
Andrew Zhu and Lara J. Martin and Andrew Head and Chris Callison-Burch
|
CALYPSO: LLMs as Dungeon Masters' Assistants
|
11 pages, 4 figures. AIIDE 2023
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to
perform multiple tasks simultaneously. The DM must digest information about the
game setting and monsters, synthesize scenes to present to other players, and
respond to the players' interactions with the scene. Doing all of these tasks
while maintaining consistency within the narrative and story world is no small
feat of human cognition, making the task tiring and unapproachable to new
players. Large language models (LLMs) like GPT-3 and ChatGPT have shown
remarkable abilities to generate coherent natural language text. In this paper,
we conduct a formative evaluation with DMs to establish the use cases of LLMs
in D&D and tabletop gaming generally. We introduce CALYPSO, a system of
LLM-powered interfaces that support DMs with information and inspiration
specific to their own scenario. CALYPSO distills game context into bite-sized
prose and helps brainstorm ideas without distracting the DM from the game. When
given access to CALYPSO, DMs reported that it generated high-fidelity text
suitable for direct presentation to players, and low-fidelity ideas that the DM
could develop further while maintaining their creative agency. We see CALYPSO
as exemplifying a paradigm of AI-augmented tools that provide synchronous
creative assistance within established game worlds, and tabletop gaming more
broadly.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 02:57:00 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Zhu",
"Andrew",
""
],
[
"Martin",
"Lara J.",
""
],
[
"Head",
"Andrew",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.999081 |
2308.07571
|
Anbang Yao
|
Dongqi Cai, Yangyuxuan Kang, Anbang Yao, Yurong Chen
|
Ske2Grid: Skeleton-to-Grid Representation Learning for Action
Recognition
|
The paper of Ske2Grid is published at ICML 2023. Code and models are
available at https://github.com/OSVAI/Ske2Grid
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Ske2Grid, a new representation learning framework for
improved skeleton-based action recognition. In Ske2Grid, we define a regular
convolution operation upon a novel grid representation of human skeleton, which
is a compact image-like grid patch constructed and learned through three novel
designs. Specifically, we propose a graph-node index transform (GIT) to
construct a regular grid patch through assigning the nodes in the skeleton
graph one by one to the desired grid cells. To ensure that GIT is a bijection
and enrich the expressiveness of the grid representation, an up-sampling
transform (UPT) is learned to interpolate the skeleton graph nodes for filling
the grid patch to the full. To resolve the problem that arises when the one-step
UPT is too aggressive, and to further exploit the representation capability of the
grid patch with increasing spatial size, a progressive learning strategy (PLS) is proposed
which decouples the UPT into multiple steps and aligns them to multiple paired
GITs through a compact cascaded design learned progressively. We construct
networks upon prevailing graph convolution networks and conduct experiments on
six mainstream skeleton-based action recognition datasets. Experiments show
that our Ske2Grid significantly outperforms existing GCN-based solutions under
different benchmark settings, without bells and whistles. Code and models are
available at https://github.com/OSVAI/Ske2Grid
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 04:49:11 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Cai",
"Dongqi",
""
],
[
"Kang",
"Yangyuxuan",
""
],
[
"Yao",
"Anbang",
""
],
[
"Chen",
"Yurong",
""
]
] |
new_dataset
| 0.998516 |
2308.07580
|
Bo Lin
|
Bo Lin, Shoshanna Saxe, Timothy C. Y. Chan
|
AutoLTS: Automating Cycling Stress Assessment via Contrastive Learning
and Spatial Post-processing
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Cycling stress assessment, which quantifies cyclists' perceived stress
imposed by the built environment and motor traffic, increasingly informs
cycling infrastructure planning and cycling route recommendation. However,
calculating cycling stress is currently slow and data-intensive, which hinders
its broader application. In this paper, we propose a deep learning framework to
support accurate, fast, and large-scale cycling stress assessments for urban
road networks based on street-view images. Our framework features i) a
contrastive learning approach that leverages the ordinal relationship among
cycling stress labels, and ii) a post-processing technique that enforces
spatial smoothness into our predictions. On a dataset of 39,153 road segments
collected in Toronto, Canada, our results demonstrate the effectiveness of our
deep learning framework and the value of using image data for cycling stress
assessment in the absence of high-quality road geometry and motor traffic data.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 05:51:25 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Lin",
"Bo",
""
],
[
"Saxe",
"Shoshanna",
""
],
[
"Chan",
"Timothy C. Y.",
""
]
] |
new_dataset
| 0.981348 |
2308.07593
|
JeongHun Yeo
|
Jeong Hun Yeo, Minsu Kim, Jeongsoo Choi, Dae Hoe Kim, and Yong Man Ro
|
AKVSR: Audio Knowledge Empowered Visual Speech Recognition by
Compressing Audio Knowledge of a Pretrained Model
| null | null | null | null |
cs.CV cs.MM eess.AS eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Speech Recognition (VSR) is the task of predicting spoken words from
silent lip movements. VSR is regarded as a challenging task because of the
insufficient information on lip movements. In this paper, we propose an Audio
Knowledge empowered Visual Speech Recognition framework (AKVSR) to complement
the insufficient speech information of visual modality by using audio modality.
Different from the previous methods, the proposed AKVSR 1) utilizes rich audio
knowledge encoded by a large-scale pretrained audio model, 2) saves the
linguistic information of audio knowledge in compact audio memory by discarding
the non-linguistic information from the audio through quantization, and 3)
includes Audio Bridging Module which can find the best-matched audio features
from the compact audio memory, which makes training possible without audio
inputs once the compact audio memory is composed. We validate the
effectiveness of the proposed method through extensive experiments, and achieve
new state-of-the-art performances on the widely-used datasets, LRS2 and LRS3.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 06:38:38 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Yeo",
"Jeong Hun",
""
],
[
"Kim",
"Minsu",
""
],
[
"Choi",
"Jeongsoo",
""
],
[
"Kim",
"Dae Hoe",
""
],
[
"Ro",
"Yong Man",
""
]
] |
new_dataset
| 0.985627 |
2308.07605
|
Zhengwentai Sun
|
Zhengwentai Sun, Yanghong Zhou, Honghong He, P. Y. Mok
|
SGDiff: A Style Guided Diffusion Model for Fashion Synthesis
|
Accepted by ACM MM'23
| null |
10.1145/3581783.3613806
| null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports on the development of a novel style-guided
diffusion model (SGDiff), which overcomes certain weaknesses inherent in
existing models for image synthesis. The proposed SGDiff combines image
modality with a pretrained text-to-image diffusion model to facilitate creative
fashion image synthesis. It addresses the limitations of text-to-image
diffusion models by incorporating supplementary style guidance, substantially
reducing training costs, and overcoming the difficulties of controlling
synthesized styles with text-only inputs. This paper also introduces a new
dataset -- SG-Fashion, specifically designed for fashion image synthesis
applications, offering high-resolution images and an extensive range of garment
categories. By means of a comprehensive ablation study, we examine the
application of classifier-free guidance to a variety of conditions and validate
the effectiveness of the proposed model for generating fashion images of the
desired categories, product attributes, and styles. The contributions of this
paper include a novel classifier-free guidance method for multi-modal feature
fusion, a comprehensive dataset for fashion image synthesis application, a
thorough investigation on conditioned text-to-image synthesis, and valuable
insights for future research in the text-to-image synthesis domain. The code
and dataset are available at: \url{https://github.com/taited/SGDiff}.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 07:20:22 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Sun",
"Zhengwentai",
""
],
[
"Zhou",
"Yanghong",
""
],
[
"He",
"Honghong",
""
],
[
"Mok",
"P. Y.",
""
]
] |
new_dataset
| 0.998852 |
2308.07622
|
Jialing Zou
|
Jialing Zou, Jiahao Mei, Guangze Ye, Tianyu Huai, Qiwei Shen, Daoguo
Dong
|
EMID: An Emotional Aligned Dataset in Audio-Visual Modality
| null | null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we propose Emotionally paired Music and Image Dataset (EMID),
a novel dataset designed for the emotional matching of music and images, to
facilitate auditory-visual cross-modal tasks such as generation and retrieval.
Unlike existing approaches that primarily focus on semantic correlations or
roughly divided emotional relations, EMID emphasizes the significance of
emotional consistency between music and images using an advanced 13-dimension
emotional model. By incorporating emotional alignment into the dataset, it aims
to establish pairs that closely align with human perceptual understanding,
thereby raising the performance of auditory-visual cross-modal tasks. We also
design a supplemental module named EMI-Adapter to optimize existing cross-modal
alignment methods. To validate the effectiveness of the EMID, we conduct a
psychological experiment, which has demonstrated that considering the emotional
relationship between the two modalities effectively improves the accuracy of
matching from an abstract perspective. This research lays the foundation for future
cross-modal research in domains such as psychotherapy and contributes to
advancing the understanding and utilization of emotions in cross-modal
alignment. The EMID dataset is available at https://github.com/ecnu-aigc/EMID.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 08:13:14 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Zou",
"Jialing",
""
],
[
"Mei",
"Jiahao",
""
],
[
"Ye",
"Guangze",
""
],
[
"Huai",
"Tianyu",
""
],
[
"Shen",
"Qiwei",
""
],
[
"Dong",
"Daoguo",
""
]
] |
new_dataset
| 0.999708 |
2308.07654
|
Jianyi Cheng
|
Jianyi Cheng, Samuel Coward, Lorenzo Chelini, Rafael Barbalho, Theo
Drane
|
SEER: Super-Optimization Explorer for HLS using E-graph Rewriting with
MLIR
| null | null | null | null |
cs.PL cs.AR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
High-level synthesis (HLS) is a process that automatically translates a
software program in a high-level language into a low-level hardware
description. However, the hardware designs produced by HLS tools still suffer
from a significant performance gap compared to manual implementations. This is
because the input HLS programs must still be written using hardware design
principles.
Existing techniques either leave the program source unchanged or perform a
fixed sequence of source transformation passes, potentially missing
opportunities to find the optimal design. We propose a super-optimization
approach for HLS that automatically rewrites an arbitrary software program into
efficient HLS code that can be used to generate an optimized hardware design.
We developed a toolflow named SEER, based on the e-graph data structure, to
efficiently explore equivalent implementations of a program at scale. SEER
provides an extensible framework, orchestrating existing software compiler
passes and hardware synthesis optimizers.
Our work is the first attempt to exploit e-graph rewriting for large software
compiler frameworks, such as MLIR. Across a set of open-source benchmarks, we
show that SEER achieves up to 38x the performance within 1.4x the area of the
original program. Via an Intel-provided case study, SEER demonstrates the
potential to outperform manually optimized designs produced by hardware
experts.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 09:05:27 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Cheng",
"Jianyi",
""
],
[
"Coward",
"Samuel",
""
],
[
"Chelini",
"Lorenzo",
""
],
[
"Barbalho",
"Rafael",
""
],
[
"Drane",
"Theo",
""
]
] |
new_dataset
| 0.997368 |
2308.07700
|
Serhii Nazarovets
|
Serhii Nazarovets, Olesya Mryglod
|
Ukrainian Arts and Humanities research in Scopus: A Bibliometric
Analysis
|
Library Hi Tech (2023)
| null |
10.1108/LHT-05-2023-0180
| null |
cs.DL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This article presents the results of a quantitative analysis of Ukrainian
Arts and Humanities (A&H) research from 2012 to 2021, as observed in Scopus.
We examine the overall publication activity and the relative share of A&H publications in
relation to Ukraine's total research output, comparing them with other
countries. The study analyzes the diversity and total number of sources, as
well as the geographic distribution of authors and citing authors, to provide
insights into the internationalization level of Ukrainian A&H research.
Additionally, the topical spectrum and language usage are considered to
complete the overall picture. According to our results, the publication
patterns for Ukrainian A&H research exhibit dynamics comparable to those of
other countries, with a gradual increase in the total number of papers and
sources. However, the citedness is lower than expected, and the share of
publications in top-quartile sources is lower for the 2020-2021 period compared to
the previous years. The impact of internationally collaborative papers,
especially those in English, is higher. Nevertheless, over half of all works
remain uncited, probably due to the limited readership of the journals selected
for publication.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 11:05:04 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Nazarovets",
"Serhii",
""
],
[
"Mryglod",
"Olesya",
""
]
] |
new_dataset
| 0.996917 |
2308.07717
|
Ching-Hsun Tseng
|
Ching-Hsun Tseng, Shao-Ju Chien, Po-Shen Wang, Shin-Jye Lee, Wei-Huan
Hu, Bin Pu, and Xiao-jun Zeng
|
Real-time Automatic M-mode Echocardiography Measurement with Panel
Attention from Local-to-Global Pixels
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Motion mode (M-mode) recording is an essential part of echocardiography to
measure cardiac dimension and function. However, the current diagnostic workflow cannot be automated, as there are three fundamental obstacles: Firstly,
there is no open dataset available to build the automation for ensuring
constant results and bridging M-mode echocardiography with real-time instance
segmentation (RIS); Secondly, the examination involves time-consuming manual labelling of M-mode echocardiograms; Thirdly, as objects in
echocardiograms occupy a significant portion of pixels, the limited receptive field of existing backbones (e.g., ResNet), composed of multiple convolution layers, is insufficient to cover the period of a valve movement. Existing
non-local attention (NL) methods either cannot run in real time due to high computation overhead or lose information in a simplified version of the non-local block. Therefore, we propose RAMEM, a real-time automatic M-mode
echocardiography measurement scheme, which contributes three aspects to address these
problems: 1) provide MEIS, a dataset of M-mode echocardiograms for instance
segmentation, to enable consistent results and support the development of an
automatic scheme; 2) propose panel attention, local-to-global efficient
attention by pixel-unshuffling, embedding with updated UPANets V2 in a RIS
scheme toward big object detection with global receptive field; 3) develop and
implement AMEM, an efficient algorithm of automatic M-mode echocardiography
measurement enabling fast and accurate automatic labelling among diagnosis. The
experimental results show that RAMEM surpasses existing RIS backbones (with non-local attention) on PASCAL 2012 SBD and human performance on the real-time MEIS test. The code and the MEIS dataset are available at
https://github.com/hanktseng131415go/RAME.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 11:50:57 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Tseng",
"Ching-Hsun",
""
],
[
"Chien",
"Shao-Ju",
""
],
[
"Wang",
"Po-Shen",
""
],
[
"Lee",
"Shin-Jye",
""
],
[
"Hu",
"Wei-Huan",
""
],
[
"Pu",
"Bin",
""
],
[
"Zeng",
"Xiao-jun",
""
]
] |
new_dataset
| 0.999023 |
2308.07732
|
Haiyang Wang
|
Haiyang Wang, Hao Tang, Shaoshuai Shi, Aoxue Li, Zhenguo Li, Bernt
Schiele, Liwei Wang
|
UniTR: A Unified and Efficient Multi-Modal Transformer for
Bird's-Eye-View Representation
|
Accepted by ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Jointly processing information from multiple sensors is crucial to achieving
accurate and robust perception for reliable autonomous driving systems.
However, current 3D perception research follows a modality-specific paradigm,
leading to additional computation overheads and inefficient collaboration
between different sensor data. In this paper, we present an efficient
multi-modal backbone for outdoor 3D perception named UniTR, which processes a
variety of modalities with unified modeling and shared parameters. Unlike
previous works, UniTR introduces a modality-agnostic transformer encoder to
handle these view-discrepant sensor data for parallel modal-wise representation
learning and automatic cross-modal interaction without additional fusion steps.
More importantly, to make full use of these complementary sensor types, we
present a novel multi-modal integration strategy by both considering
semantic-abundant 2D perspective and geometry-aware 3D sparse neighborhood
relations. UniTR is also a fundamentally task-agnostic backbone that naturally
supports different 3D perception tasks. It sets a new state-of-the-art
performance on the nuScenes benchmark, achieving +1.1 NDS higher for 3D object
detection and +12.0 higher mIoU for BEV map segmentation with lower inference
latency. Code will be available at https://github.com/Haiyang-W/UniTR .
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 12:13:44 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Wang",
"Haiyang",
""
],
[
"Tang",
"Hao",
""
],
[
"Shi",
"Shaoshuai",
""
],
[
"Li",
"Aoxue",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Wang",
"Liwei",
""
]
] |
new_dataset
| 0.996019 |
2308.07743
|
Wenyuan Xue
|
Wenyuan Xue, Dapeng Chen, Baosheng Yu, Yifei Chen, Sai Zhou, Wei Peng
|
ChartDETR: A Multi-shape Detection Network for Visual Chart Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual chart recognition systems are gaining increasing attention due to the
growing demand for automatically identifying table headers and values from
chart images. Current methods rely on keypoint detection to estimate data
element shapes in charts but suffer from grouping errors in post-processing. To
address this issue, we propose ChartDETR, a transformer-based multi-shape
detector that localizes keypoints at the corners of regular shapes to
reconstruct multiple data elements in a single chart image. Our method predicts
all data element shapes at once by introducing query groups in set prediction,
eliminating the need for further postprocessing. This property allows ChartDETR
to serve as a unified framework capable of representing various chart types
without altering the network architecture, effectively detecting data elements
of diverse shapes. We evaluated ChartDETR on three datasets, achieving
competitive results across all chart types without any additional enhancements.
For example, ChartDETR achieved an F1 score of 0.98 on Adobe Synthetic,
significantly outperforming the previous best model with a 0.71 F1 score.
Additionally, we obtained a new state-of-the-art result of 0.97 on
ExcelChart400k. The code will be made publicly available.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 12:50:06 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Xue",
"Wenyuan",
""
],
[
"Chen",
"Dapeng",
""
],
[
"Yu",
"Baosheng",
""
],
[
"Chen",
"Yifei",
""
],
[
"Zhou",
"Sai",
""
],
[
"Peng",
"Wei",
""
]
] |
new_dataset
| 0.999501 |
2308.07749
|
Bosheng Qin
|
Bosheng Qin, Wentao Ye, Qifan Yu, Siliang Tang, Yueting Zhuang
|
Dancing Avatar: Pose and Text-Guided Human Motion Videos Synthesis with
Image Diffusion Model
|
11 pages, 3 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rising demand for creating lifelike avatars in the digital realm has led
to an increased need for generating high-quality human videos guided by textual
descriptions and poses. We propose Dancing Avatar, designed to fabricate human
motion videos driven by poses and textual cues. Our approach employs a
pretrained T2I diffusion model to generate each video frame in an
autoregressive fashion. The crux of innovation lies in our adept utilization of
the T2I diffusion model for producing video frames successively while
preserving contextual relevance. We surmount the hurdles posed by maintaining
human character and clothing consistency across varying poses, along with
upholding the background's continuity amidst diverse human movements. To ensure
consistent human appearances across the entire video, we devise an intra-frame
alignment module. This module assimilates text-guided synthesized human
character knowledge into the pretrained T2I diffusion model, synergizing
insights from ChatGPT. For preserving background continuity, we put forth a
background alignment pipeline, amalgamating insights from segment anything and
image inpainting techniques. Furthermore, we propose an inter-frame alignment
module that draws inspiration from an auto-regressive pipeline to augment
temporal consistency between adjacent frames, where the preceding frame guides
the synthesis process of the current frame. Comparisons with state-of-the-art
methods demonstrate that Dancing Avatar exhibits the capacity to generate human
videos with markedly superior quality, both in terms of human and background
fidelity, as well as temporal coherence compared to existing state-of-the-art
approaches.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 13:00:42 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Qin",
"Bosheng",
""
],
[
"Ye",
"Wentao",
""
],
[
"Yu",
"Qifan",
""
],
[
"Tang",
"Siliang",
""
],
[
"Zhuang",
"Yueting",
""
]
] |
new_dataset
| 0.999211 |
2308.07771
|
Wei Qian
|
Wei Qian, Dan Guo, Kun Li, Xilan Tian, Meng Wang
|
Dual-path TokenLearner for Remote Photoplethysmography-based
Physiological Measurement with Facial Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote photoplethysmography (rPPG) based physiological measurement is an
emerging yet crucial vision task, whose challenge lies in exploring accurate
rPPG prediction from facial videos accompanied by noise from illumination variations, facial occlusions, head movements, etc., in a non-contact manner.
Existing mainstream CNN-based models make efforts to detect physiological
signals by capturing subtle color changes in facial regions of interest (ROI)
caused by heartbeats. However, such models are constrained by the limited local
spatial or temporal receptive fields in the neural units. Unlike them, a native
Transformer-based framework called Dual-path TokenLearner (Dual-TL) is proposed
in this paper, which utilizes the concept of learnable tokens to integrate both
spatial and temporal informative contexts from the global perspective of the
video. Specifically, the proposed Dual-TL uses a Spatial TokenLearner (S-TL) to
explore associations in different facial ROIs, which helps keep the rPPG prediction away from noisy ROI disturbances. Complementarily, a Temporal
TokenLearner (T-TL) is designed to infer the quasi-periodic pattern of
heartbeats, which eliminates temporal disturbances such as head movements. The
two TokenLearners, S-TL and T-TL, are executed in a dual-path mode. This
enables the model to reduce noise disturbances for final rPPG signal
prediction. Extensive experiments on four physiological measurement benchmark
datasets are conducted. The Dual-TL achieves state-of-the-art performances in
both intra- and cross-dataset testings, demonstrating its immense potential as
a basic backbone for rPPG measurement. The source code is available at
\href{https://github.com/VUT-HFUT/Dual-TL}{https://github.com/VUT-HFUT/Dual-TL}
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 13:45:45 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Qian",
"Wei",
""
],
[
"Guo",
"Dan",
""
],
[
"Li",
"Kun",
""
],
[
"Tian",
"Xilan",
""
],
[
"Wang",
"Meng",
""
]
] |
new_dataset
| 0.998768 |
2308.07799
|
Raphaela Heil
|
Raphaela Heil, Malin Nauwerck
|
Handwritten Stenography Recognition and the LION Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: In this paper, we establish a baseline for handwritten stenography
recognition, using the novel LION dataset, and investigate the impact of
including selected aspects of stenographic theory into the recognition process.
We make the LION dataset publicly available with the aim of encouraging future
research in handwritten stenography recognition.
Methods: A state-of-the-art text recognition model is trained to establish a
baseline. Stenographic domain knowledge is integrated by applying four
different encoding methods that transform the target sequence into
representations, which approximate selected aspects of the writing system.
Results are further improved by integrating a pre-training scheme, based on
synthetic data.
Results: The baseline model achieves an average test character error rate
(CER) of 29.81% and a word error rate (WER) of 55.14%. Test error rates are
reduced significantly by combining stenography-specific target sequence
encodings with pre-training and fine-tuning, yielding CERs in the range of
24.5% - 26% and WERs of 44.8% - 48.2%.
Conclusion: The obtained results demonstrate the challenging nature of
stenography recognition. Integrating stenography-specific knowledge, in
conjunction with pre-training and fine-tuning on synthetic data, yields
considerable improvements. Together with our precursor study on the subject,
this is the first work to apply modern handwritten text recognition to
stenography. The dataset and our code are publicly available via Zenodo.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 14:25:53 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Heil",
"Raphaela",
""
],
[
"Nauwerck",
"Malin",
""
]
] |
new_dataset
| 0.999797 |
2308.07802
|
Paul Kielty
|
Paul Kielty, Cian Ryan, Mehdi Sefidgar Dilmaghani, Waseem Shariff, Joe
Lemley, Peter Corcoran
|
Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event
Cameras
|
4 pages, 3 figures, IMVIP 2023
|
Zenodo (2023)
|
10.5281/zenodo.8223905
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neuromorphic vision sensors, or event cameras, differ from conventional
cameras in that they do not capture images at a specified rate. Instead, they
asynchronously log local brightness changes at each pixel. As a result, event
cameras only record changes in a given scene, and do so with very high temporal
resolution, high dynamic range, and low power requirements. Recent research has
demonstrated how these characteristics make event cameras extremely practical
sensors in driver monitoring systems (DMS), enabling the tracking of high-speed
eye motion and blinks. This research provides a proof of concept to expand
event-based DMS techniques to include seatbelt state detection. Using an event
simulator, a dataset of 108,691 synthetic neuromorphic frames of car occupants
was generated from a near-infrared (NIR) dataset, and split into training,
validation, and test sets for a seatbelt state detection algorithm based on a
recurrent convolutional neural network (CNN). In addition, a smaller set of
real event data was collected and reserved for testing. In a binary
classification task, the fastened/unfastened frames were identified with an F1
score of 0.989 and 0.944 on the simulated and real test sets respectively. When
the problem extended to also classify the action of fastening/unfastening the
seatbelt, respective F1 scores of 0.964 and 0.846 were achieved.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 14:27:46 GMT"
}
] | 2023-08-16T00:00:00 |
[
[
"Kielty",
"Paul",
""
],
[
"Ryan",
"Cian",
""
],
[
"Dilmaghani",
"Mehdi Sefidgar",
""
],
[
"Shariff",
"Waseem",
""
],
[
"Lemley",
"Joe",
""
],
[
"Corcoran",
"Peter",
""
]
] |
new_dataset
| 0.999189 |
1408.0366
|
Yoshihiro Terasawa
|
Yoshihiro Terasawa
|
Publickey encryption by ordering
|
I want to rewrite
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1999, a public-key cryptosystem based on matrices was devised by Sarah Flannery, a 16-year-old high school student. This cryptosystem seemed faster than RSA, and appeared strong enough to rival RSA. However, the encryption scheme was broken before her papers were published. In this paper, we try to construct a public-key encryption scheme from a permutation group, which is equivalent to a matrix group as a noncommutative group, and we explore the potential of this cryptosystem through implementation.
|
[
{
"version": "v1",
"created": "Sat, 2 Aug 2014 12:49:40 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2023 11:46:31 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Terasawa",
"Yoshihiro",
""
]
] |
new_dataset
| 0.987796 |
2011.09896
|
Nikolaus Piccolotto
|
Nikolaus Piccolotto, Markus B\"ogl, Theresia Gschwandtner, Christoph
Muehlmann, Klaus Nordhausen, Peter Filzmoser and Silvia Miksch
|
TBSSvis: Visual Analytics for Temporal Blind Source Separation
| null |
Visual Informatics, 6, 51-66, 2022
|
10.1016/j.visinf.2022.10.002
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal Blind Source Separation (TBSS) is used to obtain the true underlying
processes from noisy temporal multivariate data, such as electrocardiograms.
TBSS has similarities to Principal Component Analysis (PCA) as it separates the
input data into univariate components and is applicable to suitable datasets
from various domains, such as medicine, finance, or civil engineering. Despite
TBSS's broad applicability, the involved tasks are not well supported in
current tools, which offer only text-based interactions and single static
images. Analysts are limited in analyzing and comparing obtained results, which
consist of diverse data such as matrices and sets of time series. Additionally,
parameter settings have a big impact on separation performance, but as a
consequence of improper tooling, analysts currently do not consider the whole
parameter space. We propose to solve these problems by applying visual
analytics (VA) principles. Our primary contribution is a design study for TBSS,
which so far has not been explored by the visualization community. We developed
a task abstraction and visualization design in a user-centered design process.
Task-specific assembling of well-established visualization techniques and
algorithms to gain insights in the TBSS processes is our secondary
contribution. We present TBSSvis, an interactive web-based VA prototype, which
we evaluated extensively in two interviews with five TBSS experts. Feedback and
observations from these interviews show that TBSSvis supports the actual
workflow and combination of interactive visualizations that facilitate the
tasks involved in analyzing TBSS results.
|
[
{
"version": "v1",
"created": "Thu, 19 Nov 2020 15:29:16 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 10:27:49 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Piccolotto",
"Nikolaus",
""
],
[
"Bögl",
"Markus",
""
],
[
"Gschwandtner",
"Theresia",
""
],
[
"Muehlmann",
"Christoph",
""
],
[
"Nordhausen",
"Klaus",
""
],
[
"Filzmoser",
"Peter",
""
],
[
"Miksch",
"Silvia",
""
]
] |
new_dataset
| 0.99866 |
2112.06300
|
Zachary Ferguson
|
David Belgrod, Bolun Wang, Zachary Ferguson, Xin Zhao, Marco Attene,
Daniele Panozzo, Teseo Schneider
|
Time of Impact Dataset for Continuous Collision Detection and a Scalable
Conservative Algorithm
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a large-scale benchmark for broad- and narrow-phase continuous
collision detection (CCD) over linearized trajectories with exact time of
impacts and use it to evaluate the accuracy, correctness, and efficiency of 13
state-of-the-art CCD algorithms. Our analysis shows that several methods
exhibit problems either in efficiency or accuracy.
To overcome these limitations, we introduce an algorithm for CCD designed to
be scalable on modern parallel architectures and provably correct when
implemented using floating point arithmetic. We integrate our algorithm within
the Incremental Potential Contact solver [Li et al . 2021] and evaluate its
impact on various simulation scenarios. Our approach includes a broad-phase CCD
to quickly filter out primitives having disjoint bounding boxes and a
narrow-phase CCD that establishes whether the remaining primitive pairs indeed
collide. Our broad-phase algorithm is efficient and scalable thanks to the
experimental observation that sweeping along a coordinate axis performs
surprisingly well on modern parallel architectures. For narrow-phase CCD, we
re-design the recently proposed interval-based algorithm of Wang et al. [2021]
to work on massively parallel hardware.
To foster the adoption and development of future linear CCD algorithms, and
to evaluate their correctness, scalability, and overall performance, we release
the dataset with analytic ground truth, the implementation of all the
algorithms tested, and our testing framework.
|
[
{
"version": "v1",
"created": "Sun, 12 Dec 2021 18:47:55 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 00:45:48 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Aug 2022 21:56:18 GMT"
},
{
"version": "v4",
"created": "Sun, 13 Aug 2023 08:02:00 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Belgrod",
"David",
""
],
[
"Wang",
"Bolun",
""
],
[
"Ferguson",
"Zachary",
""
],
[
"Zhao",
"Xin",
""
],
[
"Attene",
"Marco",
""
],
[
"Panozzo",
"Daniele",
""
],
[
"Schneider",
"Teseo",
""
]
] |
new_dataset
| 0.998673 |
2204.09803
|
Jintang Li
|
Jintang Li, Jie Liao, Ruofan Wu, Liang Chen, Zibin Zheng, Jiawang Dan,
Changhua Meng, Weiqiang Wang
|
GUARD: Graph Universal Adversarial Defense
|
Accepted by CIKM 2023. Code is publicly available at
https://github.com/EdisonLeeeee/GUARD
| null | null | null |
cs.LG cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph convolutional networks (GCNs) have been shown to be vulnerable to small
adversarial perturbations, which becomes a severe threat and largely limits
their applications in security-critical scenarios. To mitigate such a threat,
considerable research efforts have been devoted to increasing the robustness of
GCNs against adversarial attacks. However, current defense approaches are
typically designed to prevent GCNs from untargeted adversarial attacks and
focus on overall performance, making it challenging to protect important local
nodes from more powerful targeted adversarial attacks. Additionally, a
trade-off between robustness and performance is often made in existing
research. Such limitations highlight the need for developing an effective and
efficient approach that can defend local nodes against targeted attacks,
without compromising the overall performance of GCNs. In this work, we present
a simple yet effective method, named Graph Universal Adversarial Defense
(GUARD). Unlike previous works, GUARD protects each individual node from
attacks with a universal defensive patch, which is generated once and can be
applied to any node (node-agnostic) in a graph. GUARD is fast, straightforward
to implement without any change to network architecture nor any additional
parameters, and is broadly applicable to any GCNs. Extensive experiments on
four benchmark datasets demonstrate that GUARD significantly improves
robustness for several established GCNs against multiple adversarial attacks
and outperforms state-of-the-art defense methods by large margins.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 22:18:12 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 09:49:34 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Aug 2022 09:10:01 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Aug 2023 10:03:40 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Li",
"Jintang",
""
],
[
"Liao",
"Jie",
""
],
[
"Wu",
"Ruofan",
""
],
[
"Chen",
"Liang",
""
],
[
"Zheng",
"Zibin",
""
],
[
"Dan",
"Jiawang",
""
],
[
"Meng",
"Changhua",
""
],
[
"Wang",
"Weiqiang",
""
]
] |
new_dataset
| 0.959913 |
2206.04596
|
Sarvesh Bipin Patil
|
Sarvesh Patil, Tony Tao, Tess Hellebrekers, Oliver Kroemer, F. Zeynep
Temel
|
Linear Delta Arrays for Compliant Dexterous Distributed Manipulation
|
ICRA 2023
| null |
10.1109/ICRA48891.2023.10160578
| null |
cs.RO cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new type of distributed dexterous manipulator: delta
arrays. Our delta array setup consists of 64 linearly-actuated delta robots
with 3D-printed compliant linkages. Through the design of the individual delta
robots, the modular array structure, and distributed communication and control,
we study a wide range of in-plane and out-of-plane manipulations, as well as
prehensile manipulations among subsets of neighboring delta robots. We also
demonstrate dexterous manipulation capabilities of the delta array using
reinforcement learning while leveraging the compliance to not break the
end-effectors. Our evaluations show that the resulting 192 DoF compliant robot
is capable of performing various coordinated distributed manipulations of a
variety of objects, including translation, alignment, prehensile squeezing,
lifting, and grasping.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 16:23:42 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 22:53:26 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Aug 2023 11:52:57 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Patil",
"Sarvesh",
""
],
[
"Tao",
"Tony",
""
],
[
"Hellebrekers",
"Tess",
""
],
[
"Kroemer",
"Oliver",
""
],
[
"Temel",
"F. Zeynep",
""
]
] |
new_dataset
| 0.99766 |
2207.00721
|
Sarvesh Bipin Patil
|
Sarvesh Patil, Samuel C. Alvares, Pragna Mannam, Oliver Kroemer, F.
Zeynep Temel
|
DeltaZ: An Accessible Compliant Delta Robot Manipulator for Research and
Education
|
IROS 2022, first two authors contributed equally
|
IEEE International Conference on Robotics and Automation (ICRA),
2023, 10324-10330
|
10.1109/IROS47612.2022.9981257
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the DeltaZ robot, a centimeter-scale, low-cost,
delta-style robot that allows for a broad range of capabilities and robust
functionalities. Current technologies allow DeltaZ to be 3D-printed from soft
and rigid materials so that it is easy to assemble and maintain, lowering the barriers to its use. Functionality of the robot stems from its three
translational degrees of freedom and a closed form kinematic solution which
makes manipulation problems more intuitive compared to other manipulators.
Moreover, the low cost of the robot presents an opportunity to democratize
manipulators for a research setting. We also describe how the robot can be used
as a reinforcement learning benchmark. Open-source 3D-printable designs and
code are available to the public.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2022 03:01:03 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Patil",
"Sarvesh",
""
],
[
"Alvares",
"Samuel C.",
""
],
[
"Mannam",
"Pragna",
""
],
[
"Kroemer",
"Oliver",
""
],
[
"Temel",
"F. Zeynep",
""
]
] |
new_dataset
| 0.999547 |
2208.00847
|
Wei Dai
|
Yuanyuan Liu, Wei Dai, Chuanxu Feng, Wenbin Wang, Guanghao Yin, Jiabei
Zeng and Shiguang Shan
|
MAFW: A Large-scale, Multi-modal, Compound Affective Database for
Dynamic Facial Expression Recognition in the Wild
|
This paper has been accepted by ACM MM'22
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic facial expression recognition (FER) databases provide important data
support for affective computing and applications. However, most FER databases
are annotated with several basic mutually exclusive emotional categories and
contain only one modality, e.g., videos. The monotonous labels and modality
cannot accurately imitate human emotions and fulfill applications in the real
world. In this paper, we propose MAFW, a large-scale multi-modal compound
affective database with 10,045 video-audio clips in the wild. Each clip is
annotated with a compound emotional category and a couple of sentences that
describe the subjects' affective behaviors in the clip. For the compound
emotion annotation, each clip is categorized into one or more of the 11
widely-used emotions, i.e., anger, disgust, fear, happiness, neutral, sadness,
surprise, contempt, anxiety, helplessness, and disappointment. To ensure high
quality of the labels, we filter out the unreliable annotations by an
Expectation Maximization (EM) algorithm, and then obtain 11 single-label
emotion categories and 32 multi-label emotion categories. To the best of our
knowledge, MAFW is the first in-the-wild multi-modal database annotated with
compound emotion annotations and emotion-related captions. Additionally, we
also propose a novel Transformer-based expression snippet feature learning
method to recognize the compound emotions leveraging the expression-change
relations among different emotions and modalities. Extensive experiments on
MAFW database show the advantages of the proposed method over other
state-of-the-art methods for both uni- and multi-modal FER. Our MAFW database
is publicly available from https://mafw-database.github.io/MAFW.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 13:34:33 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 05:22:41 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Liu",
"Yuanyuan",
""
],
[
"Dai",
"Wei",
""
],
[
"Feng",
"Chuanxu",
""
],
[
"Wang",
"Wenbin",
""
],
[
"Yin",
"Guanghao",
""
],
[
"Zeng",
"Jiabei",
""
],
[
"Shan",
"Shiguang",
""
]
] |
new_dataset
| 0.999705 |
2210.06551
|
Wentao Zhu
|
Wentao Zhu, Xiaoxuan Ma, Zhaoyang Liu, Libin Liu, Wayne Wu, Yizhou
Wang
|
MotionBERT: A Unified Perspective on Learning Human Motion
Representations
|
ICCV 2023 Camera Ready
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a unified perspective on tackling various human-centric video
tasks by learning human motion representations from large-scale and
heterogeneous data resources. Specifically, we propose a pretraining stage in
which a motion encoder is trained to recover the underlying 3D motion from
noisy partial 2D observations. The motion representations acquired in this way
incorporate geometric, kinematic, and physical knowledge about human motion,
which can be easily transferred to multiple downstream tasks. We implement the
motion encoder with a Dual-stream Spatio-temporal Transformer (DSTformer)
neural network. It could capture long-range spatio-temporal relationships among
the skeletal joints comprehensively and adaptively, exemplified by the lowest
3D pose estimation error so far when trained from scratch. Furthermore, our
proposed framework achieves state-of-the-art performance on all three
downstream tasks by simply finetuning the pretrained motion encoder with a
simple regression head (1-2 layers), which demonstrates the versatility of the
learned motion representations. Code and models are available at
https://motionbert.github.io/
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 19:46:25 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 06:34:14 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Jul 2023 08:54:27 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Jul 2023 04:59:45 GMT"
},
{
"version": "v5",
"created": "Mon, 14 Aug 2023 12:11:35 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhu",
"Wentao",
""
],
[
"Ma",
"Xiaoxuan",
""
],
[
"Liu",
"Zhaoyang",
""
],
[
"Liu",
"Libin",
""
],
[
"Wu",
"Wayne",
""
],
[
"Wang",
"Yizhou",
""
]
] |
new_dataset
| 0.999485 |
2212.10963
|
Simon Erfurth
|
Joan Boyar, Simon Erfurth, Kim S. Larsen, Ruben Niederhagen
|
Quotable Signatures for Authenticating Shared Quotes
|
25 pages, 7 figures
| null | null | null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quotable signature schemes are digital signature schemes with the additional
property that from the signature for a message, any party can extract
signatures for (allowable) quotes from the message, without knowing the secret
key or interacting with the signer of the original message. Crucially, the
extracted signatures are still signed with the original secret key. We define a
notion of security for quotable signature schemes and construct a concrete
example of a quotable signature scheme, using Merkle trees and classical
digital signature schemes. The scheme is shown to be secure, with respect to
the aforementioned notion of security. Additionally, we prove bounds on the
complexity of the constructed scheme and provide algorithms for signing,
quoting, and verifying. Finally, concrete use cases of quotable signatures are
considered, using them to combat misinformation by bolstering authentic content
on social media. We consider both how quotable signatures can be used, and why
using them could help mitigate the effects of fake news.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 12:07:46 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 04:55:26 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Jul 2023 12:58:41 GMT"
},
{
"version": "v4",
"created": "Mon, 14 Aug 2023 09:26:21 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Boyar",
"Joan",
""
],
[
"Erfurth",
"Simon",
""
],
[
"Larsen",
"Kim S.",
""
],
[
"Niederhagen",
"Ruben",
""
]
] |
new_dataset
| 0.958021 |
2301.00626
|
Alejandro Vigna-Gomez
|
Alejandro Vigna-G\'omez, Javier Murillo, Manelik Ramirez, Alberto
Borbolla, Ian M\'arquez and Prasun K. Ray
|
Design and analysis of tweet-based election models for the 2021 Mexican
legislative election
|
Accepted for publication in EPJ Data Science. 20 pages, 7 figures, 1
table
| null |
10.1140/epjds/s13688-023-00401-w
| null |
cs.SI cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Modelling and forecasting real-life human behaviour using online social media
is an active endeavour of interest in politics, government, academia, and
industry. Since its creation in 2006, Twitter has been proposed as a potential
laboratory that could be used to gauge and predict social behaviour. During the
last decade, the user base of Twitter has been growing and becoming more
representative of the general population. Here we analyse this user base in the
context of the 2021 Mexican Legislative Election. To do so, we use a dataset of
15 million election-related tweets in the six months preceding election day. We
explore different election models that assign political preference to either
the ruling parties or the opposition. We find that models using data with
geographical attributes determine the results of the election with better
precision and accuracy than conventional polling methods. These results
demonstrate that analysis of public online data can outperform conventional
polling methods, and that political analysis and general forecasting would
likely benefit from incorporating such data in the immediate future. Moreover,
the same Twitter dataset with geographical attributes is positively correlated
with results from official census data on population and internet usage in
Mexico. These findings suggest that we have reached a period in time when
online activity, appropriately curated, can provide an accurate representation
of offline behaviour.
|
[
{
"version": "v1",
"created": "Mon, 2 Jan 2023 12:40:05 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 08:01:38 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Vigna-Gómez",
"Alejandro",
""
],
[
"Murillo",
"Javier",
""
],
[
"Ramirez",
"Manelik",
""
],
[
"Borbolla",
"Alberto",
""
],
[
"Márquez",
"Ian",
""
],
[
"Ray",
"Prasun K.",
""
]
] |
new_dataset
| 0.999063 |
2301.06719
|
Yh.Peng Tu
|
Peng Tu, Xu Xie, Guo AI, Yuexiang Li, Yawen Huang, Yefeng Zheng
|
FemtoDet: An Object Detection Baseline for Energy Versus Performance
Tradeoffs
|
ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient detectors for edge devices are often optimized for metrics such as parameter count or speed, which correlate only weakly with the energy consumption of the detectors.
However, some vision applications of convolutional neural networks, such as always-on surveillance cameras, are critically constrained by energy.
This paper aims to serve as a baseline by designing detectors to reach
tradeoffs between energy and performance from two perspectives:
1) We extensively analyze various CNNs to identify low-energy architectures,
including selecting activation functions, convolutions operators, and feature
fusion structures on necks. These underappreciated details in past work
seriously affect the energy consumption of detectors;
2) To break through the dilemmatic energy-performance problem, we propose a
balanced detector driven by energy using discovered low-energy components named
\textit{FemtoDet}.
In addition to the novel construction, we improve FemtoDet by considering
convolutions and training strategy optimizations.
Specifically, we develop a new instance boundary enhancement (IBE) module for
convolution optimization to overcome the contradiction between the limited
capacity of CNNs and detection tasks in diverse spatial representations, and
propose a recursive warm-restart (RecWR) for optimizing training strategy to
escape the sub-optimization of light-weight detectors by considering the data
shift produced in popular augmentations.
As a result, FemtoDet with only 68.77k parameters achieves a competitive
score of 46.3 AP50 on PASCAL VOC and 1.11 W $\&$ 64.47 FPS on Qualcomm
Snapdragon 865 CPU platforms.
Extensive experiments on COCO and TJU-DHD datasets indicate that the proposed
method achieves competitive results in diverse scenes.
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 06:24:08 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 15:57:28 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2023 07:36:01 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Jul 2023 02:40:42 GMT"
},
{
"version": "v5",
"created": "Sun, 13 Aug 2023 17:25:45 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Tu",
"Peng",
""
],
[
"Xie",
"Xu",
""
],
[
"AI",
"Guo",
""
],
[
"Li",
"Yuexiang",
""
],
[
"Huang",
"Yawen",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
new_dataset
| 0.994735 |
2303.00277
|
Sier Ha
|
Ha Sier, Xianjia Yu, Iacopo Catalano, Jorge Pena Queralta, Zhuo Zou
and Tomi Westerlund
|
UAV Tracking with Lidar as a Camera Sensors in GNSS-Denied Environments
|
I need to make some revisions to the paper because there are some
mistakes in the paper
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR has become one of the primary sensors in robotics and autonomous systems for high-accuracy situational awareness. In recent years, multi-modal LiDAR systems have emerged, and among them, LiDAR-as-a-camera sensors provide not only 3D
point clouds but also fixed-resolution 360{\deg}panoramic images by encoding
either depth, reflectivity, or near-infrared light in the image pixels. This
potentially brings computer vision capabilities on top of the potential of
LiDAR itself. In this paper, we are specifically interested in utilizing LiDARs
and LiDAR-generated images for tracking Unmanned Aerial Vehicles (UAVs) in
real-time which can benefit applications including docking, remote
identification, or counter-UAV systems, among others. This is, to the best of
our knowledge, the first work that explores the possibility of fusing the
images and point cloud generated by a single LiDAR sensor to track a UAV
without a priori known initialized position. We trained a custom YOLOv5 model
for detecting UAVs based on the panoramic images collected in an indoor
experiment arena with a MOCAP system. By integrating with the point cloud, we
are able to continuously provide the position of the UAV. Our experiment
demonstrated the effectiveness of the proposed UAV tracking approach compared
with methods based only on point clouds or images. Additionally, we evaluated
the real-time performance of our approach on the Nvidia Jetson Nano, a popular
mobile computing platform.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 06:55:49 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 11:40:11 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Aug 2023 11:04:31 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Sier",
"Ha",
""
],
[
"Yu",
"Xianjia",
""
],
[
"Catalano",
"Iacopo",
""
],
[
"Queralta",
"Jorge Pena",
""
],
[
"Zou",
"Zhuo",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.999439 |
2303.01664
|
Yuma Koizumi
|
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe,
Nobuyuki Morioka, Yu Zhang, Wei Han, Ankur Bapna, Michiel Bacchiani
|
Miipher: A Robust Speech Restoration Model Integrating Self-Supervised
Speech and Text Representations
|
Accepted to WASPAA 2023
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Speech restoration (SR) is a task of converting degraded speech signals into
high-quality ones. In this study, we propose a robust SR model called Miipher,
and apply Miipher to a new SR application: increasing the amount of
high-quality training data for speech generation by converting speech samples
collected from the Web to studio-quality. To make our SR model robust against
various types of degradation, we use (i) a speech representation extracted from w2v-BERT
for the input feature, and (ii) a text representation extracted from
transcripts via PnG-BERT as a linguistic conditioning feature. Experiments show
that Miipher (i) is robust against various audio degradation and (ii) enable us
to train a high-quality text-to-speech (TTS) model from restored speech samples
collected from the Web. Audio samples are available at our demo page:
google.github.io/df-conformer/miipher/
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 01:57:16 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 09:22:18 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Koizumi",
"Yuma",
""
],
[
"Zen",
"Heiga",
""
],
[
"Karita",
"Shigeki",
""
],
[
"Ding",
"Yifan",
""
],
[
"Yatabe",
"Kohei",
""
],
[
"Morioka",
"Nobuyuki",
""
],
[
"Zhang",
"Yu",
""
],
[
"Han",
"Wei",
""
],
[
"Bapna",
"Ankur",
""
],
[
"Bacchiani",
"Michiel",
""
]
] |
new_dataset
| 0.997911 |
2303.06007
|
Bilal Farooq
|
Nael Alsaleh and Bilal Farooq
|
Sustainability Analysis Framework for On-Demand Public Transit Systems
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is increased interest from transit agencies in replacing fixed-route transit services with on-demand public transit (ODT). However, it is still
unclear when and where such a service is efficient and sustainable. To this
end, we provide a comprehensive framework for assessing the sustainability of
ODT systems from the perspective of overall efficiency, environmental
footprint, and social equity and inclusion. The proposed framework is
illustrated by applying it to the Town of Innisfil, Ontario, where an ODT
system has been implemented since 2017. It can be concluded that when there is
adequate supply and no surge pricing, crowdsourced ODTs are the most
cost-effective transit system when the demand is below 3.37 riders/km2/day.
With surge pricing applied to crowdsourced ODTs, hybrid systems become the most
cost-effective transit solution when demand ranges between 1.18 and 3.37
riders/km2/day. The use of private vehicles is more environmentally sustainable
than providing public transit service at all demand levels below 3.37
riders/km2/day. However, the electrification of the public transit fleet along
with optimized charging strategies can reduce total yearly GHG emissions by
more than 98%. Furthermore, transit systems have similar equity distributions
for waiting and in-vehicle travel times.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 16:09:51 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 23:42:03 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Alsaleh",
"Nael",
""
],
[
"Farooq",
"Bilal",
""
]
] |
new_dataset
| 0.994891 |
2303.07274
|
Yonatan Bitton
|
Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt,
Yuval Elovici, Gabriel Stanovsky, Roy Schwartz
|
Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of
Synthetic and Compositional Images
|
Accepted to ICCV 2023. Website: whoops-benchmark.github.io
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weird, unusual, and uncanny images pique the curiosity of observers because
they challenge commonsense. For example, an image released during the 2022
world cup depicts the famous soccer stars Lionel Messi and Cristiano Ronaldo
playing chess, which playfully violates our expectation that their competition
should occur on the football field. Humans can easily recognize and interpret
these unconventional images, but can AI models do the same? We introduce
WHOOPS!, a new dataset and benchmark for visual commonsense. The dataset is
comprised of purposefully commonsense-defying images created by designers using
publicly-available image generation tools like Midjourney. We consider several
tasks posed over the dataset. In addition to image captioning, cross-modal
matching, and visual question answering, we introduce a difficult explanation
generation task, where models must identify and explain why a given image is
unusual. Our results show that state-of-the-art models such as GPT3 and BLIP2
still lag behind human performance on WHOOPS!. We hope our dataset will inspire
the development of AI models with stronger visual commonsense reasoning
abilities. Data, models and code are available at the project website:
whoops-benchmark.github.io
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 16:49:43 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 21:30:06 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jul 2023 16:36:38 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Aug 2023 22:37:31 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Bitton-Guetta",
"Nitzan",
""
],
[
"Bitton",
"Yonatan",
""
],
[
"Hessel",
"Jack",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Elovici",
"Yuval",
""
],
[
"Stanovsky",
"Gabriel",
""
],
[
"Schwartz",
"Roy",
""
]
] |
new_dataset
| 0.997941 |
2303.08597
|
Thanh Nhat Huy Nguyen
|
Huy Nguyen, Kien Nguyen, Sridha Sridharan, Clinton Fookes
|
Aerial-Ground Person Re-ID
|
Published on IEEE International Conference on Multimedia and Expo
2023 (ICME2023)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Person re-ID matches persons across multiple non-overlapping cameras. Despite
the increasing deployment of airborne platforms in surveillance, existing person re-ID benchmarks focus on ground-ground matching, with very limited effort on aerial-aerial matching. We propose a new benchmark dataset -
AG-ReID, which performs person re-ID matching in a new setting: across aerial
and ground cameras. Our dataset contains 21,983 images of 388 identities and 15
soft attributes for each identity. The data was collected by a UAV flying at
altitudes between 15 to 45 meters and a ground-based CCTV camera on a
university campus. Our dataset presents a novel elevated-viewpoint challenge
for person re-ID due to the significant difference in person appearance across
these cameras. We propose an explainable algorithm to guide the person re-ID
model's training with soft attributes to address this challenge. Experiments
demonstrate the efficacy of our method on the aerial-ground person re-ID task.
The dataset will be published and the baseline codes will be open-sourced at
https://github.com/huynguyen792/AG-ReID to facilitate research in this area.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 13:07:21 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 09:32:42 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 00:36:08 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Mar 2023 07:56:21 GMT"
},
{
"version": "v5",
"created": "Mon, 14 Aug 2023 04:44:50 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Nguyen",
"Huy",
""
],
[
"Nguyen",
"Kien",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
]
] |
new_dataset
| 0.999823 |
2303.09695
|
Sauradip Nag
|
Sauradip Nag, Anran Qi, Xiatian Zhu and Ariel Shamir
|
PersonalTailor: Personalizing 2D Pattern Design from 3D Garment Point
Clouds
|
Technical Report
| null | null | null |
cs.CV cs.GR cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Garment pattern design aims to convert a 3D garment to the corresponding 2D
panels and their sewing structure. Existing methods rely either on template
fitting with heuristics and prior assumptions, or on model learning with
complicated shape parameterization. Importantly, both approaches do not allow
for personalization of the output garment, for which demand is increasing today.
To fill this demand, we introduce PersonalTailor: a personalized 2D pattern
design method, where the user can input specific constraints or demands (in
language or sketch) for personal 2D panel fabrication from 3D point clouds.
PersonalTailor first learns multi-modal panel embeddings based on unsupervised cross-modal association and attentive fusion. It then predicts binary panel masks individually using a transformer encoder-decoder framework.
Extensive experiments show that our PersonalTailor excels on both personalized
and standard pattern fabrication tasks.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 00:03:38 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 20:07:48 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Nag",
"Sauradip",
""
],
[
"Qi",
"Anran",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Shamir",
"Ariel",
""
]
] |
new_dataset
| 0.997784 |
2303.16986
|
Kamran Shafafi
|
Kamran Shafafi, Eduardo Nuno Almeida, Andr\'e Coelho, Helder Fontes,
Manuel Ricardo, Rui Campos
|
UAV-Assisted Wireless Communications: An Experimental Analysis of A2G
and G2A Channels
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicles (UAVs) offer promising potential as communications
node carriers, providing on-demand wireless connectivity to users. While
existing literature presents various wireless channel models, it often
overlooks the impact of UAV heading. This paper provides an experimental
characterization of the Air-to-Ground (A2G) and Ground-to-Air (G2A) wireless
channels in an open environment with no obstacles nor interference, considering
the distance and the UAV heading. We analyze the received signal strength
indicator and the TCP throughput between a ground user and a UAV, covering
distances between 50~m and 500~m, and considering different UAV headings.
Additionally, we characterize the antenna's radiation pattern based on UAV
headings. The paper provides valuable perspectives on the capabilities of UAVs
in offering on-demand and dynamic wireless connectivity, as well as highlights
the significance of considering UAV heading and antenna configurations in
real-world scenarios.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 19:26:38 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 08:40:24 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Aug 2023 10:07:53 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Shafafi",
"Kamran",
""
],
[
"Almeida",
"Eduardo Nuno",
""
],
[
"Coelho",
"André",
""
],
[
"Fontes",
"Helder",
""
],
[
"Ricardo",
"Manuel",
""
],
[
"Campos",
"Rui",
""
]
] |
new_dataset
| 0.997793 |
2304.08842
|
Sicen Guo
|
Sicen Guo, Jiahang Li, Shuai Su, Yi Feng, Dacheng Zhou, Chen Chen,
Denghuang Zhang, Xingyi Zhu, Qijun Chen, Rui Fan
|
UDTIRI: An Open-Source Intelligent Road Inspection Benchmark Suite
|
Database webpage: https://www.udtiri.com/, Kaggle webpage:
https://www.kaggle.com/datasets/jiahangli617/udtiri
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is enormous potential to leverage powerful deep learning methods in the emerging field of urban digital twins, particularly in the area of intelligent road inspection, where research and data are currently limited. To facilitate progress in this
field, we have developed a well-labeled road pothole dataset named Urban
Digital Twins Intelligent Road Inspection (UDTIRI) dataset. We hope this
dataset will enable the use of powerful deep learning methods in urban road
inspection, providing algorithms with a more comprehensive understanding of the
scene and maximizing their potential. Our dataset comprises 1000 images of
potholes, captured in various scenarios with different lighting and humidity
conditions. Our intention is to employ this dataset for object detection,
semantic segmentation, and instance segmentation tasks. Our team has devoted
significant effort to conducting a detailed statistical analysis, and
benchmarking a selection of representative algorithms from recent years. We
also provide a multi-task platform for researchers to fully exploit the
performance of various algorithms with the support of UDTIRI dataset.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 09:13:52 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2023 11:31:34 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Guo",
"Sicen",
""
],
[
"Li",
"Jiahang",
""
],
[
"Su",
"Shuai",
""
],
[
"Feng",
"Yi",
""
],
[
"Zhou",
"Dacheng",
""
],
[
"Chen",
"Chen",
""
],
[
"Zhang",
"Denghuang",
""
],
[
"Zhu",
"Xingyi",
""
],
[
"Chen",
"Qijun",
""
],
[
"Fan",
"Rui",
""
]
] |
new_dataset
| 0.999793 |
2304.12687
|
Ligong Wang
|
Amos Lapidoth and Ligong Wang
|
State-Dependent DMC with a Causal Helper
|
To appear in the IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A memoryless state sequence governing the behavior of a memoryless
state-dependent channel is to be described causally to an encoder wishing to
communicate over said channel. Given the maximal-allowed description rate, we
seek the description that maximizes the Shannon capacity. It is shown that the
maximum need not be achieved by a memoryless (symbol-by-symbol) description.
Such descriptions are, however, optimal when the receiver is cognizant of the
state sequence or when the description is allowed to depend on the message. For
other cases, a block-Markov scheme with backward decoding is proposed.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 09:42:11 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2023 06:46:04 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Lapidoth",
"Amos",
""
],
[
"Wang",
"Ligong",
""
]
] |
new_dataset
| 0.98749 |
2305.01643
|
Shengyu Huang
|
Shengyu Huang, Zan Gojcic, Zian Wang, Francis Williams, Yoni Kasten,
Sanja Fidler, Konrad Schindler, Or Litany
|
Neural LiDAR Fields for Novel View Synthesis
|
ICCV 2023 - camera ready. Project page:
https://research.nvidia.com/labs/toronto-ai/nfl/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Neural Fields for LiDAR (NFL), a method to optimise a neural field
scene representation from LiDAR measurements, with the goal of synthesizing
realistic LiDAR scans from novel viewpoints. NFL combines the rendering power
of neural fields with a detailed, physically motivated model of the LiDAR
sensing process, thus enabling it to accurately reproduce key sensor behaviors
like beam divergence, secondary returns, and ray dropping. We evaluate NFL on
synthetic and real LiDAR scans and show that it outperforms explicit
reconstruct-then-simulate methods as well as other NeRF-style methods on the
LiDAR novel view synthesis task. Moreover, we show that the improved realism of the
synthesized views narrows the domain gap to real scans and translates to better
registration and semantic segmentation performance.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 17:55:38 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2023 09:25:18 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Huang",
"Shengyu",
""
],
[
"Gojcic",
"Zan",
""
],
[
"Wang",
"Zian",
""
],
[
"Williams",
"Francis",
""
],
[
"Kasten",
"Yoni",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Litany",
"Or",
""
]
] |
new_dataset
| 0.977309 |
2305.09419
|
Gilbert Netzer
|
Gilbert Netzer and Stefano Markidis
|
QHDL: a Low-Level Circuit Description Language for Quantum Computing
|
4 pages, 7 figures, to be published in Proceedings of the 20th ACM
International Conference on Computing Frontiers, May 9-11, 2023, Bologna,
Italy
| null |
10.1145/3587135.3592191
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a descriptive language called QHDL, akin to VHDL, to
program gate-based quantum computing systems. Unlike other popular quantum
programming languages, QHDL targets low-level quantum computing programming and
aims to provide a common framework for programming FPGAs and gate-based quantum
computing systems. The paper presents an initial implementation and design
principles of the QHDL framework, including a compiler and quantum computer
simulator. We discuss the challenges of low-level integration of streaming
models and quantum computing for programming FPGAs and gate-based quantum
computing systems.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 13:18:27 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Netzer",
"Gilbert",
""
],
[
"Markidis",
"Stefano",
""
]
] |
new_dataset
| 0.99961 |
2306.02898
|
Shuyu Yang
|
Shuyu Yang, Yinan Zhou, Yaxiong Wang, Yujiao Wu, Li Zhu, Zhedong Zheng
|
Towards Unified Text-based Person Retrieval: A Large-scale
Multi-Attribute and Language Search Benchmark
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a large Multi-Attribute and Language Search
dataset for text-based person retrieval, called MALS, and explore the
feasibility of jointly pre-training on both attribute recognition and
image-text matching tasks. In particular, MALS contains 1,510,330
image-text pairs, which is about 37.5 times larger than the prevailing CUHK-PEDES,
and all images are annotated with 27 attributes. Considering the privacy
concerns and annotation costs, we leverage the off-the-shelf diffusion models
to generate the dataset. To verify the feasibility of learning from the
generated data, we develop a new joint Attribute Prompt Learning and Text
Matching Learning (APTM) framework, considering the shared knowledge between
attribute and text. As the name implies, APTM contains an attribute prompt
learning stream and a text matching learning stream. (1) The attribute prompt
learning leverages the attribute prompts for image-attribute alignment, which
enhances the text matching learning. (2) The text matching learning facilitates
the representation learning on fine-grained details, and in turn, boosts the
attribute prompt learning. Extensive experiments validate the effectiveness of
the pre-training on MALS, achieving state-of-the-art retrieval performance via
APTM on three challenging real-world benchmarks. In particular, APTM achieves a
consistent improvement of +6.96%, +7.68%, and +16.95% Recall@1 accuracy on the
CUHK-PEDES, ICFG-PEDES, and RSTPReid datasets, respectively.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 14:06:24 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 06:42:56 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Aug 2023 11:13:08 GMT"
},
{
"version": "v4",
"created": "Mon, 14 Aug 2023 07:37:27 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Yang",
"Shuyu",
""
],
[
"Zhou",
"Yinan",
""
],
[
"Wang",
"Yaxiong",
""
],
[
"Wu",
"Yujiao",
""
],
[
"Zhu",
"Li",
""
],
[
"Zheng",
"Zhedong",
""
]
] |
new_dataset
| 0.987319 |
2306.07705
|
Zhongxiang Sun
|
Zhongxiang Sun and Zihua Si and Xiaoxue Zang and Dewei Leng and Yanan
Niu and Yang Song and Xiao Zhang and Jun Xu
|
KuaiSAR: A Unified Search And Recommendation Dataset
|
CIKM 2023 resource track
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The confluence of Search and Recommendation (S&R) services is vital to online
services, including e-commerce and video platforms. The integration of S&R
modeling is a highly intuitive approach adopted by industry practitioners.
However, there is a noticeable lack of research conducted in this area within
academia, primarily due to the absence of publicly available datasets.
Consequently, a substantial gap has emerged between academia and industry
regarding research endeavors in joint optimization using user behavior data
from both S&R services. To bridge this gap, we introduce the first large-scale,
real-world dataset KuaiSAR of integrated Search And Recommendation behaviors
collected from Kuaishou, a leading short-video app in China with over 350
million daily active users. Previous research in this field has predominantly
employed publicly available semi-synthetic datasets with simulated,
artificially fabricated search behaviors. Distinct from previous datasets,
KuaiSAR contains genuine user behaviors, including the occurrence of each
interaction within either search or recommendation service, and the users'
transitions between the two services. This work aids in joint modeling of S&R,
and utilizing search data for recommender systems (and recommendation data for
search engines). Furthermore, due to the various feedback labels associated
with user-video interactions, KuaiSAR also supports a broad range of tasks,
including intent recommendation, multi-task learning, and modeling of long
sequential multi-behavioral patterns. We believe this dataset will serve as a
catalyst for innovative research and bridge the gap between academia and
industry in understanding the S&R services in practical, real-world
applications.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 11:46:37 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 11:18:36 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Jun 2023 07:49:58 GMT"
},
{
"version": "v4",
"created": "Mon, 14 Aug 2023 03:48:45 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Sun",
"Zhongxiang",
""
],
[
"Si",
"Zihua",
""
],
[
"Zang",
"Xiaoxue",
""
],
[
"Leng",
"Dewei",
""
],
[
"Niu",
"Yanan",
""
],
[
"Song",
"Yang",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Xu",
"Jun",
""
]
] |
new_dataset
| 0.996159 |
2306.09011
|
Kevis-Kokitsi Maninis
|
Kevis-Kokitsi Maninis, Stefan Popov, Matthias Nie{\ss}ner, Vittorio
Ferrari
|
CAD-Estate: Large-scale CAD Model Annotation in RGB Videos
|
Project page: https://github.com/google-research/cad-estate
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for annotating videos of complex multi-object scenes with
a globally-consistent 3D representation of the objects. We annotate each object
with a CAD model from a database, and place it in the 3D coordinate frame of
the scene with a 9-DoF pose transformation. Our method is semi-automatic and
works on commonly-available RGB videos, without requiring a depth sensor. Many
steps are performed automatically, and the tasks performed by humans are
simple, well-specified, and require only limited reasoning in 3D. This makes
them feasible for crowd-sourcing and has allowed us to construct a large-scale
dataset by annotating real-estate videos from YouTube. Our dataset CAD-Estate
offers 101k instances of 12k unique CAD models placed in the 3D representations
of 20k videos. In comparison to Scan2CAD, the largest existing dataset with CAD
model annotations on real scenes, CAD-Estate has 7x more instances and 4x more
unique CAD models. We showcase the benefits of pre-training a Mask2CAD model on
CAD-Estate for the task of automatic 3D object reconstruction and pose
estimation, demonstrating that it leads to performance improvements on the
popular Scan2CAD benchmark. The dataset is available at
https://github.com/google-research/cad-estate.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 10:12:02 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 12:16:53 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Maninis",
"Kevis-Kokitsi",
""
],
[
"Popov",
"Stefan",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Ferrari",
"Vittorio",
""
]
] |
new_dataset
| 0.973932 |
2307.01482
|
Tong Nie
|
Tong Nie, Guoyang Qin, Lijun Sun, Yunpeng Wang, Jian Sun
|
Nexus sine qua non: Essentially Connected Networks for Traffic
Forecasting
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatiotemporal graph neural networks (STGNNs) have emerged as a leading
approach for learning representations and forecasting on traffic datasets with
underlying topological and correlational structures. However, current STGNNs
use intricate techniques with high complexities to capture these structures,
making them difficult to understand and scale. The existence of simple yet
efficient architectures remains an open question. Upon closer examination, we
find that certain forms of spatiotemporal contextualization lie at the core of
STGNN representations. In light of this, we design Nexus sine qua
non (NexuSQN), an essentially connected network built on an efficient
message-passing backbone. NexuSQN simply uses learnable "where" and "when"
locators for the aforementioned contextualization and omits any intricate
components such as RNNs, Transformers, and diffusion convolutions. Results show
that NexuSQN outperforms intricately designed benchmarks in terms of size,
computational efficiency, and accuracy. This suggests a promising future for
developing simple yet efficient neural predictors.
|
[
{
"version": "v1",
"created": "Tue, 4 Jul 2023 05:19:19 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 02:40:29 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Aug 2023 07:39:53 GMT"
},
{
"version": "v4",
"created": "Sun, 13 Aug 2023 08:42:08 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Nie",
"Tong",
""
],
[
"Qin",
"Guoyang",
""
],
[
"Sun",
"Lijun",
""
],
[
"Wang",
"Yunpeng",
""
],
[
"Sun",
"Jian",
""
]
] |
new_dataset
| 0.997877 |
2307.06281
|
Haodong Duan
|
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo
Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
|
MMBench: Is Your Multi-modal Model an All-around Player?
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 16:23:09 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jul 2023 16:02:57 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Aug 2023 13:12:47 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Liu",
"Yuan",
""
],
[
"Duan",
"Haodong",
""
],
[
"Zhang",
"Yuanhan",
""
],
[
"Li",
"Bo",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Zhao",
"Wangbo",
""
],
[
"Yuan",
"Yike",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"He",
"Conghui",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Chen",
"Kai",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.99914 |
2307.06505
|
Shanliang Yao
|
Shanliang Yao, Runwei Guan, Zhaodong Wu, Yi Ni, Zile Huang, Zixian
Zhang, Yong Yue, Weiping Ding, Eng Gee Lim, Hyungjoon Seo, Ka Lok Man,
Xiaohui Zhu, Yutao Yue
|
WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmark
for Autonomous Driving on Water Surfaces
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving on water surfaces plays an essential role in executing
hazardous and time-consuming missions, such as maritime surveillance, survivor
rescue, environmental monitoring, hydrographic mapping, and waste cleaning. This
work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset
for autonomous driving on water surfaces. Equipped with a 4D radar and a
monocular camera, our Unmanned Surface Vehicle (USV) proffers all-weather
solutions for discerning object-related information, including color, shape,
texture, range, velocity, azimuth, and elevation. Focusing on typical static
and dynamic objects on water surfaces, we label the camera images and radar
point clouds at pixel-level and point-level, respectively. In addition to basic
perception tasks, such as object detection, instance segmentation and semantic
segmentation, we also provide annotations for free-space segmentation and
waterline segmentation. Leveraging the multi-task and multi-modal data, we
conduct benchmark experiments on the uni-modality of radar and camera, as well
as the fused modalities. Experimental results demonstrate that 4D radar-camera
fusion can considerably improve the accuracy and robustness of perception on
water surfaces, especially in adverse lighting and weather conditions.
The WaterScenes dataset is publicly available at https://waterscenes.github.io.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 01:05:12 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 08:52:02 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Yao",
"Shanliang",
""
],
[
"Guan",
"Runwei",
""
],
[
"Wu",
"Zhaodong",
""
],
[
"Ni",
"Yi",
""
],
[
"Huang",
"Zile",
""
],
[
"Zhang",
"Zixian",
""
],
[
"Yue",
"Yong",
""
],
[
"Ding",
"Weiping",
""
],
[
"Lim",
"Eng Gee",
""
],
[
"Seo",
"Hyungjoon",
""
],
[
"Man",
"Ka Lok",
""
],
[
"Zhu",
"Xiaohui",
""
],
[
"Yue",
"Yutao",
""
]
] |
new_dataset
| 0.999815 |
2307.08602
|
Hiroyasu Tsukamoto
|
Hiroyasu Tsukamoto and Benjamin Rivi\`ere and Changrak Choi and Amir
Rahmani and Soon-Jo Chung
|
CaRT: Certified Safety and Robust Tracking in Learning-based Motion
Planning for Multi-Agent Systems
|
IEEE Conference on Decision and Control (CDC), Preprint Version,
Accepted July, 2023
| null | null | null |
cs.RO cs.LG cs.MA cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The key innovation of our analytical method, CaRT, lies in establishing a new
hierarchical, distributed architecture to guarantee the safety and robustness
of a given learning-based motion planning policy. First, in a nominal setting,
the analytical form of our CaRT safety filter formally ensures safe maneuvers
of nonlinear multi-agent systems, optimally with minimal deviation from the
learning-based policy. Second, in off-nominal settings, the analytical form of
our CaRT robust filter optimally tracks the certified safe trajectory,
generated by the previous layer in the hierarchy, the CaRT safety filter. We
show using contraction theory that CaRT guarantees safety and the exponential
boundedness of the trajectory tracking error, even in the presence of
deterministic and stochastic disturbances. Also, the hierarchical nature of CaRT
enhances its robustness for safety simply through its superior tracking of
the certified safe trajectory, thereby making it suitable for off-nominal
scenarios with large disturbances. This is a major distinction from
conventional safety function-driven approaches, where the robustness originates
from the stability of a safe set, which could pull the system
over-conservatively to the interior of the safe set. Our log-barrier
formulation in CaRT allows for its distributed implementation in multi-agent
settings. We demonstrate the effectiveness of CaRT in several examples of
nonlinear motion planning and control problems, including optimal,
multi-spacecraft reconfiguration.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 21:51:29 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Aug 2023 20:36:46 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Tsukamoto",
"Hiroyasu",
""
],
[
"Rivière",
"Benjamin",
""
],
[
"Choi",
"Changrak",
""
],
[
"Rahmani",
"Amir",
""
],
[
"Chung",
"Soon-Jo",
""
]
] |
new_dataset
| 0.993446 |
2307.09531
|
Kai Huang
|
Kai Huang, Junqiao Zhao, Zhongyang Zhu, Chen Ye, Tiantian Feng
|
LOG-LIO: A LiDAR-Inertial Odometry with Efficient Local Geometric
Information Estimation
|
8 pages, 4 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Local geometric information, i.e., the normals and distribution of points, is
crucial for LiDAR-based simultaneous localization and mapping (SLAM) because it
provides constraints for data association, which further determines the
direction of optimization and ultimately affects the accuracy of localization.
However, estimating the normals and distribution of points is time-consuming
even with the assistance of a kd-tree or volumetric maps. To achieve fast normal
estimation, we look into the structure of the LiDAR scan and propose a ring-based
fast approximate least squares (Ring FALS) method. With the Ring structural
information, estimating the normal requires only the range information of the
points when a new scan arrives. To efficiently estimate the distribution of
points, we extend the ikd-tree to manage the map in voxels and update the
distribution of points in each voxel incrementally while maintaining its
consistency with the normal estimation. We further fix the distribution after
its convergence to balance the time consumption and the correctness of
representation. Based on the extracted and maintained local geometric
information, we devise a robust and accurate hierarchical data association
scheme where point-to-surfel association is prioritized over point-to-plane.
Extensive experiments on diverse public datasets demonstrate the advantages of
our system compared to other state-of-the-art methods. Our open source
implementation is available at https://github.com/tiev-tongji/LOG-LIO.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 18:20:56 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 01:47:50 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Huang",
"Kai",
""
],
[
"Zhao",
"Junqiao",
""
],
[
"Zhu",
"Zhongyang",
""
],
[
"Ye",
"Chen",
""
],
[
"Feng",
"Tiantian",
""
]
] |
new_dataset
| 0.998785 |
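For context on the record above: the LOG-LIO abstract treats surface-normal estimation as the main bottleneck. The sketch below shows the standard covariance (PCA) based normal estimation that such pipelines typically accelerate. It is a generic, assumed illustration in Python, not the paper's Ring FALS method; the function name and the synthetic example points are hypothetical.

```python
# Illustrative sketch: covariance/PCA-based normal estimation for a point neighborhood.
import numpy as np

def estimate_normal(neighborhood: np.ndarray) -> np.ndarray:
    """Estimate a surface normal from an (N, 3) array of neighboring points.

    The normal is the eigenvector of the neighborhood covariance matrix
    associated with the smallest eigenvalue (direction of least variance).
    """
    centroid = neighborhood.mean(axis=0)
    centered = neighborhood - centroid
    cov = centered.T @ centered / max(len(neighborhood) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue

# Example: points sampled near the z = 0 plane give a normal close to +/- [0, 0, 1].
pts = np.random.rand(50, 3) * np.array([1.0, 1.0, 0.01])
print(estimate_normal(pts))
```

Per the abstract, Ring FALS avoids repeatedly building such neighborhoods by exploiting the LiDAR ring structure and using only range information when a new scan arrives.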
2307.10818
|
Dongwei Xiao
|
Dongwei Xiao, Zhibo Liu, and Shuai Wang
|
PHYFU: Fuzzing Modern Physics Simulation Engines
|
This paper is accepted at The 38th IEEE/ACM International Conference
on Automated Software Engineering, a.k.a. ASE 2023. Please cite the published
version as soon as this paper appears in the conference publications
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
A physical simulation engine (PSE) is a software system that simulates
physical environments and objects. Modern PSEs feature both forward and
backward simulations, where the forward phase predicts the behavior of a
simulated system, and the backward phase provides gradients (guidance) for
learning-based control tasks, such as a robot arm learning to fetch items. This
way, modern PSEs show promising support for learning-based control methods. To
date, PSEs have been widely used in various highly profitable commercial
applications, such as games, movies, virtual reality (VR), and robotics.
Despite the prosperous development and usage of PSEs by academia and industrial
manufacturers such as Google and NVIDIA, PSEs may produce incorrect
simulations, which may lead to negative results, from poor user experience in
entertainment to accidents in robotics-involved manufacturing and surgical
operations.
This paper introduces PHYFU, a fuzzing framework designed specifically for
PSEs to uncover errors in both forward and backward simulation phases. PHYFU
mutates initial states and asserts if the PSE under test behaves consistently
with respect to basic Physics Laws (PLs). We further use feedback-driven test
input scheduling to guide and accelerate the search for errors. Our study of
four PSEs covers mainstream industrial vendors (Google and NVIDIA) as well as
academic products. We successfully uncover over 5K error-triggering inputs that
generate incorrect simulation results spanning across the whole software stack
of PSEs.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 12:26:50 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 03:58:59 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Xiao",
"Dongwei",
""
],
[
"Liu",
"Zhibo",
""
],
[
"Wang",
"Shuai",
""
]
] |
new_dataset
| 0.999749 |
2308.01686
|
Zhiwei Zhang
|
Zhiwei Zhang, Zhizhong Zhang, Qian Yu, Ran Yi, Yuan Xie and Lizhuang
Ma
|
LiDAR-Camera Panoptic Segmentation via Geometry-Consistent and
Semantic-Aware Alignment
|
Accepted as ICCV 2023 paper
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
3D panoptic segmentation is a challenging perception task that requires both
semantic segmentation and instance segmentation. In this task, we notice that
images could provide rich texture, color, and discriminative information, which
can complement LiDAR data for clear performance improvements, but their fusion
remains a challenging problem. To this end, we propose LCPS, the first
LiDAR-Camera Panoptic Segmentation network. In our approach, we conduct
LiDAR-Camera fusion in three stages: 1) an Asynchronous Compensation Pixel
Alignment (ACPA) module that calibrates the coordinate misalignment caused by
asynchronous problems between sensors; 2) a Semantic-Aware Region Alignment
(SARA) module that extends the one-to-one point-pixel mapping to one-to-many
semantic relations; 3) a Point-to-Voxel feature Propagation (PVP) module that
integrates both geometric and semantic fusion information for the entire point
cloud. Our fusion strategy improves about 6.9% PQ performance over the
LiDAR-only baseline on NuScenes dataset. Extensive quantitative and qualitative
experiments further demonstrate the effectiveness of our novel framework. The
code will be released at https://github.com/zhangzw12319/lcps.git.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 10:57:58 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 18:32:54 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhang",
"Zhiwei",
""
],
[
"Zhang",
"Zhizhong",
""
],
[
"Yu",
"Qian",
""
],
[
"Yi",
"Ran",
""
],
[
"Xie",
"Yuan",
""
],
[
"Ma",
"Lizhuang",
""
]
] |
new_dataset
| 0.991069 |
2308.01861
|
Xueying Du
|
Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan
Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou
|
ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on
Class-level Code Generation
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this work, we make the first attempt to evaluate LLMs in a more
challenging code generation scenario, i.e. class-level code generation. We
first manually construct the first class-level code generation benchmark
ClassEval of 100 class-level Python code generation tasks with approximately
500 person-hours. Based on it, we then perform the first study of 11
state-of-the-art LLMs on class-level code generation. Based on our results, we
have the following main findings. First, we find that all existing LLMs show
much worse performance on class-level code generation compared to on standalone
method-level code generation benchmarks like HumanEval; and the method-level
coding ability cannot equivalently reflect the class-level coding ability among
LLMs. Second, we find that GPT-4 and GPT-3.5 still exhibit dominant superiority
over other LLMs on class-level code generation, and the second-tier models
include Instruct-Starcoder, Instruct-Codegen, and Wizardcoder with very
similar performance. Third, we find that generating the entire class all at
once (i.e., the holistic generation strategy) is the best generation strategy only
for GPT-4 and GPT-3.5, while method-by-method generation (i.e., incremental and
compositional) is a better strategy for the other models, which have limited
ability to understand long instructions and utilize intermediate information.
Lastly, we find that models have limited ability to generate method-dependent
code, and we discuss the frequent error types in generated classes. Our benchmark is
available at https://github.com/FudanSELab/ClassEval.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 16:31:02 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 09:07:00 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Du",
"Xueying",
""
],
[
"Liu",
"Mingwei",
""
],
[
"Wang",
"Kaixin",
""
],
[
"Wang",
"Hanlin",
""
],
[
"Liu",
"Junwei",
""
],
[
"Chen",
"Yixuan",
""
],
[
"Feng",
"Jiayi",
""
],
[
"Sha",
"Chaofeng",
""
],
[
"Peng",
"Xin",
""
],
[
"Lou",
"Yiling",
""
]
] |
new_dataset
| 0.975289 |
2308.04498
|
Hao Fei
|
Yiyun Xiong, Mengwei Dai, Fei Li, Hao Fei, Bobo Li, Shengqiong Wu,
Donghong Ji, Chong Teng
|
DialogRE^C+: An Extension of DialogRE to Investigate How Much
Coreference Helps Relation Extraction in Dialogs
|
Accepted by NLPCC 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Dialogue relation extraction (DRE), which identifies the relations between
argument pairs in dialogue text, suffers greatly from the frequent occurrence of
personal pronouns and entity and speaker coreference. This work introduces a
new benchmark dataset DialogRE^C+, introducing coreference resolution into the
DRE scenario. With the aid of high-quality coreference knowledge, the reasoning
of argument relations is expected to be enhanced. In the DialogRE^C+ dataset, we
manually annotate a total of 5,068 coreference chains over 36,369 argument mentions
based on the existing DialogRE data, where four different coreference chain
types, namely speaker, person, location, and organization chains,
are explicitly marked. We further develop 4 coreference-enhanced graph-based
DRE models, which learn effective coreference representations for improving the
DRE task. We also train a coreference resolution model based on our annotations
and evaluate the effect of automatically extracted coreference chains,
demonstrating the practicality of our dataset and its potential for other
domains and tasks.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:03:29 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2023 06:12:36 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Xiong",
"Yiyun",
""
],
[
"Dai",
"Mengwei",
""
],
[
"Li",
"Fei",
""
],
[
"Fei",
"Hao",
""
],
[
"Li",
"Bobo",
""
],
[
"Wu",
"Shengqiong",
""
],
[
"Ji",
"Donghong",
""
],
[
"Teng",
"Chong",
""
]
] |
new_dataset
| 0.999485 |
2308.04889
|
Steffen Eger
|
Steffen Eger and Christoph Leiter and Jonas Belouadi and Ran Zhang and
Aida Kostikova and Daniil Larionov and Yanran Chen and Vivian Fresen
|
NLLG Quarterly arXiv Report 06/23: What are the most influential current
AI Papers?
|
Technical Report
| null | null | null |
cs.CY cs.AI cs.CL cs.DL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The rapid growth of information in the field of Generative Artificial
Intelligence (AI), particularly in the subfields of Natural Language Processing
(NLP) and Machine Learning (ML), presents a significant challenge for
researchers and practitioners to keep pace with the latest developments. To
address the problem of information overload, this report by the Natural
Language Learning Group at Bielefeld University focuses on identifying the most
popular papers on arXiv, with a specific emphasis on NLP and ML. The objective
is to offer a quick guide to the most relevant and widely discussed research,
aiding both newcomers and established researchers in staying abreast of current
trends. In particular, we compile a list of the 40 most popular papers based on
normalized citation counts from the first half of 2023. We observe the
dominance of papers related to Large Language Models (LLMs) and specifically
ChatGPT during the first half of 2023, with the latter showing signs of
declining popularity more recently, however. Further, NLP related papers are
the most influential (around 60\% of top papers) even though there are twice as
many ML related papers in our data. Core issues investigated in the most
heavily cited papers are: LLM efficiency, evaluation techniques, ethical
considerations, embodied agents, and problem-solving with LLMs. Additionally,
we examine the characteristics of top papers in comparison to others outside
the top-40 list (noting the top papers' focus on LLM-related issues and
higher number of co-authors) and analyze the citation distributions in our
dataset, among others.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 11:53:52 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Eger",
"Steffen",
""
],
[
"Leiter",
"Christoph",
""
],
[
"Belouadi",
"Jonas",
""
],
[
"Zhang",
"Ran",
""
],
[
"Kostikova",
"Aida",
""
],
[
"Larionov",
"Daniil",
""
],
[
"Chen",
"Yanran",
""
],
[
"Fresen",
"Vivian",
""
]
] |
new_dataset
| 0.994675 |
2308.04890
|
Jung Ho Ahn
|
Sangpyo Kim and Jongmin Kim and Jaeyoung Choi and Jung Ho Ahn
|
CiFHER: A Chiplet-Based FHE Accelerator with a Resizable Structure
|
15 pages, 9 figures
| null | null | null |
cs.AR cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fully homomorphic encryption (FHE) is in the spotlight as a definitive
solution for privacy, but the high computational overhead of FHE poses a
challenge to its practical adoption. Although prior studies have attempted to
design ASIC accelerators to mitigate the overhead, their designs require
excessive amounts of chip resources (e.g., areas) to contain and process
massive data for FHE operations.
We propose CiFHER, a chiplet-based FHE accelerator with a resizable
structure, to tackle the challenge with a cost-effective multi-chip module
(MCM) design. First, we devise a flexible architecture of a chiplet core whose
configuration can be adjusted to conform to the global organization of chiplets
and design constraints. The distinctive feature of our core is a recomposable
functional unit providing varying computational throughput for number-theoretic
transform (NTT), the most dominant function in FHE. Then, we establish
generalized data mapping methodologies to minimize the network overhead when
organizing the chips into the MCM package in a tiled manner, which becomes a
significant bottleneck due to the technology constraints of MCMs. Also, we
analyze the effectiveness of various algorithms, including a novel limb
duplication algorithm, on the MCM architecture. A detailed evaluation shows
that a CiFHER package composed of 4 to 64 compact chiplets provides performance
comparable to state-of-the-art monolithic ASIC FHE accelerators with
significantly lower package-wide power consumption while reducing the area of a
single core to as small as 4.28mm$^2$.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 11:41:56 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2023 13:43:33 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Kim",
"Sangpyo",
""
],
[
"Kim",
"Jongmin",
""
],
[
"Choi",
"Jaeyoung",
""
],
[
"Ahn",
"Jung Ho",
""
]
] |
new_dataset
| 0.996048 |
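For context on the record above: the CiFHER abstract identifies the number-theoretic transform (NTT) as the most dominant function in FHE. The snippet below is a naive O(n^2) reference NTT in Python, included only to show what the operation computes; it is an assumed illustration with toy parameters (q = 17, omega = 4, power-of-two length) and has no relation to CiFHER's recomposable functional units.

```python
# Naive reference NTT: evaluates the input sequence at powers of a primitive
# n-th root of unity modulo a prime q (the modular analogue of the DFT).

def ntt_naive(a, q=17, omega=4):
    """Compute the NTT of a length-n sequence modulo q; omega must be a primitive n-th root of unity mod q."""
    n = len(a)
    assert pow(omega, n, q) == 1 and pow(omega, n // 2, q) != 1, "omega must have order n mod q"
    return [sum(a[j] * pow(omega, j * k, q) for j in range(n)) % q for k in range(n)]

print(ntt_naive([1, 2, 3, 4]))  # length-4 toy example with q=17, omega=4
```

Production FHE accelerators replace this quadratic loop with butterfly-based O(n log n) pipelines, which is why NTT throughput dominates their hardware budgets.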
2308.05667
|
Zheng Qin
|
Minhao Li, Zheng Qin, Zhirui Gao, Renjiao Yi, Chenyang Zhu, Yulan Guo,
Kai Xu
|
2D3D-MATR: 2D-3D Matching Transformer for Detection-free Registration
between Images and Point Clouds
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The commonly adopted detect-then-match approach to registration encounters
difficulties in cross-modality cases due to incompatible keypoint
detection and inconsistent feature description. We propose 2D3D-MATR, a
detection-free method for accurate and robust registration between images and
point clouds. Our method adopts a coarse-to-fine pipeline where it first
computes coarse correspondences between downsampled patches of the input image
and the point cloud and then extends them to form dense correspondences between
pixels and points within the patch region. The coarse-level patch matching is
based on transformer which jointly learns global contextual constraints with
self-attention and cross-modality correlations with cross-attention. To resolve
the scale ambiguity in patch matching, we construct a multi-scale pyramid for
each image patch and learn to find for each point patch the best matching image
patch at a proper resolution level. Extensive experiments on two public
benchmarks demonstrate that 2D3D-MATR outperforms the previous state-of-the-art
P2-Net by around $20$ percentage points on inlier ratio and over $10$ points on
registration recall. Our code and models are available at
https://github.com/minhaolee/2D3DMATR.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 16:10:54 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 12:49:28 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Li",
"Minhao",
""
],
[
"Qin",
"Zheng",
""
],
[
"Gao",
"Zhirui",
""
],
[
"Yi",
"Renjiao",
""
],
[
"Zhu",
"Chenyang",
""
],
[
"Guo",
"Yulan",
""
],
[
"Xu",
"Kai",
""
]
] |
new_dataset
| 0.997968 |
2308.06358
|
Yumeng Xue
|
Luyu Cheng, Bairui Su, Yumeng Xue, Xiaoyu Liu, Yunhai Wang
|
CA2: Cyber Attacks Analytics
|
IEEE Conference on Visual Analytics Science and Technology (VAST)
Challenge Workshop 2020
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The VAST Challenge 2020 Mini-Challenge 1 requires participants to identify
the responsible white hat groups behind a fictional Internet outage. To address
this task, we have created a visual analytics system named CA2: Cyber Attacks
Analytics. This system is designed to efficiently compare and match subgraphs
within an extensive graph containing anonymized profiles. Additionally, we
showcase an iterative workflow that utilizes our system's capabilities to
pinpoint the responsible group.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 19:27:45 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Cheng",
"Luyu",
""
],
[
"Su",
"Bairui",
""
],
[
"Xue",
"Yumeng",
""
],
[
"Liu",
"Xiaoyu",
""
],
[
"Wang",
"Yunhai",
""
]
] |
new_dataset
| 0.99871 |
2308.06375
|
Jiwoong Im
|
Daniel Jiwoong Im, Alexander Kondratskiy, Vincent Harvey, Hsuan-Wei Fu
|
UAMM: UBET Automated Market Maker
| null | null | null | null |
cs.LG cs.CE q-fin.CP
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Automated market makers (AMMs) are pricing mechanisms utilized by
decentralized exchanges (DEX). Traditional AMM approaches are constrained by
pricing solely based on their own liquidity pool, without consideration of
external markets or risk management for liquidity providers. In this paper, we
propose a new approach known as UBET AMM (UAMM), which calculates prices by
considering external market prices and the impermanent loss of the liquidity
pool. Despite relying on external market prices, our method maintains the
desired properties of a constant product curve when computing slippages. The
key element of UAMM is determining the appropriate slippage amount based on the
desired target balance, which encourages the liquidity pool to minimize
impermanent loss. We demonstrate that our approach eliminates arbitrage
opportunities when external market prices are efficient.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 20:17:22 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Im",
"Daniel Jiwoong",
""
],
[
"Kondratskiy",
"Alexander",
""
],
[
"Harvey",
"Vincent",
""
],
[
"Fu",
"Hsuan-Wei",
""
]
] |
new_dataset
| 0.971157 |
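For context on the record above: the UAMM abstract builds on the constant-product curve used by traditional AMMs. The sketch below computes the output amount and slippage for a plain x*y = k swap in Python; it is an assumed baseline illustration, not the UAMM pricing rule, which additionally incorporates external market prices and impermanent-loss targets. The fee value and pool reserves are hypothetical.

```python
# Minimal constant-product (x * y = k) AMM swap with a proportional fee.

def constant_product_swap(reserve_in: float, reserve_out: float, amount_in: float, fee: float = 0.003):
    """Return (amount_out, slippage) for a swap against an x*y=k pool."""
    amount_in_after_fee = amount_in * (1.0 - fee)
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in_after_fee
    amount_out = reserve_out - k / new_reserve_in
    spot_price = reserve_out / reserve_in             # marginal price before the trade
    effective_price = amount_out / amount_in
    slippage = 1.0 - effective_price / spot_price     # fraction lost to price impact and fee
    return amount_out, slippage

out, slip = constant_product_swap(1_000_000.0, 500.0, 10_000.0)
print(f"amount_out={out:.4f}, slippage={slip:.2%}")
```

The slippage here depends only on the pool's own reserves, which is exactly the limitation the abstract says UAMM addresses by also consulting external market prices.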
2308.06383
|
Yan Di
|
Yan Di, Chenyangguang Zhang, Ruida Zhang, Fabian Manhardt, Yongzhi Su,
Jason Rambach, Didier Stricker, Xiangyang Ji and Federico Tombari
|
U-RED: Unsupervised 3D Shape Retrieval and Deformation for Partial Point
Clouds
|
ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we propose U-RED, an Unsupervised shape REtrieval and
Deformation pipeline that takes an arbitrary object observation as input,
typically captured by RGB images or scans, and jointly retrieves and deforms
the geometrically similar CAD models from a pre-established database to tightly
match the target. Considering existing methods typically fail to handle noisy
partial observations, U-RED is designed to address this issue from two aspects.
First, since one partial shape may correspond to multiple potential full
shapes, the retrieval method must allow such an ambiguous one-to-many
relationship. To this end, U-RED learns to project all possible full shapes of a
partial target onto the surface of a unit sphere. Then, during inference, each
sampling on the sphere will yield a feasible retrieval. Second, since
real-world partial observations usually contain noticeable noise, a reliable
learned metric that measures the similarity between shapes is necessary for
stable retrieval. In U-RED, we design a novel point-wise residual-guided metric
that allows noise-robust comparison. Extensive experiments on the synthetic
datasets PartNet, ComplementMe and the real-world dataset Scan2CAD demonstrate
that U-RED surpasses existing state-of-the-art approaches by 47.3%, 16.7% and
31.6% respectively under Chamfer Distance.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 20:56:05 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Di",
"Yan",
""
],
[
"Zhang",
"Chenyangguang",
""
],
[
"Zhang",
"Ruida",
""
],
[
"Manhardt",
"Fabian",
""
],
[
"Su",
"Yongzhi",
""
],
[
"Rambach",
"Jason",
""
],
[
"Stricker",
"Didier",
""
],
[
"Ji",
"Xiangyang",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.987771 |
2308.06393
|
Adnan Qayyum
|
Muhammad Atif Butt, Hassan Ali, Adnan Qayyum, Waqas Sultani, Ala
Al-Fuqaha, Junaid Qadir
|
R2S100K: Road-Region Segmentation Dataset For Semi-Supervised Autonomous
Driving in the Wild
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic understanding of roadways is a key enabling factor for safe
autonomous driving. However, existing autonomous driving datasets provide
well-structured urban roads while ignoring unstructured roadways containing
distress, potholes, water puddles, and various kinds of road patches, e.g.,
earthen and gravel. To this end, we introduce the Road Region Segmentation dataset
(R2S100K) -- a large-scale dataset and benchmark for training and evaluation of
road segmentation in the aforementioned challenging unstructured roadways. R2S100K
comprises 100K images extracted from a large and diverse set of video sequences
covering more than 1,000 km of roadways. Out of these 100K privacy-respecting
images, 14,000 images have fine pixel-level labeling of road regions, with 86,000
unlabeled images that can be leveraged through semi-supervised learning
methods. In addition, we present an Efficient Data Sampling (EDS)-based
self-training framework to improve learning by leveraging unlabeled data. Our
experimental results demonstrate that the proposed method significantly
improves learning methods in generalizability and reduces the labeling cost for
semantic segmentation tasks. Our benchmark will be publicly available to
facilitate future research at https://r2s100k.github.io/.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 21:31:37 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Butt",
"Muhammad Atif",
""
],
[
"Ali",
"Hassan",
""
],
[
"Qayyum",
"Adnan",
""
],
[
"Sultani",
"Waqas",
""
],
[
"Al-Fuqaha",
"Ala",
""
],
[
"Qadir",
"Junaid",
""
]
] |
new_dataset
| 0.999873 |
2308.06401
|
Mohamed Elmahallawy
|
Yasmine Mustafa, Mohamed Elmahallawy, Tie Luo, Seif Eldawlatly
|
A Brain-Computer Interface Augmented Reality Framework with
Auto-Adaptive SSVEP Recognition
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Brain-Computer Interface (BCI) initially gained attention for developing
applications that aid physically impaired individuals. Recently, the idea of
integrating BCI with Augmented Reality (AR) emerged, which uses BCI not only to
enhance the quality of life for individuals with disabilities but also to
develop mainstream applications for healthy users. One commonly used BCI signal
pattern is the Steady-state Visually-evoked Potential (SSVEP), which captures
the brain's response to flickering visual stimuli. SSVEP-based BCI-AR
applications enable users to express their needs/wants by simply looking at
corresponding command options. However, individuals differ in their brain
signals and thus require per-subject SSVEP recognition. Moreover, muscle
movements and eye blinks interfere with brain signals, and thus subjects are
required to remain still during BCI experiments, which limits AR engagement. In
this paper, we (1) propose a simple adaptive ensemble classification system
that handles the inter-subject variability, (2) present a simple BCI-AR
framework that supports the development of a wide range of SSVEP-based BCI-AR
applications, and (3) evaluate the performance of our ensemble algorithm in an
SSVEP-based BCI-AR application with head rotations which has demonstrated
robustness to the movement interference. Our testing on multiple subjects
achieved a mean accuracy of 80\% on a PC and 77\% using the HoloLens AR
headset, both of which surpass previous studies that incorporate individual
classifiers and head movements. In addition, our visual stimulation time is 5
seconds, which is relatively short. The statistically significant results show
that our ensemble classification approach outperforms individual classifiers in
SSVEP-based BCIs.
|
[
{
"version": "v1",
"created": "Fri, 11 Aug 2023 21:56:00 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Mustafa",
"Yasmine",
""
],
[
"Elmahallawy",
"Mohamed",
""
],
[
"Luo",
"Tie",
""
],
[
"Eldawlatly",
"Seif",
""
]
] |
new_dataset
| 0.967206 |
2308.06445
|
AKM Mubashwir Alam
|
AKM Mubashwir Alam, Justin Boyce, Keke Chen
|
SGX-MR-Prot: Efficient and Developer-Friendly Access-Pattern Protection
in Trusted Execution Environments
|
arXiv admin note: text overlap with arXiv:2009.03518
|
International Conference on Distributed Computing Systems (ICDCS)
2023
| null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Trusted Execution Environments, such as Intel SGX, use hardware supports to
ensure the confidentiality and integrity of applications against a compromised
cloud system. However, side channels like access patterns remain for
adversaries to exploit and obtain sensitive information. Common approaches use
oblivious programs or primitives, such as ORAM, which are challenging to
develop, to make access patterns oblivious to the input data. This demonstration
shows a prototype, SGX-MR-Prot, for efficiently protecting the access patterns of
SGX-based data-intensive applications while minimizing developers' efforts.
SGX-MR-Prot uses the MapReduce framework to regulate application dataflows to
reduce the cost of access-pattern protection and hide the data-oblivious
details from SGX developers. This demonstration will allow users to intuitively
understand the unique contributions of the framework-based protection approach
via interactive exploration and visualization.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 02:44:15 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Alam",
"AKM Mubashwir",
""
],
[
"Boyce",
"Justin",
""
],
[
"Chen",
"Keke",
""
]
] |
new_dataset
| 0.960091 |
2308.06466
|
Naresh Goud Boddu
|
Naresh Goud Boddu, Vipul Goyal, Rahul Jain, Jo\~ao Ribeiro
|
Split-State Non-Malleable Codes and Secret Sharing Schemes for Quantum
Messages
| null | null | null | null |
cs.CR quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-malleable codes are fundamental objects at the intersection of
cryptography and coding theory. These codes provide security guarantees even in
settings where error correction and detection are impossible, and have found
applications to several other cryptographic tasks. Roughly speaking, a
non-malleable code for a family of tampering functions guarantees that no
adversary can tamper (using functions from this family) the encoding of a given
message into the encoding of a related distinct message. Non-malleable secret
sharing schemes are a strengthening of non-malleable codes which satisfy
additional privacy and reconstruction properties.
We first focus on the $2$-split-state tampering model, one of the strongest
and most well-studied adversarial tampering models. Here, a codeword is split
into two parts which are stored in physically distant servers, and the
adversary can then independently tamper with each part using arbitrary
functions. This model can be naturally extended to the secret sharing setting
with several parties by having the adversary independently tamper with each
share.
Previous works on non-malleable coding and secret sharing in the split-state
tampering model only considered the encoding of \emph{classical} messages.
Furthermore, until the recent work by Aggarwal, Boddu, and Jain (arXiv 2022),
adversaries with quantum capabilities and \emph{shared entanglement} had not
been considered, and it is a priori not clear whether previous schemes remain
secure in this model.
In this work, we introduce the notions of split-state non-malleable codes and
secret sharing schemes for quantum messages secure against quantum adversaries
with shared entanglement. We also present explicit constructions of such
schemes that achieve low-error non-malleability.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 05:15:35 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Boddu",
"Naresh Goud",
""
],
[
"Goyal",
"Vipul",
""
],
[
"Jain",
"Rahul",
""
],
[
"Ribeiro",
"João",
""
]
] |
new_dataset
| 0.997619 |
2308.06479
|
Jia Zhang
|
Jia Zhang, Xin Na, Rui Xi, Yimiao Sun, Yuan He
|
mmHawkeye: Passive UAV Detection with a COTS mmWave Radar
|
9 pages, 14 figures, IEEE SECON2023
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Small Unmanned Aerial Vehicles (UAVs) are becoming potential threats to
security-sensitive areas and personal privacy. A UAV can shoot photos at
height, but how to detect such an uninvited intruder is an open problem. This
paper presents mmHawkeye, a passive approach for UAV detection with a COTS
millimeter wave (mmWave) radar. mmHawkeye doesn't require prior knowledge of
the type, motions, and flight trajectory of the UAV, while exploiting the
signal feature induced by the UAV's periodic micro-motion (PMM) for long-range
accurate detection. The design is therefore effective in dealing with low-SNR
and uncertain reflected signals from the UAV. mmHawkeye can further track the
UAV's position with dynamic programming and particle filtering, and identify it
with a Long Short-Term Memory (LSTM) based detector. We implement mmHawkeye on
a commercial mmWave radar and evaluate its performance under varied settings.
The experimental results show that mmHawkeye achieves a detection accuracy of
95.8% and can detect UAVs at ranges of up to 80 m.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 06:14:15 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhang",
"Jia",
""
],
[
"Na",
"Xin",
""
],
[
"Xi",
"Rui",
""
],
[
"Sun",
"Yimiao",
""
],
[
"He",
"Yuan",
""
]
] |
new_dataset
| 0.993869 |
2308.06483
|
Yenan Zhang
|
Yenan Zhang and Hiroshi Watanabe
|
BigWavGAN: A Wave-To-Wave Generative Adversarial Network for Music
Super-Resolution
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generally, Deep Neural Networks (DNNs) are expected to have high performance
when their model size is large. However, large models failed to produce
high-quality results commensurate with their scale in music Super-Resolution
(SR). We attribute this to that DNNs cannot learn information commensurate with
their size from standard mean square error losses. To unleash the potential of
large DNN models in music SR, we propose BigWavGAN, which incorporates Demucs,
a large-scale wave-to-wave model, with State-Of-The-Art (SOTA) discriminators
and adversarial training strategies. Our discriminator consists of Multi-Scale
Discriminator (MSD) and Multi-Resolution Discriminator (MRD). During inference,
since only the generator is utilized, there are no additional parameters or
computational resources required compared to the baseline model Demucs.
Objective evaluation affirms the effectiveness of BigWavGAN in music SR.
Subjective evaluations indicate that BigWavGAN can generate music with
significantly high perceptual quality over the baseline model. Notably,
BigWavGAN surpasses the SOTA music SR model in both simulated and real-world
scenarios. Moreover, BigWavGAN demonstrates superior generalization ability
on out-of-distribution data. The ablation study reveals the
importance of our discriminators and training strategies. Samples are available
on the demo page.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 06:40:46 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhang",
"Yenan",
""
],
[
"Watanabe",
"Hiroshi",
""
]
] |
new_dataset
| 0.997822 |
2308.06488
|
Tahsina Hashem
|
Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali,
Yuan-Fang Li
|
Generating Faithful Text From a Knowledge Graph with Noisy Reference
Text
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge Graph (KG)-to-Text generation aims at generating fluent
natural-language text that accurately represents the information of a given
knowledge graph. While significant progress has been made in this task by
exploiting the power of pre-trained language models (PLMs) with appropriate
graph structure-aware modules, existing models still fall short of generating
faithful text, especially when the ground-truth natural-language text contains
additional information that is not present in the graph. In this paper, we
develop a KG-to-text generation model that can generate faithful
natural-language text from a given graph, in the presence of noisy reference
text. Our framework incorporates two core ideas: Firstly, we utilize
contrastive learning to enhance the model's ability to differentiate between
faithful and hallucinated information in the text, thereby encouraging the
decoder to generate text that aligns with the input graph. Secondly, we empower
the decoder to control the level of hallucination in the generated text by
employing a controllable text generation technique. We evaluate our model's
performance through the standard quantitative metrics as well as a
ChatGPT-based quantitative and qualitative analysis. Our evaluation
demonstrates the superior performance of our model over state-of-the-art
KG-to-text models on faithfulness.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 07:12:45 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Hashem",
"Tahsina",
""
],
[
"Wang",
"Weiqing",
""
],
[
"Wijaya",
"Derry Tanti",
""
],
[
"Ali",
"Mohammed Eunus",
""
],
[
"Li",
"Yuan-Fang",
""
]
] |
new_dataset
| 0.980779 |
2308.06549
|
Tanvir Islam
|
Tanvir Islam, Anika Rahman Joyita, Md. Golam Rabiul Alam, Mohammad
Mehedi Hassan, Md. Rafiul Hassan, Raffaele Gravina
|
Human Behavior-based Personalized Meal Recommendation and Menu Planning
Social System
| null |
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS. 2022
| null | null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional dietary recommendation systems are primarily nutrition- or
health-aware and ignore human feelings about food. Human affects vary
when it comes to food cravings, and not all foods are appealing in all moods. A
questionnaire-based and preference-aware meal recommendation system can be a
solution. However, automatically recognizing social affects toward different
foods and planning the menu considering both nutritional demand and social
affect has significant benefits over questionnaire-based and preference-aware
meal recommendations. A patient with severe illness, a person in a coma, or patients
with locked-in syndrome and amyotrophic lateral sclerosis (ALS) cannot express
their meal preferences. Therefore, the proposed framework includes a
social-affective computing module to recognize the affects of different meals
where the person's affect is detected using electroencephalography (EEG)
signals. EEG makes it possible to capture brain signals and analyze them to
anticipate affect toward a food. In this study, we have used a 14-channel
wireless Emotiv EPOC+ headset
to measure affectivity for different food items. A hierarchical ensemble method
is applied to predict affectivity upon multiple feature extraction methods and
TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is
used to generate a food list based on the predicted affectivity. In addition to
the meal recommendation, an automated menu planning approach is also proposed
considering a person's energy intake requirement, affectivity, and nutritional
values of the different menus. The bin-packing algorithm is used for the
personalized menu planning of breakfast, lunch, dinner, and snacks. The
experimental findings reveal that the suggested affective computing, meal
recommendation, and menu planning algorithms perform well across a variety of
assessment parameters.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 12:19:23 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Islam",
"Tanvir",
""
],
[
"Joyita",
"Anika Rahman",
""
],
[
"Alam",
"Md. Golam Rabiul",
""
],
[
"Hassan",
"Mohammad Mehedi",
""
],
[
"Hassan",
"Md. Rafiul",
""
],
[
"Gravina",
"Raffaele",
""
]
] |
new_dataset
| 0.971047 |
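For context on the record above: the abstract applies TOPSIS to rank foods by predicted affectivity. The sketch below is a generic Python implementation of the standard TOPSIS procedure (normalize, weight, compare against ideal and anti-ideal solutions); the criteria, weights, and meal values in the example are made up and are not taken from the paper.

```python
# Generic TOPSIS ranking: higher closeness score means closer to the ideal solution.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return closeness scores in [0, 1] for each alternative.

    matrix  : (alternatives, criteria) decision matrix
    weights : (criteria,) weights summing to 1
    benefit : (criteria,) booleans, True if larger is better for that criterion
    """
    norm = matrix / np.linalg.norm(matrix, axis=0)       # vector normalization per criterion
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_best = np.linalg.norm(weighted - ideal, axis=1)
    d_worst = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical example: rank 3 meals by predicted affect (benefit), protein (benefit), cost (cost).
meals = np.array([[0.8, 20.0, 5.0],
                  [0.6, 35.0, 7.5],
                  [0.9, 10.0, 4.0]])
scores = topsis(meals, weights=np.array([0.5, 0.3, 0.2]), benefit=np.array([True, True, False]))
print(np.argsort(-scores))  # meal indices from best to worst
```

A menu planner can then feed the top-ranked items into a bin-packing step that fills each meal slot subject to the person's energy budget, in the spirit of the pipeline the abstract describes.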
2308.06568
|
Hanna Halaburda
|
Joshua S. Gans and Hanna Halaburda
|
"Zero Cost'' Majority Attacks on Permissionless Blockchains
| null | null | null | null |
cs.CR cs.GT econ.GN q-fin.EC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The core premise of permissionless blockchains is their reliable and secure
operation without the need to trust any individual agent. At the heart of
blockchain consensus mechanisms is an explicit cost (whether work or stake) for
participation in the network and the opportunity to add blocks to the
blockchain. A key rationale for that cost is to make attacks on the network,
which could be theoretically carried out if a majority of nodes were controlled
by a single entity, too expensive to be worthwhile. We demonstrate that a
majority attacker can successfully attack with a {\em negative cost}, which
shows that the protocol mechanisms are insufficient to create a secure network,
and emphasizes the importance of socially driven mechanisms external to the
protocol. At the same time, negative cost enables a new type of majority attack
that is more likely to elude external scrutiny.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 13:38:37 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Gans",
"Joshua S.",
""
],
[
"Halaburda",
"Hanna",
""
]
] |
new_dataset
| 0.989411 |
2308.06571
|
Hangjie Yuan
|
Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang,
Shiwei Zhang
|
ModelScope Text-to-Video Technical Report
|
Technical report. Project page:
\url{https://modelscope.cn/models/damo/text-to-video-synthesis/summary}
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces ModelScopeT2V, a text-to-video synthesis model that
evolves from a text-to-image synthesis model (i.e., Stable Diffusion).
ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame
generation and smooth movement transitions. The model could adapt to varying
frame numbers during training and inference, rendering it suitable for both
image-text and video-text datasets. ModelScopeT2V brings together three
components (i.e., VQGAN, a text encoder, and a denoising UNet), comprising
1.7 billion parameters in total, of which 0.5 billion are dedicated to
temporal capabilities. The model demonstrates superior performance
over state-of-the-art methods across three evaluation metrics. The code and an
online demo are available at
\url{https://modelscope.cn/models/damo/text-to-video-synthesis/summary}.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 13:53:10 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Wang",
"Jiuniu",
""
],
[
"Yuan",
"Hangjie",
""
],
[
"Chen",
"Dayou",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Wang",
"Xiang",
""
],
[
"Zhang",
"Shiwei",
""
]
] |
new_dataset
| 0.999234 |
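The preceding record attributes ModelScopeT2V's temporal consistency to spatio-temporal blocks added to an image UNet. The sketch below shows one common way such a block can be factorized (spatial then temporal convolution); it is an assumption-laden illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """Toy factorized block: a 2D spatial conv per frame, then a 1D temporal
    conv per pixel. Channel sizes and layout are illustrative assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        b, c, f, h, w = x.shape
        # Spatial mixing: fold frames into the batch dimension.
        y = self.spatial(x.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w))
        y = y.reshape(b, f, c, h, w).permute(0, 2, 1, 3, 4)
        # Temporal mixing: fold spatial positions into the batch dimension.
        z = y.permute(0, 3, 4, 1, 2).reshape(b * h * w, c, f)
        z = self.temporal(z).reshape(b, h, w, c, f).permute(0, 3, 4, 1, 2)
        return x + z  # residual connection keeps the image prior intact

video = torch.randn(1, 8, 4, 16, 16)  # 4 frames of 16x16 features
print(SpatioTemporalBlock(8)(video).shape)
```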
2308.06573
|
Shouyi Lu
|
Guirong Zhuo, Shouyi Lu, Huanyu Zhou, Lianqing Zheng, Lu Xiong
|
4DRVO-Net: Deep 4D Radar-Visual Odometry Using Multi-Modal and
Multi-Scale Adaptive Fusion
|
14 pages,12 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Four-dimensional (4D) radar--visual odometry (4DRVO) integrates complementary
information from 4D radar and cameras, making it an attractive solution for
achieving accurate and robust pose estimation. However, 4DRVO may exhibit
significant tracking errors owing to three main factors: 1) sparsity of 4D
radar point clouds; 2) inaccurate data association and insufficient feature
interaction between the 4D radar and camera; and 3) disturbances caused by
dynamic objects in the environment, affecting odometry estimation. In this
paper, we present 4DRVO-Net, which is a method for 4D radar--visual odometry.
This method leverages the feature pyramid, pose warping, and cost volume (PWC)
network architecture to progressively estimate and refine poses. Specifically,
we propose a multi-scale feature extraction network called Radar-PointNet++
that fully considers rich 4D radar point information, enabling fine-grained
learning for sparse 4D radar point clouds. To effectively integrate the two
modalities, we design an adaptive 4D radar--camera fusion module (A-RCFM) that
automatically selects image features based on 4D radar point features,
facilitating multi-scale cross-modal feature interaction and adaptive
multi-modal feature fusion. In addition, we introduce a velocity-guided
point-confidence estimation module to measure local motion patterns, reduce the
influence of dynamic objects and outliers, and provide continuous updates
during pose refinement. We demonstrate the excellent performance of our method
and the effectiveness of each module design on both the VoD and in-house
datasets. Our method outperforms all learning-based and geometry-based methods
for most sequences in the VoD dataset. Furthermore, it has exhibited promising
performance that closely approaches that of the 64-line LiDAR odometry results
of A-LOAM without mapping optimization.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 14:00:09 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhuo",
"Guirong",
""
],
[
"Lu",
"Shouyi",
""
],
[
"Zhou",
"Huanyu",
""
],
[
"Zheng",
"Lianqing",
""
],
[
"Xiong",
"Lu",
""
]
] |
new_dataset
| 0.994325 |
2308.06594
|
Jumman Hossain
|
Jumman Hossain, Abu-Zaher Faridee, Nirmalya Roy, Anjan Basak, Derrik
E. Asher
|
CoverNav: Cover Following Navigation Planning in Unstructured Outdoor
Environment with Deep Reinforcement Learning
| null | null | null | null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous navigation in offroad environments has been extensively studied in
the robotics field. However, navigation in covert situations where an
autonomous vehicle needs to remain hidden from outside observers remains an
underexplored area. In this paper, we propose a novel Deep Reinforcement
Learning (DRL) based algorithm, called CoverNav, for identifying covert and
navigable trajectories with minimal cost in offroad terrains and jungle
environments in the presence of observers. CoverNav focuses on unmanned ground
vehicles seeking shelters and taking covers while safely navigating to a
predefined destination. Our proposed DRL method computes a local cost map that
helps distinguish which path will grant the maximal covertness while
maintaining a low-cost trajectory using an elevation map generated from 3D
point cloud data, the robot's pose, and directed goal information. CoverNav
helps robot agents to learn the low elevation terrain using a reward function
while penalizing it proportionately when it experiences high elevation. If an
observer is spotted, CoverNav enables the robot to select natural obstacles
(e.g., rocks, houses, disabled vehicles, trees, etc.) and use them as shelters
to hide behind. We evaluate CoverNav using the Unity simulation environment and
show that it guarantees dynamically feasible velocities in the terrain when fed
with an elevation map generated by another DRL based navigation algorithm.
Additionally, we evaluate CoverNav's effectiveness in achieving a maximum goal
distance of 12 meters and its success rate in different elevation scenarios
with and without cover objects. We observe competitive performance comparable
to state-of-the-art (SOTA) methods without compromising accuracy.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 15:19:49 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Hossain",
"Jumman",
""
],
[
"Faridee",
"Abu-Zaher",
""
],
[
"Roy",
"Nirmalya",
""
],
[
"Basak",
"Anjan",
""
],
[
"Asher",
"Derrik E.",
""
]
] |
new_dataset
| 0.991634 |
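CoverNav's record describes a reward that favors low-elevation terrain, penalizes high elevation proportionately, and encourages taking cover when an observer is spotted. A minimal shaping-term sketch follows; the coefficients and the exact reward structure are assumptions for illustration, not the paper's reward function.

```python
def covert_reward(elevation, goal_dist_prev, goal_dist_now,
                  observer_visible, under_cover,
                  w_progress=1.0, w_elev=0.5, w_cover=2.0):
    """Toy reward shaping for covert navigation.

    Rewards progress toward the goal, penalizes elevation proportionately,
    and rewards being behind cover while an observer is visible.
    All weights are illustrative assumptions.
    """
    progress = goal_dist_prev - goal_dist_now        # > 0 when moving closer
    elevation_penalty = -w_elev * max(elevation, 0.0)
    cover_bonus = w_cover if (observer_visible and under_cover) else 0.0
    exposure_penalty = -w_cover if (observer_visible and not under_cover) else 0.0
    return w_progress * progress + elevation_penalty + cover_bonus + exposure_penalty

# Example step: the agent moved 0.4 m closer over 1.2 m-high terrain while exposed.
print(covert_reward(1.2, 10.0, 9.6, observer_visible=True, under_cover=False))
```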
2308.06639
|
Huaishu Peng
|
Zeyu Yan, Hsuanling Lee, Liang He, Huaishu Peng
|
3D Printing Magnetophoretic Displays
| null |
UIST 2023
|
10.1145/3586183.3606804
| null |
cs.HC cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a pipeline for printing interactive and always-on magnetophoretic
displays using affordable Fused Deposition Modeling (FDM) 3D printers. Using
our pipeline, an end-user can convert the surface of a 3D shape into a matrix
of voxels. The generated model can be sent to an FDM 3D printer equipped with
an additional syringe-based injector. During the printing process, an oil and
iron powder-based liquid mixture is injected into each voxel cell, allowing the
appearance of the once-printed object to be editable with external magnetic
sources. To achieve this, we made modifications to the 3D printer hardware and
the firmware. We also developed a 3D editor to prepare printable models. We
demonstrate our pipeline with a variety of examples, including a printed
Stanford bunny with customizable appearances, a small espresso mug that can be
used as a post-it note surface, a board game figurine with a computationally
updated display, and a collection of flexible wearable accessories with
editable visuals.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 20:07:18 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Yan",
"Zeyu",
""
],
[
"Lee",
"Hsuanling",
""
],
[
"He",
"Liang",
""
],
[
"Peng",
"Huaishu",
""
]
] |
new_dataset
| 0.996419 |
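The pipeline above converts the surface of a 3D shape into a matrix of voxel cells to be filled with the oil and iron-powder mixture. As a rough illustration of surface voxelization (not the authors' tool), here is a sketch using the widely used `trimesh` library; the mesh path and cell pitch are placeholders.

```python
import trimesh  # pip install trimesh

# Placeholder mesh path and cell pitch -- assumptions for illustration only.
mesh = trimesh.load("bunny.stl")
voxels = mesh.voxelized(pitch=2.0)   # surface voxelization at ~2 mm cells

occupancy = voxels.matrix            # boolean 3D occupancy grid of surface cells
centers = voxels.points              # world-space centers of the filled cells
print(f"{len(centers)} voxel cells to fill with the magnetic liquid mixture")
```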
2308.06680
|
Noman Bashir
|
Diptyaroop Maji, Noman Bashir, David Irwin, Prashant Shenoy, Ramesh K.
Sitaraman
|
Untangling Carbon-free Energy Attribution and Carbon Intensity
Estimation for Carbon-aware Computing
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Many organizations, including governments, utilities, and businesses, have
set ambitious targets to reduce carbon emissions as a part of their
sustainability goals. To achieve these targets, these organizations
increasingly use power purchase agreements (PPAs) to obtain renewable energy
credits, which they use to offset their ``brown'' energy consumption. However,
the details of these PPAs are often private and not shared with important
stakeholders, such as grid operators and carbon information services, who
monitor and report the grid's carbon emissions. This often results in incorrect
carbon accounting where the same renewable energy production could be factored
into grid carbon emission reports and also separately claimed by organizations
that own PPAs. Such ``double counting'' of renewable energy production could
lead to organizations with PPAs to understate their carbon emissions and
overstate their progress towards their sustainability goals. Further, we show
that commonly-used carbon reduction measures, such as load shifting, can have
the opposite effect of increasing emissions if such measures were to use
inaccurate carbon intensity signals. For instance, users may increase energy
consumption because the grid's carbon intensity appears low even though carbon
intensity may actually be high when renewable energy attributed to PPAs are
excluded. Unfortunately, there is currently no consensus on how to accurately
compute the grid's carbon intensity by properly accounting for PPAs. The goal
of our work is to shed quantitative light on the renewable energy attribution
problem and evaluate the impact of inaccurate accounting on carbon-aware
systems.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 04:02:15 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Maji",
"Diptyaroop",
""
],
[
"Bashir",
"Noman",
""
],
[
"Irwin",
"David",
""
],
[
"Shenoy",
"Prashant",
""
],
[
"Sitaraman",
"Ramesh K.",
""
]
] |
new_dataset
| 0.976395 |
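The record above argues that excluding PPA-claimed renewable generation can sharply change the grid's apparent carbon intensity. A small worked example in Python makes the arithmetic concrete; all numbers are invented for illustration.

```python
def carbon_intensity(generation_mwh, emission_factors):
    """Average grid carbon intensity in gCO2/kWh.

    generation_mwh   : dict source -> MWh generated in the interval
    emission_factors : dict source -> gCO2 per kWh for that source
    """
    total_mwh = sum(generation_mwh.values())
    total_g = sum(mwh * 1000 * emission_factors[src]
                  for src, mwh in generation_mwh.items())
    return total_g / (total_mwh * 1000)

factors = {"wind": 0.0, "gas": 450.0}   # illustrative emission factors
grid = {"wind": 60.0, "gas": 40.0}      # MWh in some hour (invented numbers)

# Naive accounting: all wind counts toward the grid mix.
print(carbon_intensity(grid, factors))              # 180 gCO2/kWh

# If 50 MWh of that wind is already claimed via PPAs, the residual mix
# available to everyone else is much dirtier.
residual = {"wind": grid["wind"] - 50.0, "gas": grid["gas"]}
print(carbon_intensity(residual, factors))          # 360 gCO2/kWh
```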
2308.06687
|
Shibsankar Das
|
Shibsankar Das, Adrish Banerjee, and Zilong Liu
|
Root Cross Z-Complementary Pairs with Large ZCZ Width
|
This work has been presented in 2022 IEEE International Symposium on
Information Theory (ISIT), Espoo, Finland
|
2022 IEEE International Symposium on Information Theory (ISIT),
Espoo, Finland, 2022, pp. 522-527
|
10.1109/ISIT50566.2022.9834651
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a new family of cross $Z$-complementary pairs
(CZCPs) based on generalized Boolean functions and two roots of unity. Our key
idea is to consider an arbitrary partition of the set $\{1,2,\cdots, n\}$ into
two subsets, each associated with one of two given roots of unity, from which
two truncated sequences over an alphabet determined by the two roots of unity
are obtained. We show that these two truncated sequences form a new $q$-ary CZCP
with flexible sequence length and large zero-correlation zone width.
Furthermore, we derive an enumeration formula by considering the Stirling
number of the second kind for the partitions and show that the number of
constructed CZCPs increases significantly compared to the existing works.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 05:27:15 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Das",
"Shibsankar",
""
],
[
"Banerjee",
"Adrish",
""
],
[
"Liu",
"Zilong",
""
]
] |
new_dataset
| 0.990901 |
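The enumeration in the record above counts two-block partitions of the index set via Stirling numbers of the second kind. As a generic illustration (not the paper's exact counting formula), the standard recurrence can be computed directly; the printed values for two blocks equal 2^(n-1) - 1.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind S(n, k): the number of ways to
    partition an n-element set into k non-empty unlabeled blocks."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    # Standard recurrence: the n-th element either forms its own block
    # or joins one of the k blocks of a partition of the first n-1 elements.
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

# Two-block partitions of {1, ..., n}, as used when splitting the index set
# between the two roots of unity; equals 2**(n - 1) - 1.
for n in range(2, 8):
    print(n, stirling2(n, 2))
```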
2308.06690
|
Shibsankar Das
|
Shibsankar Das, Adrish Banerjee, and Udaya Parampalli
|
Two-Dimensional Z-Complementary Array Quads with Low Column Sequence
PMEPRs
|
This work has been presented in 2023 IEEE International Symposium on
Information Theory (ISIT), Taipei, Taiwan
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we first propose a new design strategy of 2D $Z$-complementary
array quads (2D-ZCAQs) with feasible array sizes. A 2D-ZCAQ consists of four
distinct unimodular arrays satisfying zero 2D auto-correlation sums for
non-trivial 2D time-shifts within certain zone. Then, we obtain the upper
bounds on the column sequence peak-to-mean envelope power ratio (PMEPR) of the
constructed 2D-ZCAQs by using specific auto-correlation properties of some seed
sequences. The constructed 2D-ZCAQs with bounded column sequence PMEPR can be
used as a potential alternative to 2D Golay complementary array sets for
practical applications.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 05:46:43 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Das",
"Shibsankar",
""
],
[
"Banerjee",
"Adrish",
""
],
[
"Parampalli",
"Udaya",
""
]
] |
new_dataset
| 0.99909 |
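A 2D-ZCAQ is defined by zero 2D aperiodic auto-correlation sums over the quad for non-trivial shifts inside a zone. The snippet below is a simplified numerical check of that property over non-negative shifts only; the correlation definition follows a common convention and the zone bounds and toy quad are placeholders, not a construction from the paper.

```python
import numpy as np

def acf2d(a, u, v):
    """Aperiodic 2D auto-correlation of array a at shift (u, v), u, v >= 0."""
    n, m = a.shape
    return np.sum(a[:n - u, :m - v] * np.conj(a[u:, v:]))

def is_zcaq(arrays, zone):
    """Check zero auto-correlation sums over the quad for all non-trivial
    non-negative shifts (u, v) inside the zone (a simplified check)."""
    zu, zv = zone
    for u in range(zu):
        for v in range(zv):
            if (u, v) == (0, 0):
                continue
            total = sum(acf2d(a, u, v) for a in arrays)
            if not np.isclose(total, 0.0):
                return False
    return True

# Toy example: four 2x2 unimodular (+/-1) arrays -- an illustrative quad only.
quad = [np.array([[1, 1], [1, -1]]), np.array([[1, -1], [1, 1]]),
        np.array([[1, 1], [-1, 1]]), np.array([[1, -1], [-1, -1]])]
print(is_zcaq(quad, zone=(2, 2)))
```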
2308.06692
|
Mingkai Zheng
|
Mingkai Zheng, Shan You, Lang Huang, Chen Luo, Fei Wang, Chen Qian,
Chang Xu
|
SimMatchV2: Semi-Supervised Learning with Graph Consistency
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Semi-supervised image classification is one of the most fundamental problems
in computer vision, as it significantly reduces the need for human labor. In
this paper, we introduce a new semi-supervised learning algorithm - SimMatchV2,
which formulates various consistency regularizations between labeled and
unlabeled data from the graph perspective. In SimMatchV2, we regard the
augmented view of a sample as a node, which consists of a label and its
corresponding representation. Different nodes are connected with the edges,
which are measured by the similarity of the node representations. Inspired by
the message passing and node classification in graph theory, we propose four
types of consistencies, namely 1) node-node consistency, 2) node-edge
consistency, 3) edge-edge consistency, and 4) edge-node consistency. We also
uncover that a simple feature normalization can reduce the gaps of the feature
norm between different augmented views, significantly improving the performance
of SimMatchV2. Our SimMatchV2 has been validated on multiple semi-supervised
learning benchmarks. Notably, with ResNet-50 as our backbone and 300 epochs of
training, SimMatchV2 achieves 71.9\% and 76.2\% Top-1 Accuracy with 1\% and
10\% labeled examples on ImageNet, which significantly outperforms the previous
methods and achieves state-of-the-art performance. Code and pre-trained models
are available at
\href{https://github.com/mingkai-zheng/SimMatchV2}{https://github.com/mingkai-zheng/SimMatchV2}.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 05:56:36 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zheng",
"Mingkai",
""
],
[
"You",
"Shan",
""
],
[
"Huang",
"Lang",
""
],
[
"Luo",
"Chen",
""
],
[
"Wang",
"Fei",
""
],
[
"Qian",
"Chen",
""
],
[
"Xu",
"Chang",
""
]
] |
new_dataset
| 0.990284 |
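SimMatchV2's record highlights that a simple feature normalization narrows the gap in feature norms between augmented views, and frames consistency as agreement between graph nodes. Below is a hedged sketch of L2 feature normalization plus a node-node (cross-view pseudo-label) consistency loss; it is a generic formulation, not the authors' exact losses, and the threshold is a placeholder.

```python
import torch
import torch.nn.functional as F

def normalize_features(feats: torch.Tensor) -> torch.Tensor:
    """L2-normalize each feature vector so both augmented views share unit norm."""
    return F.normalize(feats, dim=-1)

def node_node_consistency(logits_weak: torch.Tensor,
                          logits_strong: torch.Tensor,
                          threshold: float = 0.95) -> torch.Tensor:
    """Cross-entropy between confident pseudo-labels from the weak view and
    predictions on the strong view -- one generic form of node-node consistency."""
    probs = logits_weak.detach().softmax(dim=-1)
    conf, pseudo = probs.max(dim=-1)
    mask = (conf >= threshold).float()
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()

# Toy usage with random tensors (batch of 4 unlabeled samples, 10 classes).
f_weak, f_strong = torch.randn(4, 128), torch.randn(4, 128)
f_weak, f_strong = normalize_features(f_weak), normalize_features(f_strong)
head = torch.nn.Linear(128, 10)
print(node_node_consistency(head(f_weak), head(f_strong)))
```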
2308.06696
|
Yichi Zhang
|
Yichi Zhang, Zhuo Chen, Wen Zhang
|
MACO: A Modality Adversarial and Contrastive Framework for
Modality-missing Multi-modal Knowledge Graph Completion
|
This is the ArXiv version of our paper accepted by NLPCC 2023. The
code will be released soon
| null | null | null |
cs.CL cs.AI cs.MM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent years have seen significant advancements in multi-modal knowledge
graph completion (MMKGC). MMKGC enhances knowledge graph completion (KGC) by
integrating multi-modal entity information, thereby facilitating the discovery
of unobserved triples in large-scale knowledge graphs (KGs). Nevertheless,
existing methods emphasize the design of elegant KGC models to facilitate
modality interaction, neglecting the real-life problem of missing modalities in
KGs. The missing modality information impedes modal interaction, consequently
undermining the model's performance. In this paper, we propose a modality
adversarial and contrastive framework (MACO) to solve the modality-missing
problem in MMKGC. MACO trains a generator and discriminator adversarially to
generate missing modality features that can be incorporated into the MMKGC
model. Meanwhile, we design a cross-modal contrastive loss to improve the
performance of the generator. Experiments on public benchmarks with further
explorations demonstrate that MACO could achieve state-of-the-art results and
serve as a versatile framework to bolster various MMKGC models. Our code and
benchmark data are available at https://github.com/zjukg/MACO.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 06:29:38 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhang",
"Yichi",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Zhang",
"Wen",
""
]
] |
new_dataset
| 0.997823 |
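MACO pairs an adversarially trained generator with a cross-modal contrastive loss on the generated modality features. Below is a minimal InfoNCE-style contrastive loss between generated and real modality features, offered as a generic sketch rather than the paper's exact objective; the batch size and temperature are placeholders.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(gen_feats: torch.Tensor,
                                 real_feats: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: the i-th generated feature should match the i-th
    real modality feature and repel all others in the batch."""
    gen = F.normalize(gen_feats, dim=-1)
    real = F.normalize(real_feats, dim=-1)
    logits = gen @ real.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(gen.size(0), device=gen.device)
    # Symmetrize over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

print(cross_modal_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)))
```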
2308.06699
|
Jia Li
|
Jia Li, Ziling Chen, Xiaolong Wu, Lu Wang, Beibei Wang, Lei Zhang
|
Neural Super-Resolution for Real-time Rendering with Radiance
Demodulation
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rendering high-resolution images in real-time applications (e.g., video
games, virtual reality) is time-consuming, thus super-resolution technology
becomes more and more crucial in real-time rendering. However, it is still
challenging to preserve sharp texture details, keep the temporal stability and
avoid the ghosting artifacts in the real-time rendering super-resolution. To
this end, we introduce radiance demodulation into real-time rendering
super-resolution, separating the rendered image or radiance into a lighting
component and a material component, because the lighting component
tends to be smoother than the rendered image and the high-resolution material
component with detailed textures can be easily obtained. Therefore, we perform
the super-resolution only on the lighting component and re-modulate with the
high-resolution material component to obtain the final super-resolution image.
In this way, the texture details can be preserved much better. Then, we propose
a reliable warping module by explicitly pointing out the unreliable occluded
regions with a motion mask to remove the ghosting artifacts. We further enhance
the temporal stability by designing a frame-recurrent neural network to
aggregate the previous and current frames, which better captures the
spatial-temporal correlation between reconstructed frames. As a result, our
method is able to produce temporally stable results in real-time rendering with
high-quality details, even in the highly challenging 4 $\times$ 4
super-resolution scenarios.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 06:40:41 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Li",
"Jia",
""
],
[
"Chen",
"Ziling",
""
],
[
"Wu",
"Xiaolong",
""
],
[
"Wang",
"Lu",
""
],
[
"Wang",
"Beibei",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.987877 |
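The core of the preceding method is demodulation: divide the rendered radiance by the material component, super-resolve only the smoother lighting component, and re-modulate with a high-resolution material buffer. A minimal numerical sketch follows; `upscale` stands in for the learned super-resolution network and is a placeholder, as are the tensor sizes.

```python
import torch
import torch.nn.functional as F

def upscale(x: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """Placeholder for the learned super-resolution network (bilinear here)."""
    return F.interpolate(x, scale_factor=factor, mode="bilinear",
                         align_corners=False)

def demodulated_super_resolution(radiance_lr: torch.Tensor,
                                 material_lr: torch.Tensor,
                                 material_hr: torch.Tensor,
                                 eps: float = 1e-4) -> torch.Tensor:
    """Super-resolve the lighting component only, then re-modulate."""
    lighting_lr = radiance_lr / (material_lr + eps)   # demodulation
    lighting_hr = upscale(lighting_lr)                # SR on the smooth signal
    return lighting_hr * material_hr                  # re-modulation

# Toy tensors: (batch, channels, H, W); the HR material buffer is 4x larger.
rad = torch.rand(1, 3, 64, 64)
mat_lr, mat_hr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 256, 256)
print(demodulated_super_resolution(rad, mat_lr, mat_hr).shape)
```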
2308.06701
|
Haichao Zhang
|
Haichao Zhang, Can Qin, Yu Yin, Yun Fu
|
Camouflaged Image Synthesis Is All You Need to Boost Camouflaged
Detection
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Camouflaged objects that blend into natural scenes pose significant
challenges for deep-learning models to detect and synthesize. While camouflaged
object detection is a crucial task in computer vision with diverse real-world
applications, this research topic has been constrained by limited data
availability. We propose a framework for synthesizing camouflage data to
enhance the detection of camouflaged objects in natural scenes. Our approach
employs a generative model to produce realistic camouflage images, which can be
used to train existing object detection models. Specifically, we use a
camouflage environment generator supervised by a camouflage distribution
classifier to synthesize the camouflage images, which are then fed into our
generator to expand the dataset. Our framework outperforms the current
state-of-the-art method on three datasets (COD10k, CAMO, and CHAMELEON),
demonstrating its effectiveness in improving camouflaged object detection. This
approach can serve as a plug-and-play data generation and augmentation module
for existing camouflaged object detection tasks and provides a novel way to
introduce more diversity and distributions into current camouflage datasets.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 06:55:05 GMT"
}
] | 2023-08-15T00:00:00 |
[
[
"Zhang",
"Haichao",
""
],
[
"Qin",
"Can",
""
],
[
"Yin",
"Yu",
""
],
[
"Fu",
"Yun",
""
]
] |
new_dataset
| 0.986921 |