Dataset schema (one record per arXiv paper; string fields annotated with min-max value lengths, class counts and value ranges as reported by the dataset viewer):

id: string (9-10 chars)
submitter: string (2-52 chars)
authors: string (4-6.51k chars)
title: string (4-246 chars)
comments: string (1-523 chars)
journal-ref: string (4-345 chars)
doi: string (11-120 chars)
report-no: string (2-243 chars)
categories: string (5-98 chars)
license: string (9 classes)
abstract: string (33-3.33k chars)
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string (1 class: new_dataset)
probability: float64 (0.95-1)
2308.15673
Bhaskar Ramasubramanian
Arezoo Rajabi, Surudhi Asokraj, Fengqing Jiang, Luyao Niu, Bhaskar Ramasubramanian, Jim Ritcey, Radha Poovendran
MDTD: A Multi Domain Trojan Detector for Deep Neural Networks
Accepted to ACM Conference on Computer and Communications Security (ACM CCS) 2023
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation called a trigger into a small subset of input samples and trains the DNN such that the presence of the trigger in the input results in an adversary-desired output class. Such adversarial retraining however needs to ensure that outputs for inputs without the trigger remain unaffected and provide high classification accuracy on clean samples. In this paper, we propose MDTD, a Multi-Domain Trojan Detector for DNNs, which detects inputs containing a Trojan trigger at testing time. MDTD does not require knowledge of the attacker's trigger-embedding strategy and can be applied to a pre-trained DNN model with image, audio, or graph-based inputs. MDTD leverages the insight that input samples containing a Trojan trigger are located relatively farther away from a decision boundary than clean samples. MDTD estimates the distance to a decision boundary using adversarial learning methods and uses this distance to infer whether a test-time input sample is Trojaned or not. We evaluate MDTD against state-of-the-art Trojan detection methods across five widely used image-based datasets: CIFAR100, CIFAR10, GTSRB, SVHN, and Flowers102; four graph-based datasets: AIDS, WinMal, Toxicant, and COLLAB; and the SpeechCommand audio dataset. MDTD effectively identifies samples that contain different types of Trojan triggers. We also evaluate MDTD against adaptive attacks in which an adversary trains a robust DNN to increase (decrease) the distance of benign (Trojan) inputs from a decision boundary.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 00:03:03 GMT" }, { "version": "v2", "created": "Sun, 3 Sep 2023 01:59:49 GMT" } ]
2023-09-06T00:00:00
[ [ "Rajabi", "Arezoo", "" ], [ "Asokraj", "Surudhi", "" ], [ "Jiang", "Fengqing", "" ], [ "Niu", "Luyao", "" ], [ "Ramasubramanian", "Bhaskar", "" ], [ "Ritcey", "Jim", "" ], [ "Poovendran", "Radha", "" ] ]
new_dataset
0.974977
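The MDTD abstract above hinges on one measurable quantity: how far an input sits from the model's decision boundary. A minimal PyTorch sketch of that idea, using an iterative gradient-sign search as a stand-in for the paper's adversarial-learning estimator (the step size, budget, and threshold below are hypothetical, not the paper's values):

```python
import torch
import torch.nn.functional as F

def boundary_distance(model, x, eps_step=0.01, max_steps=100):
    """Estimate the distance from x (a single-sample batch) to the
    decision boundary: accumulate small gradient-sign steps until the
    predicted label flips, and return the L-inf radius reached."""
    model.eval()
    with torch.no_grad():
        y0 = model(x).argmax(dim=1)              # original prediction
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y0)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + eps_step * grad.sign()).detach()
        if model(x_adv).argmax(dim=1).item() != y0.item():
            return step * eps_step               # boundary crossed here
    return max_steps * eps_step                  # no flip within budget

def looks_trojaned(model, x, threshold):
    # Hypothetical decision rule: triggered inputs sit farther from the
    # boundary than clean ones, so a large radius is suspicious.
    return boundary_distance(model, x) > threshold
```

The L-infinity radius at which the label first flips serves here as the distance proxy; MDTD's actual estimator and calibration may differ.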
2308.16481
Ahmed Hatem
Ahmed Hatem, Yiming Qian, Yang Wang
Point-TTA: Test-Time Adaptation for Point Cloud Registration Using Multitask Meta-Auxiliary Learning
Accepted at ICCV 2023
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present Point-TTA, a novel test-time adaptation framework for point cloud registration (PCR) that improves the generalization and the performance of registration models. While learning-based approaches have achieved impressive progress, generalization to unknown testing environments remains a major challenge due to the variations in 3D scans. Existing methods typically train a generic model, and the same trained model is applied to each instance during testing. This could be sub-optimal since it is difficult for the same model to handle all the variations during testing. In this paper, we propose a test-time adaptation approach for PCR. Our model can adapt to unseen distributions at test time without requiring any prior knowledge of the test data. Concretely, we design three self-supervised auxiliary tasks that are optimized jointly with the primary PCR task. Given a test instance, we adapt our model using these auxiliary tasks and the updated model is used to perform the inference. During training, our model is trained using a meta-auxiliary learning approach, such that the adapted model via auxiliary tasks improves the accuracy of the primary task. Experimental results demonstrate the effectiveness of our approach in improving generalization of point cloud registration and outperforming other state-of-the-art approaches.
[ { "version": "v1", "created": "Thu, 31 Aug 2023 06:32:11 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 18:13:58 GMT" } ]
2023-09-06T00:00:00
[ [ "Hatem", "Ahmed", "" ], [ "Qian", "Yiming", "" ], [ "Wang", "Yang", "" ] ]
new_dataset
0.976061
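The test-time adaptation recipe in the Point-TTA abstract above follows a generic pattern: copy the trained model, take a few gradient steps on self-supervised auxiliary losses for the test instance, then run inference with the adapted weights. A minimal sketch under that assumption; the auxiliary loss here is a placeholder callable, not the paper's three tasks:

```python
import copy
import torch

def test_time_adapt(model, aux_loss_fn, test_batch, steps=1, lr=1e-4):
    """Adapt a copy of the model to one test instance using a
    self-supervised auxiliary loss, then return the adapted copy."""
    adapted = copy.deepcopy(model)               # never mutate the deployed model
    adapted.train()
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        loss = aux_loss_fn(adapted, test_batch)  # self-supervised: no labels needed
        opt.zero_grad()
        loss.backward()
        opt.step()
    adapted.eval()
    return adapted

# Usage sketch: adapted = test_time_adapt(model, aux_loss_fn, batch)
#               prediction = adapted(batch)
```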
2308.16890
Shuai Bai
Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, Jingren Zhou
TouchStone: Evaluating Vision-Language Models by Language Models
https://github.com/OFA-Sys/TouchStone
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large vision-language models (LVLMs) have recently witnessed rapid advancements, exhibiting a remarkable capacity for perceiving, understanding, and processing visual information by connecting visual receptors with large language models (LLMs). However, current assessments mainly focus on recognizing and reasoning abilities, lacking direct evaluation of conversational skills and neglecting visual storytelling abilities. In this paper, we propose an evaluation method that uses strong LLMs as judges to comprehensively evaluate the various abilities of LVLMs. Firstly, we construct a comprehensive visual dialogue dataset, TouchStone, consisting of open-world images and questions, covering five major categories of abilities and 27 subtasks. This dataset not only covers fundamental recognition and comprehension but also extends to literary creation. Secondly, by integrating detailed image annotations we effectively transform the multimodal input content into a form understandable by LLMs. This enables us to employ advanced LLMs to directly evaluate the quality of the multimodal dialogue without requiring human intervention. Through validation, we demonstrate that powerful LLMs, such as GPT-4, can effectively score dialogue quality by leveraging their textual capabilities alone, aligning with human preferences. We hope our work can serve as a touchstone for LVLMs' evaluation and pave the way for building stronger LVLMs. The evaluation code is available at https://github.com/OFA-Sys/TouchStone.
[ { "version": "v1", "created": "Thu, 31 Aug 2023 17:52:04 GMT" }, { "version": "v2", "created": "Mon, 4 Sep 2023 15:06:15 GMT" } ]
2023-09-06T00:00:00
[ [ "Bai", "Shuai", "" ], [ "Yang", "Shusheng", "" ], [ "Bai", "Jinze", "" ], [ "Wang", "Peng", "" ], [ "Zhang", "Xingxuan", "" ], [ "Lin", "Junyang", "" ], [ "Wang", "Xinggang", "" ], [ "Zhou", "Chang", "" ], [ "Zhou", "Jingren", "" ] ]
new_dataset
0.981101
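The judging protocol the TouchStone abstract describes, an LLM scoring multimodal dialogue quality from detailed textual image annotations, can be sketched generically. `call_llm` below is a hypothetical stand-in for whatever chat-completion client is used, and the prompt format is illustrative, not TouchStone's actual template:

```python
def judge_dialogue(call_llm, image_annotation, question, answer):
    """Ask a strong text-only LLM to grade an LVLM answer, using a
    detailed textual annotation of the image in place of pixels."""
    prompt = (
        "You are grading a vision-language assistant.\n"
        f"Image description: {image_annotation}\n"
        f"User question: {question}\n"
        f"Assistant answer: {answer}\n"
        "Rate the answer's helpfulness and accuracy from 1 to 10. "
        "Reply with the number only."
    )
    reply = call_llm(prompt)              # hypothetical LLM client
    return int(reply.strip().split()[0])  # parse the leading score
```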
2309.00428
Xiaoyu Pan
Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Guanglong Xu, Xianli Gu, Jingxiang Li, Qilong Kou, He Wang, Tianjia Shao, Kun Zhou and Xiaogang Jin
A Locality-based Neural Solver for Optical Motion Capture
Siggraph Asia 2023 Conference Paper
null
10.1145/3610548.3618148
null
cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel locality-based learning method for cleaning and solving optical motion capture data. Given noisy marker data, we propose a new heterogeneous graph neural network which treats markers and joints as different types of nodes, and uses graph convolution operations to extract the local features of markers and joints and transform them into clean motions. To deal with anomalous markers (e.g. occluded or with large tracking errors), the key insight is that a marker's motion shows strong correlations with the motions of its immediate neighboring markers but less so with other markers, a.k.a. locality, which enables us to efficiently fill missing markers (e.g. due to occlusion). Additionally, we identify marker outliers caused by tracking errors by investigating their acceleration profiles. Furthermore, we propose a training regime based on representation learning and data augmentation, training the model on data with masking; the masking schemes mimic the occluded and noisy markers often observed in real data. Finally, we show that our method achieves high accuracy on multiple metrics across various datasets. Extensive comparison shows our method outperforms state-of-the-art methods in terms of prediction accuracy of occluded marker position error by approximately 20%, which leads to a further error reduction on the reconstructed joint rotations and positions by 30%. The code and data for this paper are available at https://github.com/non-void/LocalMoCap.
[ { "version": "v1", "created": "Fri, 1 Sep 2023 12:40:17 GMT" }, { "version": "v2", "created": "Mon, 4 Sep 2023 09:21:14 GMT" } ]
2023-09-06T00:00:00
[ [ "Pan", "Xiaoyu", "" ], [ "Zheng", "Bowen", "" ], [ "Jiang", "Xinwei", "" ], [ "Xu", "Guanglong", "" ], [ "Gu", "Xianli", "" ], [ "Li", "Jingxiang", "" ], [ "Kou", "Qilong", "" ], [ "Wang", "He", "" ], [ "Shao", "Tianjia", "" ], [ "Zhou", "Kun", "" ], [ "Jin", "Xiaogang", "" ] ]
new_dataset
0.995057
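One concrete piece of the pipeline in the abstract above, flagging marker outliers from their acceleration profiles, is easy to illustrate. A minimal NumPy sketch, assuming markers sampled at a fixed rate and a hand-picked magnitude threshold (the paper's actual criterion may be more sophisticated):

```python
import numpy as np

def flag_acceleration_outliers(positions, dt=1 / 120, thresh=50.0):
    """positions: (T, M, 3) marker trajectories; returns a (T, M) bool
    mask. Tracking glitches show up as implausibly large accelerations."""
    vel = np.diff(positions, axis=0) / dt      # (T-1, M, 3) finite-difference velocity
    acc = np.diff(vel, axis=0) / dt            # (T-2, M, 3) finite-difference acceleration
    acc_mag = np.linalg.norm(acc, axis=-1)     # (T-2, M) acceleration magnitude
    mask = np.zeros(positions.shape[:2], dtype=bool)
    mask[1:-1] = acc_mag > thresh              # acc[i] is centered on frame i+1
    return mask
```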
2309.00682
Burak Bartan
Burak Bartan and Mert Pilanci
Randomized Polar Codes for Anytime Distributed Machine Learning
null
null
null
null
cs.DC cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations. The proposed mechanism integrates the concepts of randomized sketching and polar codes in the context of coded computation. We propose a sequential decoding algorithm designed to handle real valued data while maintaining low computational complexity for recovery. Additionally, we provide an anytime estimator that can generate provably accurate estimates even when the set of available node outputs is not decodable. We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization. We present the implementation of these methods on a serverless cloud computing system and provide numerical results to demonstrate their scalability in practice, including ImageNet scale computations.
[ { "version": "v1", "created": "Fri, 1 Sep 2023 18:02:04 GMT" } ]
2023-09-06T00:00:00
[ [ "Bartan", "Burak", "" ], [ "Pilanci", "Mert", "" ] ]
new_dataset
0.995156
2309.00687
Gabor P. Nagy
Márton Erdélyi, Pál Hegedüs, Sándor Z. Kiss and Gábor P. Nagy
On Linear Codes with Random Multiplier Vectors and the Maximum Trace Dimension Property
null
null
null
null
cs.IT math.IT math.NT
http://creativecommons.org/licenses/by/4.0/
Let $C$ be a linear code of length $n$ and dimension $k$ over the finite field $\mathbb{F}_{q^m}$. The trace code $\mathrm{Tr}(C)$ is a linear code of the same length $n$ over the subfield $\mathbb{F}_q$. The obvious upper bound for the dimension of the trace code over $\mathbb{F}_q$ is $mk$. If equality holds, then we say that $C$ has maximum trace dimension. The problem of finding the true dimension of trace codes and their duals is relevant for the size of the public key of various code-based cryptographic protocols. Let $C_{\mathbf{a}}$ denote the code obtained from $C$ and a multiplier vector $\mathbf{a}\in (\mathbb{F}_{q^m})^n$. In this paper, we give a lower bound for the probability that a random multiplier vector produces a code $C_{\mathbf{a}}$ of maximum trace dimension. We give an interpretation of the bound for the class of algebraic geometry codes in terms of the degree of the defining divisor. The bound explains the experimental fact that random alternant codes have minimal dimension. Our bound holds whenever $n\geq m(k+h)$, where $h\geq 0$ is the Singleton defect of $C$. For the extremal case $n=m(h+k)$, numerical experiments reveal a close connection between the probability of having maximum trace dimension and the probability that a random matrix has full rank.
[ { "version": "v1", "created": "Fri, 1 Sep 2023 18:13:23 GMT" } ]
2023-09-06T00:00:00
[ [ "Erdélyi", "Márton", "" ], [ "Hegedüs", "Pál", "" ], [ "Kiss", "Sándor Z.", "" ], [ "Nagy", "Gábor P.", "" ] ]
new_dataset
0.990332
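The dimension bound at the center of the abstract above is compact enough to restate explicitly; everything below is taken directly from the abstract:

```latex
\[
\mathrm{Tr}(C)=\Bigl\{\bigl(\mathrm{Tr}_{\mathbb{F}_{q^m}/\mathbb{F}_q}(c_1),\dots,
\mathrm{Tr}_{\mathbb{F}_{q^m}/\mathbb{F}_q}(c_n)\bigr) : (c_1,\dots,c_n)\in C\Bigr\},
\qquad
\dim_{\mathbb{F}_q}\mathrm{Tr}(C)\le mk .
\]
Equality defines the maximum trace dimension property, and the paper's lower
bound on the probability that a random multiplier vector attains it applies
whenever $n \ge m(k+h)$, with $h \ge 0$ the Singleton defect of $C$.
```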
2309.00743
Divyanshu Raj
Divyanshu Raj, Chitta Baral, Nakul Gopalan
Language-Conditioned Change-point Detection to Identify Sub-Tasks in Robotics Domains
9 Pages, 13 figures, Accepted paper at the RSS 2023 Workshop on Articulate Robots: Utilizing Language for Robot Learning
null
null
null
cs.RO cs.AI cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, we present an approach to identify sub-tasks within a demonstrated robot trajectory using language instructions. We identify these sub-tasks using language provided during demonstrations as guidance to identify sub-segments of a longer robot trajectory. Given a sequence of natural language instructions and a long trajectory consisting of image frames and discrete actions, we want to map an instruction to a smaller fragment of the trajectory. Unlike previous instruction-following works, which directly learn the mapping from language to a policy, we propose a language-conditioned change-point detection method to identify sub-tasks in a problem. Our approach learns the relationship between constituent segments of a long language command and corresponding constituent segments of a trajectory. These constituent trajectory segments can be used to learn sub-tasks or sub-goals for planning or options, as demonstrated by previous related work. Our insight in this work is that the language-conditioned robot change-point detection problem is similar to existing video moment retrieval works used to identify sub-segments within online videos. Through extensive experimentation, we demonstrate a $1.78_{\pm 0.82}\%$ improvement over a baseline approach in accurately identifying sub-tasks within a trajectory using our proposed method. Moreover, we present a comprehensive study investigating the sample complexity requirements for learning this mapping, between language and trajectory sub-segments, to understand whether video retrieval-based methods are realistic in real robot scenarios.
[ { "version": "v1", "created": "Fri, 1 Sep 2023 21:40:34 GMT" } ]
2023-09-06T00:00:00
[ [ "Raj", "Divyanshu", "" ], [ "Baral", "Chitta", "" ], [ "Gopalan", "Nakul", "" ] ]
new_dataset
0.998881
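The mapping described in the abstract above, from each instruction to the best-matching trajectory fragment, has the flavor of moment retrieval: score every candidate segment against the instruction embedding and keep the argmax. A minimal brute-force sketch assuming precomputed embeddings (the shapes and cosine scoring are illustrative assumptions, not the paper's model):

```python
import numpy as np

def best_segment(instr_emb, frame_embs, min_len=5):
    """instr_emb: (d,) language embedding; frame_embs: (T, d) per-frame
    trajectory embeddings. Returns (start, end) of the segment whose mean
    embedding is most similar (cosine) to the instruction. O(T^2) scan."""
    T = len(frame_embs)
    best, best_score = (0, min_len), -np.inf
    for s in range(T - min_len):
        for e in range(s + min_len, T + 1):
            seg = frame_embs[s:e].mean(axis=0)
            denom = np.linalg.norm(seg) * np.linalg.norm(instr_emb) + 1e-8
            score = float(seg @ instr_emb) / denom
            if score > best_score:
                best, best_score = (s, e), score
    return best
```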
2309.00789
Melissa Dell
Abhishek Arora, Melissa Dell
LinkTransformer: A Unified Package for Record Linkage with Transformer Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Linking information across sources is fundamental to a variety of analyses in social science, business, and government. While large language models (LLMs) offer enormous promise for improving record linkage in noisy datasets, in many domains approximate string matching packages in popular software such as R and Stata remain predominant. These packages have clean, simple interfaces and can be easily extended to a diversity of languages. Our open-source package LinkTransformer aims to extend the familiarity and ease-of-use of popular string matching methods to deep learning. It is a general purpose package for record linkage with transformer LLMs that treats record linkage as a text retrieval problem. At its core is an off-the-shelf toolkit for applying transformer models to record linkage with four lines of code. LinkTransformer contains a rich repository of pre-trained transformer semantic similarity models for multiple languages and supports easy integration of any transformer language model from Hugging Face or OpenAI. It supports standard functionality such as blocking and linking on multiple noisy fields. LinkTransformer APIs also perform other common text data processing tasks, e.g., aggregation, noisy de-duplication, and translation-free cross-lingual linkage. Importantly, LinkTransformer also contains comprehensive tools for efficient model tuning, to facilitate different levels of customization when off-the-shelf models do not provide the required accuracy. Finally, to promote reusability, reproducibility, and extensibility, LinkTransformer makes it easy for users to contribute their custom-trained models to its model hub. By combining transformer language models with intuitive APIs that will be familiar to many users of popular string matching packages, LinkTransformer aims to democratize the benefits of LLMs among those who may be less familiar with deep learning frameworks.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 01:45:27 GMT" } ]
2023-09-06T00:00:00
[ [ "Arora", "Abhishek", "" ], [ "Dell", "Melissa", "" ] ]
new_dataset
0.969753
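The linkage-as-retrieval idea in the abstract above is easy to illustrate without the package itself. A minimal sketch using the sentence-transformers library (the model name and record strings are placeholders; LinkTransformer's own API wraps this pattern, and its exact calls differ):

```python
from sentence_transformers import SentenceTransformer, util

def link_records(left, right, model_name="all-MiniLM-L6-v2"):
    """left, right: lists of record strings. For each left record,
    return the index and cosine score of its nearest right record."""
    model = SentenceTransformer(model_name)
    emb_l = model.encode(left, convert_to_tensor=True, normalize_embeddings=True)
    emb_r = model.encode(right, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(emb_l, emb_r)        # (len(left), len(right))
    scores, idx = sims.max(dim=1)            # best match per left record
    return list(zip(idx.tolist(), scores.tolist()))

# e.g. link_records(["ACME Corp, Chicago"], ["Acme Corporation (Chicago, IL)"])
```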
2309.00790
Sikai Chen
Runjia Du, Pei Li, Sikai Chen, Samuel Labi
PFL-LSTR: A privacy-preserving framework for driver intention inference based on in-vehicle and out-vehicle information
Submitted for presentation only at the 2024 Annual Meeting of the Transportation Research Board
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent vehicle anticipation of the movement intentions of other drivers can reduce collisions. Typically, when a human driver of another vehicle (referred to as the target vehicle) engages in specific behaviors such as checking the rearview mirror prior to lane change, a valuable clue is therein provided on the intentions of the target vehicle's driver. Furthermore, the target driver's intentions can be influenced and shaped by their driving environment. For example, if the target vehicle is too close to a leading vehicle, it may renege on the lane-change decision. Conversely, a following vehicle in the target lane that is too close to the target vehicle could lead it to reverse the decision to change lanes. Knowledge of such intentions of all vehicles in a traffic stream can help enhance traffic safety. Unfortunately, such information is often captured in the form of images/videos, and utilization of personally identifiable data to train a general model could violate user privacy. Federated Learning (FL) is a promising tool to resolve this conundrum, as it efficiently trains models without exposing the underlying data. This paper introduces a Personalized Federated Learning (PFL) model that embeds a long short-term transformer (LSTR) framework. The framework predicts drivers' intentions by leveraging in-vehicle videos (of driver movement, gestures, and expressions) and out-of-vehicle videos (of the vehicle's surroundings - frontal/rear areas). The proposed PFL-LSTR framework is trained and tested on real-world driving data collected from human drivers on Interstate 65 in Indiana. The results suggest that the PFL-LSTR exhibits high adaptability and high precision, and that out-of-vehicle information (particularly, the driver's rear-mirror viewing actions) is important because it helps reduce false positives and thereby enhances the precision of driver intention inference.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 01:51:41 GMT" } ]
2023-09-06T00:00:00
[ [ "Du", "Runjia", "" ], [ "Li", "Pei", "" ], [ "Chen", "Sikai", "" ], [ "Labi", "Samuel", "" ] ]
new_dataset
0.999187
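The privacy-preserving training scheme in the PFL-LSTR abstract rests on a standard federated pattern: clients train locally, the server averages only the shared parameters, and each client keeps a personalized head. A minimal sketch of that pattern over PyTorch state dicts (the `backbone.` prefix and the split into shared/personalized parameters are hypothetical, not the paper's architecture):

```python
import torch

def federated_round(server_state, client_states, shared_prefix="backbone."):
    """Average shared parameters across clients; personalized layers
    (anything not under shared_prefix) stay local to each client."""
    for name in server_state:
        if name.startswith(shared_prefix):
            stacked = torch.stack([c[name].float() for c in client_states])
            server_state[name] = stacked.mean(dim=0)   # FedAvg on shared weights
    # Broadcast: each client overwrites its shared weights, keeps its head.
    for c in client_states:
        for name in c:
            if name.startswith(shared_prefix):
                c[name] = server_state[name].clone()
    return server_state, client_states
```

Only parameter averages leave the clients in this pattern; the raw in-vehicle video never does, which is the privacy argument the abstract makes.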
2309.00794
Shibei Meng
Shibei Meng, Yang Fu, Saihui Hou, Chunshui Cao, Xu Liu, Yongzhen Huang
FastPoseGait: A Toolbox and Benchmark for Efficient Pose-based Gait Recognition
10 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present FastPoseGait, an open-source toolbox for pose-based gait recognition based on PyTorch. Our toolbox supports a set of cutting-edge pose-based gait recognition algorithms and a variety of related benchmarks. Unlike other pose-based projects that focus on a single algorithm, FastPoseGait integrates several state-of-the-art (SOTA) algorithms under a unified framework, incorporating both the latest advancements and best practices to ease the comparison of effectiveness and efficiency. In addition, to promote future research on pose-based gait recognition, we provide numerous pre-trained models and detailed benchmark results, which offer valuable insights and serve as a reference for further investigations. By leveraging the highly modular structure and diverse methods offered by FastPoseGait, researchers can quickly delve into pose-based gait recognition and promote development in the field. In this paper, we outline the various features of this toolbox, with the aim that our toolbox and benchmarks can further foster collaboration, facilitate reproducibility, and encourage the development of innovative algorithms for pose-based gait recognition. FastPoseGait is available at https://github.com//BNU-IVC/FastPoseGait and is actively maintained. We will continue updating this report as we add new features.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 02:05:58 GMT" } ]
2023-09-06T00:00:00
[ [ "Meng", "Shibei", "" ], [ "Fu", "Yang", "" ], [ "Hou", "Saihui", "" ], [ "Cao", "Chunshui", "" ], [ "Liu", "Xu", "" ], [ "Huang", "Yongzhen", "" ] ]
new_dataset
0.993604
2309.00796
Chongyang Zhong
Chongyang Zhong, Lei Hu, Zihao Zhang, Shihong Xia
AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
IEEE International Conference on Computer Vision 2023, 9 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Generating 3D human motion based on textual descriptions has been a research focus in recent years. It requires the generated motion to be diverse, natural, and to conform to the textual description. Due to the complex spatio-temporal nature of human motion and the difficulty in learning the cross-modal relationship between text and motion, text-driven motion generation is still a challenging problem. To address these issues, we propose \textbf{AttT2M}, a two-stage method with a multi-perspective attention mechanism: \textbf{body-part attention} and \textbf{global-local motion-text attention}. The former focuses on the motion embedding perspective, which means introducing a body-part spatio-temporal encoder into VQ-VAE to learn a more expressive discrete latent space. The latter is from the cross-modal perspective, which is used to learn the sentence-level and word-level motion-text cross-modal relationship. The text-driven motion is finally generated with a generative transformer. Extensive experiments conducted on HumanML3D and KIT-ML demonstrate that our method outperforms current state-of-the-art works in terms of qualitative and quantitative evaluation, and achieves fine-grained synthesis and action2motion. Our code is available at https://github.com/ZcyMonkey/AttT2M
[ { "version": "v1", "created": "Sat, 2 Sep 2023 02:18:17 GMT" } ]
2023-09-06T00:00:00
[ [ "Zhong", "Chongyang", "" ], [ "Hu", "Lei", "" ], [ "Zhang", "Zihao", "" ], [ "Xia", "Shihong", "" ] ]
new_dataset
0.996746
2309.00817
Yida Chen
Yida Chen, Kang Liu, Yi Xin, Xinru Zhao
Soil Image Segmentation Based on Mask R-CNN
4 pages, 5 figures, Published in 2023 3rd International Conference on Consumer Electronics and Computer Engineering
2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE)
10.1109/ICCECE58074.2023.10135317
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The complex background of soil images collected in the natural field environment affects subsequent machine-vision-based soil image recognition. Segmenting the central soil area from the soil image eliminates the influence of the complex background and is therefore an important preprocessing step for subsequent soil image recognition. For the first time, deep learning is applied to soil image segmentation, with the Mask R-CNN model selected to localize and segment soil images. We construct a soil image dataset from the collected soil images, use the EISeg annotation tool to mark the soil area as soil and save the annotation information, and then train a Mask R-CNN soil image instance segmentation model. The trained model obtains accurate segmentation results for soil images and performs well on soil images collected in different environments: it reaches a loss value of 0.1999 on the training set and a validation segmentation mAP (IoU=0.5) of 0.8804, and with GPU acceleration it completes segmentation of an image in only 0.06 s, which meets the needs of real-time segmentation and detection of soil images in the field under natural conditions. Our code is linked in the Conclusions; the homepage is https://github.com/YidaMyth.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 04:08:06 GMT" } ]
2023-09-06T00:00:00
[ [ "Chen", "Yida", "" ], [ "Liu", "Kang", "" ], [ "Xin", "Yi", "" ], [ "Zhao", "Xinru", "" ] ]
new_dataset
0.990593
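For readers unfamiliar with the model family in the abstract above, running a pre-trained Mask R-CNN takes only a few lines in torchvision; fine-tuning on a custom soil dataset would replace the box and mask heads. A minimal inference sketch using COCO weights (not the paper's soil-trained model, and the image tensor is a stand-in):

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # pre-trained on COCO
model.eval()

image = torch.rand(3, 480, 640)       # stand-in for a real soil photo in [0, 1]
with torch.no_grad():
    out = model([image])[0]           # list of images in, list of dicts out
keep = out["scores"] > 0.5            # confidence filter
masks = out["masks"][keep]            # (N, 1, H, W) soft instance masks
binary = masks.squeeze(1) > 0.5       # threshold to binary masks
```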
2309.00842
Rishi Vanukuru
Rishi Vanukuru, Suibi Che-Chuan Weng, Krithik Ranjan, Torin Hopkins, Amy Banic, Mark D. Gross, Ellen Yi-Luen Do
DualStream: Spatially Sharing Selves and Surroundings using Mobile Devices and Augmented Reality
10 pages, 4 figures, 1 table; To appear in the proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2023
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-person human interaction relies on our spatial perception of each other and our surroundings. Current remote communication tools partially address each of these aspects. Video calls convey real user representations but without spatial interactions. Augmented and Virtual Reality (AR/VR) experiences are immersive and spatial but often use virtual environments and characters instead of real-life representations. Bridging these gaps, we introduce DualStream, a system for synchronous mobile AR remote communication that captures, streams, and displays spatial representations of users and their surroundings. DualStream supports transitions between user and environment representations with different levels of visuospatial fidelity, as well as the creation of persistent shared spaces using environment snapshots. We demonstrate how DualStream can enable spatial communication in real-world contexts, and support the creation of blended spaces for collaboration. A formative evaluation of DualStream revealed that users valued the ability to interact spatially and move between representations, and could see DualStream fitting into their own remote communication practices in the near future. Drawing from these findings, we discuss new opportunities for designing more widely accessible spatial communication tools, centered around the mobile phone.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 06:38:33 GMT" } ]
2023-09-06T00:00:00
[ [ "Vanukuru", "Rishi", "" ], [ "Weng", "Suibi Che-Chuan", "" ], [ "Ranjan", "Krithik", "" ], [ "Hopkins", "Torin", "" ], [ "Banic", "Amy", "" ], [ "Gross", "Mark D.", "" ], [ "Do", "Ellen Yi-Luen", "" ] ]
new_dataset
0.997119
2309.00898
Maksym Planeta
Maksym Planeta, Jan Bierbaum, Michael Roitzsch, Hermann Härtig
CoRD: Converged RDMA Dataplane for High-Performance Clouds
11 pages
null
null
null
cs.OS
http://creativecommons.org/licenses/by/4.0/
High-performance networking is often characterized by kernel bypass, which is considered mandatory in high-performance parallel and distributed applications. But kernel bypass comes at a price because it breaks the traditional OS architecture, requiring applications to use special APIs and limiting the OS control over existing network connections. We make the case that kernel bypass is not mandatory. Rather, high-performance networking relies on multiple performance-improving techniques, with kernel bypass being the least effective. CoRD removes kernel bypass from RDMA networks, enabling efficient OS-level control over the RDMA dataplane.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 10:25:34 GMT" } ]
2023-09-06T00:00:00
[ [ "Planeta", "Maksym", "" ], [ "Bierbaum", "Jan", "" ], [ "Roitzsch", "Michael", "" ], [ "Härtig", "Hermann", "" ] ]
new_dataset
0.989668
2309.00916
Chen Wang
Chen Wang, Minpeng Liao, Zhongqiang Huang, Jinliang Lu, Junhong Wu, Yuchen Liu, Chengqing Zong, Jiajun Zhang
BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment of Continuation Writing
null
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
The emergence of large language models (LLMs) has sparked significant interest in extending their remarkable language capabilities to speech. However, modality alignment between speech and text remains an open problem. Current solutions can be categorized into two strategies. One is a cascaded approach, where outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential in modeling the alignment between speech and text. The other is an end-to-end approach that relies on speech instruction data, which is very difficult to collect in large quantities. In this paper, we address these issues and propose the BLSP approach that Bootstraps Language-Speech Pre-training via behavior alignment of continuation writing. We achieve this by learning a lightweight modality adapter between a frozen speech encoder and an LLM, ensuring that the LLM exhibits the same generation behavior regardless of the modality of input: a speech segment or its transcript. The training process can be divided into two steps. The first step prompts an LLM to generate texts with speech transcripts as prefixes, obtaining text continuations. In the second step, these continuations are used as supervised signals to train the modality adapter in an end-to-end manner. We demonstrate that this straightforward process can extend the capabilities of LLMs to speech, enabling speech recognition, speech translation, spoken language understanding, and speech conversation, even in zero-shot cross-lingual scenarios.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 11:46:05 GMT" } ]
2023-09-06T00:00:00
[ [ "Wang", "Chen", "" ], [ "Liao", "Minpeng", "" ], [ "Huang", "Zhongqiang", "" ], [ "Lu", "Jinliang", "" ], [ "Wu", "Junhong", "" ], [ "Liu", "Yuchen", "" ], [ "Zong", "Chengqing", "" ], [ "Zhang", "Jiajun", "" ] ]
new_dataset
0.994249
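The second training step in the BLSP abstract reduces to a standard supervised objective: push the LLM's next-token predictions on speech-derived prefixes toward the continuation text it produced from the transcript. A minimal sketch, where `speech_encoder`, `adapter`, and `llm_forward` are all hypothetical module names (in particular, `llm_forward(prefix_embeds, target_ids)` is assumed to wrap the frozen LLM and return logits aligned with the target tokens):

```python
import torch
import torch.nn.functional as F

def adapter_step(speech_encoder, adapter, llm_forward, batch, optimizer):
    """One training step of the modality adapter: both the speech
    encoder and the LLM stay frozen; only the adapter gets gradients."""
    with torch.no_grad():
        speech_feats = speech_encoder(batch["speech"])   # frozen encoder
    prefix = adapter(speech_feats)                       # trainable adapter
    logits = llm_forward(prefix, batch["continuation_ids"])
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                           batch["continuation_ids"].view(-1))
    optimizer.zero_grad()
    loss.backward()                                      # grads reach the adapter only
    optimizer.step()
    return loss.item()
```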
2309.00928
Kailun Yang
Xuan He, Kailun Yang, Junwei Zheng, Jin Yuan, Luis M. Bergasa, Hui Zhang, Zhiyong Li
S$^3$-MonoDETR: Supervised Shape&Scale-perceptive Deformable Transformer for Monocular 3D Object Detection
The source code will be made publicly available at https://github.com/mikasa3lili/S3-MonoDETR
null
null
null
cs.CV cs.RO eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, transformer-based methods have shown exceptional performance in monocular 3D object detection, which can predict 3D attributes from a single 2D image. These methods typically use visual and depth representations to generate query points on objects, whose quality plays a decisive role in the detection accuracy. However, current unsupervised attention mechanisms in transformers, lacking any geometry appearance awareness, are susceptible to producing noisy features for query points, which severely limits network performance and leaves the model with a poor ability to detect multi-category objects in a single training process. To tackle this problem, this paper proposes a novel "Supervised Shape&Scale-perceptive Deformable Attention" (S$^3$-DA) module for monocular 3D object detection. Concretely, S$^3$-DA utilizes visual and depth features to generate diverse local features with various shapes and scales and predicts the corresponding matching distribution simultaneously to impose valuable shape&scale perception on each query. Benefiting from this, S$^3$-DA effectively estimates receptive fields for query points belonging to any category, enabling them to generate robust query features. Besides, we propose a Multi-classification-based Shape&Scale Matching (MSM) loss to supervise the above process. Extensive experiments on the KITTI and Waymo Open datasets demonstrate that S$^3$-DA significantly improves the detection accuracy, yielding state-of-the-art performance in single-category and multi-category 3D object detection in a single training process compared to the existing approaches. The source code will be made publicly available at https://github.com/mikasa3lili/S3-MonoDETR.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 12:36:38 GMT" } ]
2023-09-06T00:00:00
[ [ "He", "Xuan", "" ], [ "Yang", "Kailun", "" ], [ "Zheng", "Junwei", "" ], [ "Yuan", "Jin", "" ], [ "Bergasa", "Luis M.", "" ], [ "Zhang", "Hui", "" ], [ "Li", "Zhiyong", "" ] ]
new_dataset
0.967536
2309.00929
Qing Wang
Qing Wang, Jixun Yao, Li Zhang, Pengcheng Guo, and Lei Xie
Timbre-reserved Adversarial Attack in Speaker Identification
11 pages, 8 figures
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a type of biometric identification, a speaker identification (SID) system is confronted with various kinds of attacks. Spoofing attacks typically imitate the timbre of the target speakers, while adversarial attacks confuse the SID system by adding a well-designed adversarial perturbation to arbitrary speech. Although a spoofing attack copies a timbre similar to the victim's, it does not exploit the vulnerability of the SID model and may not make the SID system give the attacker's desired decision. As for the adversarial attack, although the SID system can be led to a designated decision, it cannot meet the specified text or speaker timbre requirements of specific attack scenarios. In this study, to make the attack on SID not only leverage the vulnerability of the SID model but also preserve the timbre of the target speaker, we propose a timbre-reserved adversarial attack on speaker identification. We generate the timbre-reserved adversarial audio by adding an adversarial constraint during the different training stages of the voice conversion (VC) model. Specifically, the adversarial constraint uses the target speaker label to optimize the adversarial perturbation added to the VC model representations and is implemented by a speaker classifier that joins the VC model training. The adversarial constraint helps control the VC model to generate speaker-specific audio. Eventually, the output of the VC model at inference is the desired adversarial fake audio, which is timbre-reserved and can fool the SID system.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 12:42:03 GMT" } ]
2023-09-06T00:00:00
[ [ "Wang", "Qing", "" ], [ "Yao", "Jixun", "" ], [ "Zhang", "Li", "" ], [ "Guo", "Pengcheng", "" ], [ "Xie", "Lei", "" ] ]
new_dataset
0.997683
2309.00944
Soumya Parekh
Soumya Parekh, Jay Patel
Pressmatch: Automated journalist recommendation for media coverage with Nearest Neighbor search
11 pages, 8 figures
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slating a product for release often involves pitching journalists to run stories on your press release. Good media coverage often ensures greater product reach and drives audience engagement for those products. Hence, ensuring that those releases are pitched to the right journalists with relevant interests is crucial, since journalists receive several pitches daily. Keeping up with journalist beats and curating a media contacts list is often a huge and time-consuming task. This study proposes a model to automate and expedite the process by recommending suitable journalists to run media coverage on the press releases provided by the user.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 13:41:29 GMT" } ]
2023-09-06T00:00:00
[ [ "Parekh", "Soumya", "" ], [ "Patel", "Jay", "" ] ]
new_dataset
0.995617
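The recommendation engine the Pressmatch abstract outlines is essentially nearest-neighbor search over text representations of journalists' past coverage. A minimal scikit-learn sketch, with TF-IDF standing in for whatever representation the authors actually use:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

def recommend_journalists(press_release, journalist_profiles, k=5):
    """journalist_profiles: list of strings (e.g., concatenated past
    articles per journalist). Returns indices of the k best matches."""
    vec = TfidfVectorizer(stop_words="english")
    profiles = vec.fit_transform(journalist_profiles)   # (J, vocab) sparse
    nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(profiles)
    query = vec.transform([press_release])
    _, idx = nn.kneighbors(query)                       # nearest profiles
    return idx[0].tolist()
```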
2309.00962
Jun Zhang
Jun Zhang, Huayang Zhuge, Yiyao Liu, Guohao Peng, Zhenyu Wu, Haoyuan Zhang, Qiyang Lyu, Heshan Li, Chunyang Zhao, Dogan Kircali, Sanat Mharolkar, Xun Yang, Su Yi, Yuanzhe Wang and Danwei Wang
NTU4DRadLM: 4D Radar-centric Multi-Modal Dataset for Localization and Mapping
2023 IEEE International Intelligent Transportation Systems Conference (ITSC 2023)
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Simultaneous Localization and Mapping (SLAM) is moving towards a robust perception age. However, LiDAR- and visual-SLAM may easily fail in adverse conditions (rain, snow, smoke, fog, etc.). In comparison, SLAM based on 4D radar, a thermal camera and an IMU can work robustly, but only a few related studies can be found. A major reason is the lack of related datasets, which seriously hinders the research. Even though some 4D-radar-based datasets have been proposed in the past four years, they are mainly designed for object detection rather than SLAM. Furthermore, they normally do not include a thermal camera. Therefore, in this paper, NTU4DRadLM is presented to meet this requirement. The main characteristics are: 1) It is the only dataset that simultaneously includes all 6 sensors: 4D radar, thermal camera, IMU, 3D LiDAR, visual camera and RTK GPS. 2) It is specifically designed for SLAM tasks, providing fine-tuned ground truth odometry and intentionally formulated loop closures. 3) It considers both a low-speed robot platform and a fast-speed unmanned vehicle platform. 4) It covers structured, unstructured and semi-structured environments. 5) It considers both middle- and large-scale outdoor environments, i.e., the 6 trajectories range from 246 m to 6.95 km. 6) It comprehensively evaluates three types of SLAM algorithms. In total, the dataset is around 17.6 km, 85 mins and 50 GB, and it will be accessible from this link: https://github.com/junzhang2016/NTU4DRadLM
[ { "version": "v1", "created": "Sat, 2 Sep 2023 15:12:20 GMT" } ]
2023-09-06T00:00:00
[ [ "Zhang", "Jun", "" ], [ "Zhuge", "Huayang", "" ], [ "Liu", "Yiyao", "" ], [ "Peng", "Guohao", "" ], [ "Wu", "Zhenyu", "" ], [ "Zhang", "Haoyuan", "" ], [ "Lyu", "Qiyang", "" ], [ "Li", "Heshan", "" ], [ "Zhao", "Chunyang", "" ], [ "Kircali", "Dogan", "" ], [ "Mharolkar", "Sanat", "" ], [ "Yang", "Xun", "" ], [ "Yi", "Su", "" ], [ "Wang", "Yuanzhe", "" ], [ "Wang", "Danwei", "" ] ]
new_dataset
0.998235
2309.01012
Yaxin Hu
Yaxin Hu, Hajin Lim, Hailey L. Johnson, Josephine M. O'Shaughnessy, Lisa Kakonge, Lyn S. Turkstra, Melissa C. Duff, Catalina L. Toma, Bilge Mutlu
Investigating the Day-to-Day Experiences of Users with Traumatic Brain Injury with Conversational Agents
In Proceedings The 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'23)
null
10.1145/3597638.3608385
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traumatic brain injury (TBI) can cause cognitive, communication, and psychological challenges that profoundly limit independence in everyday life. Conversational Agents (CAs) can provide individuals with TBI with cognitive and communication support, although little is known about how they make use of CAs to address injury-related needs. In this study, we gave nine adults with TBI an at-home CA for four weeks to investigate use patterns, challenges, and design requirements, focusing particularly on injury-related use. The findings revealed significant gaps between the current capabilities of CAs and accessibility challenges faced by TBI users. We also identified 14 TBI-related activities that participants engaged in with CAs. We categorized those activities into four groups: mental health, cognitive activities, healthcare and rehabilitation, and routine activities. Design implications focus on accessibility improvements and functional designs of CAs that can better support the day-to-day needs of people with TBI.
[ { "version": "v1", "created": "Sat, 2 Sep 2023 20:21:07 GMT" } ]
2023-09-06T00:00:00
[ [ "Hu", "Yaxin", "" ], [ "Lim", "Hajin", "" ], [ "Johnson", "Hailey L.", "" ], [ "O'Shaughnessy", "Josephine M.", "" ], [ "Kakonge", "Lisa", "" ], [ "Turkstra", "Lyn S.", "" ], [ "Duff", "Melissa C.", "" ], [ "Toma", "Catalina L.", "" ], [ "Mutlu", "Bilge", "" ] ]
new_dataset
0.98701
2309.01051
Yun Ding
Yun Ding, Shixin Zhu, Yang Li
On Galois self-orthogonal algebraic geometry codes
18 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Galois self-orthogonal (SO) codes are generalizations of Euclidean and Hermitian SO codes. Algebraic geometry (AG) codes are the first known class of linear codes exceeding the Gilbert-Varshamov bound. Both of them have attracted much attention in recent years for their rich algebraic structures and wide applications. In this paper, we consider them together and study Galois SO AG codes. A criterion for an AG code being Galois SO is presented. Based on this criterion, we construct several new classes of maximum distance separable (MDS) Galois SO AG codes from projective lines and several new classes of Galois SO AG codes from projective elliptic curves, hyper-elliptic curves and Hermitian curves. In addition, we give an embedding method that allows us to obtain more MDS Galois SO codes from known MDS Galois SO AG codes.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 02:03:03 GMT" } ]
2023-09-06T00:00:00
[ [ "Ding", "Yun", "" ], [ "Zhu", "Shixin", "" ], [ "Li", "Yang", "" ] ]
new_dataset
0.988903
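For readers unfamiliar with the term in the abstract above, Galois self-orthogonality generalizes the Euclidean and Hermitian cases via the h-Galois inner product, commonly defined in the literature as follows (the paper's exact conventions may differ): for a code of length $n$ over $\mathbb{F}_{p^e}$ and $0 \le h < e$,

```latex
\[
\langle \mathbf{x},\mathbf{y}\rangle_h \;=\; \sum_{i=1}^{n} x_i\, y_i^{p^h},
\qquad \mathbf{x},\mathbf{y}\in\mathbb{F}_{p^e}^{\,n}.
\]
Here $h=0$ recovers the Euclidean inner product and, for even $e$, $h=e/2$
recovers the Hermitian one; a code $C$ is Galois self-orthogonal when
$C \subseteq C^{\perp_h}$.
```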
2309.01066
Surya Karthik Mukkavilli
Maximilian Nitsche (1 and 2), S. Karthik Mukkavilli (3), Niklas Kühl (4 and 1), Thomas Brunschwiler (3) ((1) IBM Consulting, Germany, (2) Karlsruhe Institute of Technology, Germany, (3) IBM Research - Europe, Switzerland, (4) University of Bayreuth, Germany)
AB2CD: AI for Building Climate Damage Classification and Detection
9 pages, 4 figures
null
null
null
cs.CV cs.AI cs.CY eess.IV physics.geo-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
We explore the implementation of deep learning techniques for precise building damage assessment in the context of natural hazards, utilizing remote sensing data. The xBD dataset, comprising diverse disaster events from across the globe, serves as the primary focus, facilitating the evaluation of deep learning models. We tackle the challenges of generalization to novel disasters and regions while accounting for the influence of low-quality and noisy labels inherent in natural hazard data. Furthermore, our investigation quantitatively establishes, via symmetric and asymmetric resolution perturbation analyses, that the minimum satellite imagery resolution essential for effective building damage detection is 3 meters, and below 1 meter for classification. To achieve robust and accurate evaluations of building damage detection and classification, we evaluated different deep learning models with residual, squeeze and excitation, and dual path network backbones, as well as ensemble techniques. Overall, the U-Net Siamese network ensemble with an F-1 score of 0.812 performed best against the xView2 challenge benchmark. Additionally, we evaluate a Universal model trained on all hazards against a flood expert model, and investigate generalization gaps across events as well as out-of-distribution performance on field data from the Ahr Valley. Our research findings showcase the potential and limitations of advanced AI solutions in enhancing the impact assessment of climate change-induced extreme weather events, such as floods and hurricanes. These insights have implications for disaster impact assessment in the face of escalating climate challenges.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 03:37:04 GMT" } ]
2023-09-06T00:00:00
[ [ "Nitsche", "Maximilian", "", "1 and 2" ], [ "Mukkavilli", "S. Karthik", "", "4 and 1" ], [ "Kühl", "Niklas", "", "4 and 1" ], [ "Brunschwiler", "Thomas", "" ] ]
new_dataset
0.999741
2309.01075
Xinyue Pan
Xinyue Pan, Jiangpeng He, Fengqing Zhu
Multi-Stage Hierarchical Food Classification
accepted for ACM MM 2023 Madima
null
10.1145/3607828.3617798
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Food image classification serves as a fundamental and critical step in image-based dietary assessment, facilitating nutrient intake analysis from captured food images. However, existing works in food classification predominantly focus on predicting 'food types', which do not contain direct nutritional composition information. This limitation arises from the inherent discrepancies in nutrition databases, which are tasked with associating each 'food item' with its respective information. Therefore, in this work we aim to classify food items to align with a nutrition database. To this end, we first introduce the VFN-nutrient dataset by annotating each food image in VFN with a food item that includes nutritional composition information. Such annotation of food items, being more discriminative than food types, creates a hierarchical structure within the dataset. However, since the food item annotations are solely based on nutritional composition information, they do not always show visual relations with each other, which poses significant challenges when applying deep learning-based techniques for classification. To address this issue, we then propose a multi-stage hierarchical framework for food item classification by iteratively clustering and merging food items during the training process, which allows the deep model to extract image features that are discriminative across labels. Our method is evaluated on the VFN-nutrient dataset and achieves promising results compared with existing work in terms of both food type and food item classification.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 04:45:44 GMT" } ]
2023-09-06T00:00:00
[ [ "Pan", "Xinyue", "" ], [ "He", "Jiangpeng", "" ], [ "Zhu", "Fengqing", "" ] ]
new_dataset
0.983577
2309.01081
Haiyang Yu
Haiyang Yu, Xiaocong Wang, Bin Li, Xiangyang Xue
Orientation-Independent Chinese Text Recognition in Scene Images
IJCAI 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene text recognition (STR) has attracted much attention due to its broad applications. Previous works pay more attention to dealing with the recognition of Latin text images with complex backgrounds by introducing language models or other auxiliary networks. Different from Latin texts, many vertical Chinese texts exist in natural scenes, which brings difficulties to current state-of-the-art STR methods. In this paper, we make the first attempt to extract orientation-independent visual features by disentangling the content and orientation information of text images, thus recognizing both horizontal and vertical texts robustly in natural scenes. Specifically, we introduce a Character Image Reconstruction Network (CIRN) to recover corresponding printed character images with disentangled content and orientation information. We conduct experiments on a scene dataset for benchmarking Chinese text recognition, and the results demonstrate that the proposed method can indeed improve performance through disentangling content and orientation information. To further validate the effectiveness of our method, we additionally collect a Vertical Chinese Text Recognition (VCTR) dataset. The experimental results show that the proposed method achieves a 45.63% improvement on VCTR when introducing CIRN to the baseline model.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 05:30:21 GMT" } ]
2023-09-06T00:00:00
[ [ "Yu", "Haiyang", "" ], [ "Wang", "Xiaocong", "" ], [ "Li", "Bin", "" ], [ "Xue", "Xiangyang", "" ] ]
new_dataset
0.99427
2309.01083
Haiyang Yu
Haiyang Yu, Xiaocong Wang, Bin Li, Xiangyang Xue
Chinese Text Recognition with A Pre-Trained CLIP-Like Model Through Image-IDS Aligning
ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene text recognition has been studied for decades due to its broad applications. However, despite Chinese characters possessing different characteristics from Latin characters, such as complex inner structures and large categories, few methods have been proposed for Chinese Text Recognition (CTR). Particularly, the characteristic of large categories poses challenges in dealing with zero-shot and few-shot Chinese characters. In this paper, inspired by the way humans recognize Chinese texts, we propose a two-stage framework for CTR. Firstly, we pre-train a CLIP-like model through aligning printed character images and Ideographic Description Sequences (IDS). This pre-training stage simulates humans recognizing Chinese characters and obtains the canonical representation of each character. Subsequently, the learned representations are employed to supervise the CTR model, such that traditional single-character recognition can be improved to text-line recognition through image-IDS matching. To evaluate the effectiveness of the proposed method, we conduct extensive experiments on both Chinese character recognition (CCR) and CTR. The experimental results demonstrate that the proposed method performs best in CCR and outperforms previous methods in most scenarios of the CTR benchmark. It is worth noting that the proposed method can recognize zero-shot Chinese characters in text images without fine-tuning, whereas previous methods require fine-tuning when new classes appear. The code is available at https://github.com/FudanVI/FudanOCR/tree/main/image-ids-CTR.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 05:33:16 GMT" } ]
2023-09-06T00:00:00
[ [ "Yu", "Haiyang", "" ], [ "Wang", "Xiaocong", "" ], [ "Li", "Bin", "" ], [ "Xue", "Xiangyang", "" ] ]
new_dataset
0.995976
2309.01093
Jiajin Tang
Jiajin Tang, Ge Zheng, Jingyi Yu, Sibei Yang
CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection
Accepted by ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Task driven object detection aims to detect object instances suitable for affording a task in an image. The challenge is that the object categories available for the task are too diverse to be limited to a closed set of object vocabulary as in traditional object detection. Simply mapping categories and visual features of common objects to the task cannot address the challenge. In this paper, we propose to explore fundamental affordances rather than object categories, i.e., common attributes that enable different objects to accomplish the same task. Moreover, we propose a novel multi-level chain-of-thought prompting (MLCoT) to extract the affordance knowledge from large language models, which contains multi-level reasoning steps from task to object examples to essential visual attributes with rationales. Furthermore, to fully exploit knowledge to benefit object recognition and localization, we propose a knowledge-conditional detection framework, namely CoTDet. It conditions the detector on the knowledge to generate object queries and regress boxes. Experimental results demonstrate that our CoTDet outperforms state-of-the-art methods consistently and significantly (+15.6 box AP and +14.8 mask AP) and can generate rationales for why objects are detected to afford the task.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 06:18:39 GMT" } ]
2023-09-06T00:00:00
[ [ "Tang", "Jiajin", "" ], [ "Zheng", "Ge", "" ], [ "Yu", "Jingyi", "" ], [ "Yang", "Sibei", "" ] ]
new_dataset
0.99732
2309.01111
Yuhao Du
Yuhao Du, Yuncheng Jiang, Shuangyi Tan, Xusheng Wu, Qi Dou, Zhen Li, Guanbin Li, Xiang Wan
ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models
Accepted by MICCAI-2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Colonoscopy analysis, particularly automatic polyp segmentation and detection, is essential for assisting clinical diagnosis and treatment. However, as medical image annotation is labour- and resource-intensive, the scarcity of annotated data limits the effectiveness and generalization of existing methods. Although recent research has focused on data generation and augmentation to address this issue, the quality of the generated data remains a challenge, which limits the contribution to the performance of subsequent tasks. Inspired by the superiority of diffusion models in fitting data distributions and generating high-quality data, in this paper, we propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks. Specifically, ArSDM utilizes the ground-truth segmentation mask as a prior condition during training and adjusts the diffusion loss for each input according to the polyp/background size ratio. Furthermore, ArSDM incorporates a pre-trained segmentation model to refine the training process by reducing the difference between the ground-truth mask and the prediction mask. Extensive experiments on segmentation and detection tasks demonstrate that the data generated by ArSDM could significantly boost the performance of baseline methods.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 07:55:46 GMT" } ]
2023-09-06T00:00:00
[ [ "Du", "Yuhao", "" ], [ "Jiang", "Yuncheng", "" ], [ "Tan", "Shuangyi", "" ], [ "Wu", "Xusheng", "" ], [ "Dou", "Qi", "" ], [ "Li", "Zhen", "" ], [ "Li", "Guanbin", "" ], [ "Wan", "Xiang", "" ] ]
new_dataset
0.969514
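The size-adaptive loss adjustment in the ArSDM abstract can be sketched compactly: scale the per-pixel diffusion loss so that small polyps are not drowned out by background. A minimal PyTorch sketch, assuming a binary ground-truth mask; the inverse-area weighting formula is illustrative, not the paper's exact rule:

```python
import torch

def adaptive_diffusion_loss(noise_pred, noise, mask):
    """noise_pred, noise: (B, C, H, W) predicted/true diffusion noise;
    mask: (B, 1, H, W) binary polyp mask. Upweight polyp pixels
    inversely to the polyp area fraction of each sample."""
    per_pixel = (noise_pred - noise) ** 2                   # plain DDPM MSE
    frac = mask.flatten(1).mean(dim=1).clamp_min(1e-6)      # polyp area fraction per sample
    weight = torch.where(mask.bool(),
                         (1.0 / frac)[:, None, None, None], # boost polyp pixels
                         torch.ones_like(per_pixel))        # background unchanged
    return (weight * per_pixel).mean()
```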
2309.01112
Ze Fu
Ze Fu, Yinghui Li, and Weizhong Guo
Swing Leg Motion Strategy for Heavy-load Legged Robot Based on Force Sensing
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heavy-load legged robots have strong load-carrying capacity and can adapt to various unstructured terrains, but their large weight results in higher requirements for motion stability and environmental perception ability. In order to utilize force-sensing information to improve motion performance, in this paper we propose a finite state machine model for the swing leg in the static gait, imitating the movement of the elephant. Based on the presence or absence of additional terrain information, different trajectory planning strategies are provided for the swing leg to enhance the success rate of stepping and to save energy. The experimental results on a novel quadruped robot show that our method has strong robustness and enables heavy-load legged robots to pass through various complex terrains autonomously and smoothly.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 08:03:06 GMT" } ]
2023-09-06T00:00:00
[ [ "Fu", "Ze", "" ], [ "Li", "Yinghui", "" ], [ "Guo", "Weizhong", "" ] ]
new_dataset
0.9951
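The swing-leg finite state machine in the abstract above can be illustrated with a skeleton: states for lift-off, swing, and force-sensed touch-down, with transitions driven by the foot force sensor. A minimal sketch in which the state names and thresholds are hypothetical, inspired by the abstract rather than taken from the paper:

```python
from enum import Enum, auto

class SwingState(Enum):
    LIFT_OFF = auto()
    SWING = auto()
    PROBE_DOWN = auto()   # lower slowly, watching the force sensor
    SUPPORT = auto()

def step_fsm(state, foot_force, swing_done, contact_thresh=30.0):
    """One tick of the swing-leg state machine; foot_force in newtons."""
    if state is SwingState.LIFT_OFF and foot_force < 1.0:
        return SwingState.SWING        # foot cleanly off the ground
    if state is SwingState.SWING and swing_done:
        return SwingState.PROBE_DOWN   # nominal swing trajectory finished
    if state is SwingState.PROBE_DOWN and foot_force > contact_thresh:
        return SwingState.SUPPORT      # solid contact: stepping succeeded
    return state                       # otherwise remain in current state
```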
2309.01114
Yang Tan
Yang Tan, Mingchen Li, Zijie Huang, Huiqun Yu and Guisheng Fan
MedChatZH: a Better Medical Adviser Learns from Better Instructions
7 pages, 3 figures
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative large language models (LLMs) have shown great success in various applications, including question-answering (QA) and dialogue systems. However, in specialized domains like traditional Chinese medical QA, these models may perform unsatisfactorily without fine-tuning on domain-specific datasets. To address this, we introduce MedChatZH, a dialogue model designed specifically for traditional Chinese medical QA. Our model is pre-trained on Chinese traditional medical books and fine-tuned with a carefully curated medical instruction dataset. It outperforms several solid baselines on a real-world medical dialogue dataset. We release our model, code, and dataset on https://github.com/tyang816/MedChatZH to facilitate further research in the domain of traditional Chinese medicine and LLMs.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 08:08:15 GMT" } ]
2023-09-06T00:00:00
[ [ "Tan", "Yang", "" ], [ "Li", "Mingchen", "" ], [ "Huang", "Zijie", "" ], [ "Yu", "Huiqun", "" ], [ "Fan", "Guisheng", "" ] ]
new_dataset
0.999354
2309.01151
Cheng Shi
Cheng Shi and Sibei Yang
EdaDet: Open-Vocabulary Object Detection Using Early Dense Alignment
ICCV 2023; Project Page: https://chengshiest.github.io/edadet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-language models such as CLIP have boosted the performance of open-vocabulary object detection, where the detector is trained on base categories but required to detect novel categories. Existing methods leverage CLIP's strong zero-shot recognition ability to align object-level embeddings with textual embeddings of categories. However, we observe that using CLIP for object-level alignment results in overfitting to base categories, i.e., novel categories most similar to base categories have particularly poor performance as they are recognized as similar base categories. In this paper, we first identify that the loss of critical fine-grained local image semantics hinders existing methods from attaining strong base-to-novel generalization. Then, we propose Early Dense Alignment (EDA) to bridge the gap between generalizable local semantics and object-level prediction. In EDA, we use object-level supervision to learn the dense-level rather than object-level alignment to maintain the local fine-grained semantics. Extensive experiments demonstrate our superior performance to competing approaches under the same strict setting and without using external training resources, i.e., improving novel box AP50 by +8.4% on COCO and rare mask AP by +3.9% on LVIS.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 12:04:14 GMT" } ]
2023-09-06T00:00:00
[ [ "Shi", "Cheng", "" ], [ "Yang", "Sibei", "" ] ]
new_dataset
0.994134
2309.01236
Dorian F. Henning
Dorian F. Henning, Christopher Choi, Simon Schaefer, Stefan Leutenegger
BodySLAM++: Fast and Tightly-Coupled Visual-Inertial Camera and Human Motion Tracking
IROS 2023. Video: https://youtu.be/UcutiHQwbGk
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust, fast, and accurate human state - 6D pose and posture - estimation remains a challenging problem. For real-world applications, the ability to estimate the human state in real-time is highly desirable. In this paper, we present BodySLAM++, a fast, efficient, and accurate human and camera state estimation framework relying on visual-inertial data. BodySLAM++ extends an existing visual-inertial state estimation framework, OKVIS2, to solve the dual task of estimating camera and human states simultaneously. Our system improves the accuracy of both human and camera state estimation with respect to baseline methods by 26% and 12%, respectively, and achieves real-time performance at 15+ frames per second on an Intel i7-model CPU. Experiments were conducted on a custom dataset containing both ground truth human and camera poses collected with an indoor motion tracking system.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 18:09:37 GMT" } ]
2023-09-06T00:00:00
[ [ "Henning", "Dorian F.", "" ], [ "Choi", "Christopher", "" ], [ "Schaefer", "Simon", "" ], [ "Leutenegger", "Stefan", "" ] ]
new_dataset
0.993337
2309.01252
Dishani Lahiri
Dishani Lahiri, Neeraj Panse, Moneish Kumar
S2RF: Semantically Stylized Radiance Fields
AI for 3D Content Creation at International Conference on Computer Vision 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present our method for transferring style from any arbitrary image(s) to object(s) within a 3D scene. Our primary objective is to offer more control in 3D scene stylization, facilitating the creation of customizable and stylized scene images from arbitrary viewpoints. To achieve this, we propose a novel approach that incorporates nearest neighborhood-based loss, allowing for flexible 3D scene reconstruction while effectively capturing intricate style details and ensuring multi-view consistency.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 19:32:49 GMT" } ]
2023-09-06T00:00:00
[ [ "Lahiri", "Dishani", "" ], [ "Panse", "Neeraj", "" ], [ "Kumar", "Moneish", "" ] ]
new_dataset
0.998385
2309.01279
Stefano Puliti
Stefano Puliti, Grant Pearse, Peter Surov\'y, Luke Wallace, Markus Hollaus, Maciej Wielgosz, Rasmus Astrup
FOR-instance: a UAV laser scanning benchmark dataset for semantic and instance segmentation of individual trees
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The FOR-instance dataset (available at https://doi.org/10.5281/zenodo.8287792) addresses the challenge of accurate individual tree segmentation from laser scanning data, crucial for understanding forest ecosystems and sustainable management. Despite the growing need for detailed tree data, automating segmentation and tracking scientific progress remains difficult. Existing methodologies often overfit small datasets and lack comparability, limiting their applicability. Amid the progress triggered by the emergence of deep learning methodologies, standardized benchmarking assumes paramount importance in these research domains. This data paper introduces a benchmarking dataset for dense airborne laser scanning data, aimed at advancing instance and semantic segmentation techniques and promoting progress in 3D forest scene segmentation. The FOR-instance dataset comprises five curated and ML-ready UAV-based laser scanning data collections from diverse global locations, representing various forest types. The laser scanning data were manually annotated into individual trees (instances) and different semantic classes (e.g., stem, woody branches, live branches, terrain, low vegetation). The dataset is divided into development and test subsets, enabling method advancement and evaluation, with specific guidelines for utilization. It supports instance and semantic segmentation, offering adaptability to deep learning frameworks and diverse segmentation strategies, while the inclusion of diameter at breast height data expands its utility to the measurement of a classic tree variable. In conclusion, the FOR-instance dataset contributes to filling a gap in 3D forest research, enhancing the development and benchmarking of segmentation algorithms for dense airborne laser scanning data.
[ { "version": "v1", "created": "Sun, 3 Sep 2023 22:08:29 GMT" } ]
2023-09-06T00:00:00
[ [ "Puliti", "Stefano", "" ], [ "Pearse", "Grant", "" ], [ "Surový", "Peter", "" ], [ "Wallace", "Luke", "" ], [ "Hollaus", "Markus", "" ], [ "Wielgosz", "Maciej", "" ], [ "Astrup", "Rasmus", "" ] ]
new_dataset
0.991034
2309.01318
Gilberto Ochoa-Ruiz
Eduardo Guardu\~no-Martinez and Jorge Ciprian-Sanchez and Gerardo Valente and Vazquez-Garcia and Gerardo Rodriguez-Hernandez and Adriana Palacios-Rosas and Lucile Rossi-Tisson and Gilberto Ochoa-Ruiz
An FPGA smart camera implementation of segmentation models for drone wildfire imagery
This paper has been accepted at the 22nd Mexican International Conference on Artificial Intelligence (MICAI 2023)
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Wildfires represent one of the most relevant natural disasters worldwide, due to their impact on various societal and environmental levels. Thus, a significant amount of research has been carried out to investigate and apply computer vision techniques to address this problem. One of the most promising approaches for wildfire fighting is the use of drones equipped with visible and infrared cameras for the detection, monitoring, and fire spread assessment in a remote manner but in close proximity to the affected areas. However, implementing effective computer vision algorithms on board is often prohibitive since deploying full-precision deep learning models running on GPU is not a viable option, due to their high power consumption and the limited payload a drone can handle. Thus, in this work, we posit that smart cameras, based on low-power consumption field-programmable gate arrays (FPGAs), in tandem with binarized neural networks (BNNs), represent a cost-effective alternative for implementing onboard computing on the edge. Herein we present the implementation of a segmentation model applied to the Corsican Fire Database. We optimized an existing U-Net model for such a task and ported the model to an edge device (a Xilinx Ultra96-v2 FPGA). By pruning and quantizing the original model, we reduce the number of parameters by 90%. Furthermore, additional optimizations enabled us to increase the throughput of the original model from 8 frames per second (FPS) to 33.63 FPS without loss in the segmentation performance: our model obtained 0.912 in Matthews correlation coefficient (MCC), 0.915 in F1 score, and 0.870 in Hafiane quality index (HAF), and comparable qualitative segmentation results when contrasted to the original full-precision model. The final model was integrated into a low-cost FPGA, which was used to implement a neural network accelerator.
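Since the evaluation above leans on the Matthews correlation coefficient, a short self-contained sketch of computing MCC for binary segmentation masks may be useful. This is the standard definition, not code from the paper:

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary masks/labels."""
    y_true = np.asarray(y_true, dtype=bool).ravel()
    y_pred = np.asarray(y_pred, dtype=bool).ravel()
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return ((tp * tn - fp * fn) / denom) if denom else 0.0

print(mcc([1, 1, 0, 0], [1, 0, 0, 0]))  # ~0.577
```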
[ { "version": "v1", "created": "Mon, 4 Sep 2023 02:30:14 GMT" } ]
2023-09-06T00:00:00
[ [ "Guarduño-Martinez", "Eduardo", "" ], [ "Ciprian-Sanchez", "Jorge", "" ], [ "Valente", "Gerardo", "" ], [ "Vazquez-Garcia", "", "" ], [ "Rodriguez-Hernandez", "Gerardo", "" ], [ "Palacios-Rosas", "Adriana", "" ], [ "Rossi-Tisson", "Lucile", "" ], [ "Ochoa-Ruiz", "Gilberto", "" ] ]
new_dataset
0.993399
2309.01324
Duo Lu
Himanshu Pahadia, Duo Lu, Bharatesh Chakravarthi, Yezhou Yang
SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras
Accepted to IEEE ITSC 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Intelligent transportation systems (ITS) have revolutionized modern road infrastructure, providing essential functionalities such as traffic monitoring, road safety assessment, congestion reduction, and law enforcement. Effective vehicle detection and accurate vehicle pose estimation are crucial for ITS, particularly using monocular cameras installed on the road infrastructure. One fundamental challenge in vision-based vehicle monitoring is keypoint detection, which involves identifying and localizing specific points on vehicles (such as headlights, wheels, taillights, etc.). However, this task is complicated by vehicle model and shape variations, occlusion, weather, and lighting conditions. Furthermore, existing traffic perception datasets for keypoint detection predominantly focus on frontal views from ego vehicle-mounted sensors, limiting their usability in traffic monitoring. To address these issues, we propose SKoPe3D, a unique synthetic vehicle keypoint dataset generated using the CARLA simulator from a roadside perspective. This comprehensive dataset includes generated images with bounding boxes, tracking IDs, and 33 keypoints for each vehicle. Spanning over 25k images across 28 scenes, SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints. To demonstrate its utility, we trained a keypoint R-CNN model on our dataset as a baseline and conducted a thorough evaluation. Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data. By leveraging the SKoPe3D dataset, researchers and practitioners can overcome the limitations of existing datasets, enabling advancements in vehicle keypoint detection for ITS.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 02:57:30 GMT" } ]
2023-09-06T00:00:00
[ [ "Pahadia", "Himanshu", "" ], [ "Lu", "Duo", "" ], [ "Chakravarthi", "Bharatesh", "" ], [ "Yang", "Yezhou", "" ] ]
new_dataset
0.999845
2309.01339
Ting-En Lin
Zaijing Li, Ting-En Lin, Yuchuan Wu, Meng Liu, Fengxiao Tang, Ming Zhao, Yongbin Li
UniSA: Unified Generative Framework for Sentiment Analysis
Accepted to ACM MM 2023
null
null
null
cs.CL cs.AI cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
Sentiment analysis is a crucial task that aims to understand people's emotional states and predict emotional categories based on multimodal information. It consists of several subtasks, such as emotion recognition in conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal sentiment analysis (MSA). However, unifying all subtasks in sentiment analysis presents numerous challenges, including modality alignment, unified input/output forms, and dataset bias. To address these challenges, we propose a Task-Specific Prompt method to jointly model subtasks and introduce a multimodal generative framework called UniSA. Additionally, we organize the benchmark datasets of main subtasks into a new Sentiment Analysis Evaluation benchmark, SAEval. We design novel pre-training tasks and training methods to enable the model to learn generic sentiment knowledge among subtasks to improve the model's multimodal sentiment perception ability. Our experimental results show that UniSA performs comparably to the state-of-the-art on all subtasks and generalizes well to various subtasks in sentiment analysis.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 03:49:30 GMT" } ]
2023-09-06T00:00:00
[ [ "Li", "Zaijing", "" ], [ "Lin", "Ting-En", "" ], [ "Wu", "Yuchuan", "" ], [ "Liu", "Meng", "" ], [ "Tang", "Fengxiao", "" ], [ "Zhao", "Ming", "" ], [ "Li", "Yongbin", "" ] ]
new_dataset
0.999329
2309.01346
Roshan Vijay
James Lee Wei Shung, Paul Hibbard, Roshan Vijay, Lincoln Ang Hon Kin, Niels de Boer
White paper on LiDAR performance against selected Automotive Paints
23 pages, 29 figures. This white paper was developed with support from the Urban Mobility Grand Challenge Fund by the Land Transport Authority of Singapore (No. UMGC-L010). For associated dataset, see https://researchdata.ntu.edu.sg/dataset.xhtml?persistentId=doi:10.21979/N9/CGDKMZ
null
null
null
cs.RO eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
LiDAR (Light Detection and Ranging) is a useful sensing technique and an important source of data for autonomous vehicles (AVs). In this publication we present the results of a study undertaken to understand the impact of automotive paint on LiDAR performance along with a methodology used to conduct this study. Our approach consists of evaluating the average reflected intensity output by different LiDAR sensor models when tested with different types of automotive paints. The paints were chosen to represent common paints found on vehicles in Singapore. The experiments were conducted with LiDAR sensors commonly used by autonomous vehicle (AV) developers and OEMs. The paints used were also selected based on those observed in real-world conditions. This stems from a desire to model real-world performance of actual sensing systems when exposed to the physical world. The goal is then to inform regulators of AVs in Singapore of the impact of automotive paint on LiDAR performance, so that they can determine testing standards and specifications which will better reflect real-world performance and also better assess the adequacy of LiDAR systems installed for local AV operations. The tests were conducted for a combination of 13 different paint panels and 3 LiDAR sensors. In general, it was observed that darker coloured paints have lower reflection intensity whereas lighter coloured paints exhibited higher intensity values.
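The methodology boils down to comparing average reflected intensity across paint/sensor combinations. A minimal pandas sketch of that aggregation is shown below; the column names and values are hypothetical, and the released dataset's actual schema may differ.

```python
import pandas as pd

# Hypothetical per-return records; the real dataset's schema may differ.
points = pd.DataFrame({
    "sensor": ["A"] * 4 + ["B"] * 4,
    "paint_panel": ["matte_black", "matte_black", "pearl_white", "pearl_white"] * 2,
    "intensity": [11.0, 12.4, 87.5, 85.9, 9.2, 10.1, 80.1, 82.3],
})

summary = (points.groupby(["sensor", "paint_panel"])["intensity"]
                 .agg(["mean", "std", "count"]))
print(summary)  # darker paints tend to show lower mean reflected intensity
```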
[ { "version": "v1", "created": "Mon, 4 Sep 2023 04:07:05 GMT" } ]
2023-09-06T00:00:00
[ [ "Shung", "James Lee Wei", "" ], [ "Hibbard", "Paul", "" ], [ "Vijay", "Roshan", "" ], [ "Kin", "Lincoln Ang Hon", "" ], [ "de Boer", "Niels", "" ] ]
new_dataset
0.994651
2309.01350
Manish Bhattarai
Maksim E. Eren, Manish Bhattarai, Kim Rasmussen, Boian S. Alexandrov, Charles Nicholas
MalwareDNA: Simultaneous Classification of Malware, Malware Families, and Novel Malware
Accepted at IEEE ISI 2023
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malware is one of the most dangerous and costly cyber threats to national security and a crucial factor in modern cyberspace. However, the adoption of machine learning (ML) based solutions against malware threats has been relatively slow. Shortcomings in the existing ML approaches are likely contributing to this problem. The majority of current ML approaches ignore real-world challenges such as the detection of novel malware. In addition, proposed ML approaches are often designed either for malware/benign-ware classification or malware family classification. Here we introduce and showcase preliminary capabilities of a new method that can perform precise identification of novel malware families, while also unifying the capability for malware/benign-ware classification and malware family classification into a single framework.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 04:27:39 GMT" } ]
2023-09-06T00:00:00
[ [ "Eren", "Maksim E.", "" ], [ "Bhattarai", "Manish", "" ], [ "Rasmussen", "Kim", "" ], [ "Alexandrov", "Boian S.", "" ], [ "Nicholas", "Charles", "" ] ]
new_dataset
0.989313
2309.01366
Haokun Wen
Haokun Wen, Xian Zhang, Xuemeng Song, Yinwei Wei, Liqiang Nie
Target-Guided Composed Image Retrieval
null
ACM Multimedia 2023
10.1145/3581783.3611817
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Composed image retrieval (CIR) is a new and flexible image retrieval paradigm, which can retrieve the target image for a multimodal query, including a reference image and its corresponding modification text. Although existing efforts have achieved compelling success, they overlook two points: modeling the conflict relationship between the reference image and the modification text, which would improve multimodal query composition, and modeling the adaptive matching degree, which would promote the ranking of candidate images that present different levels of matching with the given query. To address these two limitations, in this work, we propose a Target-Guided Composed Image Retrieval network (TG-CIR). In particular, TG-CIR first extracts the unified global and local attribute features for the reference/target image and the modification text with the contrastive language-image pre-training model (CLIP) as the backbone, where an orthogonal regularization is introduced to promote the independence among the attribute features. Then TG-CIR designs a target-query relationship-guided multimodal query composition module, comprising a target-free student composition branch and a target-based teacher composition branch, where the target-query relationship is injected into the teacher branch for guiding the conflict relationship modeling of the student branch. Last, apart from the conventional batch-based classification loss, TG-CIR additionally introduces a batch-based target similarity-guided matching degree regularization to promote the metric learning process. Extensive experiments on three benchmark datasets demonstrate the superiority of our proposed method.
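One concrete piece of the above is the orthogonal regularization used to keep attribute features independent. A common form of such a penalty, sketched here in PyTorch, drives the Gram matrix of the normalized features toward the identity; the exact formulation in TG-CIR may differ.

```python
import torch
import torch.nn.functional as F

def orthogonal_regularization(attr_feats):
    """Penalize correlation among K attribute features of shape (K, D)."""
    a = F.normalize(attr_feats, dim=-1)
    gram = a @ a.t()                      # (K, K) cosine similarities
    eye = torch.eye(a.size(0), device=a.device)
    return ((gram - eye) ** 2).sum()      # zero iff features are mutually orthogonal

print(orthogonal_regularization(torch.randn(4, 64)))
```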
[ { "version": "v1", "created": "Mon, 4 Sep 2023 05:26:28 GMT" } ]
2023-09-06T00:00:00
[ [ "Wen", "Haokun", "" ], [ "Zhang", "Xian", "" ], [ "Song", "Xuemeng", "" ], [ "Wei", "Yinwei", "" ], [ "Nie", "Liqiang", "" ] ]
new_dataset
0.954233
2309.01370
Monika Jain
Monika Jain, Kuldeep Singh, Raghava Mutharaju
ReOnto: A Neuro-Symbolic Approach for Biomedical Relation Extraction
Accepted in ECML 2023
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Relation Extraction (RE) is the task of extracting semantic relationships between entities in a sentence and aligning them to relations defined in a vocabulary, which is generally in the form of a Knowledge Graph (KG) or an ontology. Various approaches have been proposed so far to address this task. However, applying these techniques to biomedical text often yields unsatisfactory results because it is hard to infer relations directly from sentences due to the nature of the biomedical relations. To address these issues, we present a novel technique called ReOnto, which makes use of neuro-symbolic knowledge for the RE task. ReOnto employs a graph neural network to acquire the sentence representation and leverages publicly accessible ontologies as prior knowledge to identify the sentential relation between two entities. The approach involves extracting the relation path between the two entities from the ontology. We evaluate the effect of using symbolic knowledge from ontologies with graph neural networks. Experimental results on two public biomedical datasets, BioRel and ADE, show that our method outperforms all the baselines (by approximately 3\%).
[ { "version": "v1", "created": "Mon, 4 Sep 2023 05:36:58 GMT" } ]
2023-09-06T00:00:00
[ [ "Jain", "Monika", "" ], [ "Singh", "Kuldeep", "" ], [ "Mutharaju", "Raghava", "" ] ]
new_dataset
0.983562
2309.01391
Burhaneddin Yaman
Tanvir Mahmud, Chun-Hao Liu, Burhaneddin Yaman, Diana Marculescu
SSVOD: Semi-Supervised Video Object Detection with Sparse Annotations
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant progress in semi-supervised learning for image object detection, several key issues are yet to be addressed for video object detection: (1) Achieving good performance for supervised video object detection greatly depends on the availability of annotated frames. (2) Despite having large inter-frame correlations in a video, collecting annotations for a large number of frames per video is expensive, time-consuming, and often redundant. (3) Existing semi-supervised techniques on static images can hardly exploit the temporal motion dynamics inherently present in videos. In this paper, we introduce SSVOD, an end-to-end semi-supervised video object detection framework that exploits motion dynamics of videos to utilize large-scale unlabeled frames with sparse annotations. To selectively assemble robust pseudo-labels across groups of frames, we introduce \textit{flow-warped predictions} from nearby frames for temporal-consistency estimation. In particular, we introduce cross-IoU and cross-divergence based selection methods over a set of estimated predictions to include robust pseudo-labels for bounding boxes and class labels, respectively. To strike a balance between confirmation bias and uncertainty noise in pseudo-labels, we propose confidence threshold based combination of hard and soft pseudo-labels. Our method achieves significant performance improvements over existing methods on ImageNet-VID, Epic-KITCHENS, and YouTube-VIS datasets. Code and pre-trained models will be released.
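The cross-IoU selection step above can be illustrated with a small PyTorch sketch: current-frame pseudo-boxes are kept only if they overlap sufficiently with flow-warped boxes from a nearby frame. The threshold and the max-over-matches rule are simplifying assumptions rather than the paper's exact criterion.

```python
import torch
from torchvision.ops import box_iou

def select_consistent_boxes(cur_boxes, warped_boxes, iou_thresh=0.7):
    """Keep current-frame pseudo-boxes that agree with flow-warped boxes."""
    if len(cur_boxes) == 0 or len(warped_boxes) == 0:
        return torch.zeros(len(cur_boxes), dtype=torch.bool)
    ious = box_iou(cur_boxes, warped_boxes)        # (N_cur, N_warped) pairwise IoU
    return ious.max(dim=1).values >= iou_thresh

cur = torch.tensor([[0., 0., 10., 10.], [20., 20., 30., 30.]])
warped = torch.tensor([[1., 1., 10., 10.]])
print(select_consistent_boxes(cur, warped))  # tensor([ True, False])
```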
[ { "version": "v1", "created": "Mon, 4 Sep 2023 06:41:33 GMT" } ]
2023-09-06T00:00:00
[ [ "Mahmud", "Tanvir", "" ], [ "Liu", "Chun-Hao", "" ], [ "Yaman", "Burhaneddin", "" ], [ "Marculescu", "Diana", "" ] ]
new_dataset
0.994703
2309.01399
Takeshi Yoshimura
Takeshi Yoshimura, Tatsuhiro Chiba, Sunyanan Choochotkaew, Seetharami Seelam, Hui-fang Wen, Jonas Pfefferle
Objcache: An Elastic Filesystem over External Persistent Storage for Container Clusters
13 pages
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Container virtualization enables emerging AI workloads such as model serving, highly parallelized training, machine learning pipelines, and so on, to be easily scaled on demand on the elastic cloud infrastructure. In particular, AI workloads require persistent storage to store data such as training inputs, models, and checkpoints. An external storage system like cloud object storage is a common choice because of its elasticity and scalability. To mitigate access latency to external storage, caching at a local filesystem is an essential technique. However, building local caches on scaling clusters must cope with explosive disk usage, redundant networking, and unexpected failures. We propose objcache, an elastic filesystem over external storage. Objcache introduces an internal transaction protocol over Raft logging to enable atomic updates of distributed persistent states with consistent hashing. The proposed transaction protocol can also manage inode dirtiness by maintaining the consistency between the local cache and external storage. Objcache supports scaling down to zero by automatically evicting dirty files to external storage. Our evaluation reports that objcache sped up model serving startup by 98.9% compared to direct copies via S3 interfaces. Scaling up with 1,024 dirty files completed in 2 to 14 seconds.
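Consistent hashing, which objcache uses to place distributed persistent state, can be sketched in a few lines of Python. This toy ring (the virtual-node count, MD5 hashing, and key format are arbitrary choices) only illustrates the placement idea, not objcache's implementation.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hashing ring mapping keys (e.g., inodes) to cache nodes."""

    def __init__(self, nodes, vnodes=64):
        # Each node appears vnodes times on the ring to smooth the distribution.
        self._ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        i = bisect.bisect(self._keys, self._h(key)) % len(self._ring)
        return self._ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("/models/checkpoint-42"))
```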
[ { "version": "v1", "created": "Mon, 4 Sep 2023 07:03:28 GMT" } ]
2023-09-06T00:00:00
[ [ "Yoshimura", "Takeshi", "" ], [ "Chiba", "Tatsuhiro", "" ], [ "Choochotkaew", "Sunyanan", "" ], [ "Seelam", "Seetharami", "" ], [ "Wen", "Hui-fang", "" ], [ "Pfefferle", "Jonas", "" ] ]
new_dataset
0.998408
2309.01413
Jan Fillies
Jan Fillies, Silvio Peikert, Adrian Paschke
Hateful Messages: A Conversational Data Set of Hate Speech produced by Adolescents on Discord
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
With the rise of social media, a rise of hateful content can be observed. Even though the understanding and definitions of hate speech vary, platforms, communities, and legislators all acknowledge the problem. Adolescents, meanwhile, are a new and active group of social media users, and the majority of them experience or witness online hate speech. Research in the field of automated hate speech classification has been on the rise and focuses on aspects such as bias, generalizability, and performance. To increase generalizability and performance, it is important to understand biases within the data. This research addresses the bias of youth language within hate speech classification and contributes by providing a modern and anonymized hate speech youth language data set consisting of 88,395 annotated chat messages. The data set consists of publicly available online messages from the chat platform Discord. Approximately 6.42% of the messages were classified by a self-developed annotation schema as hate speech. For 35,553 messages, the user profiles provided age annotations, setting the average author age to under 20 years old.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 07:48:52 GMT" } ]
2023-09-06T00:00:00
[ [ "Fillies", "Jan", "" ], [ "Peikert", "Silvio", "" ], [ "Paschke", "Adrian", "" ] ]
new_dataset
0.970441
2309.01455
Chung-Chi Chen
Jian-Tao Huang, Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen
NumHG: A Dataset for Number-Focused Headline Generation
NumEval@SemEval-2024 Dataset
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Headline generation, a key task in abstractive summarization, strives to condense a full-length article into a succinct, single line of text. Notably, while contemporary encoder-decoder models excel based on the ROUGE metric, they often falter when it comes to the precise generation of numerals in headlines. We identify the lack of datasets providing fine-grained annotations for accurate numeral generation as a major roadblock. To address this, we introduce a new dataset, the NumHG, and provide over 27,000 annotated numeral-rich news articles for detailed investigation. Further, we evaluate five well-performing models from previous headline generation tasks using human evaluation in terms of numerical accuracy, reasonableness, and readability. Our study reveals a need for improvement in numerical accuracy, demonstrating the potential of the NumHG dataset to drive progress in number-focused headline generation and stimulate further discussions in numeral-focused text generation.
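A simple proxy for the numerical accuracy evaluated above is whether the numerals of the reference headline reappear in the generated one. The regex and exact-match rule below are an illustrative assumption, not the NumHG evaluation protocol.

```python
import re

NUM_RE = re.compile(r"\d+(?:\.\d+)?")

def numeral_accuracy(generated, reference):
    """Fraction of reference numerals reproduced verbatim in the generation."""
    ref_nums = NUM_RE.findall(reference)
    if not ref_nums:
        return 1.0  # nothing numeric to get right
    gen_nums = NUM_RE.findall(generated)
    return sum(n in gen_nums for n in ref_nums) / len(ref_nums)

print(numeral_accuracy("Profits up 27%", "Profits rise 27%"))  # 1.0
```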
[ { "version": "v1", "created": "Mon, 4 Sep 2023 09:03:53 GMT" } ]
2023-09-06T00:00:00
[ [ "Huang", "Jian-Tao", "" ], [ "Chen", "Chung-Chi", "" ], [ "Huang", "Hen-Hsen", "" ], [ "Chen", "Hsin-Hsi", "" ] ]
new_dataset
0.999847
2309.01469
Anju Rani
Anju Rani and Daniel O. Arroyo and Petar Durdevic
Defect Detection in Synthetic Fibre Ropes using Detectron2 Framework
12 pages, 7 figures, 4 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Fibre ropes with the latest technology have emerged as an appealing alternative to steel ropes for offshore industries due to their light weight and high tensile strength. At the same time, frequent inspection of these ropes is essential to ensure the proper functioning and safety of the entire system. The development of deep learning (DL) models in condition monitoring (CM) applications offers a simpler and more effective approach for defect detection in synthetic fibre ropes (SFRs). The present paper investigates the performance of Detectron2, a state-of-the-art library for defect detection and instance segmentation. Detectron2 with Mask R-CNN architecture is used for segmenting defects in SFRs. Mask R-CNN with various backbone configurations has been trained and tested on an experimentally obtained dataset comprising 1,803 high-dimensional images containing seven damage classes (loop high, loop medium, loop low, compression, core out, abrasion, and normal) for SFRs. By leveraging the capabilities of Detectron2, this study aims to develop an automated and efficient method for detecting defects in SFRs, enhancing the inspection process, and ensuring the safety of the fibre ropes.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 09:26:04 GMT" } ]
2023-09-06T00:00:00
[ [ "Rani", "Anju", "" ], [ "Arroyo", "Daniel O.", "" ], [ "Durdevic", "Petar", "" ] ]
new_dataset
0.998675
2309.01519
Chao Peng
Chao Peng, Zhengwei Lv, Jiarong Fu, Jiayuan Liang, Zhao Zhang, Ajitha Rajan, Ping Yang
Hawkeye: Change-targeted Testing for Android Apps based on Deep Reinforcement Learning
null
null
null
null
cs.SE cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Android Apps are frequently updated to keep up with changing user, hardware, and business demands. Ensuring the correctness of App updates through extensive testing is crucial to avoid potential bugs reaching the end user. Existing Android testing tools generate GUI events focusing on improving the test coverage of the entire App rather than prioritising updates and their impacted elements. Recent research has proposed change-focused testing but relies on random exploration to exercise the updates and impacted GUI elements, which is ineffective and slow for large complex Apps with a huge input exploration space. We propose directed testing of App updates with Hawkeye, which is able to prioritise executing GUI actions associated with code changes based on deep reinforcement learning from historical exploration data. Our empirical evaluation compares Hawkeye with state-of-the-art model-based and reinforcement learning-based testing tools FastBot2 and ARES using 10 popular open-source Apps and 1 commercial App. We find that Hawkeye is able to generate GUI event sequences targeting changed functions more reliably than FastBot2 and ARES for the open source Apps and the large commercial App. Hawkeye achieves comparable performance on smaller open source Apps with a more tractable exploration space. The industrial deployment of Hawkeye in the development pipeline also shows that Hawkeye is ideal for performing smoke testing on merge requests of a complicated commercial App.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 10:57:27 GMT" } ]
2023-09-06T00:00:00
[ [ "Peng", "Chao", "" ], [ "Lv", "Zhengwei", "" ], [ "Fu", "Jiarong", "" ], [ "Liang", "Jiayuan", "" ], [ "Zhang", "Zhao", "" ], [ "Rajan", "Ajitha", "" ], [ "Yang", "Ping", "" ] ]
new_dataset
0.997664
2309.01525
Laura Piispanen
Laura Piispanen, Edward Morrell, Solip Park, Marcell Pfaffhauser, Annakaisa Kultima
The History of Quantum Games
8 pages, from which 1.5 pages of references, 11 figures, one table, presented in the IEEE Conference on Games 2023
null
null
null
cs.GL quant-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we explore the historical development of playable quantum-physics-related games (\textit{\textbf{quantum games}}). For the purpose of this examination, we have collected over 260 quantum games, spanning commercial games, applied and serious games, and games developed at quantum-themed game jams and educational courses. We provide an overview of the journey of quantum games across three dimensions: \textit{the perceivable dimension of quantum physics, the dimension of scientific purposes, and the dimension of quantum technologies}. We then further reflect on the definition of quantum games and its implications. While motivations behind developing quantum games have typically been educational or academic, themes related to quantum physics have begun to be more broadly utilised across a range of commercial games. In addition, as the availability of quantum computer hardware has grown, entirely new variants of quantum games have emerged to take advantage of these machines' inherent capabilities: \textit{quantum computer games}.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 11:10:58 GMT" } ]
2023-09-06T00:00:00
[ [ "Piispanen", "Laura", "" ], [ "Morrell", "Edward", "" ], [ "Park", "Solip", "" ], [ "Pfaffhauser", "Marcell", "" ], [ "Kultima", "Annakaisa", "" ] ]
new_dataset
0.999467
2309.01574
Henrik Riedel
Henrik Riedel, Robert Steven Lorenzen and Clemens H\"ubler
Raw Data Is All You Need: Virtual Axle Detector with Enhanced Receptive Field
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Rising maintenance costs of ageing infrastructure necessitate innovative monitoring techniques. This paper presents a new approach for axle detection, enabling real-time application of Bridge Weigh-In-Motion (BWIM) systems without dedicated axle detectors. The proposed method adapts the Virtual Axle Detector (VAD) model to handle raw acceleration data, which allows the receptive field to be increased. The proposed Virtual Axle Detector with Enhanced Receptive field (VADER) improves the \(F_1\) score by 73\% and spatial accuracy by 39\%, while cutting computational and memory costs by 99\% compared to the state-of-the-art VAD. VADER reaches a \(F_1\) score of 99.4\% and a spatial error of 4.13~cm when using a representative training set and functional sensors. We also introduce a novel receptive field (RF) rule for an object-size driven design of Convolutional Neural Network (CNN) architectures. Based on this rule, our results suggest that models using raw data could achieve better performance than those using spectrograms, offering a compelling reason to consider raw data as input.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 12:53:54 GMT" } ]
2023-09-06T00:00:00
[ [ "Riedel", "Henik", "" ], [ "Lorenzen", "Robert Steven", "" ], [ "Hübler", "Clemens", "" ] ]
new_dataset
0.997847
2309.01586
Matthew Edwards
Piyush Bajaj and Matthew Edwards
Automatic Scam-Baiting Using ChatGPT
Proceedings of the 7th International Workshop on Applications of AI, Cyber Security and Economics Data Analytics (ACE-2023) (in press)
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Automatic scam-baiting is an online fraud countermeasure that involves automated systems responding to online fraudsters in order to waste their time and deplete their resources, diverting attackers away from real potential victims. Previous work has demonstrated that text generation systems are capable of engaging with attackers as automatic scam-baiters, but the fluency and coherence of generated text may be a limit to the effectiveness of such systems. In this paper, we report on the results of a month-long experiment comparing the effectiveness of two ChatGPT-based automatic scam-baiters to a control measure. Within our results, with engagement from over 250 real email fraudsters, we find that ChatGPT-based scam-baiters show a marked increase in scammer response rate and conversation length relative to the control measure, outperforming previous approaches. We discuss the implications of these results and practical considerations for wider deployment of automatic scam-baiting.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 13:13:35 GMT" } ]
2023-09-06T00:00:00
[ [ "Bajaj", "Piyush", "" ], [ "Edwards", "Matthew", "" ] ]
new_dataset
0.984631
2309.01656
Vuong Nguyen
Vuong Nguyen, Anh Ho, Duc-Anh Vu, Nguyen Thi Ngoc Anh, Tran Ngoc Thang
Building Footprint Extraction in Dense Areas using Super Resolution and Frame Field Learning
Accepted at The 12th International Conference on Awareness Science and Technology
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite notable results on standard aerial datasets, current state-of-the-art methods fail to produce accurate building footprints in dense areas due to challenging properties posed by these areas and limited data availability. In this paper, we propose a framework to address such issues in polygonal building extraction. First, super-resolution is employed to enhance the spatial resolution of the aerial imagery, allowing for finer details to be captured. This enhanced imagery serves as input to a multitask learning module, which consists of a segmentation head and a frame field learning head to effectively handle the irregular building structures. Our model is supervised by adaptive loss weighting, enabling extraction of sharp edges and fine-grained polygons, which is difficult due to overlapping buildings and low data quality. Extensive experiments on a slum area in India that mimics a dense area demonstrate that our proposed approach outperforms the current state-of-the-art methods by a large margin.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 15:15:34 GMT" } ]
2023-09-06T00:00:00
[ [ "Nguyen", "Vuong", "" ], [ "Ho", "Anh", "" ], [ "Vu", "Duc-Anh", "" ], [ "Anh", "Nguyen Thi Ngoc", "" ], [ "Thang", "Tran Ngoc", "" ] ]
new_dataset
0.98252
2309.01667
Tian Qiu
Ya-nan Li (1), Tian Qiu (1) and Qiang Tang (1) ((1) The University of Sydney)
Pisces: Private and Compliable Cryptocurrency Exchange
27 pages, 8 figures, 2 tables. To be published in NDSS'24. This is the full version of the conference paper
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryptocurrency exchange platforms such as Coinbase and Binance enable users to purchase and sell cryptocurrencies conveniently, just like trading stocks/commodities. However, because of the nature of blockchain, when a user withdraws coins (i.e., transfers coins to an external on-chain account), all future transactions can be learned by the platform. This is in sharp contrast to conventional stock exchanges, where all external activities of users are always hidden from the platform. Since the platform knows highly sensitive user private information such as passport number, bank information, etc., linking all (on-chain) transactions raises a serious privacy concern about potential disastrous data breaches in those cryptocurrency exchange platforms. In this paper, we propose a cryptocurrency exchange that restores user anonymity for the first time. To our surprise, the seemingly well-studied privacy/anonymity problem has several new challenges in this setting. Since the public blockchain and internal transaction activities naturally provide many non-trivial leakages to the platform, internal privacy is not only useful in the usual sense but also becomes necessary for regaining the basic anonymity of user transactions. We also ensure that the user cannot double spend, and the user has to properly report accumulated profit for tax purposes, even in the private setting. We give a careful modeling and efficient construction of the system that achieves constant computation and communication overhead (with only simple cryptographic tools and rigorous security analysis); we also implement our system and evaluate its practical performance.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 15:33:46 GMT" } ]
2023-09-06T00:00:00
[ [ "Li", "Ya-nan", "" ], [ "Qiu", "Tian", "" ], [ "Tang", "Qiang", "" ] ]
new_dataset
0.999272
2309.01674
Hassan El Hajj
Hassan El-Hajj and Matteo Valleriani
Prompt me a Dataset: An investigation of text-image prompting for historical image dataset creation using foundation models
12 pages, 3 figures, Accepted in ICIAP2023, AI4DH workshop
null
null
null
cs.CV cs.AI cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a pipeline for image extraction from historical documents using foundation models, and evaluate text-image prompts and their effectiveness on humanities datasets of varying levels of complexity. The motivation for this approach stems from the high interest of historians in visual elements printed alongside historical texts on the one hand, and from the relative lack of well-annotated datasets within the humanities when compared to other domains. We propose a sequential approach that relies on GroundDINO and Meta's Segment-Anything-Model (SAM) to retrieve a significant portion of visual data from historical documents that can then be used for downstream development tasks and dataset creation, as well as evaluate the effect of different linguistic prompts on the resulting detections.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 15:37:03 GMT" } ]
2023-09-06T00:00:00
[ [ "El-Hajj", "Hassan", "" ], [ "Valleriani", "Matteo", "" ] ]
new_dataset
0.999115
2309.01775
Nicolas Zucchet
Nicolas Zucchet, Seijin Kobayashi, Yassir Akram, Johannes von Oswald, Maxime Larcher, Angelika Steger, Jo\~ao Sacramento
Gated recurrent neural networks discover attention
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent architectural developments have enabled recurrent neural networks (RNNs) to reach and even surpass the performance of Transformers on certain sequence modeling tasks. These modern RNNs feature a prominent design pattern: linear recurrent layers interconnected by feedforward paths with multiplicative gating. Here, we show how RNNs equipped with these two design elements can exactly implement (linear) self-attention, the main building block of Transformers. By reverse-engineering a set of trained RNNs, we find that gradient descent in practice discovers our construction. In particular, we examine RNNs trained to solve simple in-context learning tasks on which Transformers are known to excel and find that gradient descent instills in our RNNs the same attention-based in-context learning algorithm used by Transformers. Our findings highlight the importance of multiplicative interactions in neural networks and suggest that certain RNNs might be unexpectedly implementing attention under the hood.
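The equivalence claimed above can be made concrete: causal linear self-attention maintains a running sum of key-value outer products, which is exactly a linear recurrence, while the read-out is a multiplicative interaction with the query. A minimal PyTorch sketch follows (single head, no normalization; shapes chosen for brevity).

```python
import torch

def linear_self_attention(q, k, v):
    """Causal linear attention as a recurrence a gated linear RNN can realize:
    S_t = S_{t-1} + v_t k_t^T (linear state update), y_t = S_t q_t (multiplicative read-out).
    """
    T = q.shape[0]
    S = torch.zeros(v.shape[1], k.shape[1])  # running sum of outer products
    ys = []
    for t in range(T):
        S = S + torch.outer(v[t], k[t])
        ys.append(S @ q[t])
    return torch.stack(ys)

q = k = v = torch.randn(5, 3)
print(linear_self_attention(q, k, v).shape)  # torch.Size([5, 3])
```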
[ { "version": "v1", "created": "Mon, 4 Sep 2023 19:28:54 GMT" } ]
2023-09-06T00:00:00
[ [ "Zucchet", "Nicolas", "" ], [ "Kobayashi", "Seijin", "" ], [ "Akram", "Yassir", "" ], [ "von Oswald", "Johannes", "" ], [ "Larcher", "Maxime", "" ], [ "Steger", "Angelika", "" ], [ "Sacramento", "João", "" ] ]
new_dataset
0.995248
2309.01798
Padmapani Seneviratne
Padmapani Seneviratne, Hannah Cuff, Alexandra Koletsos, Kerry Seekamp, Adrian Thnanopavarn
New Qubit Codes from Multidimensional Circulant Graphs
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Two new qubit stabilizer codes with parameters $[77, 0, 19]_2$ and $[90, 0, 22]_2$ are constructed for the first time by employing additive symplectic self-dual $\F_4$ codes from multidimensional circulant (MDC) graphs. We completely classify MDC graph codes for lengths $4\le n \le 40$ and show that many optimal $\dsb{\ell, 0, d}$ qubit codes can be obtained from the MDC construction. Moreover, we prove that adjacency matrices of MDC graphs have nested block circulant structure and determine isomorphism properties of MDC graphs.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 20:24:17 GMT" } ]
2023-09-06T00:00:00
[ [ "Seneviratne", "Padmapani", "" ], [ "Cuff", "Hannah", "" ], [ "Koletsos", "Alexandra", "" ], [ "Seekamp", "Kerry", "" ], [ "Thnanopavarn", "Adrian", "" ] ]
new_dataset
0.999357
2309.01808
Yu-Neng Chuang
Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Kwei-Herng Lai, Daochen Zha, Ruixiang Tang, Fan Yang, Alfredo Costilla Reyes, Kaixiong Zhou, Xiaoqian Jiang, Xia Hu
DiscoverPath: A Knowledge Refinement and Retrieval System for Interdisciplinarity on Biomedical Research
null
null
null
null
cs.IR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The exponential growth in scholarly publications necessitates advanced tools for efficient article retrieval, especially in interdisciplinary fields where diverse terminologies are used to describe similar research. Traditional keyword-based search engines often fall short in assisting users who may not be familiar with specific terminologies. To address this, we present a knowledge graph-based paper search engine for biomedical research to enhance the user experience in discovering relevant queries and articles. The system, dubbed DiscoverPath, employs Named Entity Recognition (NER) and part-of-speech (POS) tagging to extract terminologies and relationships from article abstracts to create a KG. To reduce information overload, DiscoverPath presents users with a focused subgraph containing the queried entity and its neighboring nodes and incorporates a query recommendation system, enabling users to iteratively refine their queries. The system is equipped with an accessible Graphical User Interface that provides an intuitive visualization of the KG, query recommendations, and detailed article information, enabling efficient article retrieval, thus fostering interdisciplinary knowledge exploration. DiscoverPath is open-sourced at https://github.com/ynchuang/DiscoverPath.
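As a rough illustration of the NER-driven KG construction described here, the sketch below links entities that co-occur in a sentence using spaCy and networkx. The general-purpose model name and the co-occurrence heuristic are stand-ins; DiscoverPath's pipeline (biomedical NER plus POS-based relation extraction) is richer.

```python
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # a biomedical NER model would be used in practice

def abstract_to_kg(text, kg=None):
    """Toy KG construction: connect entities that co-occur in a sentence."""
    if kg is None:
        kg = nx.Graph()
    for sent in nlp(text).sents:
        ents = [e.text.lower() for e in sent.ents]
        for i, a in enumerate(ents):
            for b in ents[i + 1:]:
                kg.add_edge(a, b, sentence=sent.text)
    return kg

kg = abstract_to_kg("Aspirin was studied by Bayer in Germany.")
print(list(kg.edges))  # entities found depend on the loaded model
```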
[ { "version": "v1", "created": "Mon, 4 Sep 2023 20:52:33 GMT" } ]
2023-09-06T00:00:00
[ [ "Chuang", "Yu-Neng", "" ], [ "Wang", "Guanchu", "" ], [ "Chang", "Chia-Yuan", "" ], [ "Lai", "Kwei-Herng", "" ], [ "Zha", "Daochen", "" ], [ "Tang", "Ruixiang", "" ], [ "Yang", "Fan", "" ], [ "Reyes", "Alfredo Costilla", "" ], [ "Zhou", "Kaixiong", "" ], [ "Jiang", "Xiaoqian", "" ], [ "Hu", "Xia", "" ] ]
new_dataset
0.996405
2309.01859
Alexander Visheratin
Alexander Visheratin
NLLB-CLIP -- train performant multilingual image retrieval model on a budget
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Today, the exponential rise of large models developed by academic and industrial institutions with the help of massive computing resources raises the question of whether someone without access to such resources can make a valuable scientific contribution. To explore this, we tried to solve the challenging task of multilingual image retrieval having a limited budget of $1,000. As a result, we present NLLB-CLIP - CLIP model with a text encoder from the NLLB model. To train the model, we used an automatically created dataset of 106,246 good-quality images with captions in 201 languages derived from the LAION COCO dataset. We trained multiple models using image and text encoders of various sizes and kept different parts of the model frozen during the training. We thoroughly analyzed the trained models using existing evaluation datasets and newly created XTD200 and Flickr30k-200 datasets. We show that NLLB-CLIP is comparable in quality to state-of-the-art models and significantly outperforms them on low-resource languages.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 23:26:11 GMT" } ]
2023-09-06T00:00:00
[ [ "Visheratin", "Alexander", "" ] ]
new_dataset
0.998407
2309.01861
Aashish Gottipati
Aashish Gottipati and Jacobus Van der Merwe
FlexRDZ: Autonomous Mobility Management for Radio Dynamic Zones
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
FlexRDZ is an online, autonomous manager for radio dynamic zones (RDZ) that seeks to enable the safe operation of RDZs through real-time control of deployed test transmitters. FlexRDZ leverages Hierarchical Task Networks and digital twin modeling to plan and resolve RDZ violations in near real-time. We prototype FlexRDZ with GTPyhop and the Terrain Integrated Rough Earth Model (TIREM). We deploy and evaluate FlexRDZ within a simulated version of the Salt Lake City POWDER testbed, a potential urban RDZ environment. Our simulations show that FlexRDZ enables up to a 20 dBm reduction in mobile interference and a significant reduction in the total power of leaked transmissions while preserving the overall communication capabilities and uptime of test transmitters. To our knowledge, FlexRDZ is the first autonomous system for RDZ management.
[ { "version": "v1", "created": "Mon, 4 Sep 2023 23:35:54 GMT" } ]
2023-09-06T00:00:00
[ [ "Gottipati", "Aashish", "" ], [ "Van der Merwe", "Jacobus", "" ] ]
new_dataset
0.995812
2309.01898
Manan Tayal
Manan Tayal, Shishir Kolathaya
Safe Legged Locomotion using Collision Cone Control Barrier Functions (C3BFs)
5 Pages, 5 Figures. arXiv admin note: text overlap with arXiv:2303.15871
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legged robots exhibit significant potential across diverse applications, including but not limited to hazardous environment search and rescue missions and the exploration of unexplored regions both on Earth and in outer space. However, the successful navigation of these robots in dynamic environments heavily hinges on the implementation of efficient collision avoidance techniques. In this research paper, we employ Collision Cone Control Barrier Functions (C3BF) to ensure the secure movement of legged robots within environments featuring a wide array of static and dynamic obstacles. We introduce the Quadratic Program (QP) formulation of C3BF, referred to as C3BF-QP, which serves as a protective filter layer atop a reference controller to ensure the robots' safety during operation. The effectiveness of this approach is illustrated through simulations conducted on PyBullet.
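The C3BF-QP safety filter has the generic control-barrier-function QP form: minimally perturb a reference control subject to an affine-in-control barrier constraint. Below is a small cvxpy sketch of that generic filter; the collision-cone barrier function h and its Lie derivatives are taken as given inputs, and the numbers in the example are arbitrary.

```python
import cvxpy as cp
import numpy as np

def cbf_qp_filter(u_ref, h, Lfh, Lgh, alpha=1.0):
    """Solve min ||u - u_ref||^2  s.t.  Lfh + Lgh @ u + alpha * h >= 0."""
    u = cp.Variable(len(u_ref))
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)),
                         [Lfh + Lgh @ u + alpha * h >= 0])
    problem.solve()
    return u.value

# Toy numbers: the reference control already satisfies the constraint,
# so the filter passes it through (almost) unchanged.
print(cbf_qp_filter(np.array([1.0, 0.0]), h=-0.2, Lfh=0.0,
                    Lgh=np.array([1.0, 0.5])))
```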
[ { "version": "v1", "created": "Tue, 5 Sep 2023 02:15:14 GMT" } ]
2023-09-06T00:00:00
[ [ "Tayal", "Manan", "" ], [ "Kolathaya", "Shishir", "" ] ]
new_dataset
0.998618
2309.01907
Hongruixuan Chen
Jian Song and Hongruixuan Chen and Naoto Yokoya
SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and Building Change Detection
Accepted by WACV 2024
null
null
null
cs.CV cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Synthetic datasets, recognized for their cost effectiveness, play a pivotal role in advancing computer vision tasks and techniques. However, when it comes to remote sensing image processing, the creation of synthetic datasets becomes challenging due to the demand for larger-scale and more diverse 3D models. This complexity is compounded by the difficulties associated with real remote sensing datasets, including limited data acquisition and high annotation costs, which amplifies the need for high-quality synthetic alternatives. To address this, we present SyntheWorld, a synthetic dataset unparalleled in quality, diversity, and scale. It includes 40,000 images with submeter-level pixels and fine-grained land cover annotations of eight categories, and it also provides 40,000 pairs of bitemporal image pairs with building change annotations for building change detection task. We conduct experiments on multiple benchmark remote sensing datasets to verify the effectiveness of SyntheWorld and to investigate the conditions under which our synthetic data yield advantages. We will release SyntheWorld to facilitate remote sensing image processing research.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 02:42:41 GMT" } ]
2023-09-06T00:00:00
[ [ "Song", "Jian", "" ], [ "Chen", "Hongruixuan", "" ], [ "Yokoya", "Naoto", "" ] ]
new_dataset
0.999751
2309.01925
Haozhe Wang
Lei Zhou, Zhiyang Liu, Runze Gan, Haozhe Wang, Marcelo H. Ang Jr
DR-Pose: A Two-stage Deformation-and-Registration Pipeline for Category-level 6D Object Pose Estimation
Camera-ready version accepted to IROS 2023
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Category-level object pose estimation involves estimating the 6D pose and the 3D metric size of objects from predetermined categories. While recent approaches take categorical shape prior information as reference to improve pose estimation accuracy, the single-stage network design and training manner lead to sub-optimal performance since there are two distinct tasks in the pipeline. In this paper, the advantage of a two-stage pipeline over a single-stage design is discussed. To this end, we propose a two-stage deformation-and-registration pipeline called DR-Pose, which consists of a completion-aided deformation stage and a scaled registration stage. The first stage uses a point cloud completion method to generate unseen parts of the target object, guiding subsequent deformation on the shape prior. In the second stage, a novel registration network is designed to extract pose-sensitive features and predict the representation of the object's partial point cloud in canonical space based on the deformation results from the first stage. DR-Pose produces superior results to the state-of-the-art shape prior-based methods on both CAMERA25 and REAL275 benchmarks. Codes are available at https://github.com/Zray26/DR-Pose.git.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 03:24:09 GMT" } ]
2023-09-06T00:00:00
[ [ "Zhou", "Lei", "" ], [ "Liu", "Zhiyang", "" ], [ "Gan", "Runze", "" ], [ "Wang", "Haozhe", "" ], [ "Ang", "Marcelo H.", "Jr" ] ]
new_dataset
0.997327
2309.01950
Dongyeun Lee
Dongyeun Lee, Chaewon Kim, Sangjoon Yu, Jaejun Yoo, Gyeong-Moon Park
RADIO: Reference-Agnostic Dubbing Video Synthesis
Under review
null
null
null
cs.CV cs.AI cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
One of the most challenging problems in audio-driven talking head generation is achieving high-fidelity detail while ensuring precise synchronization. Given only a single reference image, extracting meaningful identity attributes becomes even more challenging, often causing the network to mirror the facial and lip structures too closely. To address these issues, we introduce RADIO, a framework engineered to yield high-quality dubbed videos regardless of the pose or expression in reference images. The key is to modulate the decoder layers using latent space composed of audio and reference features. Additionally, we incorporate ViT blocks into the decoder to emphasize high-fidelity details, especially in the lip region. Our experimental results demonstrate that RADIO displays high synchronization without the loss of fidelity. Especially in harsh scenarios where the reference frame deviates significantly from the ground truth, our method outperforms state-of-the-art methods, highlighting its robustness. Pre-trained model and codes will be made public after the review.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 04:56:18 GMT" } ]
2023-09-06T00:00:00
[ [ "Lee", "Dongyeun", "" ], [ "Kim", "Chaewon", "" ], [ "Yu", "Sangjoon", "" ], [ "Yoo", "Jaejun", "" ], [ "Park", "Gyeong-Moon", "" ] ]
new_dataset
0.964053
2309.01954
Dibakar Datta
Dibakar Datta
Electro-Chemo-Mechanical Modeling of Multiscale Active Materials for Next-Generation Energy Storage: Opportunities and Challenges
33 pages, 17 figures
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
The recent geopolitical crisis resulted in a gas price surge. Although lithium-ion batteries (LIBs) represent the best available rechargeable battery technology, a significant energy and power density gap exists between LIBs and petrol/gasoline. The battery electrodes comprise a mixture of active material particles, conductive carbon, and binder additives deposited onto a current collector. Although this basic design has persisted for decades, the desired size scale of the active material particles is debated. Traditionally, microparticles have been used in batteries. Advances in nanotechnology have spurred interest in deploying nanoparticles as active materials. However, despite many efforts at the nanoscale, industries still primarily use 'old' microparticles. Most importantly, the battery industry is unlikely to replace microstructures with nanometer-sized analogs. This poses an important question: Is there a place for nanostructures in battery design, given the irreplaceable microstructure? The way forward lies in multiscale active materials, microscale structures with built-in nanoscale features, such as microparticles assembled from nanoscale building blocks or patterned with engineered or natural nanopores. Although experimental strides have been made in developing such materials, computational progress in this domain remains limited and, in some cases, negligible. However, the field holds immense computational potential, presenting a multitude of opportunities. This perspective highlights the existing gaps in modeling multiscale active materials and delineates various open challenges in the realm of electro-chemo-mechanical modeling. By doing so, it aims to inspire computational research within this field and promote synergistic collaborative efforts between computational and experimental researchers.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 05:06:17 GMT" } ]
2023-09-06T00:00:00
[ [ "Datta", "Dibakar", "" ] ]
new_dataset
0.986546
2309.01983
Md Ajaharul Hossain
Md Ajaharul Hossain, Ramakrishna Bandi
Quaternary Conjucyclic Codes with an Application to EAQEC Codes
null
null
null
null
cs.IT math.IT math.RA
http://creativecommons.org/licenses/by/4.0/
Conjucyclic codes are part of a family of codes that includes cyclic, constacyclic, and quasi-cyclic codes, among others. Despite their importance in quantum error correction, they have not received much attention in the literature. This paper focuses on additive conjucyclic (ACC) codes over $\mathbb{F}_4$ and investigates their properties. Specifically, we derive the duals of ACC codes using a trace inner product and obtain the trace hull and its dimension. We also establish a necessary and sufficient condition for an additive code to have a complementary dual (ACD). Additionally, we identify a necessary condition for an additive conjucyclic complementary pair of codes over $\mathbb{F}_4$. Furthermore, we show that the trace code of an ACC code is cyclic and provide a condition for the trace code of an ACC code to be LCD. To demonstrate the practical application of our findings, we construct some good entanglement-assisted quantum error-correcting (EAQEC) codes using the trace code of ACC codes.
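For readers unfamiliar with the trace inner product over $\mathbb{F}_4$ used here, a tiny self-contained Python sketch follows. It encodes $\mathbb{F}_4$ elements as pairs $(a,b)$ meaning $a\omega+b$ with $\omega^2=\omega+1$, uses $\mathrm{Tr}(x)=x+x^2$ and conjugation $\bar{x}=x^2$, and computes $\langle u,v\rangle=\sum_i \mathrm{Tr}(u_i\bar{v_i})$ over $\mathbb{F}_2$. These are the standard definitions; the particular encoding is an arbitrary choice.

```python
# GF(4) element encoded as (a, b) meaning a*w + b, where w^2 = w + 1.
def gf4_add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])

def gf4_mul(x, y):
    a, b = x
    c, d = y
    return ((a*c + a*d + b*c) % 2, (a*c + b*d) % 2)

def conj(x):
    return gf4_mul(x, x)           # conjugation is the Frobenius map x -> x^2

def trace(x):
    t = gf4_add(x, gf4_mul(x, x))  # Tr(x) = x + x^2 always lies in GF(2)
    return t[1]

def trace_inner(u, v):
    """Trace inner product <u, v> = sum_i Tr(u_i * conj(v_i)), valued in GF(2)."""
    out = 0
    for ui, vi in zip(u, v):
        out ^= trace(gf4_mul(ui, conj(vi)))
    return out

w, one = (1, 0), (0, 1)
print(trace_inner([w, one], [one, w]))  # Tr(w) + Tr(w^2) = 1 + 1 = 0
```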
[ { "version": "v1", "created": "Tue, 5 Sep 2023 06:32:43 GMT" } ]
2023-09-06T00:00:00
[ [ "Hossain", "Md Ajaharul", "" ], [ "Bandi", "Ramakrishna", "" ] ]
new_dataset
0.999851
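For readers unfamiliar with the pairing used above, a common trace inner product on $\mathbb{F}_4^n$ is sketched below; this is the standard construction, and the paper may use an equivalent variant. Here $\mathrm{Tr}:\mathbb{F}_4\to\mathbb{F}_2$ is the field trace and conjugation is the Frobenius map:

\[
\mathrm{Tr}(x) = x + x^2, \qquad
\langle u, v \rangle_{\mathrm{Tr}} = \sum_{i=1}^{n} \mathrm{Tr}\!\left( u_i \, \overline{v_i} \right), \qquad
\overline{v_i} = v_i^{\,2}, \quad u, v \in \mathbb{F}_4^n .
\]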
2309.01985
Md Ajaharul Hossain
Md Ajaharul Hossain, Ramakrishna Bandi
The $\ell$-intersection Pairs of Constacyclic and Conjucyclic Codes
null
null
null
null
cs.IT math.IT math.RA
http://creativecommons.org/licenses/by/4.0/
A pair of linear codes whose intersection is of dimension $\ell$, where $\ell$ is a non-negative integer, is called an $\ell$-intersection pair of codes. This paper focuses on studying $\ell$-intersection pairs of $\lambda_i$-constacyclic, $i=1,2,$ and conjucyclic codes. We first characterize an $\ell$-intersection pair of $\lambda_i$-constacyclic codes. A formula for $\ell$ has been established in terms of the degrees of the generator polynomials of $\lambda_i$-constacyclic codes. This allows obtaining a condition for $\ell$-linear complementary pairs (LCP) of constacyclic codes. Later, we introduce and characterize the $\ell$-intersection pair of conjucyclic codes over $\mathbb{F}_{q^2}$. The first observation in the process is that there are no non-trivial linear conjucyclic codes over finite fields, so we focus on the characterization of additive conjucyclic (ACC) codes. We show that the largest $\mathbb{F}_q$-subcode of an ACC code over $\mathbb{F}_{q^2}$ is cyclic and obtain its generating polynomial. This enables us to find the size of an ACC code. Furthermore, we discuss the trace code of an ACC code and show that it is cyclic. Finally, we determine $\ell$-intersection pairs of trace codes of ACC codes over $\mathbb{F}_4$.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 06:40:23 GMT" } ]
2023-09-06T00:00:00
[ [ "Hossain", "Md Ajaharul", "" ], [ "Bandi", "Ramakrishna", "" ] ]
new_dataset
0.999246
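The abstract cites a formula for $\ell$ in terms of generator polynomial degrees without stating it. In the standard special case of two $\lambda$-constacyclic codes with the same $\lambda$ (an assumption; the paper treats possibly distinct $\lambda_1, \lambda_2$), the intersection is again constacyclic and the dimension follows from the lcm of the generators:

\[
C_i = \langle g_i(x) \rangle \subseteq \mathbb{F}_q[x]/(x^n - \lambda), \qquad
C_1 \cap C_2 = \langle \operatorname{lcm}(g_1, g_2) \rangle, \qquad
\ell = n - \deg \operatorname{lcm}(g_1, g_2).
\]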
2309.02019
Nicolas Anquetil
Younoussa Sow, Larisa Safina, L\'eandre Brault, Papa Ibou Diouf, St\'ephane Ducasse, Nicolas Anquetil
Parsing Fortran-77 with proprietary extensions
Accepted at ICSME'23 Industrial track
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Far from the latest innovations in software development, many organizations still rely on old code written in "obsolete" programming languages. Because this source code is old and proven, it often contributes significantly to the continuing success of these organizations. Yet to keep the applications relevant and running in an evolving environment, they sometimes need to be updated or migrated to new languages or new platforms. One difficulty of working with these "veteran languages" is being able to parse the source code to build a representation of it. Parsing can also allow modern software development tools and IDEs to offer better support to these veteran languages. We initiated a project between our group and the Framatome company to help migrate old Fortran-77 with proprietary extensions (called Esope) into more modern Fortran. In this paper, we explain how we parsed the Esope language with a combination of an island grammar and a regular parser to build an abstract syntax tree of the code.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 07:54:02 GMT" } ]
2023-09-06T00:00:00
[ [ "Sow", "Younoussa", "" ], [ "Safina", "Larisa", "" ], [ "Brault", "Léandre", "" ], [ "Diouf", "Papa Ibou", "" ], [ "Ducasse", "Stéphane", "" ], [ "Anquetil", "Nicolas", "" ] ]
new_dataset
0.99154
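As background on the island-grammar idea mentioned above, here is a minimal sketch in Python: constructs of interest ("islands") are matched precisely while the remaining lines ("water") are skipped. The SUBROUTINE/SEGMENT patterns and the Esope syntax shown are illustrative assumptions, not Framatome's actual grammar.

import re

# Minimal island-grammar sketch. "Islands" are the constructs we care
# about -- here, subroutine headers and a hypothetical Esope SEGMENT
# extension -- while everything else is treated as unparsed "water".
ISLANDS = [
    ("subroutine", re.compile(r"^\s*SUBROUTINE\s+(\w+)", re.IGNORECASE)),
    ("segment",    re.compile(r"^\s*SEGMENT\s*,?\s*(\w+)", re.IGNORECASE)),
]

def parse_islands(source: str):
    """Return (kind, name, line_no) for each recognised island."""
    nodes = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for kind, pattern in ISLANDS:
            match = pattern.match(line)
            if match:
                nodes.append((kind, match.group(1), line_no))
                break  # water lines simply fall through unmatched
    return nodes

code = """      SUBROUTINE FOO(X)
      SEGMENT, POINT
      END"""
print(parse_islands(code))  # [('subroutine', 'FOO', 1), ('segment', 'POINT', 2)]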
2309.02026
Christian Lienen
Christian Lienen, Mathis Brede, Daniel Karger, Kevin Koch, Dalisha Logan, Janet Mazur, Alexander Philipp Nowosad, Alexander Schnelle, Mohness Waizy and Marco Platzner
AutonomROS: A ReconROS-based Autonomous Driving Unit
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomous driving has become an important research area in recent years, and the corresponding systems create an enormous demand for computation. Heterogeneous computing platforms such as systems-on-chip that combine CPUs with reprogrammable hardware offer both computational performance and flexibility and are thus interesting targets for autonomous driving architectures. The de facto software architecture standard in robotics, including autonomous driving systems, is ROS 2. ReconROS is a framework for creating robotics applications that extends ROS 2 with the possibility of mapping compute-intensive functions to hardware. This paper presents AutonomROS, an autonomous driving unit based on the ReconROS framework. AutonomROS serves as a blueprint for a larger robotics application developed with ReconROS and demonstrates its suitability and extendability. The application integrates the ROS 2 package Navigation 2 with custom-developed software and hardware-accelerated functions for point cloud generation, obstacle detection, and lane detection. In addition, we detail a new communication middleware for shared memory communication between software and hardware functions. We evaluate AutonomROS and show the advantage of hardware acceleration and the new communication middleware for improving turnaround times, achievable frame rates, and, most importantly, reducing CPU load.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 08:12:58 GMT" } ]
2023-09-06T00:00:00
[ [ "Lienen", "Christian", "" ], [ "Brede", "Mathis", "" ], [ "Karger", "Daniel", "" ], [ "Koch", "Kevin", "" ], [ "Logan", "Dalisha", "" ], [ "Mazur", "Janet", "" ], [ "Nowosad", "Alexander Philipp", "" ], [ "Schnelle", "Alexander", "" ], [ "Waizy", "Mohness", "" ], [ "Platzner", "Marco", "" ] ]
new_dataset
0.999445
2309.02067
Anand Sharma
Anand Sharma (MIET, Meerut), A. G. Ramakrishnan (IISc, Bengaluru)
Histograms of Points, Orientations, and Dynamics of Orientations Features for Hindi Online Handwritten Character Recognition
21 pages, 12 jpg figures
null
null
null
cs.CV eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
A set of features independent of character stroke direction and order variations is proposed for online handwritten character recognition. A method is developed that maps features like co-ordinates of points, orientations of strokes at points, and dynamics of orientations of strokes at points spatially as a function of co-ordinate values of the points, and computes histograms of these features from different regions in the spatial map. Different features like spatio-temporal, discrete Fourier transform, discrete cosine transform, discrete wavelet transform, spatial, and histograms of oriented gradients used in other studies for training classifiers for character recognition are considered for comparison. The classifier chosen for comparing classification performance across feature sets is the support vector machine (SVM). The character datasets used for training and testing the classifiers consist of online handwritten samples of 96 different Hindi characters. There are 12832 and 2821 samples in the training and testing datasets, respectively. The SVM classifier trained with the proposed features has the highest classification accuracy of 92.9\% when compared to the performances of SVM classifiers trained with the other features and tested on the same testing dataset. Therefore, the proposed features have better character discriminative capability than the other features considered for comparison.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 09:11:18 GMT" } ]
2023-09-06T00:00:00
[ [ "Sharma", "Anand", "", "MIET, Meerut" ], [ "Ramakrishnan", "A. G.", "", "IISc, Bengaluru" ] ]
new_dataset
0.996296
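The mapping from strokes to histogram features can be made concrete with a small sketch (the grid size, bin count, and normalisation below are illustrative assumptions; the paper's exact feature construction differs in detail):

import numpy as np

# Sketch of the orientation-histogram idea: stroke orientation at each
# point is the angle of the local displacement, and per-orientation counts
# are pooled over a spatial grid to form an SVM feature vector.
def orientation_histograms(points, grid=(2, 2), n_bins=8):
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(0)) / (np.ptp(pts, axis=0) + 1e-9)  # normalise to [0,1]^2
    d = np.diff(pts, axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])                    # orientation per segment
    bins_theta = np.floor((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    cell = np.minimum((pts[:-1] * grid).astype(int), np.array(grid) - 1)
    hist = np.zeros((grid[0], grid[1], n_bins))
    for (cx, cy), b in zip(cell, bins_theta):
        hist[cx, cy, b] += 1
    return hist.ravel()  # feature vector fed to the classifier

stroke = [(0, 0), (1, 0.2), (2, 1.0), (2.2, 2.0)]
print(orientation_histograms(stroke).shape)  # (32,)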
2309.02102
Stephan Alaniz
Stephan Alaniz, Massimiliano Mancini, Zeynep Akata
Iterative Superquadric Recomposition of 3D Objects from Multiple Views
Accepted at ICCV 2023
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are good at recomposing novel objects, i.e. they can identify commonalities between unknown objects from general structure to finer detail, an ability difficult to replicate by machines. We propose a framework, ISCO, to recompose an object using 3D superquadrics as semantic parts directly from 2D views without training a model that uses 3D supervision. To achieve this, we optimize the superquadric parameters that compose a specific instance of the object, comparing its rendered 3D view and 2D image silhouette. Our ISCO framework iteratively adds new superquadrics wherever the reconstruction error is high, abstracting first coarse regions and then finer details of the target object. With this simple coarse-to-fine inductive bias, ISCO provides consistent superquadrics for related object parts, despite not having any semantic supervision. Since ISCO does not train any neural network, it is also inherently robust to out-of-distribution objects. Experiments show that, compared to recent single instance superquadrics reconstruction approaches, ISCO provides consistently more accurate 3D reconstructions, even from images in the wild. Code available at https://github.com/ExplainableML/ISCO .
[ { "version": "v1", "created": "Tue, 5 Sep 2023 10:21:37 GMT" } ]
2023-09-06T00:00:00
[ [ "Alaniz", "Stephan", "" ], [ "Mancini", "Massimiliano", "" ], [ "Akata", "Zeynep", "" ] ]
new_dataset
0.996061
2309.02120
Lorenzo Mur-Labadia
Lorenzo Mur-Labadia, Jose J. Guerrero and Ruben Martinez-Cantin
Multi-label affordance mapping from egocentric vision
International Conference on Computer Vision (ICCV) 2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Accurate affordance detection and segmentation with pixel precision is an important piece in many complex systems based on interactions, such as robots and assistive devices. We present a new approach to affordance perception which enables accurate multi-label segmentation. Our approach can be used to automatically extract grounded affordances from first-person videos of interactions using a 3D map of the environment, providing pixel-level precision for the affordance location. We use this method to build the largest and most complete dataset on affordances based on the EPIC-Kitchen dataset, EPIC-Aff, which provides interaction-grounded, multi-label, metric and spatial affordance annotations. Then, we propose a new approach to affordance segmentation based on multi-label detection which enables multiple affordances to co-exist in the same space, for example if they are associated with the same object. We present several strategies of multi-label detection using several segmentation architectures. The experimental results highlight the importance of the multi-label detection. Finally, we show how our metric representation can be exploited to build a map of interaction hotspots in spatial action-centric zones and use that representation to perform task-oriented navigation.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 10:56:23 GMT" } ]
2023-09-06T00:00:00
[ [ "Mur-Labadia", "Lorenzo", "" ], [ "Guerrero", "Jose J.", "" ], [ "Martinez-Cantin", "Ruben", "" ] ]
new_dataset
0.998711
2309.02171
Shaoyi Liu
Shaoyi Liu, Nan Ma, Yaning Chen, Ke Peng and Dongsheng Xue
A Wideband MIMO Channel Model for Aerial Intelligent Reflecting Surface-Assisted Wireless Communications
6 pages, 7 figures
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compared to traditional intelligent reflecting surfaces (IRS), aerial IRS (AIRS) has unique advantages, such as more flexible deployment and wider service coverage. However, the mobility of AIRS presents new challenges for channel modeling. In this paper, a three-dimensional (3D) wideband channel model for an AIRS and IRS joint-assisted multiple-input multiple-output (MIMO) communication system is proposed, which accounts for the rotational degrees of freedom in three directions and the motion angles of the AIRS in space. Based on the proposed model, the channel impulse response (CIR), correlation function, and channel capacity are derived, and several feasible joint phase shift schemes for AIRS and IRS units are proposed. Simulation results show that the proposed model can capture the channel characteristics accurately, and the proposed phase shift methods can effectively improve the channel statistical characteristics and increase the system capacity. Additionally, we observe that in certain scenarios, the paths involving the IRS and the line-of-sight (LoS) paths exhibit similar characteristics. These findings provide valuable insights for the future development of intelligent communication systems.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 12:21:32 GMT" } ]
2023-09-06T00:00:00
[ [ "Liu", "Shaoyi", "" ], [ "Ma", "Nan", "" ], [ "Chen", "Yaning", "" ], [ "Peng", "Ke", "" ], [ "Xue", "Dongsheng", "" ] ]
new_dataset
0.998194
2309.02175
Tamas David-Barrett
Tamas David-Barrett
Collaboration Conundrum: Synchrony-Cooperation Trade-off
21 pages, 7 figures
null
null
null
cs.SI physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
In large groups, every collaborative act requires balancing two pressures: the need to achieve behavioural synchrony and the need to keep free riding to a minimum. This paper introduces a model of collaboration that requires both synchronisation on a social network and costly cooperation. The results show that coordination slows, and cooperativeness increases, with the social network's local integratedness, measured by the clustering coefficient. That is, in a large-group collaboration, achieving behavioural synchrony and strategic cooperation are in opposition to each other. The optimal clustering coefficient has no natural state in our species, and is determined by the ecological environment, the group's technology set, and the group's size. This opens the space for social technologies that solve this optimisation problem by generating optimal social network structures.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 12:27:09 GMT" } ]
2023-09-06T00:00:00
[ [ "David-Barrett", "Tamas", "" ] ]
new_dataset
0.986855
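The clustering coefficient named above is easy to experiment with. A minimal sketch, assuming a Watts-Strogatz generator as the network family (the paper's network generator may differ); the rewiring probability p trades local integratedness against path length:

import networkx as nx

# Sweep the rewiring probability and observe how local integratedness
# (average clustering) and global reachability (average path length) move.
for p in (0.0, 0.1, 0.5, 1.0):
    G = nx.connected_watts_strogatz_graph(n=200, k=8, p=p, seed=1)
    print(f"p={p:.1f}  clustering={nx.average_clustering(G):.3f}  "
          f"avg path={nx.average_shortest_path_length(G):.2f}")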
2309.02186
Jiaolong Yang
Yue Wu, Sicheng Xu, Jianfeng Xiang, Fangyun Wei, Qifeng Chen, Jiaolong Yang, Xin Tong
AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
SIGGRAPH Asia 2023. Project Page: https://yuewuhkust.github.io/AniPortraitGAN/
null
null
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by/4.0/
Previous animatable 3D-aware GANs for human generation have primarily focused on either the human head or full body. However, head-only videos are relatively uncommon in real life, and full body generation typically does not deal with facial expression control and still has challenges in generating high-quality results. Towards applicable video avatars, we present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements. It is a generative model trained on unstructured 2D image collections without using 3D or video data. For the new task, we base our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations. A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces, which is critical for portrait images. A pose deformation processing network is developed to generate plausible deformations for challenging regions such as long hair. Experiments show that our method, trained on unstructured 2D images, can generate diverse and high-quality 3D portraits with desired control over different properties.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 12:44:57 GMT" } ]
2023-09-06T00:00:00
[ [ "Wu", "Yue", "" ], [ "Xu", "Sicheng", "" ], [ "Xiang", "Jianfeng", "" ], [ "Wei", "Fangyun", "" ], [ "Chen", "Qifeng", "" ], [ "Yang", "Jiaolong", "" ], [ "Tong", "Xin", "" ] ]
new_dataset
0.99875
2309.02221
Harrie Passier
Arno Broeders and Ruud Hermans and Sylvia Stuurman and Lex Bijlsma and Harrie Passier
Improving students' code correctness and test completeness by informal specifications
14 pages
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
The quality of software produced by students is often poor. How to teach students to develop good quality software has long been a topic in computer science education and research. We must conclude that we still do not have a good answer to this question. Specifications are necessary to determine the correctness of software, to develop error-free software, and to write complete tests. Several attempts have been made to teach students to write specifications before writing code. So far, that has not proven to be very successful: students do not like to write specifications and do not see the benefits of doing so. In this paper we focus on the use of informal specifications. Instead of teaching students how to write specifications, we teach them how to use informal specifications to develop correct software. The results were surprising: the number of errors in the software and the completeness of the tests both improved considerably and, most importantly, students really appreciate the specifications. We think that if students appreciate specifications, we have a key to teaching them how to specify and to value specifications.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 13:24:43 GMT" } ]
2023-09-06T00:00:00
[ [ "Broeders", "Arno", "" ], [ "Hermans", "Ruud", "" ], [ "Stuurman", "Sylvia", "" ], [ "Bijlsma", "Lex", "" ], [ "Passier", "Harrie", "" ] ]
new_dataset
0.990484
2309.02224
Wencan Huang
Wencan Huang, Daizong Liu, Wei Hu
Dense Object Grounding in 3D Scenes
ACM MM 2023
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localizing objects in 3D scenes according to the semantics of a given natural language description is a fundamental yet important task in the field of multimedia understanding, which benefits various real-world applications such as robotics and autonomous driving. However, the majority of existing 3D object grounding methods are restricted to a single-sentence input describing an individual object, and cannot comprehend and reason over more contextualized descriptions of multiple objects in more practical 3D cases. To this end, we introduce a new challenging task, called 3D Dense Object Grounding (3D DOG), to jointly localize multiple objects described in a more complicated paragraph rather than a single sentence. Instead of naively localizing each sentence-guided object independently, we observe that dense objects described in the same paragraph are often semantically related and spatially located in a focused region of the 3D scene. To explore such semantic and spatial relationships of densely referred objects for more accurate localization, we propose a novel stacked-Transformer-based framework for 3D DOG, named 3DOGSFormer. Specifically, we first devise a contextual query-driven local transformer decoder to generate initial grounding proposals for each target object. Then, we employ a proposal-guided global transformer decoder that exploits the local object features to learn their correlation for further refining the initial grounding proposals. Extensive experiments on three challenging benchmarks (Nr3D, Sr3D, and ScanRefer) show that our proposed 3DOGSFormer outperforms state-of-the-art 3D single-object grounding methods and their dense-object variants by significant margins.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 13:27:19 GMT" } ]
2023-09-06T00:00:00
[ [ "Huang", "Wencan", "" ], [ "Liu", "Daizong", "" ], [ "Hu", "Wei", "" ] ]
new_dataset
0.958027
2309.02230
Zhirui Wang Dr
Zhechao Wang and Peirui Cheng and Shujing Duan and Kaiqiang Chen and Zhirui Wang and Xinming Li and Xian Sun
DCP-Net: A Distributed Collaborative Perception Network for Remote Sensing Semantic Segmentation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Onboard intelligent processing is widely applied to emergency tasks in the field of remote sensing. However, it is predominantly confined to an individual platform with a limited observation range and susceptibility to interference, resulting in limited accuracy. Considering the current state of multi-platform collaborative observation, this article presents a distributed collaborative perception network called DCP-Net. Firstly, DCP-Net helps collaborating platforms enhance perception performance by integrating features from other platforms. Secondly, a self-mutual information match module is proposed to identify collaboration opportunities and select suitable partners, prioritizing critical collaborative features and reducing redundant transmission cost. Thirdly, a related feature fusion module is designed to address the misalignment between local and collaborative features, improving the quality of fused features for the downstream task. We conduct extensive experiments and visualization analyses using three semantic segmentation datasets: Potsdam, iSAID and DFC23. The results demonstrate that DCP-Net comprehensively outperforms existing methods, improving mIoU by 2.61% to 16.89% at the highest collaboration efficiency, which advances performance to a state-of-the-art level.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 13:36:40 GMT" } ]
2023-09-06T00:00:00
[ [ "Wang", "Zhechao", "" ], [ "Cheng", "Peirui", "" ], [ "Duan", "Shujing", "" ], [ "Chen", "Kaiqiang", "" ], [ "Wang", "Zhirui", "" ], [ "Li", "Xinming", "" ], [ "Sun", "Xian", "" ] ]
new_dataset
0.987784
2309.02253
Lucas Correia
Lucas Correia, Jan-Christoph Goos, Philipp Klein, Thomas B\"ack, Anna V. Kononova
MA-VAE: Multi-head Attention-based Variational Autoencoder Approach for Anomaly Detection in Multivariate Time-series Applied to Automotive Endurance Powertrain Testing
Accepted in NCTA2023
null
null
null
cs.LG cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
A clear need for automatic anomaly detection applied to automotive testing has emerged as more and more attention is paid to the data recorded and manual evaluation by humans reaches its capacity. Such real-world data is massive, diverse, multivariate and temporal in nature, and therefore requires modelling of the testee behaviour. We propose a variational autoencoder with multi-head attention (MA-VAE), which, when trained on unlabelled data, not only produces very few false positives but also detects the majority of the anomalies present. In addition, the approach offers a novel way to avoid the bypass phenomenon, an undesirable behaviour investigated in the literature. Lastly, the approach also introduces a new method to remap individual windows to a continuous time series. The results are presented in the context of a real-world industrial data set, and several experiments are undertaken to further investigate certain aspects of the proposed model. When configured properly, it is wrong only 9% of the time when it flags an anomaly, and it discovers 67% of the anomalies present. MA-VAE also has the potential to perform well with only a fraction of the training and validation subset; however, exploiting this potential requires a more sophisticated threshold estimation method.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 14:05:37 GMT" } ]
2023-09-06T00:00:00
[ [ "Correia", "Lucas", "" ], [ "Goos", "Jan-Christoph", "" ], [ "Klein", "Philipp", "" ], [ "Bäck", "Thomas", "" ], [ "Kononova", "Anna V.", "" ] ]
new_dataset
0.994817
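A minimal sketch of the encoder half of such a model, assuming a standard self-attention-over-time wiring (the layer sizes, pooling, and projections below are invented for illustration and are not the paper's exact architecture):

import torch
import torch.nn as nn

# VAE encoder with multi-head self-attention over a multivariate
# time-series window: project channels to a model dimension, attend
# across time, pool, then emit the Gaussian posterior parameters.
class AttnVAEEncoder(nn.Module):
    def __init__(self, n_channels, d_model=64, n_heads=4, d_latent=16):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mu = nn.Linear(d_model, d_latent)
        self.logvar = nn.Linear(d_model, d_latent)

    def forward(self, x):                      # x: (batch, time, channels)
        h = self.proj(x)
        h, _ = self.attn(h, h, h)              # self-attention across time steps
        h = h.mean(dim=1)                      # pool the window
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return z, mu, logvar

enc = AttnVAEEncoder(n_channels=8)
z, mu, logvar = enc(torch.randn(4, 128, 8))
print(z.shape)  # torch.Size([4, 16])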
2309.02255
Damien Courouss\'e
Thomas Chamelot and Damien Courouss\'e and Karine Heydemann
MAFIA: Protecting the Microarchitecture of Embedded Systems Against Fault Injection Attacks
published by IEEE TCAD
IEEE TCAD (2023)
10.1109/TCAD.2023.3276507
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Fault injection attacks represent an effective threat to embedded systems. Recently, Laurent et al. have reported that fault injection attacks can leverage faults inside the microarchitecture. However, state-of-the-art countermeasures, hardware-only or with hardware support, do not consider the integrity of microarchitecture control signals that are the target of these faults. We present MAFIA, a microarchitecture protection against fault injection attacks. MAFIA ensures the integrity of pipeline control signals through a signature-based mechanism, and ensures fine-grained control-flow integrity with complete indirect branch support and code authenticity. We analyse the security properties of two different implementations with different security/overhead trade-offs: one with a CBC-MAC/Prince signature function, and another with a CRC32. We present our implementation of MAFIA in a RISC-V processor, supported by a dedicated compiler toolchain based on LLVM/Clang. We report a hardware area overhead of 23.8% and 6.5% for the CBC-MAC/Prince and CRC32 implementations, respectively. The average code size and execution time overheads are 29.4% and 18.4%, respectively, for the CRC32 implementation, and 50% and 39% for the CBC-MAC/Prince.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 14:08:36 GMT" } ]
2023-09-06T00:00:00
[ [ "Chamelot", "Thomas", "" ], [ "Couroussé", "Damien", "" ], [ "Heydemann", "Karine", "" ] ]
new_dataset
0.998364
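The signature idea in the CRC32 variant can be illustrated with a toy software model (the control-word values and layout are invented for illustration; MAFIA itself computes this in hardware alongside the pipeline):

import zlib

# Running CRC32 signature over a sequence of pipeline control words.
# A reference signature is derived ahead of time; a fault that flips any
# control bit at run time makes the recomputed signature diverge.
def running_signature(control_words, seed=0):
    sig = seed
    for word in control_words:
        sig = zlib.crc32(word.to_bytes(4, "little"), sig)
    return sig

reference = running_signature([0x13, 0x2F, 0x07])   # precomputed offline
observed  = running_signature([0x13, 0x2F, 0x07])   # recomputed at run time
faulted   = running_signature([0x13, 0x2E, 0x07])   # one bit flipped by a fault
print(observed == reference, faulted == reference)  # True False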
2309.02258
Patricia Bachmann
Patricia Bachmann, Ignaz Rutter, Peter Stumpf
On 3-Coloring Circle Graphs
Appears in the Proceedings of the 31st International Symposium on Graph Drawing and Network Visualization (GD 2023)
null
null
null
cs.DM cs.DS
http://creativecommons.org/licenses/by/4.0/
Given a graph $G$ with a fixed vertex order $\prec$, one obtains a circle graph $H$ whose vertices are the edges of $G$ and where two such edges are adjacent if and only if their endpoints are pairwise distinct and alternate in $\prec$. Therefore, the problem of determining whether $G$ has a $k$-page book embedding with spine order $\prec$ is equivalent to deciding whether $H$ can be colored with $k$ colors. Finding a $k$-coloring for a circle graph is known to be NP-complete for $k \geq 4$ and trivial for $k \leq 2$. For $k = 3$, Unger (1992) claims an efficient algorithm that finds a 3-coloring in $O(n \log n)$ time, if it exists. Given a circle graph $H$, Unger's algorithm (1) constructs a 3-\textsc{Sat} formula $\Phi$ that is satisfiable if and only if $H$ admits a 3-coloring and (2) solves $\Phi$ by a backtracking strategy that relies on the structure imposed by the circle graph. However, the extended abstract misses several details and Unger refers to his PhD thesis (in German) for details. In this paper we argue that Unger's algorithm for 3-coloring circle graphs is not correct and that 3-coloring circle graphs should be considered as an open problem. We show that step (1) of Unger's algorithm is incorrect by exhibiting a circle graph whose formula $\Phi$ is satisfiable but that is not 3-colorable. We further show that Unger's backtracking strategy for solving $\Phi$ in step (2) may produce incorrect results and give empirical evidence that it exhibits a runtime behaviour that is not consistent with the claimed running time.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 14:11:29 GMT" } ]
2023-09-06T00:00:00
[ [ "Bachmann", "Patricia", "" ], [ "Rutter", "Ignaz", "" ], [ "Stumpf", "Peter", "" ] ]
new_dataset
0.998084
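The reduction described in the abstract is short enough to state in code. A sketch, following the construction given above: build the circle graph H whose vertices are the edges of G, with adjacency exactly when endpoints alternate in the vertex order, then check 3-colorability by brute force (exponential, so only for tiny instances):

from itertools import combinations, product

# Construct the circle graph H from edges of G and a vertex order:
# two edges are adjacent in H iff their four endpoints are distinct
# and alternate in the order (i.e. the corresponding chords cross).
def circle_graph(edges, order):
    pos = {v: i for i, v in enumerate(order)}
    def crossing(e, f):
        a, b = sorted(e, key=pos.get)
        c, d = sorted(f, key=pos.get)
        if len({a, b, c, d}) < 4:
            return False
        return pos[a] < pos[c] < pos[b] < pos[d] or pos[c] < pos[a] < pos[d] < pos[b]
    return {(e, f) for e, f in combinations(edges, 2) if crossing(e, f)}

# Naive 3-colorability check of H, standing in for the open algorithmic question.
def three_colorable(vertices, adjacency):
    for colors in product(range(3), repeat=len(vertices)):
        c = dict(zip(vertices, colors))
        if all(c[u] != c[v] for u, v in adjacency):
            return True
    return False

edges = [("a", "c"), ("b", "d"), ("a", "d")]
H = circle_graph(edges, order="abcd")   # only ("a","c") and ("b","d") cross
print(three_colorable(edges, H))        # True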
2309.02259
Ruipeng Yang
Ruipeng Yang, Yi Fang, Pingping Chen, and Huan Ma
Design of a New CIM-DCSK-Based Ambient Backscatter Communication System
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To improve the data rate of differential chaos shift keying (DCSK) based ambient backscatter communication (AmBC) systems, we propose a new AmBC system based on code index modulation (CIM), referred to as the CIM-DCSK-AmBC system. In the proposed system, the CIM-DCSK signal transmitted in the direct link is used as the radio frequency source of the backscatter link. The signal format in the backscatter link is designed to increase the data rate as well as eliminate the interference of the direct link signal. As such, the direct link signal and the backscatter link signal can be received and demodulated simultaneously. Moreover, we derive and validate the theoretical bit error rate (BER) expressions of the CIM-DCSK-AmBC system over multipath Rayleigh fading channels. Taking the short-reference DCSK-based AmBC (SR-DCSK-AmBC) system as a benchmark, numerical results reveal that the CIM-DCSK-AmBC system can achieve better BER performance in the direct link and higher throughput in the backscatter link than the benchmark system.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 14:12:14 GMT" } ]
2023-09-06T00:00:00
[ [ "Yang", "Ruipeng", "" ], [ "Fang", "Yi", "" ], [ "Chen", "Pingping", "" ], [ "Ma", "Huan", "" ] ]
new_dataset
0.991258
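As background, plain DCSK (the building block named above, not the proposed CIM variant) is simple enough to simulate end to end. A sketch under stated assumptions: a Chebyshev map stands in for the chaotic generator, and noise level and spreading factor are arbitrary:

import numpy as np

# Plain DCSK: each bit occupies two half-slots -- a chaotic reference
# sequence, then the same sequence scaled by the +/-1 bit. The receiver
# correlates the two halves, so no channel estimate is needed.
rng = np.random.default_rng(0)

def chebyshev_chaos(n, x):
    out = np.empty(n)
    for i in range(n):
        x = 1.0 - 2.0 * x * x       # Chebyshev map, chaotic on (-1, 1)
        out[i] = x
    return out

beta = 64                                   # spreading factor (chips per half-slot)
bits = rng.integers(0, 2, size=8) * 2 - 1   # +/-1 data symbols
frames = []
for b in bits:
    ref = chebyshev_chaos(beta, rng.uniform(0.1, 0.9))
    frames.append(np.concatenate([ref, b * ref]))
tx = np.concatenate(frames)
rx = tx + 0.3 * rng.standard_normal(tx.size)            # AWGN channel
halves = rx.reshape(-1, 2 * beta)
decoded = np.sign(np.sum(halves[:, :beta] * halves[:, beta:], axis=1))
print(np.array_equal(decoded, bits))        # True at this noise level, typically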
2309.02273
Markus Wallinger
Martin N\"ollenburg and Markus Wallinger
Computing Hive Plots: A Combinatorial Framework
Appears in the Proceedings of the 31st International Symposium on Graph Drawing and Network Visualization (GD 2023)
null
null
null
cs.CG cs.HC
http://creativecommons.org/licenses/by/4.0/
Hive plots are a graph visualization style placing vertices on a set of radial axes emanating from a common center and drawing edges as smooth curves connecting their respective endpoints. In previous work on hive plots, assignment to an axis and vertex positions on each axis were determined based on selected vertex attributes and the order of axes was prespecified. Here, we present a new framework focusing on combinatorial aspects of these drawings to extend the original hive plot idea and optimize visual properties such as the total edge length and the number of edge crossings in the resulting hive plots. Our framework comprises three steps: (1) partition the vertices into multiple groups, each corresponding to an axis of the hive plot; (2) optimize the cyclic axis order to bring more strongly connected groups near each other; (3) optimize the vertex ordering on each axis to minimize edge crossings. Each of the three steps is related to a well-studied, but NP-complete computational problem. We combine and adapt suitable algorithmic approaches, implement them as an instantiation of our framework and show in a case study how it can be applied in a practical setting. Furthermore, we conduct computational experiments to gain further insights regarding algorithmic choices of the framework. The code of the implementation and a prototype web application can be found on OSF.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 14:37:59 GMT" } ]
2023-09-06T00:00:00
[ [ "Nöllenburg", "Martin", "" ], [ "Wallinger", "Markus", "" ] ]
new_dataset
0.996319
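Steps (1) and (2) of the framework can be sketched on a toy graph. Modularity communities are one plausible choice for the axis partition (an assumption; the paper combines and adapts several algorithmic approaches), and step (3), ordering vertices per axis to minimise crossings, is omitted here:

import networkx as nx

# Step (1): partition vertices into axis groups via modularity communities.
G = nx.karate_club_graph()
groups = list(nx.algorithms.community.greedy_modularity_communities(G))
member = {v: i for i, g in enumerate(groups) for v in g}

# Step (2), crudely: weight between two axes = number of edges between
# their groups; a good cyclic axis order places heavy pairs adjacently.
weight = {}
for u, v in G.edges:
    a, b = sorted((member[u], member[v]))
    if a != b:
        weight[(a, b)] = weight.get((a, b), 0) + 1

for i, g in enumerate(groups):
    print(f"axis {i}: {len(g)} vertices")
print("inter-axis edge counts:", weight)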
2309.02286
Julian Lorenz
Julian Lorenz, Florian Barthel, Daniel Kienzle, Rainer Lienhart
Haystack: A Panoptic Scene Graph Dataset to Evaluate Rare Predicate Classes
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current scene graph datasets suffer from strong long-tail distributions of their predicate classes. Due to a very low number of samples for some predicate classes in the test sets, no reliable metrics can be computed for the rarest classes. We construct a new panoptic scene graph dataset and a set of metrics that are designed as a benchmark for predictive performance, especially on rare predicate classes. To construct the new dataset, we propose a model-assisted annotation pipeline that efficiently finds rare predicate classes that are hidden in a large set of images like needles in a haystack. In contrast to prior scene graph datasets, Haystack contains explicit negative annotations, i.e. annotations that a given relation does not have a certain predicate class. Negative annotations are helpful especially in the field of scene graph generation and open up a whole new set of possibilities to improve current scene graph generation models. Haystack is 100% compatible with existing panoptic scene graph datasets and can easily be integrated with existing evaluation pipelines. Our dataset and code can be found here: https://lorjul.github.io/haystack/. It includes annotation files and simple-to-use scripts and utilities to help with integrating our dataset into existing work.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 14:45:54 GMT" } ]
2023-09-06T00:00:00
[ [ "Lorenz", "Julian", "" ], [ "Barthel", "Florian", "" ], [ "Kienzle", "Daniel", "" ], [ "Lienhart", "Rainer", "" ] ]
new_dataset
0.998781
2309.02340
Alhasan Abdellatif
Alhasan Abdellatif and Ahmed H. Elsheikh
Generating Infinite-Resolution Texture using GANs with Patch-by-Patch Paradigm
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a novel approach for generating texture images of infinite resolution using Generative Adversarial Networks (GANs) based on a patch-by-patch paradigm. Existing texture synthesis techniques often rely on generating a large-scale texture in a single forward pass through the generating model, which limits the scalability and flexibility of the generated images. In contrast, the proposed approach trains GAN models on a single texture image to generate relatively small patches that are locally correlated and can be seamlessly concatenated to form a larger image, while using a constant GPU memory footprint. Our method learns the local texture structure and is able to generate arbitrary-size textures, while also maintaining coherence and diversity. The proposed method relies on local padding in the generator to ensure consistency between patches and utilizes spatial stochastic modulation to allow for local variations and diversity within the large-scale image. Experimental results demonstrate superior scalability compared to existing approaches while maintaining the visual coherence of generated textures.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 15:57:23 GMT" } ]
2023-09-06T00:00:00
[ [ "Abdellatif", "Alhasan", "" ], [ "Elsheikh", "Ahmed H.", "" ] ]
new_dataset
0.963045
2309.02367
Tiziano Dalmonte
Tiziano Dalmonte
Minimal modal logics, constructive modal logics and their relations
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We present a family of minimal modal logics (namely, modal logics based on minimal propositional logic) corresponding each to a different classical modal logic. The minimal modal logics are defined based on their classical counterparts in two distinct ways: (1) via embedding into fusions of classical modal logics through a natural extension of the G\"odel-Johansson translation of minimal logic into modal logic S4; (2) via extension to modal logics of the multi- vs. single-succedent correspondence of sequent calculi for classical and minimal logic. We show that, despite being mutually independent, the two methods turn out to be equivalent for a wide class of modal systems. Moreover, we compare the resulting minimal version of K with the constructive modal logic CK studied in the literature, displaying tight relations among the two systems. Based on these relations, we also define a constructive correspondent for each minimal system, thus obtaining a family of constructive modal logics which includes CK as well as other constructive modal logics studied in the literature.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 16:29:34 GMT" } ]
2023-09-06T00:00:00
[ [ "Dalmonte", "Tiziano", "" ] ]
new_dataset
0.999099
2309.02394
Natalia Pavlasek
Natalia Pavlasek, Charles Champagne Cossette, David Roy-Guay, James Richard Forbes
Magnetic Navigation using Attitude-Invariant Magnetic Field Information for Loop Closure Detection
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Indoor magnetic fields are a combination of Earth's magnetic field and disruptions induced by ferromagnetic objects, such as steel structural components in buildings. As a result of these disruptions, pervasive in indoor spaces, magnetic field data is often omitted from navigation algorithms in indoor environments. This paper leverages the spatially-varying disruptions to Earth's magnetic field to extract positional information for use in indoor navigation algorithms. The algorithm uses a rate gyro and an array of four magnetometers to estimate the robot's pose. Additionally, the magnetometer array is used to compute attitude-invariant measurements associated with the magnetic field and its gradient. These measurements are used to detect loop closure points. Experimental results indicate that the proposed approach can estimate the pose of a ground robot in an indoor environment with meter-level accuracy.
[ { "version": "v1", "created": "Tue, 5 Sep 2023 17:05:16 GMT" } ]
2023-09-06T00:00:00
[ [ "Pavlasek", "Natalia", "" ], [ "Cossette", "Charles Champagne", "" ], [ "Roy-Guay", "David", "" ], [ "Forbes", "James Richard", "" ] ]
new_dataset
0.955032
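The attitude-invariant measurements mentioned above can be illustrated with a small sketch (the array geometry, sensor values, and least-squares differencing are assumptions for illustration, not the paper's estimator). If the body rotates by R, the mean field becomes R.T @ B and the gradient becomes R.T @ G @ R, so both norms below are unchanged, which is what makes them usable for loop-closure detection regardless of the robot's attitude:

import numpy as np

# Four body-fixed magnetometers at known offsets from the array centre (m).
positions = 0.05 * np.array([[1., 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]])

def invariants(B_readings):
    B_mean = B_readings.mean(axis=0)
    # Least-squares fit B_i ~ B_mean + positions[i] @ Gt; this planar array
    # cannot observe z-derivatives, so lstsq returns the minimum-norm fit.
    Gt, *_ = np.linalg.lstsq(positions, B_readings - B_mean, rcond=None)
    return np.linalg.norm(B_mean), np.linalg.norm(Gt, "fro")

B = np.array([[21.1, 4.0, -43.2], [20.7, 4.1, -43.0],
              [20.9, 4.4, -43.4], [20.9, 3.8, -42.9]])   # microtesla, made-up values
print(invariants(B))   # (field-magnitude, gradient-magnitude) pair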
2309.02401
Nanne van Noord
Nanne van Noord
Prototype-based Dataset Comparison
To be presented at ICCV 2023
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
Dataset summarisation is a fruitful approach to dataset inspection. However, when applied to a single dataset the discovery of visual concepts is restricted to those most prominent. We argue that a comparative approach can expand upon this paradigm to enable richer forms of dataset inspection that go beyond the most prominent concepts. To enable dataset comparison we present a module that learns concept-level prototypes across datasets. We leverage self-supervised learning to discover these prototypes without supervision, and we demonstrate the benefits of our approach in two case-studies. Our findings show that dataset comparison extends dataset inspection and we hope to encourage more works in this direction. Code and usage instructions available at https://github.com/Nanne/ProtoSim
[ { "version": "v1", "created": "Tue, 5 Sep 2023 17:27:16 GMT" } ]
2023-09-06T00:00:00
[ [ "van Noord", "Nanne", "" ] ]
new_dataset
0.999158
2008.06465
Ugur Kursuncu
Thilini Wijesiriwardene, Hale Inan, Ugur Kursuncu, Manas Gaur, Valerie L. Shalin, Krishnaprasad Thirunarayan, Amit Sheth, I. Budak Arpinar
ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter
Accepted: Social Informatics 2020
International Conference on Social Informatics. 12467 (2020) 427-439
10.1007/978-3-030-60975-7_31
null
cs.SI cs.CY cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The convenience of social media has also enabled its misuse, potentially resulting in toxic behavior. Nearly 66% of internet users have observed online harassment, and 41% claim personal experience, with 18% facing severe forms of online harassment. This toxic communication has a significant impact on the well-being of young individuals, affecting mental health and, in some cases, resulting in suicide. These communications exhibit complex linguistic and contextual characteristics, making recognition of such narratives challenging. In this paper, we provide a multimodal dataset of toxic social media interactions between confirmed high school students, called ALONE (AdoLescents ON twittEr), along with descriptive explanation. Each instance of interaction includes tweets, images, emoji and related metadata. Our observations show that individual tweets do not provide sufficient evidence for toxic behavior, and meaningful use of context in interactions can enable highlighting or exonerating tweets with purported toxicity.
[ { "version": "v1", "created": "Fri, 14 Aug 2020 17:02:55 GMT" } ]
2023-09-04T00:00:00
[ [ "Wijesiriwardene", "Thilini", "" ], [ "Inan", "Hale", "" ], [ "Kursuncu", "Ugur", "" ], [ "Gaur", "Manas", "" ], [ "Shalin", "Valerie L.", "" ], [ "Thirunarayan", "Krishnaprasad", "" ], [ "Sheth", "Amit", "" ], [ "Arpinar", "I. Budak", "" ] ]
new_dataset
0.99969
2208.00487
Aravind Battaje
Aravind Battaje, Oliver Brock
One Object at a Time: Accurate and Robust Structure From Motion for Robots
v3: Add link to project page v2: Update DOI v1: Accepted at 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
null
10.1109/IROS47612.2022.9981953
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A gaze-fixating robot perceives distance to the fixated object and relative positions of surrounding objects immediately, accurately, and robustly. We show how fixation, which is the act of looking at one object while moving, exploits regularities in the geometry of 3D space to obtain this information. These regularities introduce rotation-translation couplings that are not commonly used in structure from motion. To validate, we use a Franka Emika Robot with an RGB camera. We a) find that error in distance estimate is less than 5 mm at a distance of 15 cm, and b) show how relative position can be used to find obstacles under challenging scenarios. We combine accurate distance estimates and obstacle information into a reactive robot behavior that is able to pick up objects of unknown size, while impeded by unforeseen obstacles. Project page: https://oxidification.com/p/one-object-at-a-time/ .
[ { "version": "v1", "created": "Sun, 31 Jul 2022 18:17:04 GMT" }, { "version": "v2", "created": "Tue, 3 Jan 2023 13:07:45 GMT" }, { "version": "v3", "created": "Fri, 1 Sep 2023 14:02:16 GMT" } ]
2023-09-04T00:00:00
[ [ "Battaje", "Aravind", "" ], [ "Brock", "Oliver", "" ] ]
new_dataset
0.96833
2210.11299
Nikolaos Athanasios Anagnostopoulos
Emiliia Nazarenko, Nikolaos Athanasios Anagnostopoulos, Stavros G. Stavrinides, Nico Mexis, Florian Frank, Tolga Arul, Stefan Katzenbeisser
Real-World Chaos-Based Cryptography Using Synchronised Chua Chaotic Circuits
This work was accepted for and presented as a hardware demo at the 2022 IEEE International Symposium on Hardware Oriented Security and Trust (HOST 2022), held from 27 to 30 June 2022, in Washington, DC, USA
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
This work presents a hardware demonstrator of a secure encryption system based on synchronised Chua chaotic circuits. In particular, the presented encryption system comprises two Chua circuits that are synchronised using a dedicated bidirectional synchronisation line. One of them forms part of the transmitter, while the other forms part of the receiver. Both circuits are tuned to operate in a chaotic mode. The output (chaotic) signal of the first circuit (transmitter) is digitised and then combined with the message to be encrypted through an XOR gate. The second Chua circuit (receiver) is used for decryption; the output chaotic signal of this circuit is similarly digitised and combined with the encrypted message to retrieve the original message. Our hardware demonstrator proves that this method can be used to provide extremely lightweight, real-world, chaos-based cryptographic solutions.
[ { "version": "v1", "created": "Fri, 12 Aug 2022 00:42:42 GMT" }, { "version": "v2", "created": "Thu, 13 Jul 2023 16:12:19 GMT" } ]
2023-09-04T00:00:00
[ [ "Nazarenko", "Emiliia", "" ], [ "Anagnostopoulos", "Nikolaos Athanasios", "" ], [ "Stavrinides", "Stavros G.", "" ], [ "Mexis", "Nico", "" ], [ "Frank", "Florian", "" ], [ "Arul", "Tolga", "" ], [ "Katzenbeisser", "Stefan", "" ] ]
new_dataset
0.999649
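The digitise-then-XOR scheme is easy to model in software. A toy end-to-end sketch: a logistic map stands in for the digitised Chua signal, and perfect synchronisation is modelled by giving the receiver the same initial state; in the hardware, this shared state comes from the dedicated synchronisation line, not a shared key:

import numpy as np

# Threshold-digitise a chaotic trajectory into a keystream, XOR it with
# the message bits at the transmitter, and XOR again at the receiver.
def chaotic_keystream(n_bits, x=0.123456):
    bits = []
    for _ in range(n_bits):
        x = 3.99 * x * (1 - x)            # logistic map in its chaotic regime
        bits.append(1 if x > 0.5 else 0)  # threshold digitisation
    return np.array(bits, dtype=np.uint8)

message = np.frombuffer(b"hi", dtype=np.uint8)
msg_bits = np.unpackbits(message)
cipher = msg_bits ^ chaotic_keystream(msg_bits.size)   # transmitter
plain  = cipher ^ chaotic_keystream(cipher.size)       # synchronised receiver
print(np.packbits(plain).tobytes())                    # b'hi'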
2211.13854
Xuehai He
Kenan Jiang, Xuehai He, Ruize Xu, Xin Eric Wang
ComCLIP: Training-Free Compositional Image and Text Matching
null
null
null
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Contrastive Language-Image Pretraining (CLIP) has demonstrated great zero-shot performance for matching images and text. However, it is still challenging to adapt vision-language pretrained models like CLIP to compositional image and text matching -- a more challenging image and text matching task requiring the model to understand compositional word concepts and visual components. Towards better compositional generalization in zero-shot image and text matching, in this paper we study the problem from a causal perspective: the erroneous semantics of individual entities are essentially confounders that cause the matching failure. Therefore, we propose a novel \textbf{\textit{training-free}} compositional CLIP model (ComCLIP). ComCLIP disentangles input images into subjects, objects, and action sub-images and composes CLIP's vision encoder and text encoder to perform evolving matching over compositional text embedding and sub-image embeddings. In this way, ComCLIP can mitigate spurious correlations introduced by the pretrained CLIP models and dynamically evaluate the importance of each component. Experiments on four compositional image-text matching datasets: SVO, ComVG, Winoground, and VL-checklist, and two general image-text retrieval datasets: Flickr30K and MSCOCO, demonstrate the effectiveness of our plug-and-play method, which boosts the \textbf{\textit{zero-shot}} inference ability of CLIP, SLIP, and BLIP2 even without further training or fine-tuning.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 01:37:48 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 05:07:18 GMT" } ]
2023-09-04T00:00:00
[ [ "Jiang", "Kenan", "" ], [ "He", "Xuehai", "" ], [ "Xu", "Ruize", "" ], [ "Wang", "Xin Eric", "" ] ]
new_dataset
0.973415
2212.01691
Shathushan Sivashangaran
Shathushan Sivashangaran and Azim Eskandarian
XTENTH-CAR: A Proportionally Scaled Experimental Vehicle Platform for Connected Autonomy and All-Terrain Research
$\copyright$ 2023 ASME. This work has been accepted to ASME for publication
null
null
null
cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
Connected Autonomous Vehicles (CAVs) are key components of the Intelligent Transportation System (ITS), and all-terrain Autonomous Ground Vehicles (AGVs) are indispensable tools for a wide range of applications such as disaster response, automated mining, agriculture, military operations, search and rescue missions, and planetary exploration. Experimental validation is a requisite for CAV and AGV research, but requires a large, safe experimental environment when using full-size vehicles which is time-consuming and expensive. To address these challenges, we developed XTENTH-CAR (eXperimental one-TENTH scaled vehicle platform for Connected autonomy and All-terrain Research), an open-source, cost-effective proportionally one-tenth scaled experimental vehicle platform governed by the same physics as a full-size on-road vehicle. XTENTH-CAR is equipped with the best-in-class NVIDIA Jetson AGX Orin System on Module (SOM), stereo camera, 2D LiDAR and open-source Electronic Speed Controller (ESC) with drivers written for both versions of the Robot Operating System (ROS 1 & ROS 2) to facilitate experimental CAV and AGV perception, motion planning and control research, that incorporate state-of-the-art computationally expensive algorithms such as Deep Reinforcement Learning (DRL). XTENTH-CAR is designed for compact experimental environments, and aims to increase the accessibility of experimental CAV and AGV research with low upfront costs, and complete Autonomous Vehicle (AV) hardware and software architectures similar to the full-sized X-CAR experimental vehicle platform, enabling efficient cross-platform development between small-scale and full-scale vehicles.
[ { "version": "v1", "created": "Sat, 3 Dec 2022 21:00:41 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 03:10:26 GMT" } ]
2023-09-04T00:00:00
[ [ "Sivashangaran", "Shathushan", "" ], [ "Eskandarian", "Azim", "" ] ]
new_dataset
0.997585
2304.02013
Shih-Yang Su
Shih-Yang Su, Timur Bagautdinov, Helge Rhodin
NPC: Neural Point Characters from Video
Project website: https://lemonatsu.github.io/npc/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-fidelity human 3D models can now be learned directly from videos, typically by combining a template-based surface model with neural representations. However, obtaining a template surface requires expensive multi-view capture systems, laser scans, or strictly controlled conditions. Previous methods avoid using a template but rely on a costly or ill-posed mapping from observation to canonical space. We propose a hybrid point-based representation for reconstructing animatable characters that does not require an explicit surface model, while being generalizable to novel poses. For a given video, our method automatically produces an explicit set of 3D points representing approximate canonical geometry, and learns an articulated deformation model that produces pose-dependent point transformations. The points serve both as a scaffold for high-frequency neural features and an anchor for efficiently mapping between observation and canonical space. We demonstrate on established benchmarks that our representation overcomes limitations of prior work operating in either canonical or in observation space. Moreover, our automatic point extraction approach enables learning models of human and animal characters alike, matching the performance of the methods using rigged surface templates despite being more general. Project website: https://lemonatsu.github.io/npc/
[ { "version": "v1", "created": "Tue, 4 Apr 2023 17:59:22 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 04:20:25 GMT" } ]
2023-09-04T00:00:00
[ [ "Su", "Shih-Yang", "" ], [ "Bagautdinov", "Timur", "" ], [ "Rhodin", "Helge", "" ] ]
new_dataset
0.965258
2304.02216
Zilong Zhang
Zilong Zhang, Zhibin Zhao, Xingwu Zhang, Chuang Sun, Xuefeng Chen
Industrial Anomaly Detection with Domain Shift: A Real-world Dataset and Masked Multi-scale Reconstruction
Accept by Computers in Industry
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Industrial anomaly detection (IAD) is crucial for automating industrial quality inspection. The diversity of the datasets is the foundation for developing comprehensive IAD algorithms. Existing IAD datasets focus on the diversity of data categories, overlooking the diversity of domains within the same data category. In this paper, to bridge this gap, we propose the Aero-engine Blade Anomaly Detection (AeBAD) dataset, consisting of two sub-datasets: the single-blade dataset and the video anomaly detection dataset of blades. Compared to existing datasets, AeBAD has the following two characteristics: 1.) The target samples are not aligned and at different scales. 2.) There is a domain shift between the distribution of normal samples in the test set and the training set, where the domain shifts are mainly caused by the changes in illumination and view. Based on this dataset, we observe that current state-of-the-art (SOTA) IAD methods exhibit limitations when the domain of normal samples in the test set undergoes a shift. To address this issue, we propose a novel method called masked multi-scale reconstruction (MMR), which enhances the model's capacity to deduce causality among patches in normal samples by a masked reconstruction task. MMR achieves superior performance compared to SOTA methods on the AeBAD dataset. Furthermore, MMR achieves competitive performance with SOTA methods to detect the anomalies of different types on the MVTec AD dataset. Code and dataset are available at https://github.com/zhangzilongc/MMR.
[ { "version": "v1", "created": "Wed, 5 Apr 2023 04:07:54 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 07:26:08 GMT" } ]
2023-09-04T00:00:00
[ [ "Zhang", "Zilong", "" ], [ "Zhao", "Zhibin", "" ], [ "Zhang", "Xingwu", "" ], [ "Sun", "Chuang", "" ], [ "Chen", "Xuefeng", "" ] ]
new_dataset
0.999571
2304.03763
Fangyin Wei
Fangyin Wei, Thomas Funkhouser, Szymon Rusinkiewicz
Clutter Detection and Removal in 3D Scenes with View-Consistent Inpainting
18 pages. ICCV 2023. Project page: https://weify627.github.io/clutter/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Removing clutter from scenes is essential in many applications, ranging from privacy-concerned content filtering to data augmentation. In this work, we present an automatic system that removes clutter from 3D scenes and inpaints with coherent geometry and texture. We propose techniques for its two key components: 3D segmentation from shared properties and 3D inpainting, both of which are important problems. The definition of 3D scene clutter (frequently-moving objects) is not well captured by commonly-studied object categories in computer vision. To tackle the lack of well-defined clutter annotations, we group noisy fine-grained labels, leverage virtual rendering, and impose an instance-level area-sensitive loss. Once clutter is removed, we inpaint geometry and texture in the resulting holes by merging inpainted RGB-D images. This requires novel voting and pruning strategies that guarantee multi-view consistency across individually inpainted images for mesh reconstruction. Experiments on ScanNet and Matterport dataset show that our method outperforms baselines for clutter segmentation and 3D inpainting, both visually and quantitatively.
[ { "version": "v1", "created": "Fri, 7 Apr 2023 17:57:20 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 15:22:19 GMT" } ]
2023-09-04T00:00:00
[ [ "Wei", "Fangyin", "" ], [ "Funkhouser", "Thomas", "" ], [ "Rusinkiewicz", "Szymon", "" ] ]
new_dataset
0.999333
2304.11496
Shathushan Sivashangaran
Shathushan Sivashangaran, Apoorva Khairnar and Azim Eskandarian
AutoVRL: A High Fidelity Autonomous Ground Vehicle Simulator for Sim-to-Real Deep Reinforcement Learning
$\copyright$ 2023 the authors. This work has been accepted to IFAC for publication under a Creative Commons License CC-BY-NC-ND
null
null
null
cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
Deep Reinforcement Learning (DRL) enables cognitive Autonomous Ground Vehicle (AGV) navigation utilizing raw sensor data without a priori maps or GPS, which is a necessity in hazardous, information-poor environments such as regions where natural disasters occur, and extraterrestrial planets. The substantial training time required to learn an optimal DRL policy, which can be days or weeks for complex tasks, is a major hurdle to real-world implementation in AGV applications. Training entails repeated collisions with the surrounding environment over an extended time period, dependent on the complexity of the task, to reinforce positive exploratory, application-specific behavior, which is expensive and time-consuming in the real world. Effectively bridging the simulation-to-real-world gap is a requisite for successful implementation of DRL in complex AGV applications, enabling learning of cost-effective policies. We present AutoVRL, an open-source high-fidelity simulator built upon the Bullet physics engine, utilizing OpenAI Gym and Stable Baselines3 in PyTorch to train AGV DRL agents for sim-to-real policy transfer. AutoVRL is equipped with sensor implementations of GPS, IMU, LiDAR and camera, actuators for AGV control, and realistic environments, with extensibility for new environments and AGV models. The simulator provides access to state-of-the-art DRL algorithms, utilizing a python interface for simple algorithm and environment customization, and simulation execution.
[ { "version": "v1", "created": "Sat, 22 Apr 2023 23:14:56 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 04:35:06 GMT" } ]
2023-09-04T00:00:00
[ [ "Sivashangaran", "Shathushan", "" ], [ "Khairnar", "Apoorva", "" ], [ "Eskandarian", "Azim", "" ] ]
new_dataset
0.990908
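The Gym plus Stable Baselines3 pattern that AutoVRL plugs into looks as follows. A sketch: "CartPole-v1" is a stand-in for an AutoVRL AGV environment (whose registered id we do not know), and the classic Gym step API is assumed, matching the SB3 releases contemporary with the paper:

import gym
from stable_baselines3 import PPO

# Train a PPO agent on a registered Gym environment, then roll out the
# learned policy for a short evaluation episode.
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

obs = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()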
2305.07270
Kailun Yang
Xuan He, Fan Yang, Kailun Yang, Jiacheng Lin, Haolong Fu, Meng Wang, Jin Yuan, Zhiyong Li
SSD-MonoDETR: Supervised Scale-aware Deformable Transformer for Monocular 3D Object Detection
Accepted to IEEE Transactions on Intelligent Vehicles (T-IV). Code will be made publicly available at https://github.com/mikasa3lili/SSD-MonoDETR
null
null
null
cs.CV cs.RO eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformer-based methods have demonstrated superior performance for monocular 3D object detection recently, which aims at predicting 3D attributes from a single 2D image. Most existing transformer-based methods leverage both visual and depth representations to explore valuable query points on objects, and the quality of the learned query points has a great impact on detection accuracy. Unfortunately, existing unsupervised attention mechanisms in transformers are prone to generate low-quality query features due to inaccurate receptive fields, especially on hard objects. To tackle this problem, this paper proposes a novel "Supervised Scale-aware Deformable Attention" (SSDA) for monocular 3D object detection. Specifically, SSDA presets several masks with different scales and utilizes depth and visual features to adaptively learn a scale-aware filter for object query augmentation. Imposing the scale awareness, SSDA could well predict the accurate receptive field of an object query to support robust query feature generation. Aside from this, SSDA is assigned with a Weighted Scale Matching (WSM) loss to supervise scale prediction, which presents more confident results as compared to the unsupervised attention mechanisms. Extensive experiments on the KITTI and Waymo Open datasets demonstrate that SSDA significantly improves the detection accuracy, especially on moderate and hard objects, yielding state-of-the-art performance as compared to the existing approaches. Our code will be made publicly available at https://github.com/mikasa3lili/SSD-MonoDETR.
[ { "version": "v1", "created": "Fri, 12 May 2023 06:17:57 GMT" }, { "version": "v2", "created": "Fri, 2 Jun 2023 05:26:17 GMT" }, { "version": "v3", "created": "Mon, 3 Jul 2023 05:18:56 GMT" }, { "version": "v4", "created": "Fri, 1 Sep 2023 16:17:54 GMT" } ]
2023-09-04T00:00:00
[ [ "He", "Xuan", "" ], [ "Yang", "Fan", "" ], [ "Yang", "Kailun", "" ], [ "Lin", "Jiacheng", "" ], [ "Fu", "Haolong", "" ], [ "Wang", "Meng", "" ], [ "Yuan", "Jin", "" ], [ "Li", "Zhiyong", "" ] ]
new_dataset
0.958841
2305.16759
Takato Yoshikawa
Takato Yoshikawa, Yuki Endo, Yoshihiro Kanamori
StyleHumanCLIP: Text-guided Garment Manipulation for StyleGAN-Human
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper tackles text-guided control of StyleGAN for editing garments in full-body human images. Existing StyleGAN-based methods struggle to handle the rich diversity of garments, body shapes, and poses. We propose a framework for text-guided full-body human image synthesis via an attention-based latent code mapper, which enables more disentangled control of StyleGAN than existing mappers. Our latent code mapper adopts an attention mechanism that adaptively manipulates individual latent codes on different StyleGAN layers under text guidance. In addition, we introduce feature-space masking at inference time to avoid unwanted changes caused by text inputs. Our quantitative and qualitative evaluations reveal that our method can control generated images more faithfully to given texts than existing methods.
[ { "version": "v1", "created": "Fri, 26 May 2023 09:21:56 GMT" }, { "version": "v2", "created": "Tue, 25 Jul 2023 08:39:31 GMT" }, { "version": "v3", "created": "Fri, 1 Sep 2023 09:13:10 GMT" } ]
2023-09-04T00:00:00
[ [ "Yoshikawa", "Takato", "" ], [ "Endo", "Yuki", "" ], [ "Kanamori", "Yoshihiro", "" ] ]
new_dataset
0.996354
2306.11300
Zilun Zhang
Zilun Zhang, Tiancheng Zhao, Yulong Guo, Jianwei Yin
RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model
RS5M dataset v4
null
null
null
cs.CV cs.AI cs.CL cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Pre-trained Vision-Language Foundation Models utilizing extensive image-text paired data have demonstrated unprecedented image-text association capabilities, achieving remarkable results across various downstream tasks. A critical challenge is how to make use of existing large-scale pre-trained VLMs, which are trained on common objects, to perform domain-specific transfer for domain-related downstream tasks. In this paper, we propose a new framework that includes the Domain Foundation Model (DFM), bridging the gap between the General Foundation Model (GFM) and domain-specific downstream tasks. Moreover, we present an image-text paired dataset in the field of remote sensing (RS), RS5M, which has 5 million RS images with English descriptions. The dataset is obtained by filtering publicly available image-text paired datasets and by captioning label-only RS datasets with a pre-trained VLM. It constitutes the first large-scale RS image-text paired dataset. Additionally, we tried several Parameter-Efficient Fine-Tuning methods on RS5M to implement the DFM. Experimental results show that our proposed dataset is highly effective for various tasks, improving upon the baseline by $8 \% \sim 16 \%$ in zero-shot classification tasks, and obtaining good results in both Vision-Language Retrieval and Semantic Localization tasks. \url{https://github.com/om-ai-lab/RS5M}
[ { "version": "v1", "created": "Tue, 20 Jun 2023 05:30:59 GMT" }, { "version": "v2", "created": "Thu, 31 Aug 2023 22:33:54 GMT" } ]
2023-09-04T00:00:00
[ [ "Zhang", "Zilun", "" ], [ "Zhao", "Tiancheng", "" ], [ "Guo", "Yulong", "" ], [ "Yin", "Jianwei", "" ] ]
new_dataset
0.999575
2306.11702
Chen Zui
Zui Chen, Lei Cao, Sam Madden
Lingua Manga: A Generic Large Language Model Centric System for Data Curation
4 pages, 6 figures, VLDB 2023 Demo paper
null
null
null
cs.DB cs.CL
http://creativecommons.org/licenses/by/4.0/
Data curation is a wide-ranging area that encompasses many critical but time-consuming data processing tasks. However, the diversity of such tasks makes it challenging to develop a general-purpose data curation system. To address this issue, we present Lingua Manga, a user-friendly and versatile system that utilizes pre-trained large language models. Lingua Manga offers automatic optimization for achieving high performance and label efficiency while facilitating flexible and rapid development. Through three example applications with distinct objectives and users of varying levels of technical proficiency, we demonstrate that Lingua Manga can effectively assist both skilled programmers and low-code or even no-code users in addressing data curation challenges.
[ { "version": "v1", "created": "Tue, 20 Jun 2023 17:30:02 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 15:40:40 GMT" } ]
2023-09-04T00:00:00
[ [ "Chen", "Zui", "" ], [ "Cao", "Lei", "" ], [ "Madden", "Sam", "" ] ]
new_dataset
0.999569
2306.13177
Baolin Li
Baolin Li, Rohan Basu Roy, Daniel Wang, Siddharth Samsi, Vijay Gadepally, Devesh Tiwari
Toward Sustainable HPC: Carbon Footprint Estimation and Environmental Implications of HPC Systems
null
null
10.1145/3581784.3607035
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The rapid growth in demand for HPC systems has led to a rise in their carbon footprint, which requires urgent intervention. In this work, we present a comprehensive analysis of the carbon footprint of high-performance computing (HPC) systems, considering the carbon footprint during both the hardware manufacturing and system operational stages. Our work employs HPC hardware component carbon footprint modeling, regional carbon intensity analysis, and experimental characterization of the system life cycle to highlight the importance of quantifying the carbon footprint of HPC systems.
[ { "version": "v1", "created": "Thu, 22 Jun 2023 19:38:54 GMT" }, { "version": "v2", "created": "Tue, 8 Aug 2023 05:51:48 GMT" }, { "version": "v3", "created": "Thu, 31 Aug 2023 22:17:06 GMT" } ]
2023-09-04T00:00:00
[ [ "Li", "Baolin", "" ], [ "Roy", "Rohan Basu", "" ], [ "Wang", "Daniel", "" ], [ "Samsi", "Siddharth", "" ], [ "Gadepally", "Vijay", "" ], [ "Tiwari", "Devesh", "" ] ]
new_dataset
0.991085
2308.01525
Jiyoung Lee
Jiyoung Lee, Seungho Kim, Seunghyun Won, Joonseok Lee, Marzyeh Ghassemi, James Thorne, Jaeseok Choi, O-Kil Kwon, Edward Choi
VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
AI alignment refers to models acting towards human-intended goals, preferences, or ethical principles. Given that most large-scale deep learning models act as black boxes and cannot be manually controlled, analyzing the similarity between models and humans can be a proxy measure for ensuring AI safety. In this paper, we focus on the models' visual perception alignment with humans, further referred to as AI-human visual alignment. Specifically, we propose a new dataset for measuring AI-human visual alignment in terms of image classification, a fundamental task in machine perception. In order to evaluate AI-human visual alignment, a dataset should encompass samples with various scenarios that may arise in the real world and have gold human perception labels. Our dataset consists of three groups of samples, namely Must-Act (i.e., Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity of visual information in an image, and is further divided into eight categories. All samples have a gold human perception label; even the labels of Uncertain (severely blurry) samples were obtained via crowd-sourcing. The validity of our dataset is verified by sampling theory, statistical theories related to survey design, and experts in the related fields. Using our dataset, we analyze the visual alignment and reliability of five popular visual perception models and seven abstention methods. Our code and data are available at \url{https://github.com/jiyounglee-0523/VisAlign}.
[ { "version": "v1", "created": "Thu, 3 Aug 2023 04:04:03 GMT" }, { "version": "v2", "created": "Fri, 1 Sep 2023 08:52:02 GMT" } ]
2023-09-04T00:00:00
[ [ "Lee", "Jiyoung", "" ], [ "Kim", "Seungho", "" ], [ "Won", "Seunghyun", "" ], [ "Lee", "Joonseok", "" ], [ "Ghassemi", "Marzyeh", "" ], [ "Thorne", "James", "" ], [ "Choi", "Jaeseok", "" ], [ "Kwon", "O-Kil", "" ], [ "Choi", "Edward", "" ] ]
new_dataset
0.999857