Columns (name: type, value range):
  id:             string, 9-10 chars
  submitter:      string, 2-52 chars
  authors:        string, 4-6.51k chars
  title:          string, 4-246 chars
  comments:       string, 1-523 chars
  journal-ref:    string, 4-345 chars
  doi:            string, 11-120 chars
  report-no:      string, 2-243 chars
  categories:     string, 5-98 chars
  license:        string, 9 distinct values
  abstract:       string, 33-3.33k chars
  versions:       list
  update_date:    timestamp[s]
  authors_parsed: list
  prediction:     string, 1 distinct value
  probability:    float64, 0.95-1
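The column summary above suggests this dump is a filtered view of arXiv metadata in which every row carries the single prediction "new_dataset" with probability in [0.95, 1]. A minimal sketch for loading and filtering records of this shape follows; the file name and the JSON-lines layout are assumptions for illustration, not something specified by the dump itself.

```python
import json

# Sketch only: the file name and JSON-lines format are assumed.
with open("arxiv_new_dataset_view.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Per the column summary, every row is predicted "new_dataset"
# with probability in [0.95, 1]; keep the most confident rows.
high_confidence = [
    r for r in records
    if r["prediction"] == "new_dataset" and r["probability"] >= 0.99
]
for r in high_confidence[:5]:
    print(r["id"], "-", r["title"])
```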
2308.04162
Kailun Yang
Jiajun Chen, Jiacheng Lin, Zhiqiang Xiao, Haolong Fu, Ke Nai, Kailun Yang, Zhiyong Li
EPCFormer: Expression Prompt Collaboration Transformer for Universal Referring Video Object Segmentation
The source code will be made publicly available at https://github.com/lab206/EPCFormer
null
null
null
cs.CV eess.AS eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Audio-guided Video Object Segmentation (A-VOS) and Referring Video Object Segmentation (R-VOS) are two highly related tasks, which both aim to segment specific objects from video sequences according to user-provided expression prompts. However, due to the challenges in modeling representations for different modalities, contemporary methods struggle to strike a balance between interaction flexibility and high-precision localization and segmentation. In this paper, we address this problem from two perspectives: the alignment representation of audio and text and the deep interaction among audio, text, and visual features. First, we propose a universal architecture, the Expression Prompt Collaboration Transformer, herein EPCFormer. Next, we propose an Expression Alignment (EA) mechanism for audio and text expressions. By introducing contrastive learning for audio and text expressions, the proposed EPCFormer realizes comprehension of the semantic equivalence between audio and text expressions denoting the same objects. Then, to facilitate deep interactions among audio, text, and video features, we introduce an Expression-Visual Attention (EVA) mechanism. The knowledge of video object segmentation in terms of the expression prompts can seamlessly transfer between the two tasks by deeply exploring complementary cues between text and audio. Experiments on well-recognized benchmarks demonstrate that our universal EPCFormer attains state-of-the-art results on both tasks. The source code of EPCFormer will be made publicly available at https://github.com/lab206/EPCFormer.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 09:48:00 GMT" } ]
2023-08-09T00:00:00
[ [ "Chen", "Jiajun", "" ], [ "Lin", "Jiacheng", "" ], [ "Xiao", "Zhiqiang", "" ], [ "Fu", "Haolong", "" ], [ "Nai", "Ke", "" ], [ "Yang", "Kailun", "" ], [ "Li", "Zhiyong", "" ] ]
new_dataset
0.993309
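Within each record, the `authors` string and the `authors_parsed` list encode the same information in two forms, as the record above shows. The sketch below approximates that mapping under the simplifying assumption of plain "Given Family" names; multi-word surnames (e.g., "Van Gool" in a later record) and suffixes would need the extra handling the real arXiv pipeline provides.

```python
def parse_authors(authors: str) -> list:
    """Approximate the authors -> authors_parsed mapping: split on
    commas, then treat the final token of each name as the family
    name. Simplified sketch; not the actual arXiv parser."""
    parsed = []
    for name in authors.split(","):
        parts = name.strip().split()
        if not parts:  # skip empty fragments from stray commas
            continue
        parsed.append([parts[-1], " ".join(parts[:-1]), ""])
    return parsed

# From record 2308.04162:
print(parse_authors("Jiajun Chen, Jiacheng Lin"))
# -> [['Chen', 'Jiajun', ''], ['Lin', 'Jiacheng', '']]
```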
2308.04189
Carsten Nielsen
Carsten Nielsen, Zhe Su, Giacomo Indiveri
Yak: An Asynchronous Bundled Data Pipeline Description Language
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
The design of asynchronous circuits typically requires a judicious definition of signals and modules, combined with a proper specification of their timing constraints, which can be a complex and error-prone process using standard Hardware Description Languages (HDLs). In this paper we introduce Yak, a new dataflow description language for asynchronous bundled data circuits. Yak allows designers to generate Verilog and timing constraints automatically, from a textual description of bundled data control flow structures and combinational logic blocks. The timing constraints are generated using the Local Clock Set methodology and can be consumed by standard industry tools. Yak includes ergonomic language features such as structured bindings of channels undergoing fork and join operations, named value scope propagation along channels, and channel typing. Here we present Yak's language front-end and compare the automated synthesis and layout results of an example circuit with a manual constraint specification approach.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 11:24:46 GMT" } ]
2023-08-09T00:00:00
[ [ "Nielsen", "Carsten", "" ], [ "Su", "Zhe", "" ], [ "Indiveri", "Giacomo", "" ] ]
new_dataset
0.999718
2308.04218
Muduo Xu
Muduo Xu, Jianhao Su, Yutao Liu
AquaSAM: Underwater Image Foreground Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Segment Anything Model (SAM) has revolutionized natural image segmentation; nevertheless, its performance on underwater images is still restricted. This work presents AquaSAM, the first attempt to extend the success of SAM to underwater images, with the purpose of creating a versatile method for the segmentation of various underwater targets. To achieve this, we begin by classifying and extracting various labels automatically in the SUIM dataset. Subsequently, we develop a straightforward fine-tuning method to adapt SAM to general foreground underwater image segmentation. Through extensive experiments involving eight segmentation tasks, such as human divers, we demonstrate that AquaSAM outperforms the default SAM model, especially on hard tasks such as coral reefs. AquaSAM achieves an average improvement of 7.13% in Dice Similarity Coefficient (DSC) and an average improvement of 8.27% in mIoU on underwater segmentation tasks.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 12:30:36 GMT" } ]
2023-08-09T00:00:00
[ [ "Xu", "Muduo", "" ], [ "Su", "Jianhao", "" ], [ "Liu", "Yutao", "" ] ]
new_dataset
0.998866
2308.04249
Huiguang He
Yizhuo Lu, Changde Du, Qiongyi zhou, Dianpeng Wang, Huiguang He
MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion
arXiv admin note: substantial text overlap with arXiv:2303.14139
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing visual stimuli from brain recordings has been a meaningful and challenging task. Especially, the achievement of precise and controllable image reconstruction bears great significance in propelling the progress and utilization of brain-computer interfaces. Despite the advancements in complex image reconstruction techniques, the challenge persists in achieving a cohesive alignment of both semantics (concepts and objects) and structure (position, orientation, and size) with the image stimuli. To address the aforementioned issue, we propose a two-stage image reconstruction model called MindDiffuser. In Stage 1, the VQ-VAE latent representations and the CLIP text embeddings decoded from fMRI are put into Stable Diffusion, which yields a preliminary image that contains semantic information. In Stage 2, we utilize the CLIP visual feature decoded from fMRI as supervisory information, and continually adjust the two feature vectors decoded in Stage 1 through backpropagation to align the structural information. The results of both qualitative and quantitative analyses demonstrate that our model has surpassed the current state-of-the-art models on the Natural Scenes Dataset (NSD). The subsequent experimental findings corroborate the neurobiological plausibility of the model, as evidenced by the interpretability of the multimodal features employed, which align with the corresponding brain responses.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 13:28:34 GMT" } ]
2023-08-09T00:00:00
[ [ "Lu", "Yizhuo", "" ], [ "Du", "Changde", "" ], [ "zhou", "Qiongyi", "" ], [ "Wang", "Dianpeng", "" ], [ "He", "Huiguang", "" ] ]
new_dataset
0.9967
2308.04288
Daiheng Gao
Daiheng Gao, Xu Chen, Xindi Zhang, Qi Wang, Ke Sun, Bang Zhang, Liefeng Bo, Qixing Huang
Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On
15 pages, 15 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Fabricating and designing 3D garments has become extremely demanding with the increasing need for synthesizing realistic dressed persons for a variety of applications, e.g. 3D virtual try-on, digitalization of 2D clothes into 3D apparel, and cloth animation. It thus necessitates a simple and straightforward pipeline to obtain high-quality texture from simple input, such as 2D reference images. Traditional warping-based texture generation methods require a significant number of control points to be manually selected for each type of garment, which can be a time-consuming and tedious process. We propose a novel method, called Cloth2Tex, which eliminates this human burden. Cloth2Tex is a self-supervised method that generates texture maps with reasonable layout and structural consistency. Another key feature of Cloth2Tex is that it can be used to support high-fidelity texture inpainting. This is done by combining Cloth2Tex with a prevailing latent diffusion model. We evaluate our approach both qualitatively and quantitatively and demonstrate that Cloth2Tex can generate high-quality texture maps and achieve the best visual effects in comparison to other methods. Project page: tomguluson92.github.io/projects/cloth2tex/
[ { "version": "v1", "created": "Tue, 8 Aug 2023 14:32:38 GMT" } ]
2023-08-09T00:00:00
[ [ "Gao", "Daiheng", "" ], [ "Chen", "Xu", "" ], [ "Zhang", "Xindi", "" ], [ "Wang", "Qi", "" ], [ "Sun", "Ke", "" ], [ "Zhang", "Bang", "" ], [ "Bo", "Liefeng", "" ], [ "Huang", "Qixing", "" ] ]
new_dataset
0.999841
2308.04323
Miguel Zamora
Zhaoting Li, Miguel Zamora, Hehui Zheng, Stelian Coros
Embracing Safe Contacts with Contact-aware Planning and Control
RSS 2023. Workshop: Experiment-oriented Locomotion and Manipulation Research
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Unlike human beings that can employ the entire surface of their limbs as a means to establish contact with their environment, robots are typically programmed to interact with their environments via their end-effectors, in a collision-free fashion, to avoid damaging their environment. In a departure from such a traditional approach, this work presents a contact-aware controller for reference tracking that maintains interaction forces on the surface of the robot below a safety threshold in the presence of both rigid and soft contacts. Furthermore, we leveraged the proposed controller to extend the BiTRRT sampling-based planning method to be contact-aware, using a simplified contact model. The effectiveness of our framework is demonstrated in hardware experiments using a Franka robot in a setup inspired by the Amazon stowing task. A demo video of our results can be seen here: https://youtu.be/2WeYytauhNg
[ { "version": "v1", "created": "Tue, 8 Aug 2023 15:16:51 GMT" } ]
2023-08-09T00:00:00
[ [ "Li", "Zhaoting", "" ], [ "Zamora", "Miguel", "" ], [ "Zheng", "Hehui", "" ], [ "Coros", "Stelian", "" ] ]
new_dataset
0.9965
2308.04328
Nadia Nahar
Nadia Nahar, Haoran Zhang, Grace Lewis, Shurui Zhou, Christian K\"astner
A Dataset and Analysis of Open-Source Machine Learning Products
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Machine learning (ML) components are increasingly incorporated into software products, yet developers face challenges in transitioning from ML prototypes to products. Academic researchers struggle to propose solutions to these challenges and evaluate interventions because they often do not have access to closed-source ML products from industry. In this study, we define and identify open-source ML products, curating a dataset of 262 repositories from GitHub, to facilitate further research and education. As a start, we explore six broad research questions related to different development activities and report 21 findings from a sample of 30 ML products from the dataset. Our findings reveal a variety of development practices and architectural decisions surrounding different types and uses of ML models that offer ample opportunities for future research innovations. We also find very little evidence of industry best practices such as model testing and pipeline automation within the open-source ML products, which leaves room for further investigation to understand their potential impact on the development and eventual end-user experience of these products.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 15:19:13 GMT" } ]
2023-08-09T00:00:00
[ [ "Nahar", "Nadia", "" ], [ "Zhang", "Haoran", "" ], [ "Lewis", "Grace", "" ], [ "Zhou", "Shurui", "" ], [ "Kästner", "Christian", "" ] ]
new_dataset
0.999812
2308.04337
Fadhil Muhammad
Fadhil Muhammad, Alif Bintang Elfandra, Iqbal Pahlevi Amin, Alfan Farizki Wicaksono
Pengembangan Model untuk Mendeteksi Kerusakan pada Terumbu Karang dengan Klasifikasi Citra (Developing a Model to Detect Coral Reef Damage with Image Classification)
in Indonesian language
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The abundant biodiversity of coral reefs in Indonesian waters is a valuable asset that needs to be preserved. Rapid climate change and uncontrolled human activities have led to the degradation of coral reef ecosystems, including coral bleaching, which is a critical indicator of coral health conditions. Therefore, this research aims to develop an accurate classification model to distinguish between healthy corals and corals experiencing bleaching. This study utilizes a specialized dataset consisting of 923 images collected from Flickr using the Flickr API. The dataset comprises two distinct classes: healthy corals (438 images) and bleached corals (485 images). These images have been resized to a maximum of 300 pixels in width or height, whichever is larger, to maintain consistent sizes across the dataset. The method employed in this research involves the use of machine learning models, particularly convolutional neural networks (CNN), to recognize and differentiate visual patterns associated with healthy and bleached corals. In this context, the dataset can be used to train and test various classification models to achieve optimal results. By leveraging the ResNet model, it was found that a from-scratch ResNet model can outperform pretrained models in terms of precision and accuracy. The success in developing accurate classification models will greatly benefit researchers and marine biologists in gaining a better understanding of coral reef health. These models can also be employed to monitor changes in the coral reef environment, thereby making a significant contribution to conservation and ecosystem restoration efforts that have far-reaching impacts on life.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 15:30:08 GMT" } ]
2023-08-09T00:00:00
[ [ "Muhammad", "Fadhil", "" ], [ "Elfandra", "Alif Bintang", "" ], [ "Amin", "Iqbal Pahlevi", "" ], [ "Wicaksono", "Alfan Farizki", "" ] ]
new_dataset
0.960006
2308.04352
Ziyu Zhu
Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, Qing Li
3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
3D vision-language grounding (3D-VL) is an emerging field that aims to connect the 3D physical world with natural language, which is crucial for achieving embodied intelligence. Current 3D-VL models rely heavily on sophisticated modules, auxiliary losses, and optimization tricks, which calls for a simple and unified model. In this paper, we propose 3D-VisTA, a pre-trained Transformer for 3D Vision and Text Alignment that can be easily adapted to various downstream tasks. 3D-VisTA simply utilizes self-attention layers for both single-modal modeling and multi-modal fusion without any sophisticated task-specific design. To further enhance its performance on 3D-VL tasks, we construct ScanScribe, the first large-scale 3D scene-text pairs dataset for 3D-VL pre-training. ScanScribe contains 2,995 RGB-D scans for 1,185 unique indoor scenes originating from ScanNet and 3R-Scan datasets, along with paired 278K scene descriptions generated from existing 3D-VL tasks, templates, and GPT-3. 3D-VisTA is pre-trained on ScanScribe via masked language/object modeling and scene-text matching. It achieves state-of-the-art results on various 3D-VL tasks, ranging from visual grounding and dense captioning to question answering and situated reasoning. Moreover, 3D-VisTA demonstrates superior data efficiency, obtaining strong performance even with limited annotations during downstream task fine-tuning.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 15:59:17 GMT" } ]
2023-08-09T00:00:00
[ [ "Zhu", "Ziyu", "" ], [ "Ma", "Xiaojian", "" ], [ "Chen", "Yixin", "" ], [ "Deng", "Zhidong", "" ], [ "Huang", "Siyuan", "" ], [ "Li", "Qing", "" ] ]
new_dataset
0.999309
2308.04370
Juan Wen
Juan Wen, Shupeng Cheng, Peng Xu, Bowen Zhou, Radu Timofte, Weiyan Hou, Luc Van Gool
When Super-Resolution Meets Camouflaged Object Detection: A Comparison Study
23 pages with 8 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Super Resolution (SR) and Camouflaged Object Detection (COD) are two hot topics in computer vision with various joint applications. For instance, low-resolution surveillance images can be successively processed by super-resolution techniques and camouflaged object detection. However, in previous work, these two areas have always been studied in isolation. In this paper, we, for the first time, conduct an integrated comparative evaluation of both. Specifically, we benchmark different super-resolution methods on commonly used COD datasets, and meanwhile, we evaluate the robustness of different COD models by using COD data processed by SR methods. Our goal is to bridge these two domains, discover novel experimental phenomena, and summarize new experimental findings.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 16:17:46 GMT" } ]
2023-08-09T00:00:00
[ [ "Wen", "Juan", "" ], [ "Cheng", "Shupeng", "" ], [ "Xu", "Peng", "" ], [ "Zhou", "Bowen", "" ], [ "Timofte", "Radu", "" ], [ "Hou", "Weiyan", "" ], [ "Van Gool", "Luc", "" ] ]
new_dataset
0.996966
2308.04398
Josef Jon
Josef Jon and Ond\v{r}ej Bojar
Character-level NMT and language similarity
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We explore the effectiveness of character-level neural machine translation using Transformer architecture for various levels of language similarity and size of the training dataset on translation between Czech and Croatian, German, Hungarian, Slovak, and Spanish. We evaluate the models using automatic MT metrics and show that translation between similar languages benefits from character-level input segmentation, while for less related languages, character-level vanilla Transformer-base often lags behind subword-level segmentation. We confirm previous findings that it is possible to close the gap by finetuning the already trained subword-level models to character-level.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 17:01:42 GMT" } ]
2023-08-09T00:00:00
[ [ "Jon", "Josef", "" ], [ "Bojar", "Ondřej", "" ] ]
new_dataset
0.984483
2308.04409
Yichao Shen
Yichao Shen, Zigang Geng, Yuhui Yuan, Yutong Lin, Ze Liu, Chunyu Wang, Han Hu, Nanning Zheng, Baining Guo
V-DETR: DETR with Vertex Relative Position Encoding for 3D Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a highly performant 3D object detector for point clouds using the DETR framework. The prior attempts all end up with suboptimal results because they fail to learn accurate inductive biases from the limited scale of training data. In particular, the queries often attend to points that are far away from the target objects, violating the locality principle in object detection. To address the limitation, we introduce a novel 3D Vertex Relative Position Encoding (3DV-RPE) method which computes position encoding for each point based on its relative position to the 3D boxes predicted by the queries in each decoder layer, thus providing clear information to guide the model to focus on points near the objects, in accordance with the principle of locality. In addition, we systematically improve the pipeline from various aspects such as data normalization based on our understanding of the task. We show exceptional results on the challenging ScanNetV2 benchmark, achieving significant improvements over the previous 3DETR in $\rm{AP}_{25}$/$\rm{AP}_{50}$ from 65.0\%/47.0\% to 77.8\%/66.0\%, respectively. In addition, our method sets a new record on the ScanNetV2 and SUN RGB-D datasets. Code will be released at http://github.com/yichaoshen-MS/V-DETR.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 17:14:14 GMT" } ]
2023-08-09T00:00:00
[ [ "Shen", "Yichao", "" ], [ "Geng", "Zigang", "" ], [ "Yuan", "Yuhui", "" ], [ "Lin", "Yutong", "" ], [ "Liu", "Ze", "" ], [ "Wang", "Chunyu", "" ], [ "Hu", "Han", "" ], [ "Zheng", "Nanning", "" ], [ "Guo", "Baining", "" ] ]
new_dataset
0.964982
2011.04400
Soumajyoti Sarkar Mr.
Soumajyoti Sarkar
Bandits in Matching Markets: Ideas and Proposals for Peer Lending
null
null
null
null
cs.GT cs.LG econ.GN q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by recent applications of sequential decision making in matching markets, in this paper we attempt to formulate and abstract market designs for P2P lending. We describe a paradigm to set the stage for how peer-to-peer investments can be conceived from a matching market perspective, especially when both borrower and lender preferences are respected. We model these specialized markets as an optimization problem and consider different utilities for agents on both sides of the market while also understanding the impact of equitable allocations to borrowers. We devise a technique based on sequential decision making that allows the lenders to adjust their choices based on the dynamics of uncertainty from competition over time and that also impacts the rewards in return for their investments. Using simulated experiments we show the dynamics of the regret based on the optimal borrower-lender matching and find that the lender regret depends on the initial preferences set by the lenders, which could affect their learning over decision making steps.
[ { "version": "v1", "created": "Fri, 30 Oct 2020 20:12:26 GMT" }, { "version": "v2", "created": "Wed, 20 Jan 2021 09:49:49 GMT" }, { "version": "v3", "created": "Tue, 2 Mar 2021 08:14:30 GMT" }, { "version": "v4", "created": "Fri, 16 Apr 2021 07:46:52 GMT" }, { "version": "v5", "created": "Wed, 2 Aug 2023 16:09:47 GMT" } ]
2023-08-08T00:00:00
[ [ "Sarkar", "Soumajyoti", "" ] ]
new_dataset
0.977797
2105.00689
Michael Kompatscher
Michael Kompatscher
CSAT and CEQV for nilpotent Maltsev algebras of Fitting length > 2
23 pages
null
null
null
cs.CC math.RA
http://creativecommons.org/licenses/by/4.0/
The circuit satisfaction problem CSAT(A) of an algebra A is the problem of deciding whether an equation over A (encoded by two circuits) has a solution or not. While solving systems of equations over finite algebras is either in P or NP-complete, no such dichotomy result is known for CSAT(A). In fact, Idziak, Kawalek and Krzaczkowski constructed examples of nilpotent Maltsev algebras A, for which, under the assumption of ETH and an open conjecture in circuit theory, CSAT(A) can be solved in quasipolynomial, but not polynomial time. The same is true for the circuit equivalence problem CEQV(A). In this paper we generalize their result to all nilpotent Maltsev algebras of Fitting length >2. This not only advances the project of classifying the complexity of CSAT (and CEQV) for algebras from congruence modular varieties, but we also believe that the tools we developed are of independent interest in the study of nilpotent algebras.
[ { "version": "v1", "created": "Mon, 3 May 2021 08:51:57 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 16:41:14 GMT" } ]
2023-08-08T00:00:00
[ [ "Kompatscher", "Michael", "" ] ]
new_dataset
0.999315
2106.02350
Giulio Ermanno Pibiri
Giulio Ermanno Pibiri and Roberto Trani
Parallel and External-Memory Construction of Minimal Perfect Hash Functions with PTHash
Accepted by IEEE TKDE
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
A function $f : U \to \{0,\ldots,n-1\}$ is a minimal perfect hash function for a set $S \subseteq U$ of size $n$, if $f$ bijectively maps $S$ into the first $n$ natural numbers. These functions are important for many practical applications in computing, such as search engines, computer networks, and databases. Several algorithms have been proposed to build minimal perfect hash functions that: scale well to large sets, retain fast evaluation time, and take very little space, e.g., 2 - 3 bits/key. PTHash is one such algorithm, achieving very fast evaluation in compressed space, typically several times faster than other techniques. In this work, we propose a new construction algorithm for PTHash enabling: (1) multi-threading, to either build functions more quickly or more space-efficiently, and (2) external-memory processing to scale to inputs much larger than the available internal memory. Only a few other algorithms in the literature share these features, despite their big practical impact. We conduct an extensive experimental assessment on large real-world string collections and show that, with respect to other techniques, PTHash is competitive in construction time and space consumption, but retains 2 - 6$\times$ better lookup time.
[ { "version": "v1", "created": "Fri, 4 Jun 2021 09:02:36 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 10:14:25 GMT" } ]
2023-08-08T00:00:00
[ [ "Pibiri", "Giulio Ermanno", "" ], [ "Trani", "Roberto", "" ] ]
new_dataset
0.961837
2106.08091
Catherine Ordun
Catherine Ordun, Edward Raff, Sanjay Purushotham
Generating Thermal Human Faces for Physiological Assessment Using Thermal Sensor Auxiliary Labels
null
2021 IEEE International Conference on Image Processing (ICIP)
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermal images reveal medically important physiological information about human stress, signs of inflammation, and emotional mood that cannot be seen on visible images. Providing a method to generate thermal faces from visible images would be highly valuable for the telemedicine community in order to show this medical information. To the best of our knowledge, there are limited works on visible-to-thermal (VT) face translation, and many current works go in the opposite direction, generating visible faces from thermal surveillance images (TV) for law enforcement applications. As a result, we introduce favtGAN, a VT GAN which uses the pix2pix image translation model with an auxiliary sensor label prediction network for generating thermal faces from visible images. Since most TV methods are trained on only one data source drawn from one thermal sensor, we combine datasets from faces and cityscapes. These combined data are captured from similar sensors in order to bootstrap the training and transfer learning task, especially valuable because visible-thermal face datasets are limited. Experiments on these combined datasets show that favtGAN demonstrates an increase in SSIM and PSNR scores of generated thermal faces, compared to training on a single face dataset alone.
[ { "version": "v1", "created": "Tue, 15 Jun 2021 12:32:52 GMT" } ]
2023-08-08T00:00:00
[ [ "Ordun", "Catherine", "" ], [ "Raff", "Edward", "" ], [ "Purushotham", "Sanjay", "" ] ]
new_dataset
0.962641
2111.00221
Long Zhang
Long Zhang, Javier Ron, Benoit Baudry, and Martin Monperrus
Chaos Engineering of Ethereum Blockchain Clients
null
Distributed Ledger Technologies: Research and Practice, 2023
10.1145/3611649
null
cs.SE cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present ChaosETH, a chaos engineering approach for resilience assessment of Ethereum blockchain clients. ChaosETH operates in the following manner: First, it monitors Ethereum clients to determine their normal behavior. Then, it injects system call invocation errors into one single Ethereum client at a time, and observes the behavior resulting from perturbation. Finally, ChaosETH compares the behavior recorded before, during, and after perturbation to assess the impact of the injected system call invocation errors. The experiments are performed on the two most popular Ethereum client implementations: GoEthereum and Nethermind. We assess the impact of 22 different system call errors on those Ethereum clients with respect to 15 application-level metrics. Our results reveal a broad spectrum of resilience characteristics of Ethereum clients w.r.t. system call invocation errors, ranging from direct crashes to full resilience. The experiments clearly demonstrate the feasibility of applying chaos engineering principles to blockchain systems.
[ { "version": "v1", "created": "Sat, 30 Oct 2021 10:03:19 GMT" }, { "version": "v2", "created": "Sun, 18 Jun 2023 00:43:29 GMT" } ]
2023-08-08T00:00:00
[ [ "Zhang", "Long", "" ], [ "Ron", "Javier", "" ], [ "Baudry", "Benoit", "" ], [ "Monperrus", "Martin", "" ] ]
new_dataset
0.998722
2206.08083
Bonifaz Stuhr
Julian Gebele, Bonifaz Stuhr and Johann Haselberger
CARLANE: A Lane Detection Benchmark for Unsupervised Domain Adaptation from Simulation to multiple Real-World Domains
36th Conference on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and Benchmarks, 22 pages, 11 figures
null
10.34740/kaggle/dsv/3798459
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Unsupervised Domain Adaptation demonstrates great potential to mitigate domain shifts by transferring models from labeled source domains to unlabeled target domains. While Unsupervised Domain Adaptation has been applied to a wide variety of complex vision tasks, only a few works focus on lane detection for autonomous driving. This can be attributed to the lack of publicly available datasets. To facilitate research in these directions, we propose CARLANE, a 3-way sim-to-real domain adaptation benchmark for 2D lane detection. CARLANE encompasses the single-target datasets MoLane and TuLane and the multi-target dataset MuLane. These datasets are built from three different domains, which cover diverse scenes and contain a total of 163K unique images, 118K of which are annotated. In addition, we evaluate and report systematic baselines, including our own method, which builds upon Prototypical Cross-domain Self-supervised Learning. We find that false positive and false negative rates of the evaluated domain adaptation methods are high compared to those of fully supervised baselines. This affirms the need for benchmarks such as CARLANE to further strengthen research in Unsupervised Domain Adaptation for lane detection. CARLANE, all evaluated models and the corresponding implementations are publicly available at https://carlanebenchmark.github.io.
[ { "version": "v1", "created": "Thu, 16 Jun 2022 10:53:18 GMT" }, { "version": "v2", "created": "Thu, 11 Aug 2022 14:51:41 GMT" }, { "version": "v3", "created": "Tue, 20 Sep 2022 08:10:00 GMT" }, { "version": "v4", "created": "Mon, 7 Aug 2023 13:24:06 GMT" } ]
2023-08-08T00:00:00
[ [ "Gebele", "Julian", "" ], [ "Stuhr", "Bonifaz", "" ], [ "Haselberger", "Johann", "" ] ]
new_dataset
0.998959
2207.04438
Jiawen Zhu
Jiawen Zhu, Xin Chen, Pengyu Zhang, Xinying Wang, Dong Wang, Wenda Zhao, Huchuan Lu
SRRT: Search Region Regulation Tracking
Under review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dominant trackers generate a fixed-size rectangular region based on the previous prediction or initial bounding box as the model input, i.e., search region. While this manner obtains promising tracking efficiency, a fixed-size search region lacks flexibility and is likely to fail in some cases, e.g., fast motion and distractor interference. Trackers tend to lose the target object due to the limited search region or be interfered with by distractors due to the excessive search region. Drawing inspiration from how humans track an object, we propose a novel tracking paradigm, called Search Region Regulation Tracking (SRRT), that applies a small eye reach when the target is captured and zooms out the search field when the target is about to be lost. SRRT applies a proposed search region regulator to estimate an optimal search region dynamically for each frame, by which the tracker can flexibly respond to transient changes in the location of object occurrences. To adapt to the object's appearance variation during online tracking, we further propose a locking-state determined updating strategy for reference frame updating. The proposed SRRT is concise without bells and whistles, yet achieves evident improvements and competitive results with other state-of-the-art trackers on eight benchmarks. On the large-scale LaSOT benchmark, SRRT improves SiamRPN++ and TransT with absolute gains of 4.6% and 3.1% in terms of AUC. The code and models will be released.
[ { "version": "v1", "created": "Sun, 10 Jul 2022 11:18:26 GMT" }, { "version": "v2", "created": "Fri, 19 Aug 2022 06:55:56 GMT" }, { "version": "v3", "created": "Sun, 6 Aug 2023 10:00:43 GMT" } ]
2023-08-08T00:00:00
[ [ "Zhu", "Jiawen", "" ], [ "Chen", "Xin", "" ], [ "Zhang", "Pengyu", "" ], [ "Wang", "Xinying", "" ], [ "Wang", "Dong", "" ], [ "Zhao", "Wenda", "" ], [ "Lu", "Huchuan", "" ] ]
new_dataset
0.998333
2209.04265
Yubin Liu
Yubin Liu, Qiming Ye, Jose Escribano-Macias, Yuxiang Feng, Eduardo Candela, and Panagiotis Angeloudis
Route Planning for Last-Mile Deliveries Using Mobile Parcel Lockers: A Hybrid Q-Learning Network Approach
54 pages, 18 figures. This paper has been submitted to Transportation Research Part E: Logistics and Transportation Review (Manuscript Number: TRE-D-23-00202)
null
10.1016/j.tre.2023.103234
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Mobile parcel lockers have been recently proposed by logistics operators as a technology that could help reduce traffic congestion and operational costs in urban freight distribution. Given their ability to relocate throughout their area of deployment, they hold the potential to improve customer accessibility and convenience. In this study, we formulate the Mobile Parcel Locker Problem (MPLP), a special case of the Location-Routing Problem (LRP), which determines the optimal stopover location for MPLs throughout the day and plans corresponding delivery routes. A Hybrid Q-Learning Network-based Method (HQM) is developed to resolve the computational complexity of the resulting large problem instances while escaping local optima. In addition, the HQM is integrated with global and local search mechanisms to resolve the dilemma of exploration and exploitation faced by classic reinforcement learning methods. We examine the performance of HQM under different problem sizes (up to 200 nodes) and benchmark it against the exact approach and Genetic Algorithm (GA). Our results indicate that HQM achieves better optimisation performance with shorter computation time than the exact approach solved by the Gurobi solver in large problem instances. Additionally, the average reward obtained by HQM is 1.96 times greater than GA, which demonstrates that HQM has a better optimisation ability. Further, we identify critical factors that contribute to fleet size requirements, travel distances, and service delays. Our findings indicate that the efficiency of MPLs is mainly contingent on the length of time windows and the deployment of MPL stopovers. Finally, we highlight managerial implications based on parametric analysis to provide guidance for logistics operators in the context of efficient last-mile distribution operations.
[ { "version": "v1", "created": "Fri, 9 Sep 2022 11:59:42 GMT" }, { "version": "v2", "created": "Sat, 19 Nov 2022 08:05:17 GMT" }, { "version": "v3", "created": "Fri, 10 Feb 2023 02:39:29 GMT" } ]
2023-08-08T00:00:00
[ [ "Liu", "Yubin", "" ], [ "Ye", "Qiming", "" ], [ "Escribano-Macias", "Jose", "" ], [ "Feng", "Yuxiang", "" ], [ "Candela", "Eduardo", "" ], [ "Angeloudis", "Panagiotis", "" ] ]
new_dataset
0.997081
2210.12364
Lvxiaowei Xu
Lvxiaowei Xu, Jianwang Wu, Jiawei Peng, Jiayu Fu, Ming Cai
FCGEC: Fine-Grained Corpus for Chinese Grammatical Error Correction
Long paper, accepted at the Findings of EMNLP 2022
null
10.18653/v1/2022.findings-emnlp.137
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grammatical Error Correction (GEC) has recently been broadly applied in automatic correction and proofreading systems. However, Chinese GEC is still immature due to limited high-quality data from native speakers in terms of category and scale. In this paper, we present FCGEC, a fine-grained corpus to detect, identify and correct grammatical errors. FCGEC is a human-annotated corpus with multiple references, consisting of 41,340 sentences collected mainly from multi-choice questions in public school Chinese examinations. Furthermore, we propose a Switch-Tagger-Generator (STG) baseline model to correct the grammatical errors in low-resource settings. Compared to other GEC benchmark models, experimental results illustrate that STG outperforms them on our FCGEC. However, there exists a significant gap between benchmark models and humans that encourages future models to bridge it.
[ { "version": "v1", "created": "Sat, 22 Oct 2022 06:29:05 GMT" } ]
2023-08-08T00:00:00
[ [ "Xu", "Lvxiaowei", "" ], [ "Wu", "Jianwang", "" ], [ "Peng", "Jiawei", "" ], [ "Fu", "Jiayu", "" ], [ "Cai", "Ming", "" ] ]
new_dataset
0.999734
2211.08264
Priyanka Agrawal
Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, Mirella Lapata
QAmeleon: Multilingual QA with Only 5 Examples
To Appear at Transactions of Association for Computational Linguistics (TACL)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The availability of large, high-quality datasets has been one of the main drivers of recent progress in question answering (QA). Such annotated datasets however are difficult and costly to collect, and rarely exist in languages other than English, rendering QA technology inaccessible to underrepresented languages. An alternative to building large monolingual training datasets is to leverage pre-trained language models (PLMs) under a few-shot learning setting. Our approach, QAmeleon, uses a PLM to automatically generate multilingual data upon which QA models are trained, thus avoiding costly annotation. Prompt tuning the PLM for data synthesis with only five examples per language delivers accuracy superior to translation-based baselines, bridges nearly 60% of the gap between an English-only baseline and a fully supervised upper bound trained on almost 50,000 hand labeled examples, and always leads to substantial improvements compared to fine-tuning a QA model directly on labeled examples in low resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show that few-shot prompt tuning for data synthesis scales across languages and is a viable alternative to large-scale annotation.
[ { "version": "v1", "created": "Tue, 15 Nov 2022 16:14:39 GMT" }, { "version": "v2", "created": "Mon, 7 Aug 2023 11:22:16 GMT" } ]
2023-08-08T00:00:00
[ [ "Agrawal", "Priyanka", "" ], [ "Alberti", "Chris", "" ], [ "Huot", "Fantine", "" ], [ "Maynez", "Joshua", "" ], [ "Ma", "Ji", "" ], [ "Ruder", "Sebastian", "" ], [ "Ganchev", "Kuzman", "" ], [ "Das", "Dipanjan", "" ], [ "Lapata", "Mirella", "" ] ]
new_dataset
0.998766
2211.15300
Fabian Ruffy
Fabian Ruffy, Jed Liu, Prathima Kotikalapudi, Vojt\v{e}ch Havel, Hanneli Tavante, Rob Sherwood, Vladyslav Dubina, Volodymyr Peschanenko, Anirudh Sivaraman, and Nate Foster
P4Testgen: An Extensible Test Oracle For P4
null
ACM SIGCOMM 2023 Conference (ACM SIGCOMM '23)
10.1145/3603269.3604834
null
cs.NI cs.SC cs.SE
http://creativecommons.org/licenses/by/4.0/
We present P4Testgen, a test oracle for the P4$_{16}$ language. P4Testgen supports automatic test generation for any P4 target and is designed to be extensible to many P4 targets. It models the complete semantics of the target's packet-processing pipeline including the P4 language, architectures and externs, and target-specific extensions. To handle non-deterministic behaviors and complex externs (e.g., checksums and hash functions), P4Testgen uses taint tracking and concolic execution. It also provides path selection strategies that reduce the number of tests required to achieve full coverage. We have instantiated P4Testgen for the V1model, eBPF, PNA, and Tofino P4 architectures. Each extension required effort commensurate with the complexity of the target. We validated the tests generated by P4Testgen by running them across the entire P4C test suite as well as the programs supplied with the Tofino P4 Studio. Using the tool, we have also confirmed 25 bugs in mature, production toolchains for BMv2 and Tofino.
[ { "version": "v1", "created": "Mon, 28 Nov 2022 13:31:42 GMT" }, { "version": "v2", "created": "Thu, 2 Mar 2023 21:35:00 GMT" }, { "version": "v3", "created": "Sun, 6 Aug 2023 11:15:37 GMT" } ]
2023-08-08T00:00:00
[ [ "Ruffy", "Fabian", "" ], [ "Liu", "Jed", "" ], [ "Kotikalapudi", "Prathima", "" ], [ "Havel", "Vojtěch", "" ], [ "Tavante", "Hanneli", "" ], [ "Sherwood", "Rob", "" ], [ "Dubina", "Vladyslav", "" ], [ "Peschanenko", "Volodymyr", "" ], [ "Sivaraman", "Anirudh", "" ], [ "Foster", "Nate", "" ] ]
new_dataset
0.998835
2212.05098
Daniel Lemire
Robert Clausecker and Daniel Lemire
Transcoding Unicode Characters with AVX-512 Instructions
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Intel includes in its recent processors a powerful set of instructions capable of processing 512-bit registers with a single instruction (AVX-512). Some of these instructions have no equivalent in earlier instruction sets. We leverage these instructions to efficiently transcode strings between the most common formats: UTF-8 and UTF-16. With our novel algorithms, we are often twice as fast as the previous best solutions. For example, we transcode Chinese text from UTF-8 to UTF-16 at more than 5 GiB/s using fewer than 2 CPU instructions per character. To ensure reproducibility, we make our software freely available as an open source library. Our library is part of the popular Node.js JavaScript runtime.
[ { "version": "v1", "created": "Fri, 9 Dec 2022 19:55:19 GMT" }, { "version": "v2", "created": "Thu, 15 Dec 2022 20:35:53 GMT" }, { "version": "v3", "created": "Thu, 13 Jul 2023 18:12:09 GMT" }, { "version": "v4", "created": "Sat, 5 Aug 2023 17:40:07 GMT" } ]
2023-08-08T00:00:00
[ [ "Clausecker", "Robert", "" ], [ "Lemire", "Daniel", "" ] ]
new_dataset
0.999604
2212.08254
Zhikai Li
Zhikai Li, Junrui Xiao, Lianwei Yang, and Qingyi Gu
RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers
ICCV 2023
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Post-training quantization (PTQ), which only requires a tiny dataset for calibration without end-to-end retraining, is a light and practical model compression technique. Recently, several PTQ schemes for vision transformers (ViTs) have been presented; unfortunately, they typically suffer from non-trivial accuracy degradation, especially in low-bit cases. In this paper, we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale reparameterization, to address the above issues. RepQ-ViT decouples the quantization and inference processes, where the former employs complex quantizers and the latter employs scale-reparameterized simplified quantizers. This ensures both accurate quantization and efficient inference, which distinguishes it from existing approaches that sacrifice quantization performance to meet the target hardware. More specifically, we focus on two components with extreme distributions: post-LayerNorm activations with severe inter-channel variation and post-Softmax activations with power-law features, and initially apply channel-wise quantization and log$\sqrt{2}$ quantization, respectively. Then, we reparameterize the scales to hardware-friendly layer-wise quantization and log2 quantization for inference, at only a slight cost in accuracy or computation. Extensive experiments are conducted on multiple vision tasks with different model variants, proving that RepQ-ViT, without hyperparameters and expensive reconstruction procedures, can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level. Code is available at https://github.com/zkkli/RepQ-ViT.
[ { "version": "v1", "created": "Fri, 16 Dec 2022 02:52:37 GMT" }, { "version": "v2", "created": "Mon, 7 Aug 2023 03:00:41 GMT" } ]
2023-08-08T00:00:00
[ [ "Li", "Zhikai", "" ], [ "Xiao", "Junrui", "" ], [ "Yang", "Lianwei", "" ], [ "Gu", "Qingyi", "" ] ]
new_dataset
0.99938
2212.08283
Feiqi Cao
Feiqi Cao, Siwen Luo, Felipe Nunez, Zean Wen, Josiah Poon, Caren Han
SceneGATE: Scene-Graph based co-Attention networks for TExt visual question answering
Published in Robotics (Q1, SCI indexed Journal): https://www.mdpi.com/2218-6581/12/4/114
null
10.3390/robotics12040114
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most TextVQA approaches focus on the integration of objects, scene texts and question words by a simple transformer encoder, but this fails to capture the semantic relations between different modalities. This paper proposes a Scene Graph based co-Attention Network (SceneGATE) for TextVQA, which reveals the semantic relations among the objects, Optical Character Recognition (OCR) tokens and the question words. It is achieved by a TextVQA-based scene graph that discovers the underlying semantics of an image. We created a guided-attention module to capture the intra-modal interplay between the language and the vision as a guidance for inter-modal interactions. To explicitly teach the relations between the two modalities, we proposed and integrated two attention modules, namely a scene graph-based semantic relation-aware attention and a positional relation-aware attention. We conducted extensive experiments on two benchmark datasets, Text-VQA and ST-VQA. It is shown that our SceneGATE method outperformed existing ones because of the scene graph and its attention modules.
[ { "version": "v1", "created": "Fri, 16 Dec 2022 05:10:09 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 05:22:40 GMT" }, { "version": "v3", "created": "Mon, 7 Aug 2023 08:32:54 GMT" } ]
2023-08-08T00:00:00
[ [ "Cao", "Feiqi", "" ], [ "Luo", "Siwen", "" ], [ "Nunez", "Felipe", "" ], [ "Wen", "Zean", "" ], [ "Poon", "Josiah", "" ], [ "Han", "Caren", "" ] ]
new_dataset
0.997935
2212.12294
Joo Chan Lee
Joo Chan Lee, Daniel Rho, Jong Hwan Ko, Eunbyung Park
FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos
Our project page including code is available at https://maincold2.github.io/ffnerv/
null
10.1145/3581783.3612444
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across the frames in videos inspired by the standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among the methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using the group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
[ { "version": "v1", "created": "Fri, 23 Dec 2022 12:51:42 GMT" }, { "version": "v2", "created": "Mon, 7 Aug 2023 01:21:19 GMT" } ]
2023-08-08T00:00:00
[ [ "Lee", "Joo Chan", "" ], [ "Rho", "Daniel", "" ], [ "Ko", "Jong Hwan", "" ], [ "Park", "Eunbyung", "" ] ]
new_dataset
0.990813
2301.04643
Hugo Sousa
Hugo Sousa, Al\'ipio Jorge, Ricardo Campos
tieval: An Evaluation Framework for Temporal Information Extraction Systems
10 pages
null
10.1145/3539618.3591892
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Temporal information extraction (TIE) has attracted a great deal of interest over the last two decades, leading to the development of a significant number of datasets. Despite its benefits, having access to a large volume of corpora makes it difficult to benchmark TIE systems. On the one hand, different datasets have different annotation schemes, thus hindering the comparison between competitors across different corpora. On the other hand, the fact that each corpus is commonly disseminated in a different format requires a considerable engineering effort for a researcher/practitioner to develop parsers for all of them. This constraint forces researchers to select a limited number of datasets to evaluate their systems, which consequently limits the comparability of the systems. Yet another obstacle that hinders the comparability of the TIE systems is the evaluation metric employed. While most research works adopt traditional metrics such as precision, recall, and $F_1$, a few others prefer temporal awareness -- a metric tailored to be more comprehensive on the evaluation of temporal systems. Although the reason for the absence of temporal awareness in the evaluation of most systems is not clear, one of the factors that certainly weighs on this decision is the necessity to implement the temporal closure algorithm in order to compute temporal awareness, which is neither straightforward to implement nor currently easily available. All in all, these problems have limited the fair comparison between approaches and consequently, the development of temporal extraction systems. To mitigate these problems, we have developed tieval, a Python library that provides a concise interface for importing different corpora and facilitates system evaluation. In this paper, we present the first public release of tieval and highlight its most relevant features.
[ { "version": "v1", "created": "Wed, 11 Jan 2023 18:55:22 GMT" }, { "version": "v2", "created": "Fri, 21 Apr 2023 15:24:09 GMT" } ]
2023-08-08T00:00:00
[ [ "Sousa", "Hugo", "" ], [ "Jorge", "Alípio", "" ], [ "Campos", "Ricardo", "" ] ]
new_dataset
0.981671
2301.06648
Zhongyang Zhang
Zhongyang Zhang, Kaidong Chai, Haowen Yu, Ramzi Majaj, Francesca Walsh, Edward Wang, Upal Mahbub, Hava Siegelmann, Donghyun Kim, Tauhidur Rahman
Neuromorphic High-Frequency 3D Dancing Pose Estimation in Dynamic Environment
null
Neurocomputing, Volume 547, 2023, 126388
10.1016/j.neucom.2023.126388
ISSN 0925-2312
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a beloved sport worldwide, dancing is getting integrated into traditional and virtual reality-based gaming platforms nowadays. It opens up new opportunities in the technology-mediated dancing space. These platforms primarily rely on passive and continuous human pose estimation as an input capture mechanism. Existing solutions are mainly based on RGB or RGB-Depth cameras for dance games. The former suffers in low-lighting conditions due to the motion blur and low sensitivity, while the latter is too power-hungry, has a low frame rate, and has limited working distance. With ultra-low latency, energy efficiency, and wide dynamic range characteristics, the event camera is a promising solution to overcome these shortcomings. We propose YeLan, an event camera-based 3-dimensional high-frequency human pose estimation (HPE) system that survives low-lighting conditions and dynamic backgrounds. We collected the world's first event camera dance dataset and developed a fully customizable motion-to-event physics-aware simulator. YeLan outperforms the baseline models in these challenging conditions and demonstrated robustness against different types of clothing, background motion, viewing angle, occlusion, and lighting fluctuations.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 00:55:12 GMT" }, { "version": "v2", "created": "Fri, 27 Jan 2023 05:02:29 GMT" } ]
2023-08-08T00:00:00
[ [ "Zhang", "Zhongyang", "" ], [ "Chai", "Kaidong", "" ], [ "Yu", "Haowen", "" ], [ "Majaj", "Ramzi", "" ], [ "Walsh", "Francesca", "" ], [ "Wang", "Edward", "" ], [ "Mahbub", "Upal", "" ], [ "Siegelmann", "Hava", "" ], [ "Kim", "Donghyun", "" ], [ "Rahman", "Tauhidur", "" ] ]
new_dataset
0.995891
2301.10880
Hans Hanley
Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric
A Golden Age: Conspiracy Theories' Relationship with Misinformation Outlets, News Media, and the Wider Internet
Accepted to CSCW 2023
null
null
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Do we live in a "Golden Age of Conspiracy Theories?" In the last few decades, conspiracy theories have proliferated on the Internet with some having dangerous real-world consequences. A large contingent of those who participated in the January 6th attack on the US Capitol fervently believed in the QAnon conspiracy theory. In this work, we study the relationships amongst five prominent conspiracy theories (QAnon, COVID, UFO/Aliens, 9/11, and Flat-Earth) and each of their respective relationships to the news media, both authentic news and misinformation. Identifying and publishing a set of 755 different conspiracy theory websites dedicated to our five conspiracy theories, we find that each set often hyperlinks to the same external domains, with COVID and QAnon conspiracy theory websites having the largest amount of shared connections. Examining the role of news media, we further find that not only do outlets known for spreading misinformation hyperlink to our set of conspiracy theory websites more often than authentic news websites but also that this hyperlinking increased dramatically between 2018 and 2021, with the advent of QAnon and the start of the COVID-19 pandemic. Using partial Granger-causality, we uncover several positive correlative relationships between the hyperlinks from misinformation websites and the popularity of conspiracy theory websites, suggesting the prominent role that misinformation news outlets play in popularizing many conspiracy theories.
[ { "version": "v1", "created": "Thu, 26 Jan 2023 00:20:02 GMT" }, { "version": "v2", "created": "Wed, 5 Apr 2023 20:50:21 GMT" }, { "version": "v3", "created": "Sun, 6 Aug 2023 00:56:21 GMT" } ]
2023-08-08T00:00:00
[ [ "Hanley", "Hans W. A.", "" ], [ "Kumar", "Deepak", "" ], [ "Durumeric", "Zakir", "" ] ]
new_dataset
0.993512
2302.13026
JinYuan Liu
Jinyuan Liu, Minglei Fu, Andong Liu, Wenan Zhang, and Bo Chen
A Homotopy Invariant Based on Convex Dissection Topology and a Distance Optimal Path Planning Algorithm
Please note that the letter version of this paper is currently under review by IEEE Robotics and Automation Letters (RA-L). In comparison to the letter version, this full version provides more rigorous proofs and reasoning for the CDT encoder, along with numerous practical theorems and corollaries. The complete paper consists of 17 pages, 14 figures, and 7 tables
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The concept of path homotopy has received wide attention in the field of path planning in recent years. In this article, a homotopy invariant based on convex dissection for a two-dimensional bounded Euclidean space is developed, which can efficiently encode all homotopy path classes between any two points. Thereafter, the optimal path planning task consists of two steps: (i) search for the homotopy path class that may contain the optimal path, and (ii) obtain the shortest homotopy path in this class. Furthermore, an optimal path planning algorithm called CDT-RRT* (Rapidly-exploring Random Tree Star based on Convex Division Topology) is proposed. We designed an efficient sampling formula for CDT-RRT*, which gives it a tendency to actively explore unknown homotopy classes, and incorporated the principles of the Elastic Band algorithm to obtain the shortest path in each class. Through a series of experiments, it was determined that the performance of the proposed algorithm is comparable with that of state-of-the-art path planning algorithms. Hence, the application significance of the developed homotopy invariant in the field of path planning was verified.
[ { "version": "v1", "created": "Sat, 25 Feb 2023 08:52:48 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 12:47:51 GMT" } ]
2023-08-08T00:00:00
[ [ "Liu", "Jinyuan", "" ], [ "Fu", "Minglei", "" ], [ "Liu", "Andong", "" ], [ "Zhang", "Wenan", "" ], [ "Chen", "Bo", "" ] ]
new_dataset
0.987273
2303.01711
Chathura Gamage
Chathura Gamage, Vimukthini Pinto, Cheng Xue, Peng Zhang, Ekaterina Nikonova, Matthew Stephenson, Jochen Renz
NovPhy: A Testbed for Physical Reasoning in Open-world Environments
Testbed website: https://github.com/phy-q/novphy
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the emergence of AI systems that interact with the physical environment, there is an increased interest in incorporating physical reasoning capabilities into those AI systems. But is it enough to only have physical reasoning capabilities to operate in a real physical environment? In the real world, we constantly face novel situations we have not encountered before. As humans, we are competent at successfully adapting to those situations. Similarly, an agent needs to have the ability to function under the impact of novelties in order to properly operate in an open-world physical environment. To facilitate the development of such AI systems, we propose a new testbed, NovPhy, that requires an agent to reason about physical scenarios in the presence of novelties and take actions accordingly. The testbed consists of tasks that require agents to detect and adapt to novelties in physical scenarios. To create tasks in the testbed, we develop eight novelties representing a diverse novelty space and apply them to five commonly encountered scenarios in a physical environment. According to our testbed design, we evaluate two capabilities of an agent: the performance on a novelty when it is applied to different physical scenarios and the performance on a physical scenario when different novelties are applied to it. We conduct a thorough evaluation with human players, learning agents, and heuristic agents. Our evaluation shows that humans' performance is far beyond the agents' performance. Some agents, even with good normal task performance, perform significantly worse when there is a novelty, and the agents that can adapt to novelties typically adapt slower than humans. We promote the development of intelligent agents capable of performing at the human level or above when operating in open-world physical environments. Testbed website: https://github.com/phy-q/novphy
[ { "version": "v1", "created": "Fri, 3 Mar 2023 04:59:03 GMT" }, { "version": "v2", "created": "Sat, 5 Aug 2023 12:47:07 GMT" } ]
2023-08-08T00:00:00
[ [ "Gamage", "Chathura", "" ], [ "Pinto", "Vimukthini", "" ], [ "Xue", "Cheng", "" ], [ "Zhang", "Peng", "" ], [ "Nikonova", "Ekaterina", "" ], [ "Stephenson", "Matthew", "" ], [ "Renz", "Jochen", "" ] ]
new_dataset
0.999442
2303.04320
Aniket Bera
Rashmi Bhaskara and Maurice Chiu and Aniket Bera
SG-LSTM: Social Group LSTM for Robot Navigation Through Dense Crowds
To appear in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
null
null
null
cs.RO cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing availability and affordability of personal robots, they will no longer be confined to large corporate warehouses or factories but will instead be expected to operate in less controlled environments alongside larger groups of people. In addition to ensuring safety and efficiency, it is crucial to minimize any negative psychological impact robots may have on humans and follow unwritten social norms in these situations. Our research aims to develop a model that can predict the movements of pedestrians and perceptually-social groups in crowded environments. We introduce a new Social Group Long Short-term Memory (SG-LSTM) model that models human groups and interactions in dense environments using a socially-aware LSTM to produce more accurate trajectory predictions. Our approach enables navigation algorithms to calculate collision-free paths faster and more accurately in crowded environments. We also release a large video dataset with labeled pedestrian groups for the broader social navigation community. We show comparisons with different metrics on different datasets (ETH, Hotel, MOT15) and different prediction approaches (LIN, LSTM, O-LSTM, S-LSTM) as well as runtime performance.
[ { "version": "v1", "created": "Wed, 8 Mar 2023 01:38:20 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 17:17:05 GMT" } ]
2023-08-08T00:00:00
[ [ "Bhaskara", "Rashmi", "" ], [ "Chiu", "Maurice", "" ], [ "Bera", "Aniket", "" ] ]
new_dataset
0.999444
2303.04322
Aniket Bera
Dipam Patel and Phu Pham and Aniket Bera
DroNeRF: Real-time Multi-agent Drone Pose Optimization for Computing Neural Radiance Fields
To appear in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
null
null
null
cs.RO cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel optimization algorithm called DroNeRF for the autonomous positioning of monocular camera drones around an object for real-time 3D reconstruction using only a few images. Neural Radiance Fields (NeRF) is a novel view synthesis technique used to generate new views of an object or scene from a set of input images. Using drones in conjunction with NeRF provides a unique and dynamic way to generate novel views of a scene, especially in settings that permit only restricted camera movement. Our approach focuses on calculating optimized poses for individual drones while depending solely on the object geometry, without using any external localization system. The unique camera positioning during the data-capturing phase significantly impacts the quality of the 3D model. To evaluate the quality of our generated novel views, we compute different perceptual metrics like the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Our work demonstrates the benefit of using an optimal placement of various drones with limited mobility to generate perceptually better results.
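The two metrics named in the abstract are standard and available in scikit-image; the sketch below evaluates a hypothetical rendered view against its ground-truth capture using synthetic arrays as stand-ins.

```python
# PSNR/SSIM evaluation as described; images here are random stand-ins.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(1)
ground_truth = rng.random((128, 128, 3))
rendered = np.clip(ground_truth + rng.normal(0, 0.05, ground_truth.shape), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, rendered, data_range=1.0)
ssim = structural_similarity(ground_truth, rendered, data_range=1.0, channel_axis=-1)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```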
[ { "version": "v1", "created": "Wed, 8 Mar 2023 01:46:19 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 17:20:41 GMT" } ]
2023-08-08T00:00:00
[ [ "Patel", "Dipam", "" ], [ "Pham", "Phu", "" ], [ "Bera", "Aniket", "" ] ]
new_dataset
0.996254
2303.07792
George Alexandropoulos
Ioannis Gavras, Md Atiqul Islam, Besma Smida, and George C. Alexandropoulos
Full Duplex Holographic MIMO for Near-Field Integrated Sensing and Communications
5 pages, 3 figures, EUSIPCO 2023
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents an in-band Full Duplex (FD) integrated sensing and communications system comprising a holographic Multiple-Input Multiple-Output (MIMO) base station, which is capable of simultaneously communicating with multiple users in the downlink direction while sensing targets randomly distributed within its coverage area. Considering near-field wireless operation at THz frequencies, the FD node adopts dynamic metasurface antenna panels for both transmission and reception, which consist of massive numbers of sub-wavelength-spaced metamaterials, enabling reduced-cost and low-power-consumption analog precoding and combining. We devise an optimization framework for the FD node's reconfigurable parameters with the dual objective of maximizing the targets' parameter estimation accuracy and the downlink communication performance. Our simulation results verify the integrated sensing and communications capability of the proposed FD holographic MIMO system, showcasing the interplays among its various design parameters.
[ { "version": "v1", "created": "Tue, 14 Mar 2023 11:06:49 GMT" }, { "version": "v2", "created": "Mon, 7 Aug 2023 09:27:56 GMT" } ]
2023-08-08T00:00:00
[ [ "Gavras", "Ioannis", "" ], [ "Islam", "Md Atiqul", "" ], [ "Smida", "Besma", "" ], [ "Alexandropoulos", "George C.", "" ] ]
new_dataset
0.99952
2304.00989
Yaojie Hu
Yaojie Hu, Jin Tian
Neuro-Symbolic Execution of Generic Source Code
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Can a Python program be executed statement-by-statement by neural networks composed according to the source code? We formulate the Neuro-Symbolic Execution Problem and introduce Neural Interpretation (NI), the first neural model for the execution of generic source code that allows missing definitions. NI preserves source code structure, where every variable has a vector encoding, and every function executes a neural network. NI is a novel neural model of computers with a compiler architecture that can assemble neural layers "programmed" by source code. NI is the first neural model capable of executing Py150 dataset programs, including library functions without concrete inputs, and it can be trained with flexible code understanding objectives. We demonstrate white-box execution without concrete inputs for variable misuse localization and repair.
[ { "version": "v1", "created": "Thu, 23 Mar 2023 17:56:45 GMT" }, { "version": "v2", "created": "Fri, 4 Aug 2023 18:15:05 GMT" } ]
2023-08-08T00:00:00
[ [ "Hu", "Yaojie", "" ], [ "Tian", "Jin", "" ] ]
new_dataset
0.998379
2304.13458
Rodothea Myrsini Tsoupidi
Rodothea Myrsini Tsoupidi, Elena Troubitsyna, Panagiotis Papadimitratos
Thwarting Code-Reuse and Side-Channel Attacks in Embedded Systems
null
null
null
null
cs.CR cs.PF
http://creativecommons.org/licenses/by/4.0/
Embedded devices are increasingly present in our everyday life. They often process critical information, and hence, rely on cryptographic protocols to achieve security. However, embedded devices remain vulnerable to attackers seeking to hijack their operation and extract sensitive information by exploiting side channels and code reuse. Code-Reuse Attacks (CRAs) can steer the execution of a program to malicious outcomes, altering existing on-board code without direct access to the device memory. Moreover, Side-Channel Attacks (SCAs) may reveal secret information to the attacker based on mere observation of the device. Thwarting CRAs and SCAs against embedded devices is challenging because embedded devices are often resource constrained. Fine-grained code diversification hinders CRAs by introducing uncertainty to the binary code; while software mechanisms can thwart timing or power SCAs. The resilience to either attack may come at the price of the overall efficiency. Moreover, a unified approach that preserves these mitigations against both CRAs and SCAs is not available. In this paper, we propose a novel Secure Diversity by Construction (SecDivCon) approach that tackles this challenge. SecDivCon is a combinatorial compiler-based approach that combines software diversification against CRAs with software mitigations against SCAs. SecDivCon restricts the performance overhead introduced by the generated code that thwarts the attacks and hence, offers a secure-by-design approach enabling control over the performance-security trade-off. Our experiments, using 16 benchmark programs, show that SCA-aware diversification is effective against CRAs, while preserving SCA mitigation properties at a low, controllable overhead. Given the combinatorial nature of our approach, SecDivCon is suitable for small, performance-critical functions that are sensitive to SCAs.
[ { "version": "v1", "created": "Wed, 26 Apr 2023 11:31:45 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 07:03:49 GMT" }, { "version": "v3", "created": "Mon, 7 Aug 2023 08:08:09 GMT" } ]
2023-08-08T00:00:00
[ [ "Tsoupidi", "Rodothea Myrsini", "" ], [ "Troubitsyna", "Elena", "" ], [ "Papadimitratos", "Panagiotis", "" ] ]
new_dataset
0.999643
2305.05880
Aozhu Chen
Aozhu Chen, Ziyuan Wang, Chengbo Dong, Kaibin Tian, Ruixiang Zhao, Xun Liang, Zhanhui Kang, Xirong Li
ChinaOpen: A Dataset for Open-world Multimodal Learning
Accepted by ACMMM 2023
null
10.1145/3581783.3612156
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces ChinaOpen, a dataset sourced from Bilibili, a popular Chinese video-sharing website, for open-world multimodal learning. While the state-of-the-art multimodal learning networks have shown impressive performance in automated video annotation and cross-modal video retrieval, their training and evaluation are primarily conducted on YouTube videos with English text. Their effectiveness on Chinese data remains to be verified. In order to support multimodal learning in the new context, we construct ChinaOpen-50k, a webly annotated training set of 50k Bilibili videos associated with user-generated titles and tags. Both text-based and content-based data cleaning are performed to remove low-quality videos in advance. For a multi-faceted evaluation, we build ChinaOpen-1k, a manually labeled test set of 1k videos. Each test video is accompanied with a manually checked user title and a manually written caption. Besides, each video is manually tagged to describe objects / actions / scenes shown in the visual content. The original user tags are also manually checked. Moreover, with all the Chinese text translated into English, ChinaOpen-1k is also suited for evaluating models trained on English data. In addition to ChinaOpen, we propose Generative Video-to-text Transformer (GVT) for Chinese video captioning. We conduct an extensive evaluation of the state-of-the-art single-task / multi-task models on the new dataset, resulting in a number of novel findings and insights.
[ { "version": "v1", "created": "Wed, 10 May 2023 04:00:54 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 10:43:25 GMT" } ]
2023-08-08T00:00:00
[ [ "Chen", "Aozhu", "" ], [ "Wang", "Ziyuan", "" ], [ "Dong", "Chengbo", "" ], [ "Tian", "Kaibin", "" ], [ "Zhao", "Ruixiang", "" ], [ "Liang", "Xun", "" ], [ "Kang", "Zhanhui", "" ], [ "Li", "Xirong", "" ] ]
new_dataset
0.999835
2306.06505
Catherine Ordun
Catherine Ordun, Edward Raff, Sanjay Purushotham
Vista-Morph: Unsupervised Image Registration of Visible-Thermal Facial Pairs
null
2023, 7th IEEE International Joint Conference on Biometrics (IJCB)
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a variety of biometric cross-spectral tasks, Visible-Thermal (VT) facial pairs are used. However, due to a lack of calibration in the lab, photographic capture between two different sensors leads to severely misaligned pairs that can cause poor results for person re-identification and generative AI. To solve this problem, we introduce our approach for VT image registration called Vista Morph. Unlike existing VT facial registration that requires manual, hand-crafted features for pixel matching and/or a supervised thermal reference, Vista Morph is completely unsupervised, without the need for a reference. By learning the affine matrix through a Vision Transformer (ViT)-based Spatial Transformer Network (STN) and Generative Adversarial Networks (GAN), Vista Morph successfully aligns facial and non-facial VT images. Our approach learns warps in Hard, No, and Low-light visual settings and is robust to geometric perturbations and erasure at test time. We conduct a downstream generative AI task to show that registering training data with Vista Morph improves subject identity of generated thermal faces when performing V2T image translation.
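The differentiable warp at the heart of an STN reduces to two PyTorch primitives. The sketch below applies an arbitrary 2x3 affine matrix (in Vista Morph this would be the learned output, not the hand-picked value shown) to resample a thermal image into the visible frame.

```python
# STN-style affine resampling; theta is an arbitrary example, not a learned value.
import torch
import torch.nn.functional as F

thermal = torch.rand(1, 1, 112, 112)               # B x C x H x W
theta = torch.tensor([[[1.00, 0.05,  0.02],        # small shear + shift around identity
                       [-0.05, 1.00, -0.03]]])

grid = F.affine_grid(theta, thermal.size(), align_corners=False)
aligned = F.grid_sample(thermal, grid, align_corners=False)
print(aligned.shape)  # torch.Size([1, 1, 112, 112])
```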
[ { "version": "v1", "created": "Sat, 10 Jun 2023 18:42:36 GMT" } ]
2023-08-08T00:00:00
[ [ "Ordun", "Catherine", "" ], [ "Raff", "Edward", "" ], [ "Purushotham", "Sanjay", "" ] ]
new_dataset
0.999115
2307.11315
Kathleen M Lewis
Kathleen M. Lewis and Emily Mu and Adrian V. Dalca and John Guttag
GIST: Generating Image-Specific Text for Fine-grained Object Classification
The first two authors contributed equally to this work and are listed in alphabetical order
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent vision-language models outperform vision-only models on many image classification tasks. However, because of the absence of paired text/image descriptions, it remains difficult to fine-tune these models for fine-grained image classification. In this work, we propose a method, GIST, for generating image-specific fine-grained text descriptions from image-only datasets, and show that these text descriptions can be used to improve classification. Key parts of our method include 1. prompting a pretrained large language model with domain-specific prompts to generate diverse fine-grained text descriptions for each class and 2. using a pretrained vision-language model to match each image to label-preserving text descriptions that capture relevant visual features in the image. We demonstrate the utility of GIST by fine-tuning vision-language models on the image-and-generated-text pairs to learn an aligned vision-language representation space for improved classification. We evaluate our learned representation space in full-shot and few-shot scenarios across four diverse fine-grained classification datasets, each from a different domain. Our method achieves an average improvement of $4.1\%$ in accuracy over CLIP linear probes and an average of $1.1\%$ improvement in accuracy over the previous state-of-the-art image-text classification method on the full-shot datasets. Our method achieves similar improvements across few-shot regimes. Code is available at https://github.com/emu1729/GIST.
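The matching step of the method (pairing each image with label-preserving descriptions) can be illustrated with an off-the-shelf CLIP model; the descriptions below are invented examples, and the blank image stands in for a real photo.

```python
# Sketch of image-to-description matching with a pretrained vision-language model.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))               # placeholder for a real image
descriptions = ["a sparrow with a streaked brown back",
                "a finch with a bright red crown"]

inputs = processor(text=descriptions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(probs)  # matching probability per description
```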
[ { "version": "v1", "created": "Fri, 21 Jul 2023 02:47:18 GMT" }, { "version": "v2", "created": "Fri, 4 Aug 2023 19:36:31 GMT" } ]
2023-08-08T00:00:00
[ [ "Lewis", "Kathleen M.", "" ], [ "Mu", "Emily", "" ], [ "Dalca", "Adrian V.", "" ], [ "Guttag", "John", "" ] ]
new_dataset
0.999741
2307.13294
You Jiang
Junbin Fang, Canjian Jiang, You Jiang, Puxi Lin, Zhaojie Chen, Yujing Sun, Siu-Ming Yiu, Zoe L. Jiang
Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although face recognition is starting to play an important role in our daily life, it must be noted that data-driven face recognition systems are vulnerable to adversarial attacks. However, the current two categories of adversarial attacks, namely digital attacks and physical attacks, both have drawbacks, with the former being impractical and the latter conspicuous, computationally expensive, and hard to execute. To address these issues, we propose a practical, executable, inconspicuous, and computationally cheap adversarial attack based on LED illumination modulation. To fool the systems, the proposed attack generates luminance changes imperceptible to human eyes through fast intensity modulation of scene LED illumination and uses the rolling shutter effect of CMOS image sensors in face recognition systems to implant luminance perturbations into the captured face images. In summary, we present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification. We also evaluate their effectiveness against well-known face detection models, Dlib, MTCNN, and RetinaFace, and face verification models, Dlib, FaceNet, and ArcFace. The extensive experiments show that the success rates of DoS attacks against face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
[ { "version": "v1", "created": "Tue, 25 Jul 2023 07:20:21 GMT" }, { "version": "v2", "created": "Mon, 7 Aug 2023 08:12:57 GMT" } ]
2023-08-08T00:00:00
[ [ "Fang", "Junbin", "" ], [ "Jiang", "Canjian", "" ], [ "Jiang", "You", "" ], [ "Lin", "Puxi", "" ], [ "Chen", "Zhaojie", "" ], [ "Sun", "Yujing", "" ], [ "Yiu", "Siu-Ming", "" ], [ "Jiang", "Zoe L.", "" ] ]
new_dataset
0.989555
2308.00628
Wenzhao Zheng
Bohao Fan, Siqi Wang, Wenxuan Guo, Wenzhao Zheng, Jianjiang Feng, Jie Zhou
Human-M3: A Multi-view Multi-modal Dataset for 3D Human Pose Estimation in Outdoor Scenes
Code and data will be released on https://github.com/soullessrobot/Human-M3-Dataset
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D human pose estimation in outdoor environments has garnered increasing attention recently. However, prevalent 3D human pose datasets pertaining to outdoor scenes lack diversity, as they predominantly utilize only one type of modality (RGB image or pointcloud), and often feature only one individual within each scene. This limited scope of dataset infrastructure considerably hinders the variability of available data. In this article, we propose Human-M3, an outdoor multi-modal multi-view multi-person human pose database which includes not only multi-view RGB videos of outdoor scenes but also corresponding pointclouds. In order to obtain accurate human poses, we propose an algorithm based on multi-modal data input to generate ground truth annotations. This benefits from robust pointcloud detection and tracking, which solves the problem of inaccurate human localization and matching ambiguity that may exist in previous multi-view RGB videos in outdoor multi-person scenes, and generates reliable ground truth annotations. Evaluation of multiple algorithms operating on different modalities has shown that this database is challenging and suitable for future research. Furthermore, we propose a 3D human pose estimation algorithm based on multi-modal data input, which demonstrates the advantages of multi-modal data input for 3D human pose estimation. Code and data will be released on https://github.com/soullessrobot/Human-M3-Dataset.
[ { "version": "v1", "created": "Tue, 1 Aug 2023 15:55:41 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 14:47:00 GMT" } ]
2023-08-08T00:00:00
[ [ "Fan", "Bohao", "" ], [ "Wang", "Siqi", "" ], [ "Guo", "Wenxuan", "" ], [ "Zheng", "Wenzhao", "" ], [ "Feng", "Jianjiang", "" ], [ "Zhou", "Jie", "" ] ]
new_dataset
0.999886
2308.01390
Anas Awadalla
Anas Awadalla and Irena Gao and Josh Gardner and Jack Hessel and Yusuf Hanafy and Wanrong Zhu and Kalyani Marathe and Yonatan Bitton and Samir Gadre and Shiori Sagawa and Jenia Jitsev and Simon Kornblith and Pang Wei Koh and Gabriel Ilharco and Mitchell Wortsman and Ludwig Schmidt
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
[ { "version": "v1", "created": "Wed, 2 Aug 2023 19:10:23 GMT" }, { "version": "v2", "created": "Mon, 7 Aug 2023 17:53:09 GMT" } ]
2023-08-08T00:00:00
[ [ "Awadalla", "Anas", "" ], [ "Gao", "Irena", "" ], [ "Gardner", "Josh", "" ], [ "Hessel", "Jack", "" ], [ "Hanafy", "Yusuf", "" ], [ "Zhu", "Wanrong", "" ], [ "Marathe", "Kalyani", "" ], [ "Bitton", "Yonatan", "" ], [ "Gadre", "Samir", "" ], [ "Sagawa", "Shiori", "" ], [ "Jitsev", "Jenia", "" ], [ "Kornblith", "Simon", "" ], [ "Koh", "Pang Wei", "" ], [ "Ilharco", "Gabriel", "" ], [ "Wortsman", "Mitchell", "" ], [ "Schmidt", "Ludwig", "" ] ]
new_dataset
0.977283
2308.02524
AICHA SEKHARI
Paweena Suebsombut (DISP, CMU), Pradorn Sureephong (CMU), Aicha Sekhari (DISP), Suepphong Chernbumroong (CMU), Abdelaziz Bouras
Chatbot Application to Support Smart Agriculture in Thailand
null
2022 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT and NCON), Chiang Rai university, Jan 2022, Chiang Rai, Thailand. pp.364-367
10.1109/ectidamtncon53731.2022.9720318
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A chatbot is a software application developed to help reply to text or voice conversations automatically and quickly in real time. In the agriculture sector, existing smart agriculture systems just use data from sensing and internet of things (IoT) technologies and exclude the crop cultivation knowledge that could support decision-making by farmers. To enhance this, a chatbot application can be an assistant to farmers by providing crop cultivation knowledge. Consequently, we propose the LINE chatbot application as an information and knowledge representation providing crop cultivation recommendations to farmers. It works with smart agriculture and recommendation systems. Our proposed LINE chatbot application consists of five main functions (start/stop menu, main page, drip irrigation page, mist irrigation page, and monitor page). Farmers will receive information for data monitoring to support their decision-making. Moreover, they can control the irrigation system via the LINE chatbot. Furthermore, farmers can ask questions relevant to the crop environment via a chat box. After implementing our proposed chatbot, farmers are very satisfied with the application, giving it a 96% satisfaction score. However, in terms of asking questions via the chat box, this LINE chatbot application is a rule-based bot, or script bot. Farmers have to type in the correct keywords as prescribed, otherwise they won't get a response from the chatbot. In the future, we will enhance the asking function of our LINE chatbot to be an intelligent bot.
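The "rule-based bot" behavior the abstract describes amounts to keyword lookup. A minimal sketch with invented keywords and replies, not the deployed system's actual rules:

```python
# Toy keyword-matching reply function; rules are illustrative only.
RULES = {
    "drip": "Drip irrigation is ON for zone 1. Type 'stop drip' to turn it off.",
    "mist": "Mist irrigation runs every 2 hours for 5 minutes.",
    "monitor": "Soil moisture: 41%, temperature: 29C, humidity: 68%.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Try one of: " + ", ".join(RULES)

print(reply("Please start the drip system"))
```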
[ { "version": "v1", "created": "Mon, 31 Jul 2023 11:42:44 GMT" } ]
2023-08-08T00:00:00
[ [ "Suebsombut", "Paweena", "", "DISP, CMU" ], [ "Sureephong", "Pradorn", "", "CMU" ], [ "Sekhari", "Aicha", "", "DISP" ], [ "Chernbumroong", "Suepphong", "", "CMU" ], [ "Bouras", "Abdelaziz", "" ] ]
new_dataset
0.991291
2308.02594
Amirhossein Zolfagharian
Amirhossein Zolfagharian, Manel Abdellatif, Lionel C. Briand, and Ramesh S
SMARLA: A Safety Monitoring Approach for Deep Reinforcement Learning Agents
null
null
null
null
cs.LG cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep reinforcement learning (DRL) algorithms are increasingly being used in safety-critical systems. Ensuring the safety of DRL agents is a critical concern in such contexts. However, relying solely on testing is not sufficient to ensure safety, as it does not offer guarantees. Building safety monitors is one solution to alleviate this challenge. This paper proposes SMARLA, a machine learning-based safety monitoring approach designed for DRL agents. For practical reasons, SMARLA is designed to be black-box (as it does not require access to the internals of the agent) and leverages state abstraction to reduce the state space and thus facilitate the learning of safety violation prediction models from the agent's states. We validated SMARLA on two well-known RL case studies. Empirical analysis reveals that SMARLA achieves accurate violation prediction with a low false positive rate, and can predict safety violations at an early stage, approximately halfway through the agent's execution, before violations occur.
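The core learning step, predicting violations from abstract-state features, can be sketched with any off-the-shelf classifier. Everything below (count-based features, toy labels, the random forest) is an assumption for illustration; SMARLA's actual abstraction and model are described in the paper.

```python
# Toy violation predictor over abstract-state visit counts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_episodes, n_abstract_states = 500, 40
X = rng.poisson(1.0, (n_episodes, n_abstract_states))   # visits per abstract state
y = (X[:, :5].sum(axis=1) > 7).astype(int)              # toy "unsafe region" label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```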
[ { "version": "v1", "created": "Thu, 3 Aug 2023 21:08:51 GMT" } ]
2023-08-08T00:00:00
[ [ "Zolfagharian", "Amirhossein", "" ], [ "Abdellatif", "Manel", "" ], [ "Briand", "Lionel C.", "" ], [ "S", "Ramesh", "" ] ]
new_dataset
0.980006
2308.02618
Saipraneeth Devunuri
Saipraneeth Devunuri, Shirin Qiam, Lewis Lehe
ChatGPT for GTFS: From Words to Information
18 pages, 7 figures, 1 table, Transportation Research Board
null
null
null
cs.IR cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
The General Transit Feed Specification (GTFS) standard for publishing transit data is ubiquitous. GTFS being tabular data, with information spread across different files, necessitates specialized tools or packages to retrieve information. Concurrently, the use of Large Language Models for text and information retrieval is growing. The idea of this research is to see if the current widely adopted LLMs (ChatGPT) are able to retrieve information from GTFS using natural language instructions. We first test whether ChatGPT (GPT-3.5) understands the GTFS specification. GPT-3.5 answers 77% of our multiple-choice questions (MCQ) correctly. Next, we task the LLM with information extractions from a filtered GTFS feed with 4 routes. For information retrieval, we compare zero-shot and program synthesis. Program synthesis works better, achieving ~90% accuracy on simple questions and ~40% accuracy on complex questions.
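The programs the LLM synthesizes are essentially pandas queries over GTFS text files. A hedged sketch of one such program, with the file paths and route name as placeholder assumptions:

```python
# Example of the kind of program synthesized for a GTFS question.
import pandas as pd

routes = pd.read_csv("gtfs/routes.txt")
trips = pd.read_csv("gtfs/trips.txt")
stop_times = pd.read_csv("gtfs/stop_times.txt")

# "How many trips serve route '5', and how many stops does its longest trip have?"
route_id = routes.loc[routes["route_short_name"].astype(str) == "5", "route_id"].iloc[0]
route_trips = trips[trips["route_id"] == route_id]
stops_per_trip = (stop_times[stop_times["trip_id"].isin(route_trips["trip_id"])]
                  .groupby("trip_id").size())
print(len(route_trips), "trips; longest trip has", stops_per_trip.max(), "stops")
```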
[ { "version": "v1", "created": "Fri, 4 Aug 2023 14:50:37 GMT" } ]
2023-08-08T00:00:00
[ [ "Devunuri", "Saipraneeth", "" ], [ "Qiam", "Shirin", "" ], [ "Lehe", "Lewis", "" ] ]
new_dataset
0.998184
2308.02640
Ahmed Sabbah
Ahmed Sabbah, Mohammed Kharma, Mustafa Jarrar
Creating Android Malware Knowledge Graph Based on a Malware Ontology
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
As mobile and smart connectivity continue to grow, malware presents a permanently evolving threat to different types of critical domains such as health, logistics, banking, and community segments. Different types of malware have dynamic behaviors and complicated characteristics that are shared among members of the same malware family. Malware threat intelligence reports play a crucial role in describing and documenting detected malware, providing a wealth of information regarding its attributes, patterns, and behaviors. There is a large amount of threat intelligence information regarding malware. An ontology allows the systematic organization and categorization of this information to ensure consistency in representing concepts and entities across various sources. In this study, we reviewed and extended an existing malware ontology to cover Android malware. Our extended ontology is called AndMalOnt. It consists of 13 new classes, 16 object properties, and 31 data properties. Second, we created an Android malware knowledge graph by extracting reports from the MalwareBazaar repository and representing them in AndMalOnt. This involved generating a knowledge graph that encompasses over 2600 malware samples. Our ontology, knowledge graph, and source code are all open-source and accessible via GitHub.
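Populating such a knowledge graph comes down to emitting RDF triples per malware report. A minimal rdflib sketch; the namespace IRI and the class/property names are illustrative stand-ins, not the actual AndMalOnt vocabulary.

```python
# Toy knowledge-graph construction for one malware record.
from rdflib import Graph, Literal, Namespace, RDF

MAL = Namespace("http://example.org/andmalont#")   # hypothetical ontology IRI
g = Graph()
g.bind("mal", MAL)

sample = MAL["sample_9f2ab1"]                      # invented sample identifier
g.add((sample, RDF.type, MAL.AndroidMalware))
g.add((sample, MAL.family, Literal("FluBot")))
g.add((sample, MAL.requestsPermission, MAL["READ_SMS"]))

print(g.serialize(format="turtle"))
```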
[ { "version": "v1", "created": "Fri, 4 Aug 2023 18:00:44 GMT" } ]
2023-08-08T00:00:00
[ [ "Sabbah", "Ahmed", "" ], [ "Kharma", "Mohammed", "" ], [ "Jarrar", "Mustafa", "" ] ]
new_dataset
0.998873
2308.02666
Justin Stevens
Justin Stevens, Vadim Bulitko, David Thue
Solving Witness-type Triangle Puzzles Faster with an Automatically Learned Human-Explainable Predicate
10 pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Automatically solving puzzle instances in the game The Witness can guide players toward solutions and help puzzle designers generate better puzzles. In the latter case, such an Artificial Intelligence puzzle solver can inform a human puzzle designer and procedural puzzle generator to produce better instances. The puzzles, however, are combinatorially difficult, and search-based solvers can require large amounts of time and memory. We accelerate such search by automatically learning a human-explainable predicate that predicts whether a partial path in a Witness-type puzzle is not completable to a solution path. We prove a key property of the learned predicate which allows us to use it for pruning successor states in search, thereby accelerating search by an average of six times while maintaining completeness of the underlying search. Conversely, given a fixed search time budget per puzzle, our predicate-accelerated search can solve more puzzle instances of larger sizes than the baseline search.
[ { "version": "v1", "created": "Fri, 4 Aug 2023 18:52:18 GMT" } ]
2023-08-08T00:00:00
[ [ "Stevens", "Justin", "" ], [ "Bulitko", "Vadim", "" ], [ "Thue", "David", "" ] ]
new_dataset
0.9977
2308.02670
Weihan Wang
Weihan Wang, Jiani Li, Yuhang Ming, Philippos Mordohai
EDI: ESKF-based Disjoint Initialization for Visual-Inertial SLAM Systems
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual-inertial initialization can be classified into joint and disjoint approaches. Joint approaches tackle both the visual and the inertial parameters together by aligning observations from feature-bearing points based on IMU integration then use a closed-form solution with visual and acceleration observations to find initial velocity and gravity. In contrast, disjoint approaches independently solve the Structure from Motion (SFM) problem and determine inertial parameters from up-to-scale camera poses obtained from pure monocular SLAM. However, previous disjoint methods have limitations, like assuming negligible acceleration bias impact or accurate rotation estimation by pure monocular SLAM. To address these issues, we propose EDI, a novel approach for fast, accurate, and robust visual-inertial initialization. Our method incorporates an Error-state Kalman Filter (ESKF) to estimate gyroscope bias and correct rotation estimates from monocular SLAM, overcoming dependence on pure monocular SLAM for rotation estimation. To estimate the scale factor without prior information, we offer a closed-form solution for initial velocity, scale, gravity, and acceleration bias estimation. To address gravity and acceleration bias coupling, we introduce weights in the linear least-squares equations, ensuring acceleration bias observability and handling outliers. Extensive evaluation on the EuRoC dataset shows that our method achieves an average scale error of 5.8% in less than 3 seconds, outperforming other state-of-the-art disjoint visual-inertial initialization approaches, even in challenging environments and with artificial noise corruption.
[ { "version": "v1", "created": "Fri, 4 Aug 2023 19:06:58 GMT" } ]
2023-08-08T00:00:00
[ [ "Wang", "Weihan", "" ], [ "Li", "Jiani", "" ], [ "Ming", "Yuhang", "" ], [ "Mordohai", "Philippos", "" ] ]
new_dataset
0.979763
2308.02696
Mohammad Soleymani
Mohammad Soleymani, Ignacio Santamaria, and Eduard Jorswieck
NOMA-based Improper Signaling for MIMO STAR-RIS-assisted Broadcast Channels with Hardware Impairments
IEEE GLOBECOM 2023
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes schemes to improve the spectral efficiency of a multiple-input multiple-output (MIMO) broadcast channel (BC) with I/Q imbalance (IQI) at transceivers by employing a combination of improper Gaussian signaling (IGS), non-orthogonal multiple access (NOMA) and simultaneously transmit and reflect (STAR) reconfigurable intelligent surface (RIS). When there exists IQI, the output RF signal is a widely linear transformation of the input signal, which may make the output signal improper. To compensate for IQI, we employ IGS, thus generating a transmit improper signal. We show that IGS alongside with NOMA can highly increase the minimum rate of the users. Moreover, we propose schemes for different operational modes of STAR-RIS and show that STAR-RIS can significantly improve the system performance. Additionally, we show that IQI can highly degrade the performance especially if it is overlooked in the design.
[ { "version": "v1", "created": "Fri, 4 Aug 2023 20:21:17 GMT" } ]
2023-08-08T00:00:00
[ [ "Soleymani", "Mohammad", "" ], [ "Santamaria", "Ignacio", "" ], [ "Jorswieck", "Eduard", "" ] ]
new_dataset
0.951305
2308.02752
Dmitry Baranchuk
Dmitry Baranchuk, Matthijs Douze, Yash Upadhyay, I. Zeki Yalniz
DeDrift: Robust Similarity Search under Content Drift
ICCV2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The statistical distribution of content uploaded and searched on media sharing sites changes over time due to seasonal, sociological and technical factors. We investigate the impact of this "content drift" for large-scale similarity search tools, based on nearest neighbor search in embedding space. Unless a costly index reconstruction is performed frequently, content drift degrades the search accuracy and efficiency. The degradation is especially severe since, in general, both the query and database distributions change. We introduce and analyze real-world image and video datasets for which temporal information is available over a long time period. Based on the learnings, we devise DeDrift, a method that updates embedding quantizers to continuously adapt large-scale indexing structures on-the-fly. DeDrift almost eliminates the accuracy degradation due to the query and database content drift while being up to 100x faster than a full index reconstruction.
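The adaptation idea, updating quantizer centroids from fresh traffic instead of rebuilding the index, reduces to a one-step centroid re-estimation. A toy numpy sketch under an assumed drift model (the actual method operates inside a large-scale ANN index):

```python
# Re-estimate coarse centroids from recently observed vectors.
import numpy as np

def dedrift_centroids(centroids, recent_vectors):
    d2 = ((recent_vectors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                    # nearest centroid per fresh vector
    updated = centroids.copy()
    for k in range(len(centroids)):
        members = recent_vectors[assign == k]
        if len(members):                          # move centroid to its members' mean
            updated[k] = members.mean(axis=0)
    return updated

rng = np.random.default_rng(0)
C = rng.normal(size=(16, 32))                     # 16 coarse centroids, dim 32
fresh = rng.normal(loc=0.3, size=(1000, 32))      # drifted recent embeddings
print(np.linalg.norm(dedrift_centroids(C, fresh) - C, axis=1).mean())
```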
[ { "version": "v1", "created": "Sat, 5 Aug 2023 00:12:39 GMT" } ]
2023-08-08T00:00:00
[ [ "Baranchuk", "Dmitry", "" ], [ "Douze", "Matthijs", "" ], [ "Upadhyay", "Yash", "" ], [ "Yalniz", "I. Zeki", "" ] ]
new_dataset
0.999019
2308.02764
Md Naimul Hoque
Md Naimul Hoque and Niklas Elmqvist
Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query Sculpting
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
We present aggregate query sculpting (AQS), a faceted visual query technique for large-scale multidimensional data. As a "born scalable" query technique, AQS starts visualization with a single visual mark representing an aggregation of the entire dataset. The user can then progressively explore the dataset through a sequence of operations abbreviated as P6: pivot (facet an aggregate based on an attribute), partition (lay out a facet in space), peek (see inside a subset using an aggregate visual representation), pile (merge two or more subsets), project (extracting a subset into a new substrate), and prune (discard an aggregate not currently of interest). We validate AQS with Dataopsy, a prototype implementation of AQS that has been designed for fluid interaction on desktop and touch-based mobile devices. We demonstrate AQS and Dataopsy using two case studies and three application examples.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 01:51:22 GMT" } ]
2023-08-08T00:00:00
[ [ "Hoque", "Md Naimul", "" ], [ "Elmqvist", "Niklas", "" ] ]
new_dataset
0.967238
2308.02767
Alex James Dr
Rajalekshmi TR, Rinku Rani Das, Chithra R, Alex James
Graphene-based RRAM devices for neural computing
Last revision - 04 Jul 2023
null
null
null
cs.ET
http://creativecommons.org/licenses/by-nc-nd/4.0/
Resistive random access memory (RRAM) is very well known for its potential application in in-memory and neural computing. However, these devices often exhibit different types of device-to-device and cycle-to-cycle variability, which makes it harder to build highly accurate crossbar arrays. Traditional RRAM designs make use of various filament-based oxide materials for creating a channel, which is sandwiched between two electrodes to form a two-terminal structure. They are often subjected to mechanical and electrical stress over repeated read-and-write cycles. The behavior of these devices often varies in practice across wafer arrays under such stress once fabricated. The use of emerging 2D materials is explored to improve electrical endurance, long retention time, high switching speed, and lower power losses. This study provides an in-depth exploration of neuro-memristive computing and its potential applications, focusing specifically on the utilization of graphene and 2D materials in resistive random-access memory (RRAM) for neural computing. The paper presents a comprehensive analysis of the structural and design aspects of graphene-based RRAM, along with a thorough examination of commercially available RRAM models and their fabrication techniques. Furthermore, the study investigates the diverse range of applications that can benefit from graphene-based RRAM devices.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 02:10:33 GMT" } ]
2023-08-08T00:00:00
[ [ "TR", "Rajalekshmi", "" ], [ "Das", "Rinku Rani", "" ], [ "R", "Chithra", "" ], [ "James", "Alex", "" ] ]
new_dataset
0.999796
2308.02768
Yuhui Hao
Yuhui Hao and Bo Yu and Qiang Liu and Shao-Shan Liu
FGLQR: Factor Graph Accelerator of LQR Control for Autonomous Machines
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A factor graph represents the factorization of a probability distribution function and serves as an effective abstraction in various autonomous machine computing tasks. Control is one of the core applications in autonomous machine computing stacks. Among all control algorithms, the Linear Quadratic Regulator (LQR) offers one of the best trade-offs between efficiency and accuracy. However, due to the inherent iterative process and extensive computation, it is challenging for autonomous systems with real-time limits and energy constraints. In this paper, we present FGLQR, an accelerator of LQR control for autonomous machines using the abstraction of a factor graph. By transforming the dynamic equation constraints into least squares constraints, the factor graph solving process is more hardware friendly and accelerated with almost no loss in accuracy. With a domain specific parallel solving pattern, FGLQR achieves a 10.2x speed up and 32.9x energy reduction compared to a software implementation on an advanced Intel CPU.
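As a reference for what such an accelerator must reproduce, the classic discrete-time LQR gain follows from the algebraic Riccati equation, which SciPy solves directly. The double-integrator matrices below are toy values, not a benchmark from the paper.

```python
# Baseline discrete-time LQR solution via the algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                 # state cost
R = np.array([[0.1]])                    # control cost

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
print("u = -K x, with K =", K)
```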
[ { "version": "v1", "created": "Sat, 5 Aug 2023 02:19:04 GMT" } ]
2023-08-08T00:00:00
[ [ "Hao", "Yuhui", "" ], [ "Yu", "Bo", "" ], [ "Liu", "Qiang", "" ], [ "Liu", "Shao-Shan", "" ] ]
new_dataset
0.99746
2308.02773
Jie Zhou
Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
EduChat (https://www.educhat.top/) is a large-scale language model (LLM)-based chatbot system in the education domain. Its goal is to support personalized, fair, and compassionate intelligent education, serving teachers, students, and parents. Guided by theories from psychology and education, it further strengthens educational functions such as open question answering, essay assessment, Socratic teaching, and emotional support based on the existing basic LLMs. Particularly, we learn domain-specific knowledge by pre-training on the educational corpus and stimulate various skills with tool use by fine-tuning on designed system prompts and instructions. Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms (e.g., GitHub https://github.com/icalk-nlp/EduChat, Hugging Face https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its capabilities online (https://vimeo.com/851004454). This initiative aims to promote research and applications of LLMs for intelligent education.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 02:55:35 GMT" } ]
2023-08-08T00:00:00
[ [ "Dan", "Yuhao", "" ], [ "Lei", "Zhikai", "" ], [ "Gu", "Yiyang", "" ], [ "Li", "Yong", "" ], [ "Yin", "Jianghao", "" ], [ "Lin", "Jiaju", "" ], [ "Ye", "Linhao", "" ], [ "Tie", "Zhiyan", "" ], [ "Zhou", "Yougen", "" ], [ "Wang", "Yilei", "" ], [ "Zhou", "Aimin", "" ], [ "Zhou", "Ze", "" ], [ "Chen", "Qin", "" ], [ "Zhou", "Jie", "" ], [ "He", "Liang", "" ], [ "Qiu", "Xipeng", "" ] ]
new_dataset
0.995975
2308.02792
Sudipta Paria
Sudipta Paria and Swarup Bhunia
DiSPEL: Distributed Security Policy Enforcement for Bus-based SoC
14 Pages, 9 Figures
null
null
null
cs.CR cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current zero trust model adopted in System-on-Chip (SoC) design is vulnerable to various malicious entities, and modern SoC designs must incorporate various security policies to protect sensitive assets from unauthorized access. These policies involve complex interactions between multiple IP blocks, which poses challenges for SoC designers and security experts when implementing these policies and for system validators when ensuring compliance. Difficulties arise when upgrading policies or reusing IPs for systems targeting different security requirements, leading to an increase in design time and time-to-market. This paper proposes a generic and flexible framework, called DiSPEL, for enforcing user-defined security policies, represented in a formal way, for any bus-based SoC design. It employs a distributed deployment strategy while ensuring trusted bus operations despite the presence of untrusted IPs. It relies on incorporating a dedicated, centralized module capable of implementing diverse security policies involving bus-level interactions, while generating the necessary logic and appending it to the bus-level wrapper for IP-level policies. The proposed architecture is generic and independent of specific security policy types, supporting both synthesizable and non-synthesizable solutions. The experimental results demonstrate its effectiveness and correctness in enforcing the security requirements, and its viability due to low overhead in terms of area, delay, and power consumption, tested on open-source standard SoC benchmarks.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 05:15:22 GMT" } ]
2023-08-08T00:00:00
[ [ "Paria", "Sudipta", "" ], [ "Bhunia", "Swarup", "" ] ]
new_dataset
0.998061
2308.02795
Md Amjad Hossain
Md Amjad Hossain, Javed I. Khan
ZePoP: A Distributed Leader Election Protocol using the Delay-based Closeness Centrality for Peer-to-Peer Applications
null
null
null
null
cs.DC cs.NI
http://creativecommons.org/licenses/by/4.0/
This paper presents ZePoP, a leader election protocol for distributed systems that optimizes a delay-based closeness centrality. We design the protocol specifically for Peer-to-Peer (P2P) applications, where the leader peer (node) is responsible for collecting, processing, and redistributing data or control signals while satisfying certain timing constraints. The protocol elects an optimal leader node in the dynamically changing network and constructs a Data Collection and Distribution Tree (DCDT) rooted at the leader node. The elected optimal leader is closest to all nodes in the system compared to other nodes. We validate the proposed protocol through theoretical proofs as well as experimental results.
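The optimality criterion (the node closest to all others under link delays) can be checked centrally with networkx, even though ZePoP computes it in a distributed fashion. The edge weights below are made-up delays:

```python
# Centralized check of the delay-based closeness-centrality leader.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([   # (node, node, delay in ms), illustrative values
    ("A", "B", 10), ("B", "C", 5), ("C", "D", 20), ("B", "D", 15), ("D", "E", 5),
])
closeness = nx.closeness_centrality(G, distance="weight")
leader = max(closeness, key=closeness.get)
print("leader:", leader, closeness)
```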
[ { "version": "v1", "created": "Sat, 5 Aug 2023 05:55:18 GMT" } ]
2023-08-08T00:00:00
[ [ "Hossain", "Md Amjad", "" ], [ "Khan", "Javed I.", "" ] ]
new_dataset
0.998918
2308.02812
Oliver Keszocze
Max Bartunik, Jens Kirchner, Oliver Keszocze
Artificial Intelligence for Molecular Communication
The abstract was slightly altered compared to the journal version in order to meet arXiv's requirements
null
10.1515/itit-2023-0029
null
cs.ET cs.AI
http://creativecommons.org/licenses/by/4.0/
Molecular communication is a novel approach for data transmission between miniaturized devices, especially in contexts where electrical signals are to be avoided. The communication is based on sending molecules (or other particles) at the nanoscale through a channel instead of sending electrons over a wire. Molecular communication devices have a large potential in medical applications as they offer an alternative to antenna-based transmission systems that may not be applicable due to size, temperature, or radiation constraints. The communication is achieved by transforming a digital signal into concentrations of molecules. These molecules are then detected at the other end of the communication channel and transformed back into a digital signal. Accurately modeling the transmission channel is often not possible, which may be due to a lack of data or time-varying parameters of the channel (e.g., the movements of a person wearing a medical device). This makes demodulation of the signal very difficult. Many approaches for demodulation have been discussed, with one particular approach having tremendous success: artificial neural networks. These networks imitate the decision process in the human brain and are capable of reliably classifying noisy input data. Training such a network relies on a large set of training data. As molecular communication as a technology is still in its early development phase, this data is not always readily available. We discuss neural network-based demodulation approaches relying on synthetic data based on theoretical channel models, as well as works using actual measurements produced by a prototype test bed. In this work, we give a general overview of the field of molecular communication, discuss the challenges in the demodulation process of transmitted signals, and present approaches to these challenges that are based on artificial neural networks.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 07:07:02 GMT" } ]
2023-08-08T00:00:00
[ [ "Bartunik", "Max", "" ], [ "Kirchner", "Jens", "" ], [ "Keszocze", "Oliver", "" ] ]
new_dataset
0.992202
2308.02816
Hongwei Yao
Hongwei Yao, Jian Lou, Kui Ren and Zhan Qin
PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification
null
null
null
null
cs.MM cs.CR
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have witnessed a meteoric rise in popularity among the general public users over the past few months, facilitating diverse downstream tasks with human-level accuracy and proficiency. Prompts play an essential role in this success, which efficiently adapt pre-trained LLMs to task-specific applications by simply prepending a sequence of tokens to the query texts. However, designing and selecting an optimal prompt can be both expensive and demanding, leading to the emergence of Prompt-as-a-Service providers who profit by providing well-designed prompts for authorized use. With the growing popularity of prompts and their indispensable role in LLM-based services, there is an urgent need to protect the copyright of prompts against unauthorized use. In this paper, we propose PromptCARE, the first framework for prompt copyright protection through watermark injection and verification. Prompt watermarking presents unique challenges that render existing watermarking techniques developed for model and dataset copyright verification ineffective. PromptCARE overcomes these hurdles by proposing watermark injection and verification schemes tailor-made for prompts and NLP characteristics. Extensive experiments on six well-known benchmark datasets, using three prevalent pre-trained LLMs (BERT, RoBERTa, and Facebook OPT-1.3b), demonstrate the effectiveness, harmlessness, robustness, and stealthiness of PromptCARE.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 08:12:34 GMT" } ]
2023-08-08T00:00:00
[ [ "Yao", "Hongwei", "" ], [ "Lou", "Jian", "" ], [ "Ren", "Kui", "" ], [ "Qin", "Zhan", "" ] ]
new_dataset
0.999364
2308.02827
Tianxing Li
Tianxing Li, Rui Shi, Qing Zhu, Takashi Kanai
SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our work presents a novel spectrum-inspired learning-based approach for generating clothing deformations with dynamic effects and personalized details. Existing methods in the field of clothing animation are limited to either static behavior or specific network models for individual garments, which hinders their applicability in real-world scenarios where diverse animated garments are required. Our proposed method overcomes these limitations by providing a unified framework that predicts dynamic behavior for different garments with arbitrary topology and looseness, resulting in versatile and realistic deformations. First, we observe that the problem of bias towards low frequency always hampers supervised learning and leads to overly smooth deformations. To address this issue, we introduce a frequency-control strategy from a spectral perspective that enhances the generation of high-frequency details of the deformation. In addition, to make the network highly generalizable and able to learn various clothing deformations effectively, we propose a spectral descriptor to achieve a generalized description of the global shape information. Building on the above strategies, we develop a dynamic clothing deformation estimator that integrates frequency-controllable attention mechanisms with long short-term memory. The estimator takes as input expressive features from garments and human bodies, allowing it to automatically output continuous deformations for diverse clothing types, independent of mesh topology or vertex count. Finally, we present a neural collision handling method to further enhance the realism of garments. Our experimental results demonstrate the effectiveness of our approach on a variety of free-swinging garments and its superiority over state-of-the-art methods.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 09:09:50 GMT" } ]
2023-08-08T00:00:00
[ [ "Li", "Tianxing", "" ], [ "Shi", "Rui", "" ], [ "Zhu", "Qing", "" ], [ "Kanai", "Takashi", "" ] ]
new_dataset
0.999708
2308.02828
Shuyin Ouyang
Shuyin Ouyang, Jie M. Zhang, Mark Harman, Meng Wang
LLM is Like a Box of Chocolates: the Non-determinism of ChatGPT in Code Generation
null
null
null
null
cs.SE
http://creativecommons.org/publicdomain/zero/1.0/
There has been a recent explosion of research on Large Language Models (LLMs) for software engineering tasks, in particular code generation. However, results from LLMs can be highly unstable; nondeterministically returning very different codes for the same prompt. Non-determinism is a potential menace to scientific conclusion validity. When non-determinism is high, scientific conclusions simply cannot be relied upon unless researchers change their behaviour to control for it in their empirical analyses. This paper conducts an empirical study to demonstrate that non-determinism is, indeed, high, thereby underlining the need for this behavioural change. We choose to study ChatGPT because it is already highly prevalent in the code generation research literature. We report results from a study of 829 code generation problems from three code generation benchmarks (i.e., CodeContests, APPS, and HumanEval). Our results reveal high degrees of non-determinism: the ratio of coding tasks with zero equal test output across different requests is 72.73%, 60.40%, and 65.85% for CodeContests, APPS, and HumanEval, respectively. In addition, we find that setting the temperature to 0 does not guarantee determinism in code generation, although it indeed brings less non-determinism than the default configuration (temperature=1). These results confirm that there is, currently, a significant threat to scientific conclusion validity. In order to put LLM-based research on firmer scientific foundations, researchers need to take into account non-determinism in drawing their conclusions.
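The headline metric, the ratio of tasks whose repeated requests never produce an equal test output, is easy to state in code. A small sketch with invented outputs:

```python
# Ratio of tasks with zero equal test output across repeated requests.
def ratio_zero_equal_output(tasks):
    """tasks: list of lists; each inner list holds test outputs of repeated requests."""
    unstable = sum(1 for outs in tasks if len(set(outs)) == len(outs))
    return unstable / len(tasks)

tasks = [
    ["PASS", "PASS", "PASS"],      # fully deterministic task
    ["PASS", "FAIL", "ERROR"],     # every request disagrees
    ["FAIL", "FAIL", "TIMEOUT"],   # two requests agree
]
print(f"{ratio_zero_equal_output(tasks):.2%} of tasks had zero equal outputs")
```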
[ { "version": "v1", "created": "Sat, 5 Aug 2023 09:30:33 GMT" } ]
2023-08-08T00:00:00
[ [ "Ouyang", "Shuyin", "" ], [ "Zhang", "Jie M.", "" ], [ "Harman", "Mark", "" ], [ "Wang", "Meng", "" ] ]
new_dataset
0.967974
2308.02838
Nihir Vedd
Nihir Vedd and Paul Riga
feather -- a Python SDK to share and deploy models
Accepted to ICML 2023 Workshop AI&HCI. 8 pages, 3 figures and 1 table
null
null
null
cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
At its core, feather was a tool that allowed model developers to build shareable user interfaces for their models in under 20 lines of code. Using the Python SDK, developers specified visual components that users would interact with. (e.g. a FileUpload component to allow users to upload a file). Our service then provided 1) a URL that allowed others to access and use the model visually via a user interface; 2) an API endpoint to allow programmatic requests to a model. In this paper, we discuss feather's motivations and the value we intended to offer AI researchers and developers. For example, the SDK can support multi-step models and can be extended to run automatic evaluation against held out datasets. We additionally provide comprehensive technical and implementation details. N.B. feather is presently a dormant project. We have open sourced our code for research purposes: https://github.com/feather-ai/
[ { "version": "v1", "created": "Sat, 5 Aug 2023 10:27:50 GMT" } ]
2023-08-08T00:00:00
[ [ "Vedd", "Nihir", "" ], [ "Riga", "Paul", "" ] ]
new_dataset
0.997894
2308.02866
Jianfeng Wang
Jianfeng Wang, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, Thomas Lukasiewicz
NP-SemiSeg: When Neural Processes meet Semi-Supervised Semantic Segmentation
Appear at ICML2023. Source codes are available at: https://github.com/Jianf-Wang/NP-SemiSeg
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Semi-supervised semantic segmentation involves assigning pixel-wise labels to unlabeled images at training time. This is useful in a wide range of real-world applications where collecting pixel-wise labels is not feasible in time or cost. Current approaches to semi-supervised semantic segmentation work by predicting pseudo-labels for each pixel from a class-wise probability distribution output by a model. If the predicted probability distribution is incorrect, however, this leads to poor segmentation results, which can have knock-on consequences in safety critical systems, like medical images or self-driving cars. It is, therefore, important to understand what a model does not know, which is mainly achieved by uncertainty quantification. Recently, neural processes (NPs) have been explored in semi-supervised image classification, and they have been a computationally efficient and effective method for uncertainty quantification. In this work, we move one step forward by adapting NPs to semi-supervised semantic segmentation, resulting in a new model called NP-SemiSeg. We experimentally evaluated NP-SemiSeg on the public benchmarks PASCAL VOC 2012 and Cityscapes, with different training settings, and the results verify its effectiveness.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 12:42:15 GMT" } ]
2023-08-08T00:00:00
[ [ "Wang", "Jianfeng", "" ], [ "Massiceti", "Daniela", "" ], [ "Hu", "Xiaolin", "" ], [ "Pavlovic", "Vladimir", "" ], [ "Lukasiewicz", "Thomas", "" ] ]
new_dataset
0.990649
2308.02905
Alloy Das
Alloy Das, Prasun Roy, Saumik Bhattacharya, Subhankar Ghosh, Umapada Pal, Michael Blumenstein
FAST: Font-Agnostic Scene Text Editing
13 pages, in submission
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Scene Text Editing (STE) is a challenging research problem that aims to modify existing texts in an image while preserving the background and the font style of the original text. Due to its various real-life applications, researchers have explored several approaches toward STE in recent years. However, most of the existing STE methods show inferior editing performance because of (1) complex image backgrounds, (2) various font styles, and (3) varying word lengths within the text. To address these issues, in this paper, we propose a novel font-agnostic scene text editing framework, named FAST, for simultaneously generating text in arbitrary styles and locations while preserving a natural and realistic appearance through combined mask generation and style transfer. The proposed approach differs from existing methods, which directly modify all image pixels. Instead, the proposed method introduces a filtering mechanism to remove background distractions, allowing the network to focus solely on the text regions where editing is required. Additionally, a text-style transfer module has been designed to mitigate the challenges posed by varying word lengths. Extensive experiments and ablations have been conducted, and the results demonstrate that the proposed method outperforms the existing methods both qualitatively and quantitatively.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 15:54:06 GMT" } ]
2023-08-08T00:00:00
[ [ "Das", "Alloy", "" ], [ "Roy", "Prasun", "" ], [ "Bhattacharya", "Saumik", "" ], [ "Ghosh", "Subhankar", "" ], [ "Pal", "Umapada", "" ], [ "Blumenstein", "Michael", "" ] ]
new_dataset
0.977319
2308.02907
Kasra EdalatNejad
Kasra EdalatNejad, Wouter Lueks, Justinas Sukaitis, Vincent Graf Narbel, Massimo Marelli, Carmela Troncoso
Janus: Safe Biometric Deduplication for Humanitarian Aid Distribution
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humanitarian organizations provide aid to people in need. To use their limited budget efficiently, their distribution processes must ensure that legitimate recipients cannot receive more aid than they are entitled to. Thus, it is essential that recipients can register at most once per aid program. Taking the International Committee of the Red Cross's aid distribution registration process as a use case, we identify the requirements to detect double registration without creating new risks for aid recipients. We then design Janus, which combines privacy-enhancing technologies with biometrics to prevent double registration in a safe manner. Janus does not create plaintext biometric databases and reveals only one bit of information at registration time (whether the user registering is present in the database or not). We implement and evaluate three instantiations of Janus based on secure multiparty computation, somewhat homomorphic encryption, and trusted execution environments. We demonstrate that they support the privacy, accuracy, and performance needs of humanitarian organizations. We compare Janus with existing alternatives and show it is the first system that provides the accuracy our scenario requires while providing strong protection.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 15:59:13 GMT" } ]
2023-08-08T00:00:00
[ [ "EdalatNejad", "Kasra", "" ], [ "Lueks", "Wouter", "" ], [ "Sukaitis", "Justinas", "" ], [ "Narbel", "Vincent Graf", "" ], [ "Marelli", "Massimo", "" ], [ "Troncoso", "Carmela", "" ] ]
new_dataset
0.99525
2308.02915
Le Zhuo
Qiaosong Qi, Le Zhuo, Aixi Zhang, Yue Liao, Fei Fang, Si Liu, Shuicheng Yan
DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation
Accepted at ACM MM 2023
null
10.1145/3581783.3612307
null
cs.GR cs.CV cs.SD eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
When hearing music, it is natural for people to dance to its rhythm. Automatic dance generation, however, is a challenging task due to the physical constraints of human motion and rhythmic alignment with target music. Conventional autoregressive methods introduce compounding errors during sampling and struggle to capture the long-term structure of dance sequences. To address these limitations, we present a novel cascaded motion diffusion model, DiffDance, designed for high-resolution, long-form dance generation. This model comprises a music-to-dance diffusion model and a sequence super-resolution diffusion model. To bridge the gap between music and motion for conditional generation, DiffDance employs a pretrained audio representation learning model to extract music embeddings and further align its embedding space to motion via contrastive loss. During training our cascaded diffusion model, we also incorporate multiple geometric losses to constrain the model outputs to be physically plausible and add a dynamic loss weight that adaptively changes over diffusion timesteps to facilitate sample diversity. Through comprehensive experiments performed on the benchmark dataset AIST++, we demonstrate that DiffDance is capable of generating realistic dance sequences that align effectively with the input music. These results are comparable to those achieved by state-of-the-art autoregressive methods.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 16:18:57 GMT" } ]
2023-08-08T00:00:00
[ [ "Qi", "Qiaosong", "" ], [ "Zhuo", "Le", "" ], [ "Zhang", "Aixi", "" ], [ "Liao", "Yue", "" ], [ "Fang", "Fei", "" ], [ "Liu", "Si", "" ], [ "Yan", "Shuicheng", "" ] ]
new_dataset
0.998302
2308.02944
Renato Geh
Renato Lui Geh, Jonas Gon\c{c}alves, Igor Cataneo Silveira, Denis Deratani Mau\'a, Fabio Gagliardi Cozman
dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning
12 pages, 1 figure
null
null
null
cs.AI cs.LG cs.LO cs.NE
http://creativecommons.org/licenses/by/4.0/
We present dPASP, a novel declarative probabilistic logic programming framework for differentiable neuro-symbolic reasoning. The framework allows for the specification of discrete probabilistic models with neural predicates, logic constraints and interval-valued probabilistic choices, thus supporting models that combine low-level perception (images, texts, etc), common-sense reasoning, and (vague) statistical knowledge. To support all such features, we discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge. We also discuss how gradient-based learning can be performed with neural predicates and probabilistic choices under selected semantics. We then describe an implemented package that supports inference and learning in the language, along with several example programs. The package requires minimal user knowledge of deep learning system's inner workings, while allowing end-to-end training of rather sophisticated models and loss functions.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 19:36:58 GMT" } ]
2023-08-08T00:00:00
[ [ "Geh", "Renato Lui", "" ], [ "Gonçalves", "Jonas", "" ], [ "Silveira", "Igor Cataneo", "" ], [ "Mauá", "Denis Deratani", "" ], [ "Cozman", "Fabio Gagliardi", "" ] ]
new_dataset
0.983087
2308.02945
Yonghae Kim
Yonghae Kim, Anurag Kar, Jaewon Lee, Jaekyu Lee, Hyesoon Kim
RV-CURE: A RISC-V Capability Architecture for Full Memory Safety
null
null
null
null
cs.AR cs.CR
http://creativecommons.org/licenses/by/4.0/
Despite decades of mitigation efforts, memory safety violations remain persistent and problematic in modern systems. Various defense mechanisms have been proposed, but their deployment in real systems remains challenging because of performance, security, or compatibility concerns. In this paper, we propose RV-CURE, a RISC-V capability architecture that implements full-system support for full memory safety. For capability enforcement, we first propose a compiler technique, data-pointer tagging (DPT), applicable to protecting all memory types. It inserts a pointer tag in a pointer address and associates that tag with the pointer's capability metadata. DPT enforces a capability check for every memory access by a tagged pointer and thereby prevents illegitimate memory accesses. Furthermore, we investigate and present lightweight hardware extensions for DPT based on the open-source RISC-V BOOM processor. We observe that a capability-execution pipeline can be implemented in parallel with the existing memory-execution pipeline without intrusive modifications. With our seamless hardware integration, we achieve low-cost capability checks transparently performed in hardware. Altogether, we prototype RV-CURE as a synthesized RTL processor and conduct full-system evaluations on FPGAs running Linux OS. Our evaluations show that RV-CURE achieves strong memory safety at a 10.8% slowdown across the SPEC 2017 C/C++ workloads.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 19:45:18 GMT" } ]
2023-08-08T00:00:00
[ [ "Kim", "Yonghae", "" ], [ "Kar", "Anurag", "" ], [ "Lee", "Jaewon", "" ], [ "Lee", "Jaekyu", "" ], [ "Kim", "Hyesoon", "" ] ]
new_dataset
0.995193
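To make the data-pointer tagging (DPT) idea above concrete, here is a toy Python sketch of packing a tag into otherwise-unused upper bits of a 64-bit address and performing a per-access bounds (capability) check. The bit layout, tag width, and table are illustrative assumptions; RV-CURE performs this check in hardware, and its actual encoding may differ.

TAG_SHIFT = 48          # assume a 48-bit virtual address space
TAG_MASK = 0xFFFF       # 16-bit tag in the upper pointer bits
ADDR_MASK = (1 << TAG_SHIFT) - 1

capability_table = {}   # tag -> (base, length) capability metadata

def tag_pointer(addr: int, tag: int) -> int:
    """Embed a tag in the upper bits of a 64-bit pointer value."""
    return ((tag & TAG_MASK) << TAG_SHIFT) | (addr & ADDR_MASK)

def check_access(tagged_ptr: int, size: int) -> bool:
    """Capability check performed on each access by a tagged pointer."""
    tag = (tagged_ptr >> TAG_SHIFT) & TAG_MASK
    addr = tagged_ptr & ADDR_MASK
    base, length = capability_table[tag]
    return base <= addr and addr + size <= base + length

capability_table[7] = (0x1000, 64)   # a 64-byte allocation
p = tag_pointer(0x1000, 7)
print(check_access(p, 64))           # True  (access stays in bounds)
print(check_access(p + 32, 64))      # False (access overflows the bounds)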
2308.02992
Zian Liu
Zian Liu
Binary Code Similarity Detection
4 pages, conference paper
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Binary code similarity detection aims to measure the similarity of code at the binary (assembly) level, without access to source code. Existing works have limitations when dealing with mutated binary code generated by different compiling options. In this paper, we propose a novel approach to address this problem. By inspecting binary code, we found that generally, within a function, some instructions aim to calculate (prepare) values for other instructions. We define the latter instructions as key instructions. Currently, we define four categories of key instructions: subfunction calls, comparison instructions, return instructions, and memory-store instructions. Thus, if we symbolically execute similar binary code, the symbolic values at these key instructions are expected to be similar. As such, we implement a prototype tool, which has three steps. First, it symbolically executes the binary code; second, it extracts the symbolic values at the defined key instructions into a graph; last, it compares the similarity of the symbolic graphs. In our implementation, we also address some practical problems, including path explosion and loop handling.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 02:24:42 GMT" } ]
2023-08-08T00:00:00
[ [ "Liu", "Zian", "" ] ]
new_dataset
0.989028
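The comparison step sketched in the abstract above (collect symbolic values at key instructions, then compare) can be illustrated with a deliberately simplified Python fragment: the symbolic-execution stage is elided, register names are canonicalized by order of first appearance, and graph comparison is reduced to per-category Jaccard overlap. All names here are assumptions for illustration, not the prototype tool's interface.

import re

def normalize(expr: str) -> str:
    """Rename registers by first appearance so 'rax+4' and 'rbx+4' match."""
    mapping = {}
    def rename(m):
        return mapping.setdefault(m.group(0), f"v{len(mapping)}")
    return re.sub(r"r[a-z]{2}", rename, expr)

def similarity(keys_a: dict, keys_b: dict) -> float:
    """keys_*: {key-instruction category: set of symbolic value strings}."""
    kinds = ("call", "cmp", "ret", "store")
    score = 0.0
    for kind in kinds:
        a = {normalize(e) for e in keys_a.get(kind, set())}
        b = {normalize(e) for e in keys_b.get(kind, set())}
        if a or b:
            score += len(a & b) / len(a | b)   # Jaccard overlap per category
    return score / len(kinds)

f1 = {"ret": {"rax+4"}, "cmp": {"rax<10"}}
f2 = {"ret": {"rbx+4"}, "cmp": {"rbx<10"}}
print(similarity(f1, f2))  # 0.5: ret and cmp match; call/store are absent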
2308.03000
Xianyi Liu
Peiguang Jing, Xianyi Liu, Ji Wang, Yinwei Wei, Liqiang Nie, Yuting Su
StyleEDL: Style-Guided High-order Attention Network for Image Emotion Distribution Learning
8 pages, 5 figures, conference
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
Emotion distribution learning has gained increasing attention with the tendency to express emotions through images. To cope with the emotion ambiguity arising from human subjectivity, most previous methods focused on learning appropriate representations from the holistic image or its most significant parts. However, they rarely consider establishing connections with stylistic information, although doing so can lead to a better understanding of images. In this paper, we propose a style-guided high-order attention network for image emotion distribution learning, termed StyleEDL, which interactively learns stylistic-aware representations of images by exploring the hierarchical stylistic information of visual contents. Specifically, we explore the intra- and inter-layer correlations among GRAM-based stylistic representations, and meanwhile exploit an adversary-constrained high-order attention mechanism to capture potential interactions between subtle visual parts. In addition, we introduce a stylistic graph convolutional network to dynamically generate content-dependent emotion representations to benefit the final emotion distribution learning. Extensive experiments conducted on several benchmark datasets demonstrate the effectiveness of our proposed StyleEDL compared to state-of-the-art methods. The implementation is released at: https://github.com/liuxianyi/StyleEDL.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 03:22:46 GMT" } ]
2023-08-08T00:00:00
[ [ "Jing", "Peiguang", "" ], [ "Liu", "Xianyi", "" ], [ "Wang", "Ji", "" ], [ "Wei", "Yinwei", "" ], [ "Nie", "Liqiang", "" ], [ "Su", "Yuting", "" ] ]
new_dataset
0.976921
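The "GRAM-based stylistic representations" mentioned above build on the standard Gram matrix of convolutional feature maps, which captures channel-wise feature correlations as a proxy for image style. A minimal NumPy sketch follows; the tensor shapes are illustrative assumptions.

import numpy as np

def gram_matrix(feature_map: np.ndarray) -> np.ndarray:
    """feature_map: (channels, height, width) activations from one layer."""
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)
    return flat @ flat.T / (h * w)   # (channels, channels) style statistics

layer_act = np.random.rand(64, 28, 28).astype(np.float32)
style = gram_matrix(layer_act)
print(style.shape)  # (64, 64); intra- and inter-layer correlations of such
                    # matrices drive the stylistic representations in StyleEDL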
2308.03004
Namyoon Lee
Geon Choi and Namyoon Lee
Deep Polar Codes
null
null
null
null
cs.IT cs.LG math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we introduce a novel class of pre-transformed polar codes, termed as deep polar codes. We first present a deep polar encoder that harnesses a series of multi-layered polar transformations with varying sizes. Our approach to encoding enables a low-complexity implementation while significantly enhancing the weight distribution of the code. Moreover, our encoding method offers flexibility in rate-profiling, embracing a wide range of code rates and blocklengths. Next, we put forth a low-complexity decoding algorithm called successive cancellation list with backpropagation parity checks (SCL-BPC). This decoding algorithm leverages the parity check equations in the reverse process of the multi-layered pre-transformed encoding for SCL decoding. Additionally, we present a low-latency decoding algorithm that employs parallel-SCL decoding by treating partially pre-transformed bit patterns as additional frozen bits. Through simulations, we demonstrate that deep polar codes outperform existing pre-transformed polar codes in terms of block error rates across various code rates under short block lengths, while maintaining low encoding and decoding complexity. Furthermore, we show that concatenating deep polar codes with cyclic-redundancy-check codes can achieve the meta-converse bound of the finite block length capacity within 0.4 dB in some instances.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 03:29:18 GMT" } ]
2023-08-08T00:00:00
[ [ "Choi", "Geon", "" ], [ "Lee", "Namyoon", "" ] ]
new_dataset
0.971682
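For orientation, the basic polar transform underlying (pre-transformed) polar codes is the n-fold Kronecker power of Arikan's 2x2 kernel applied to a length-2^n bit vector over GF(2). The sketch below shows only this textbook building block; the multi-layered pre-transformations specific to deep polar codes are not reproduced here.

import numpy as np

KERNEL = np.array([[1, 0], [1, 1]], dtype=np.uint8)  # Arikan's kernel

def polar_transform(u: np.ndarray) -> np.ndarray:
    """Encode u (length 2^n, bits) with G_N = KERNEL^{kron n} over GF(2)."""
    n = int(np.log2(len(u)))
    g = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        g = np.kron(g, KERNEL)
    return (u @ g) % 2

u = np.array([0, 1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)
print(polar_transform(u))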
2308.03006
Xiao Liang
Kareem Eltouny, Seyedomid Sajedi, and Xiao Liang
High-Resolution Vision Transformers for Pixel-Level Identification of Structural Components and Damage
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual inspection is predominantly used to evaluate the state of civil structures, but recent developments in unmanned aerial vehicles (UAVs) and artificial intelligence have increased the speed, safety, and reliability of the inspection process. In this study, we develop a semantic segmentation network based on vision transformers and Laplacian pyramid scaling networks for efficiently parsing high-resolution visual inspection images. The massive amount of high-resolution images collected during inspections can slow down investigation efforts. While there have been extensive studies dedicated to the use of deep learning models for damage segmentation, processing high-resolution visual data can pose major computational difficulties. Traditionally, images are either uniformly downsampled or partitioned to cope with computational demands. However, the input is at risk of losing local fine details, such as thin cracks, or global contextual information. Inspired by super-resolution architectures, our vision transformer model learns to resize high-resolution images and masks to retain both the valuable local features and the global semantics without sacrificing computational efficiency. The proposed framework has been evaluated through comprehensive experiments on a dataset of bridge inspection report images using multiple metrics for pixel-wise materials detection.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 03:34:25 GMT" } ]
2023-08-08T00:00:00
[ [ "Eltouny", "Kareem", "" ], [ "Sajedi", "Seyedomid", "" ], [ "Liang", "Xiao", "" ] ]
new_dataset
0.9889
2308.03065
Mohamadreza Delbari
Alejandro Jim\'enez-S\'aez, Arash Asadi, Robin Neuder, Mohamadreza Delbari, and Vahid Jamali
Reconfigurable Intelligent Surfaces with Liquid Crystal Technology: A Hardware Design and Communication Perspective
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
With the surge of theoretical work investigating Reconfigurable Intelligent Surfaces (RISs) for wireless communication and sensing, there is an urgent need for hardware solutions to evaluate these theoretical results and further advance the field. The most common solutions proposed in the literature are based on varactors, Positive Intrinsic-Negative (PIN) diodes, and Micro-Electro-Mechanical Systems (MEMS). This paper presents the use of Liquid Crystal (LC) technology for the realization of continuously tunable extremely large millimeter-wave RISs. We review the basic physical principles of LC theory, introduce two different realizations of LC-RISs, namely reflect-array and phased-array, and highlight their key properties that have an impact on the system design and RIS reconfiguration strategy. Moreover, the LC technology is compared with competing technologies in terms of feasibility, cost, power consumption, reconfiguration speed, and bandwidth. Furthermore, several important open problems for both theoretical and experimental research on LC-RISs are presented.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 09:20:15 GMT" } ]
2023-08-08T00:00:00
[ [ "Jiménez-Sáez", "Alejandro", "" ], [ "Asadi", "Arash", "" ], [ "Neuder", "Robin", "" ], [ "Delbari", "Mohamadreza", "" ], [ "Jamali", "Vahid", "" ] ]
new_dataset
0.998189
2308.03108
Amira Guesmi
Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation
null
null
null
null
cs.CV cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the vulnerability of MDE to adversarial patches. We propose a novel \underline{S}tealthy \underline{A}dversarial \underline{A}ttack on \underline{M}DE (SAAM) that compromises MDE by either corrupting the estimated distance or causing an object to seamlessly blend into its surroundings. Our experiments demonstrate that the designed stealthy patch successfully causes a DNN-based MDE to misestimate the depth of objects. In fact, our proposed adversarial patch achieves a significant 60\% depth error with a 99\% ratio of the affected region. Importantly, despite its adversarial nature, the patch maintains a naturalistic appearance, making it inconspicuous to human observers. We believe that this work sheds light on the threat of adversarial attacks in the context of MDE on edge devices. We hope it raises awareness within the community about the potential real-life harm of such attacks and encourages further research into developing more robust and adaptive defense mechanisms.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 13:29:42 GMT" } ]
2023-08-08T00:00:00
[ [ "Guesmi", "Amira", "" ], [ "Hanif", "Muhammad Abdullah", "" ], [ "Ouni", "Bassem", "" ], [ "Shafique", "Muhammad", "" ] ]
new_dataset
0.997925
2308.03120
Conrad Sanderson
Ryan R. Curtin, Marcus Edel, Conrad Sanderson
Bandicoot: C++ Library for GPU Linear Algebra and Scientific Computing
null
null
null
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report provides an introduction to the Bandicoot C++ library for linear algebra and scientific computing on GPUs, overviewing its user interface and performance characteristics, as well as the technical details of its internal design. Bandicoot is the GPU-enabled counterpart to the well-known Armadillo C++ linear algebra library, aiming to allow users to take advantage of GPU-accelerated computation for their existing codebases without significant changes. Adapting the same internal template meta-programming techniques that Armadillo uses, Bandicoot is able to provide compile-time optimisation of mathematical expressions within user code. The library is ready for production use and is available at https://coot.sourceforge.io. Bandicoot is distributed under the Apache 2.0 License.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 14:01:12 GMT" } ]
2023-08-08T00:00:00
[ [ "Curtin", "Ryan R.", "" ], [ "Edel", "Marcus", "" ], [ "Sanderson", "Conrad", "" ] ]
new_dataset
0.999465
2308.03121
Yuan Tong
Yuan Tong, Mengshun Hu, Zheng Wang
NNVISR: Bring Neural Network Video Interpolation and Super Resolution into Video Processing Framework
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
We present NNVISR - an open-source filter plugin for the VapourSynth video processing framework, which facilitates the application of neural networks for various kinds of video enhancing tasks, including denoising, super resolution, interpolation, and spatio-temporal super-resolution. NNVISR fills the gap between video enhancement neural networks and video processing pipelines, by accepting any network that enhances a group of frames, and handling all other network agnostic details during video processing. NNVISR is publicly released at https://github.com/tongyuantongyu/vs-NNVISR.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 14:09:00 GMT" } ]
2023-08-08T00:00:00
[ [ "Tong", "Yuan", "" ], [ "Hu", "Mengshun", "" ], [ "Wang", "Zheng", "" ] ]
new_dataset
0.997854
2308.03122
Prerak Gandhi
Prerak Gandhi, Vishal Pramanik, Pushpak Bhattacharyya
"Kurosawa": A Script Writer's Assistant
6 pages, 9 figures, 1 table
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Storytelling is the lifeline of the entertainment industry -- movies, TV shows, and stand-up comedies, all need stories. A good and gripping script is the lifeline of storytelling and demands creativity and resource investment. Good scriptwriters are rare to find and often work under severe time pressure. Consequently, entertainment media are actively looking for automation. In this paper, we present an AI-based script-writing workbench, called KUROSAWA, which addresses the tasks of plot generation and script generation. Plot generation aims to generate a coherent and creative plot (600-800 words) given a prompt (15-40 words). Script generation, on the other hand, generates a scene (200-500 words) in a screenplay format from a brief description (15-40 words). Kurosawa needs data to train. We use a 4-act structure of storytelling to annotate the plot dataset manually. We create a dataset of 1000 manually annotated plots and their corresponding prompts/storylines and a gold-standard dataset of 1000 scenes with four main elements -- scene headings, action lines, dialogues, and character names -- tagged individually. We fine-tune GPT-3 with the above datasets to generate plots and scenes. These plots and scenes are first evaluated and then used by the scriptwriters of ErosNow, a large and famous media platform. We release the annotated datasets and the models trained on these datasets as a working benchmark for automatic movie plot and script generation.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 14:09:02 GMT" } ]
2023-08-08T00:00:00
[ [ "Gandhi", "Prerak", "" ], [ "Pramanik", "Vishal", "" ], [ "Bhattacharyya", "Pushpak", "" ] ]
new_dataset
0.999891
2308.03151
Zheng Ma
Zheng Ma, Mianzhi Pan, Wenhan Wu, Kanzhi Cheng, Jianbing Zhang, Shujian Huang and Jiajun Chen
Food-500 Cap: A Fine-Grained Food Caption Benchmark for Evaluating Vision-Language Models
Accepted at ACM Multimedia (ACMMM) 2023
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-language models (VLMs) have shown impressive performance in substantial downstream multi-modal tasks. However, only comparing the fine-tuned performance on downstream tasks leads to poor interpretability of VLMs, which hinders their future improvement. Several prior works have identified this issue and used various probing methods under a zero-shot setting to detect VLMs' limitations, but they all examine VLMs using general datasets instead of specialized ones. In practical applications, VLMs are usually applied to specific scenarios, such as e-commerce and news fields, so the generalization of VLMs in specific domains should be given more attention. In this paper, we comprehensively investigate the capabilities of popular VLMs in a specific field, the food domain. To this end, we build a food caption dataset, Food-500 Cap, which contains 24,700 food images with 494 categories. Each image is accompanied by a detailed caption, including fine-grained attributes of food, such as the ingredients, shape, and color. We also provide a culinary culture taxonomy that classifies each food category based on its geographic origin in order to better analyze the performance differences of VLMs in different regions. Experiments on our proposed dataset demonstrate that popular VLMs underperform in the food domain compared with their performance in the general domain. Furthermore, our research reveals severe bias in VLMs' ability to handle food items from different geographic regions. We adopt diverse probing methods and evaluate nine VLMs belonging to different architectures to verify the aforementioned observations. We hope that our study will bring researchers' attention to VLMs' limitations when applying them to the domain of food or culinary cultures, and spur further investigations to address this issue.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 15:56:31 GMT" } ]
2023-08-08T00:00:00
[ [ "Ma", "Zheng", "" ], [ "Pan", "Mianzhi", "" ], [ "Wu", "Wenhan", "" ], [ "Cheng", "Kanzhi", "" ], [ "Zhang", "Jianbing", "" ], [ "Huang", "Shujian", "" ], [ "Chen", "Jiajun", "" ] ]
new_dataset
0.993999
2308.03163
Md Farhamdur Reza
Md Farhamdur Reza, Ali Rahmati, Tianfu Wu, Huaiyu Dai
CGBA: Curvature-aware Geometric Black-box Attack
This paper is accepted to publish in ICCV
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Decision-based black-box attacks often necessitate a large number of queries to craft an adversarial example. Moreover, decision-based attacks based on querying boundary points in the estimated normal vector direction often suffer from inefficiency and convergence issues. In this paper, we propose a novel query-efficient curvature-aware geometric decision-based black-box attack (CGBA) that conducts boundary search along a semicircular path on a restricted 2D plane to ensure finding a boundary point successfully irrespective of the boundary curvature. While the proposed CGBA attack can work effectively for an arbitrary decision boundary, it is particularly efficient in exploiting the low curvature to craft high-quality adversarial examples, which is widely seen and experimentally verified in commonly used classifiers under non-targeted attacks. In contrast, the decision boundaries often exhibit higher curvature under targeted attacks. Thus, we develop a new query-efficient variant, CGBA-H, that is adapted for the targeted attack. In addition, we further design an algorithm to obtain a better initial boundary point at the expense of some extra queries, which considerably enhances the performance of the targeted attack. Extensive experiments are conducted to evaluate the performance of our proposed methods against some well-known classifiers on the ImageNet and CIFAR10 datasets, demonstrating the superiority of CGBA and CGBA-H over state-of-the-art non-targeted and targeted attacks, respectively. The source code is available at https://github.com/Farhamdur/CGBA.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 17:18:04 GMT" } ]
2023-08-08T00:00:00
[ [ "Reza", "Md Farhamdur", "" ], [ "Rahmati", "Ali", "" ], [ "Wu", "Tianfu", "" ], [ "Dai", "Huaiyu", "" ] ]
new_dataset
0.991623
2308.03164
Yue Hu
Yue Hu, Xinan Ye, Yifei Liu, Souvik Kundu, Gourav Datta, Srikar Mutnuri, Namo Asavisanu, Nora Ayanian, Konstantinos Psounis, Peter Beerel
FireFly A Synthetic Dataset for Ember Detection in Wildfire
Artificial Intelligence (AI) and Humanitarian Assistance and Disaster Recovery (HADR) workshop, ICCV 2023 in Paris, France
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents "FireFly", a synthetic dataset for ember detection created using Unreal Engine 4 (UE4), designed to overcome the current lack of ember-specific training resources. To create the dataset, we present a tool that allows the automated generation of the synthetic labeled dataset with adjustable parameters, enabling data diversity from various environmental conditions, making the dataset both diverse and customizable based on user requirements. We generated a total of 19,273 frames that have been used to evaluate FireFly on four popular object detection models. Further to minimize human intervention, we leveraged a trained model to create a semi-automatic labeling process for real-life ember frames. Moreover, we demonstrated an up to 8.57% improvement in mean Average Precision (mAP) in real-world wildfire scenarios compared to models trained exclusively on a small real dataset.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 17:19:51 GMT" } ]
2023-08-08T00:00:00
[ [ "Hu", "Yue", "" ], [ "Ye", "Xinan", "" ], [ "Liu", "Yifei", "" ], [ "Kundu", "Souvik", "" ], [ "Datta", "Gourav", "" ], [ "Mutnuri", "Srikar", "" ], [ "Asavisanu", "Namo", "" ], [ "Ayanian", "Nora", "" ], [ "Psounis", "Konstantinos", "" ], [ "Beerel", "Peter", "" ] ]
new_dataset
0.999695
2308.03165
Wei Cai
Zhonghao Lin, Haihan Duan, Jiaye Li, Xinyao Sun, Wei Cai
MetaCast: A Self-Driven Metaverse Announcer Architecture Based on Quality of Experience Evaluation Model
null
null
10.1145/3581783.3613761
null
cs.MM cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metaverse provides users with a novel experience through immersive multimedia technologies. Along with the rapid user growth, the numerous events bursting in the metaverse necessitate an announcer to help catch and monitor ongoing events. However, systems on the market primarily serve esports competitions and rely on human directors, making it challenging to provide 24-hour delivery in the metaverse's persistent world. To fill this gap, we proposed a three-stage architecture for metaverse announcers, which is designed to identify events, position cameras, and blend between shots. Based on the architecture, we introduced a Metaverse Announcer User Experience (MAUE) model to identify the factors affecting the users' Quality of Experience (QoE) from a human-centered perspective. In addition, we implemented \textit{MetaCast}, a practical self-driven metaverse announcer in a university campus metaverse prototype, to conduct user studies for the MAUE model. The experimental results have effectively achieved satisfactory announcer settings that align with the preferences of most users, encompassing parameters such as video transition rate, repetition rate, importance threshold value, and image composition.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 17:21:31 GMT" } ]
2023-08-08T00:00:00
[ [ "Lin", "Zhonghao", "" ], [ "Duan", "Haihan", "" ], [ "Li", "Jiaye", "" ], [ "Sun", "Xinyao", "" ], [ "Cai", "Wei", "" ] ]
new_dataset
0.993496
2308.03166
Chunming He
Chunming He, Kai Li, Yachao Zhang, Yulun Zhang, Zhenhua Guo, Xiu Li, Martin Danelljan, Fisher Yu
Strategic Preys Make Acute Predators: Enhancing Camouflaged Object Detectors by Generating Camouflaged Objects
10 pages, 7 figures, 4 tables
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camouflaged object detection (COD) is the challenging task of identifying camouflaged objects visually blended into their surroundings. Albeit achieving remarkable success, existing COD detectors still struggle to obtain precise results in some challenging cases. To handle this problem, we draw inspiration from the prey-vs-predator game that leads prey to develop better camouflage and predators to acquire more acute vision systems, and we develop algorithms from both the prey side and the predator side. On the prey side, we propose an adversarial training framework, Camouflageator, which introduces an auxiliary generator to generate more camouflaged objects that are harder for a COD method to detect. Camouflageator trains the generator and detector in an adversarial way such that the enhanced auxiliary generator helps produce a stronger detector. On the predator side, we introduce a novel COD method, called Internal Coherence and Edge Guidance (ICEG), which introduces a camouflaged feature coherence module to excavate the internal coherence of camouflaged objects, striving to obtain more complete segmentation results. Additionally, ICEG proposes a novel edge-guided separated calibration module to remove false predictions and avoid ambiguous boundaries. Extensive experiments show that ICEG outperforms existing COD detectors and that Camouflageator can flexibly improve various COD detectors, including ICEG, bringing state-of-the-art COD performance.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 17:27:08 GMT" } ]
2023-08-08T00:00:00
[ [ "He", "Chunming", "" ], [ "Li", "Kai", "" ], [ "Zhang", "Yachao", "" ], [ "Zhang", "Yulun", "" ], [ "Guo", "Zhenhua", "" ], [ "Li", "Xiu", "" ], [ "Danelljan", "Martin", "" ], [ "Yu", "Fisher", "" ] ]
new_dataset
0.985945
2308.03193
Rohit Mohan
Rohit Mohan, Jos\'e Arce, Sassan Mokhtar, Daniele Cattaneo and Abhinav Valada
Syn-Mediverse: A Multimodal Synthetic Dataset for Intelligent Scene Understanding of Healthcare Facilities
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety and efficiency are paramount in healthcare facilities where the lives of patients are at stake. Despite the adoption of robots to assist medical staff in challenging tasks such as complex surgeries, human expertise is still indispensable. The next generation of autonomous healthcare robots hinges on their capacity to perceive and understand their complex and frenetic environments. While deep learning models are increasingly used for this purpose, they require extensive annotated training data which is impractical to obtain in real-world healthcare settings. To bridge this gap, we present Syn-Mediverse, the first hyper-realistic multimodal synthetic dataset of diverse healthcare facilities. Syn-Mediverse contains over 48,000 images from a simulated industry-standard optical tracking camera and provides more than 1.5M annotations spanning five different scene understanding tasks including depth estimation, object detection, semantic segmentation, instance segmentation, and panoptic segmentation. We demonstrate the complexity of our dataset by evaluating the performance on a broad range of state-of-the-art baselines for each task. To further advance research on scene understanding of healthcare facilities, along with the public dataset we provide an online evaluation benchmark available at \url{http://syn-mediverse.cs.uni-freiburg.de}
[ { "version": "v1", "created": "Sun, 6 Aug 2023 19:20:18 GMT" } ]
2023-08-08T00:00:00
[ [ "Mohan", "Rohit", "" ], [ "Arce", "José", "" ], [ "Mokhtar", "Sassan", "" ], [ "Cattaneo", "Daniele", "" ], [ "Valada", "Abhinav", "" ] ]
new_dataset
0.998302
2308.03262
Jianqi Ma
Jianqi Ma, Zhetong Liang, Wangmeng Xiang, Xi Yang, Lei Zhang
A Benchmark for Chinese-English Scene Text Image Super-resolution
Accepted by ICCV2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scene Text Image Super-resolution (STISR) aims to recover high-resolution (HR) scene text images with visually pleasant and readable text content from the given low-resolution (LR) input. Most existing works focus on recovering English texts, which have relatively simple character structures, while little work has been done on the more challenging Chinese texts with diverse and complex character structures. In this paper, we propose a real-world Chinese-English benchmark dataset, namely Real-CE, for the task of STISR with the emphasis on restoring structurally complex Chinese characters. The benchmark provides 1,935/783 real-world LR-HR text image pairs (containing 33,789 text lines in total) for training/testing in 2$\times$ and 4$\times$ zooming modes, complemented by detailed annotations, including detection boxes and text transcripts. Moreover, we design an edge-aware learning method, which provides structural supervision in image and feature domains, to effectively reconstruct the dense structures of Chinese characters. We conduct experiments on the proposed Real-CE benchmark and evaluate the existing STISR models with and without our edge-aware loss. The benchmark, including data and source code, is available at https://github.com/mjq11302010044/Real-CE.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 02:57:48 GMT" } ]
2023-08-08T00:00:00
[ [ "Ma", "Jianqi", "" ], [ "Liang", "Zhetong", "" ], [ "Xiang", "Wangmeng", "" ], [ "Yang", "Xi", "" ], [ "Zhang", "Lei", "" ] ]
new_dataset
0.999886
2308.03349
Shengzhi Li
Shengzhi Li, Nima Tajbakhsh
SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs
null
null
null
null
cs.CL cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we present SciGraphQA, a synthetic multi-turn question-answer dataset related to academic graphs. SciGraphQA is 13 times larger than ChartVQA, the previously largest chart-visual question-answering dataset. It is also the largest open-sourced chart VQA dataset with non-synthetic charts. To build our dataset, we selected 290,000 Computer Science or Machine Learning ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate 295K samples of open-vocabulary multi-turn question-answering dialogues about the graphs. As context, we provided the text-only Palm-2 with the paper title, abstract, the paragraph mentioning the graph, and rich text contextual data from the graph itself, obtaining dialogues with an average of 2.23 question-answer turns for each graph. We asked GPT-4 to assess the matching quality of our question-answer turns given the paper's context, obtaining an average rating of 8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most popular MLLMs, such as LLaVA, mPLUG-Owl, BLIP-2, and OpenFlamingo, on our dataset, finding LLaVA-13B to be the most performant with a CIDEr score of 0.08. We further enriched the question prompts for LLaVA by including the serialized data tables extracted from the graphs using the DePlot model, boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset, we also fine-tuned LLaVA using our dataset, reaching a substantially higher CIDEr score of 0.26. We anticipate further accuracy improvement by including segmentation mask tokens and leveraging larger LLM backbones coupled with emergent prompting techniques. Our code and data are open-sourced.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 07:03:49 GMT" } ]
2023-08-08T00:00:00
[ [ "Li", "Shengzhi", "" ], [ "Tajbakhsh", "Nima", "" ] ]
new_dataset
0.999819
2308.03357
Yoshiki Obinata
Yoshiki Obinata, Naoaki Kanazawa, Kento Kawaharazuka, Iori Yanokura, Soonhyo Kim, Kei Okada and Masayuki Inaba
Foundation Model based Open Vocabulary Task Planning and Executive System for General Purpose Service Robots
In review
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a strategy for implementing a robotic system capable of performing General Purpose Service Robot (GPSR) tasks in RoboCup@Home. The GPSR task requires a real robot to hear a variety of commands in spoken language and execute tasks in a daily life environment. To achieve this, we integrate a foundation-model-based inference system and a state-machine task executive. The foundation models plan the task and detect objects with open vocabulary, and the state-machine task executive manages each robot's actions. This system works stably, and we took first place in the RoboCup@home Japan Open 2022's GPSR with 130 points, more than 85 points ahead of the other teams.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 07:26:50 GMT" } ]
2023-08-08T00:00:00
[ [ "Obinata", "Yoshiki", "" ], [ "Kanazawa", "Naoaki", "" ], [ "Kawaharazuka", "Kento", "" ], [ "Yanokura", "Iori", "" ], [ "Kim", "Soonhyo", "" ], [ "Okada", "Kei", "" ], [ "Inaba", "Masayuki", "" ] ]
new_dataset
0.998905
2308.03375
Maximilian Neidhardt
M. Neidhardt, S. Gerlach, F. N. Schmidt, I. A. K. Fiedler, S. Grube, B. Busse, and A. Schlaefer
VR-based body tracking to stimulate musculoskeletal training
Conference
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training helps to maintain and improve sufficient muscle function, body control, and body coordination. These are important to reduce the risk of fracture incidents caused by falls, especially for the elderly or people recovering from injury. Virtual reality training can offer a cost-effective and individualized training experience. We present an application for the HoloLens 2 that enables musculoskeletal training for elderly and impaired persons, allowing for autonomous training and automatic progress evaluation. We designed a virtual downhill skiing scenario that is controlled by body movement to stimulate balance and body control. By adapting the parameters of the ski slope, we can tailor the intensity of the training to individual users. In this work, we evaluate whether the movement data of the HoloLens 2 alone is sufficient to control and predict body movement and joint angles during musculoskeletal training. We record the movements of 10 healthy volunteers with external tracking cameras and track a set of body and joint angles of the participants during training. We estimate correlation coefficients and systematically analyze whether whole-body movement can be derived from the movement data of the HoloLens 2. No participant reported motion sickness effects, and all were able to quickly interact with and control their movement during skiing. Our results show a high correlation between the HoloLens 2 movement data and the external tracking of the upper-body movement and the joint angles of the lower limbs.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 07:54:32 GMT" } ]
2023-08-08T00:00:00
[ [ "Neidhardt", "M.", "" ], [ "Schmidt", "S. Gerlach F. N.", "" ], [ "Fiedler", "I. A. K.", "" ], [ "Grube", "S.", "" ], [ "Busse", "B.", "" ], [ "Schlaefer", "A.", "" ] ]
new_dataset
0.976478
2308.03424
Matthias Urban
Matthias Urban and Carsten Binnig
CAESURA: Language Models as Multi-Modal Query Planners
6 pages, 4 figures
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Traditional query planners translate SQL queries into query plans to be executed over relational data. However, it is impossible to use these query planners to query other data modalities, such as images, text, or video, stored in modern data systems such as data lakes. In this paper, we propose Language-Model-Driven Query Planning, a new paradigm of query planning that uses Language Models to translate natural language queries into executable query plans. Different from relational query planners, the resulting query plans can contain complex operators that are able to process arbitrary modalities. As part of this paper, we present a first GPT-4-based prototype called CAESURA and show the general feasibility of this idea on two datasets. Finally, we discuss several ideas to improve the query planning capabilities of today's Language Models.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 09:20:32 GMT" } ]
2023-08-08T00:00:00
[ [ "Urban", "Matthias", "" ], [ "Binnig", "Carsten", "" ] ]
new_dataset
0.999806
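A hedged sketch of language-model-driven query planning in the spirit of the record above: the LLM is prompted to emit a JSON plan whose steps name multi-modal operators, which a small interpreter then executes. The prompt wording, operator names, and the `llm` callable are assumptions for illustration, not CAESURA's actual interface.

import json

PLAN_PROMPT = """Translate the user's question into a JSON list of steps.
Allowed operators: scan(table), filter(condition),
image_select(column, text_query), join(left, right, on), answer(text).
Question: {question}
JSON plan:"""

def plan_query(llm, question: str) -> list[dict]:
    """Ask a hypothetical `llm` text-completion callable for a plan."""
    raw = llm(PLAN_PROMPT.format(question=question))
    return json.loads(raw)  # e.g. [{"op": "scan", "table": "paintings"}, ...]

def execute(plan: list[dict], operators: dict):
    """Run each step; every operator consumes the previous step's result."""
    result = None
    for step in plan:
        op = operators[step.pop("op")]
        result = op(result, **step)
    return result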
2308.03425
Federico Rossi
Federico Rossi, Francesco Urbani, Marco Cococcioni, Emanuele Ruffaldi, Sergio Saponara
FPPU: Design and Implementation of a Pipelined Full Posit Processing Unit
null
null
null
null
cs.AR cs.PF
http://creativecommons.org/licenses/by/4.0/
By exploiting the modular RISC-V ISA, this paper presents the customization of the instruction set with posit\textsuperscript{\texttrademark} arithmetic instructions to provide improved numerical accuracy, well-defined behavior, and an increased range of representable numbers, while keeping the flexibility and benefits of an open-source ISA, such as no licensing and royalty fees and community development. In this work, we present the design, implementation, and integration into the low-power Ibex RISC-V core of a full posit processing unit capable of directly implementing in hardware the arithmetic operations (add, sub, mul, div, and fma), the inversion, and the float-to-posit and posit-to-float conversions. We evaluate the speed, power, and area of this unit (which we have called the Full Posit Processing Unit). The FPPU has been prototyped on Alveo and Kintex FPGAs, and its impact on the metrics of the full RISC-V core has been evaluated, showing that we can provide real-number processing capabilities to the mentioned core with an increase in area limited to $7\%$ for 8-bit posits and to $15\%$ for 16-bit posits. Finally, we present tests on the use of posits for deep neural networks with different network models and datasets, showing a minimal drop in accuracy when using 16-bit posits instead of 32-bit IEEE floats.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 09:20:49 GMT" } ]
2023-08-08T00:00:00
[ [ "Rossi", "Federico", "" ], [ "Urbani", "Francesco", "" ], [ "Cococcioni", "Marco", "" ], [ "Ruffaldi", "Emanuele", "" ], [ "Saponara", "Sergio", "" ] ]
new_dataset
0.991531
2308.03427
Jingqing Ruan
Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Xingyu Zeng, Rui Zhao
TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. Despite their prowess, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and discuss the crucial capabilities necessary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applications. Our study emphasizes the substantial potential of these models, while also identifying areas that need more investigation and improvement.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 09:22:03 GMT" } ]
2023-08-08T00:00:00
[ [ "Ruan", "Jingqing", "" ], [ "Chen", "Yihong", "" ], [ "Zhang", "Bin", "" ], [ "Xu", "Zhiwei", "" ], [ "Bao", "Tianpeng", "" ], [ "Du", "Guoqing", "" ], [ "Shi", "Shiwei", "" ], [ "Mao", "Hangyu", "" ], [ "Zeng", "Xingyu", "" ], [ "Zhao", "Rui", "" ] ]
new_dataset
0.996414
2308.03429
Herman Sugiharto
Herman Sugiharto, Aradea, Husni Mubarok
RCMHA: Relative Convolutional Multi-Head Attention for Natural Language Modelling
13 pages, 13 figures, 6 tables
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The Attention module finds common usage in language modeling, presenting distinct challenges within the broader scope of Natural Language Processing. Multi-Head Attention (MHA) employs an absolute positional encoding, which imposes limitations on token length and entails substantial memory consumption during the processing of embedded inputs. The current remedy proposed by researchers involves the utilization of relative positional encoding, similar to the approach adopted in Transformer-XL or Relative Multi-Head Attention (RMHA), although the employed architecture consumes considerable memory resources. To address these challenges, this study endeavors to refine MHA, leveraging relative positional encoding in conjunction with a Depth-Wise Convolutional Layer architecture, which promises heightened accuracy coupled with minimized memory usage. The proposed RCMHA framework entails the modification of two integral components: firstly, the application of the Depth-Wise Convolutional Layer to the input embedding, encompassing the Query, Key, and Value parameters; secondly, the incorporation of Relative Positional Encoding into the attention scoring phase, harmoniously integrated with Scaled Dot-Product Attention. Empirical experiments underscore the advantages of RCMHA, wherein it exhibits superior accuracy, boasting a score of 0.572 in comparison to alternative attention modules such as MHA, Multi-DConv-Head Attention (MDHA), and RMHA. Concerning memory utilization, RCMHA emerges as the most frugal, demonstrating an average consumption of 2.98 GB, compared with RMHA, which necessitates 3.5 GB.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 09:24:24 GMT" } ]
2023-08-08T00:00:00
[ [ "Sugiharto", "Herman", "" ], [ "Aradea", "", "" ], [ "Mubarok", "Husni", "" ] ]
new_dataset
0.981423
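The first RCMHA ingredient described above, a depth-wise convolutional layer applied to the input embedding before the Query/Key/Value projections, can be sketched in PyTorch as follows. The kernel size and dimensions are assumptions, not the paper's hyperparameters, and the relative positional scoring is only summarized in a comment.

import torch
import torch.nn as nn

class DepthwiseQKV(nn.Module):
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        # groups=dim makes the convolution depth-wise (one filter per channel)
        self.dw = nn.Conv1d(dim, dim, kernel_size,
                            padding=kernel_size // 2, groups=dim)
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, dim) token embeddings
        x = self.dw(x.transpose(1, 2)).transpose(1, 2)
        return self.q(x), self.k(x), self.v(x)

x = torch.randn(2, 16, 64)
q, k, v = DepthwiseQKV(64)(x)
# Relative attention then adds a position-dependent term to the scores:
# scores = (q @ k.transpose(-2, -1) + rel_pos_scores) / 64 ** 0.5
print(q.shape)  # torch.Size([2, 16, 64])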
2308.03467
Guruprasad Parasnis
Guruprasad Parasnis, Anmol Chokshi, Kailas Devadkar
RoadScan: A Novel and Robust Transfer Learning Framework for Autonomous Pothole Detection in Roads
6 pages, 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research paper presents a novel approach to pothole detection using Deep Learning and Image Processing techniques. The proposed system leverages the VGG16 model for feature extraction and utilizes a custom Siamese network with triplet loss, referred to as RoadScan. The system aims to address the critical issue of potholes on roads, which pose significant risks to road users and have led to numerous accidents. Although it is necessary to completely remove potholes, doing so is a time-consuming process. Hence, a general road user should be able to detect potholes from a safe distance in order to avoid damage. Existing methods for pothole detection heavily rely on object detection algorithms, which tend to have a high chance of failure owing to the similarity in structures and textures of a road and a pothole. Additionally, these systems utilize millions of parameters, thereby making the model difficult to use in small-scale applications for the general citizen. By analyzing diverse image processing methods and various high-performing networks, the proposed model achieves remarkable performance in accurately detecting potholes. Evaluation metrics such as accuracy, EER, precision, recall, and AUROC validate the effectiveness of the system. Additionally, the proposed model demonstrates computational efficiency and cost-effectiveness by utilizing fewer parameters and less data for training. The research highlights the importance of technology in the transportation sector and its potential to enhance road safety and convenience. The network proposed in this model performs with 96.12% accuracy, 3.89% EER, and a 0.988 AUROC value, which is highly competitive with other state-of-the-art works.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 10:47:08 GMT" } ]
2023-08-08T00:00:00
[ [ "Parasnis", "Guruprasad", "" ], [ "Chokshi", "Anmol", "" ], [ "Devadkar", "Kailas", "" ] ]
new_dataset
0.958106
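The training objective named above (a Siamese embedding trained with triplet loss) is standard and can be sketched directly; the VGG16 feature extractor and the RoadScan-specific details are elided, and the margin value is an assumption.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin: float = 0.2):
    """Pull the anchor toward the positive embedding, push it from the negative."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Stand-in embeddings; in RoadScan these would come from the feature extractor.
a, p, n = (F.normalize(torch.randn(8, 128), dim=1) for _ in range(3))
print(triplet_loss(a, p, n).item())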
2308.03487
Stephanie Jean-Daubias
St\'ephanie Jean-Daubias (LIRIS, TWEAK, UCBL)
JADE: a board game to teach software ergonomics
null
Interaction Design and Architecture(s) Journal, 2023, 56, pp.29-52
10.55612/s-5002-056-002
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
JADE is an educational game we have imagined, designed, built, and used successfully in various contexts. This board game enables learning and practicing software ergonomics concepts. It is intended for beginners. We use it every year for several hours with our second-year computer science students at Lyon 1 University. In this paper, we present the classical version of the game, as well as the design and evaluation process that we applied. We also present the hybrid version of JADE, which relies on the use of QR codes and videos. We then describe its use in our teaching (with about 850 learners for a total duration of 54 hours, which totals more than 2500 student-hours). Finally, we discuss the results obtained and present the considered evolutions.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 11:29:34 GMT" } ]
2023-08-08T00:00:00
[ [ "Jean-Daubias", "Stéphanie", "", "LIRIS, TWEAK, UCBL" ] ]
new_dataset
0.999401
2308.03514
Sungho Suh
Sungho Suh, Vitor Fortes Rey, Sizhen Bian, Yu-Chi Huang, Jo\v{z}e M. Ro\v{z}anec, Hooman Tavakoli Ghinani, Bo Zhou, Paul Lukowicz
Worker Activity Recognition in Manufacturing Line Using Near-body Electric Field
null
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manufacturing industries strive to improve production efficiency and product quality by deploying advanced sensing and control systems. Wearable sensors are emerging as a promising solution for achieving this goal, as they can provide continuous and unobtrusive monitoring of workers' activities in the manufacturing line. This paper presents a novel wearable sensing prototype that combines IMU and body capacitance sensing modules to recognize worker activities in the manufacturing line. To handle these multimodal sensor data, we propose and compare early and late sensor data fusion approaches for multi-channel time-series convolutional neural networks and deep convolutional LSTM. We evaluate the proposed hardware and neural network model by collecting and annotating sensor data using the proposed sensing prototype and Apple Watches in the testbed of the manufacturing line. Experimental results demonstrate that our proposed methods achieve superior performance compared to the baseline methods, indicating the potential of the proposed approach for real-world applications in manufacturing industries. Furthermore, with the body capacitance sensor and the feature fusion method, the proposed sensing prototype achieves a 6.35% and a 9.38% higher macro F1 score than the prototype without the body capacitance sensor and than the Apple Watch data, respectively.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 12:10:13 GMT" } ]
2023-08-08T00:00:00
[ [ "Suh", "Sungho", "" ], [ "Rey", "Vitor Fortes", "" ], [ "Bian", "Sizhen", "" ], [ "Huang", "Yu-Chi", "" ], [ "Rožanec", "Jože M.", "" ], [ "Ghinani", "Hooman Tavakoli", "" ], [ "Zhou", "Bo", "" ], [ "Lukowicz", "Paul", "" ] ]
new_dataset
0.998415
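The early-fusion variant compared above can be illustrated by concatenating the IMU and body-capacitance channels before a shared temporal CNN; the channel counts, window length, and class count below are illustrative assumptions.

import torch
import torch.nn as nn

imu = torch.randn(4, 6, 200)   # (batch, accel+gyro channels, time steps)
cap = torch.randn(4, 1, 200)   # (batch, body-capacitance channel, time)

early_fused = torch.cat([imu, cap], dim=1)      # (4, 7, 200)

encoder = nn.Sequential(
    nn.Conv1d(7, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 8),           # 8 worker-activity classes (assumed)
)
print(encoder(early_fused).shape)  # torch.Size([4, 8])
# Late fusion would instead run one encoder per modality and merge the
# resulting feature vectors before the classifier.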
2308.03526
Michael Mathieu
Micha\"el Mathieu, Sherjil Ozair, Srivatsan Srinivasan, Caglar Gulcehre, Shangtong Zhang, Ray Jiang, Tom Le Paine, Richard Powell, Konrad \.Zo{\l}na, Julian Schrittwieser, David Choi, Petko Georgiev, Daniel Toyama, Aja Huang, Roman Ring, Igor Babuschkin, Timo Ewalds, Mahyar Bordbar, Sarah Henderson, Sergio G\'omez Colmenarejo, A\"aron van den Oord, Wojciech Marian Czarnecki, Nando de Freitas, Oriol Vinyals
AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning
32 pages, 13 figures, previous version published as a NeurIPS 2021 workshop: https://openreview.net/forum?id=Np8Pumfoty
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
StarCraft II is one of the most challenging simulated reinforcement learning environments; it is partially observable, stochastic, and multi-agent, and mastering it requires strategic planning over long time horizons with real-time low-level execution. It also has an active professional competitive scene. StarCraft II is uniquely suited for advancing offline RL algorithms, both because of its challenging nature and because Blizzard has released a massive dataset of millions of StarCraft II games played by human players. This paper leverages that dataset and establishes a benchmark, called AlphaStar Unplugged, introducing unprecedented challenges for offline reinforcement learning. We define a dataset (a subset of Blizzard's release), tools standardizing an API for machine learning methods, and an evaluation protocol. We also present baseline agents, including behavior cloning and offline variants of actor-critic and MuZero. We improve the state of the art of agents using only offline data, and we achieve a 90% win rate against the previously published AlphaStar behavior cloning agent.
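As a rough illustration of the behavior cloning baseline mentioned above, the following sketch performs supervised learning of a policy from offline (observation, action) pairs; the observation size, action count, and tiny network are placeholder assumptions, and the paper's actual agents are far larger and more structured.

```python
# Sketch of behavior cloning: maximize the log-likelihood of demonstrated
# actions from an offline dataset. Shapes and the policy net are illustrative.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 100))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def bc_step(obs: torch.Tensor, actions: torch.Tensor) -> float:
    """One gradient step on a batch of (observation, action) pairs."""
    logits = policy(obs)             # obs: (B, 512) -> logits: (B, 100)
    loss = loss_fn(logits, actions)  # actions: (B,) integer action labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```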
[ { "version": "v1", "created": "Mon, 7 Aug 2023 12:21:37 GMT" } ]
2023-08-08T00:00:00
[ [ "Mathieu", "Michaël", "" ], [ "Ozair", "Sherjil", "" ], [ "Srinivasan", "Srivatsan", "" ], [ "Gulcehre", "Caglar", "" ], [ "Zhang", "Shangtong", "" ], [ "Jiang", "Ray", "" ], [ "Paine", "Tom Le", "" ], [ "Powell", "Richard", "" ], [ "Żołna", "Konrad", "" ], [ "Schrittwieser", "Julian", "" ], [ "Choi", "David", "" ], [ "Georgiev", "Petko", "" ], [ "Toyama", "Daniel", "" ], [ "Huang", "Aja", "" ], [ "Ring", "Roman", "" ], [ "Babuschkin", "Igor", "" ], [ "Ewalds", "Timo", "" ], [ "Bordbar", "Mahyar", "" ], [ "Henderson", "Sarah", "" ], [ "Colmenarejo", "Sergio Gómez", "" ], [ "Oord", "Aäron van den", "" ], [ "Czarnecki", "Wojciech Marian", "" ], [ "de Freitas", "Nando", "" ], [ "Vinyals", "Oriol", "" ] ]
new_dataset
0.998634
2308.03558
Wai Man Si
Wai Man Si, Michael Backes, Yang Zhang
Mondrian: Prompt Abstraction Attack Against Large Language Models for Cheaper API Pricing
null
null
null
null
cs.CR cs.CL
http://creativecommons.org/licenses/by/4.0/
The Machine Learning as a Service (MLaaS) market is rapidly expanding and becoming more mature. For example, OpenAI's ChatGPT is an advanced large language model (LLM) that generates responses for various queries with associated fees. Although these models can deliver satisfactory performance, they are far from perfect. Researchers have long studied the vulnerabilities and limitations of LLMs, such as adversarial attacks and model toxicity. Inevitably, commercial ML models are also not exempt from such issues, which can be problematic as MLaaS continues to grow. In this paper, we discover a new attack strategy against LLM APIs, namely the prompt abstraction attack. Specifically, we propose Mondrian, a simple and straightforward method that abstracts sentences, which can lower the cost of using LLM APIs. In this approach, the adversary first creates a pseudo API (with a lower established price) to serve as the proxy of the target API (with a higher established price). Next, the pseudo API leverages Mondrian to modify the user query, obtain the abstracted response from the target API, and forward it back to the end user. Our results show that Mondrian successfully reduces user queries' token length ranging from 13% to 23% across various tasks, including text classification, generation, and question answering. Meanwhile, these abstracted queries do not significantly affect the utility of task-specific and general language models like ChatGPT. Mondrian also reduces instruction prompts' token length by at least 11% without compromising output quality. As a result, the prompt abstraction attack enables the adversary to profit without bearing the cost of API development and deployment.
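The following sketch illustrates the proxy flow the abstract describes, under loud assumptions: `abstract_query` is a trivial stopword-dropping stand-in for Mondrian's sentence abstraction, and `call_target_api` is a hypothetical placeholder for the higher-priced target LLM API, not a real client.

```python
# Sketch of the prompt-abstraction proxy flow. `abstract_query` is a trivial
# stand-in for Mondrian's abstraction step, and `call_target_api` is a
# hypothetical placeholder for the paid target LLM API.
STOPWORDS = {"the", "a", "an", "please", "kindly", "very", "really"}

def abstract_query(query: str) -> str:
    """Shorten the query so fewer tokens are billed by the target API."""
    kept = [w for w in query.split() if w.lower() not in STOPWORDS]
    return " ".join(kept)

def call_target_api(prompt: str) -> str:
    raise NotImplementedError("stand-in for the higher-priced target LLM API")

def pseudo_api(user_query: str) -> str:
    """The adversary's cheaper proxy: abstract, forward, return the answer."""
    short_prompt = abstract_query(user_query)
    return call_target_api(short_prompt)
```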
[ { "version": "v1", "created": "Mon, 7 Aug 2023 13:10:35 GMT" } ]
2023-08-08T00:00:00
[ [ "Si", "Wai Man", "" ], [ "Backes", "Michael", "" ], [ "Zhang", "Yang", "" ] ]
new_dataset
0.968351
2308.03586
Nafiseh Kakhani
Nafiseh Kakhani, Moien Rangzan, Ali Jamali, Sara Attarchi, Seyed Kazem Alavipanah, and Thomas Scholten
SoilNet: An Attention-based Spatio-temporal Deep Learning Framework for Soil Organic Carbon Prediction with Digital Soil Mapping in Europe
12 pages
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Digital soil mapping (DSM) is an advanced approach that integrates statistical modeling and cutting-edge technologies, including machine learning (ML) methods, to accurately depict soil properties and their spatial distribution. Soil organic carbon (SOC) is a crucial soil attribute providing valuable insights into soil health, nutrient cycling, greenhouse gas emissions, and overall ecosystem productivity. This study highlights the significance of spatio-temporal deep learning (DL) techniques within the DSM framework. A novel architecture is proposed, incorporating spatial information using a base convolutional neural network (CNN) model and a spatial attention mechanism, along with temporal climate information using a long short-term memory (LSTM) network, for SOC prediction across Europe. The model utilizes a comprehensive set of environmental features, including Landsat-8 images, topography, remote sensing indices, and climate time series, as input features. Results demonstrate that the proposed framework outperforms conventional ML approaches, such as the random forest models commonly used in DSM, yielding a lower root mean square error (RMSE). This model is a robust tool for predicting SOC and could be applied to other soil properties, thereby contributing to the advancement of DSM techniques and facilitating land management and decision-making processes based on accurate information.
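As a rough sketch of the architecture described above, the following combines a small CNN with a spatial-attention gate for the image patch, an LSTM for the climate series, and a joint regression head; all layer sizes, channel counts, and feature dimensions are assumptions, not the paper's configuration.

```python
# Sketch of a CNN + spatial-attention + LSTM regressor for SOC prediction.
# All layer sizes and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class SoilNetSketch(nn.Module):
    def __init__(self, img_channels: int = 12, clim_features: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 1x1 conv producing a per-pixel attention weight in (0, 1)
        self.attn = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.lstm = nn.LSTM(clim_features, 64, batch_first=True)
        self.head = nn.Linear(64 + 64, 1)  # regress a single SOC value

    def forward(self, img, clim):  # img: (B, C, H, W), clim: (B, T, F)
        feat = self.cnn(img)
        feat = feat * self.attn(feat)    # spatial attention gating
        spatial = feat.mean(dim=(2, 3))  # pooled spatial features: (B, 64)
        _, (h, _) = self.lstm(clim)      # final hidden state: (1, B, 64)
        return self.head(torch.cat([spatial, h[-1]], dim=1))
```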
[ { "version": "v1", "created": "Mon, 7 Aug 2023 13:44:44 GMT" } ]
2023-08-08T00:00:00
[ [ "Kakhani", "Nafiseh", "" ], [ "Rangzan", "Moien", "" ], [ "Jamali", "Ali", "" ], [ "Attarchi", "Sara", "" ], [ "Alavipanah", "Seyed Kazem", "" ], [ "Scholten", "Thomas", "" ] ]
new_dataset
0.993568
2308.03610
Liao Qu
Huichao Zhang, Bowen Chen, Hao Yang, Liao Qu, Xu Wang, Li Chen, Chao Long, Feida Zhu, Kang Du, Min Zheng
AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Creating expressive, diverse, and high-quality 3D avatars from highly customized text descriptions and pose guidance is a challenging task, due to the intricacy of 3D modeling and texturing required to ensure fine details and various styles (realistic, fictional, etc.). We present AvatarVerse, a stable pipeline for generating expressive, high-quality 3D avatars from nothing but text descriptions and pose guidance. Specifically, we introduce a 2D diffusion model conditioned on DensePose signals to establish 3D pose control of avatars through 2D images, which enhances view consistency from partially observed scenarios. It addresses the infamous Janus problem and significantly stabilizes the generation process. Moreover, we propose a progressive high-resolution 3D synthesis strategy, which substantially improves the quality of the created 3D avatars. As a result, the proposed AvatarVerse pipeline achieves zero-shot creation of 3D avatars that are not only more expressive, but also of higher quality and fidelity than previous works. Rigorous qualitative evaluations and user studies showcase AvatarVerse's superiority in synthesizing high-fidelity 3D avatars, leading to a new standard in high-quality and stable 3D avatar creation. Our project page is: https://avatarverse3d.github.io
[ { "version": "v1", "created": "Mon, 7 Aug 2023 14:09:46 GMT" } ]
2023-08-08T00:00:00
[ [ "Zhang", "Huichao", "" ], [ "Chen", "Bowen", "" ], [ "Yang", "Hao", "" ], [ "Qu", "Liao", "" ], [ "Wang", "Xu", "" ], [ "Chen", "Li", "" ], [ "Long", "Chao", "" ], [ "Zhu", "Feida", "" ], [ "Du", "Kang", "" ], [ "Zheng", "Min", "" ] ]
new_dataset
0.999705
2308.03652
Ardit Ramadani
Ardit Ramadani, Peter Ewert, Heribert Schunkert, Nassir Navab
WarpEM: Dynamic Time Warping for Accurate Catheter Registration in EM-guided Procedures
The 26th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate catheter tracking is crucial during minimally invasive endovascular procedures (MIEP), and electromagnetic (EM) tracking is a widely used technology that serves this purpose. However, registration between preoperative images and the EM tracking system is often challenging. Existing registration methods typically require manual interactions, which can be time-consuming, increase the risk of errors, and change the procedural workflow. Although several registration methods are available for catheter tracking, such as marker-based and path-based approaches, their limitations can impact the accuracy of the resulting tracking solution and, consequently, the outcome of the medical procedure. This paper introduces a novel automated catheter registration method for EM-guided MIEP. The method utilizes temporal analysis of 3D signals, in particular Dynamic Time Warping (DTW), to improve registration accuracy and reliability compared to existing methods. DTW can accurately warp and match EM-tracked paths to the vessel's centerline, making it particularly suitable for registration. The introduced registration method is evaluated for accuracy in a vascular phantom using a marker-based registration as the ground truth. The results indicate that the DTW method yields accurate and reliable registration outcomes, with a mean error of $2.22$mm. The introduced registration method presents several advantages over state-of-the-art methods, such as high registration accuracy, no required initialization, and increased automation.
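For reference, the textbook DTW recurrence that such path-to-centerline matching builds on can be written as follows; this is the classic algorithm on (N, 3) point arrays, not WarpEM's actual implementation.

```python
# Classic DTW recurrence for matching one 3D path against another
# (e.g., an EM-tracked catheter path vs. a vessel centerline).
import numpy as np

def dtw_distance(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Accumulated DTW cost between two (N, 3) and (M, 3) point sequences."""
    n, m = len(path_a), len(path_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(path_a[i - 1] - path_b[j - 1])
            # best of match, insertion, deletion from the previous cells
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return float(cost[n, m])
```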
[ { "version": "v1", "created": "Mon, 7 Aug 2023 15:07:21 GMT" } ]
2023-08-08T00:00:00
[ [ "Ramadani", "Ardit", "" ], [ "Ewert", "Peter", "" ], [ "Schunkert", "Heribert", "" ], [ "Navab", "Nassir", "" ] ]
new_dataset
0.981605
2308.03665
Felix Chalumeau
Felix Chalumeau, Bryan Lim, Raphael Boige, Maxime Allard, Luca Grillotti, Manon Flageat, Valentin Mac\'e, Arthur Flajolet, Thomas Pierrot, Antoine Cully
QDax: A Library for Quality-Diversity and Population-based Algorithms with Hardware Acceleration
null
null
null
null
cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
QDax is an open-source library with a streamlined and modular API for Quality-Diversity (QD) optimization algorithms in Jax. The library serves as a versatile tool for optimization purposes, ranging from black-box optimization to continuous control. QDax offers implementations of popular QD, Neuroevolution, and Reinforcement Learning (RL) algorithms, supported by various examples. All the implementations can be just-in-time compiled with Jax, facilitating efficient execution across multiple accelerators, including GPUs and TPUs. These implementations effectively demonstrate the framework's flexibility and user-friendliness, easing experimentation for research purposes. Furthermore, the library is thoroughly documented and tested with 95% coverage.
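For intuition, here is a generic NumPy sketch of the core Quality-Diversity loop (MAP-Elites style) that libraries in this space implement; it is not QDax's actual API, and the toy objective, descriptor mapping, and mutation scheme are all illustrative assumptions.

```python
# Generic MAP-Elites-style QD loop: maintain an archive of elites indexed by
# a behavioral descriptor, keeping the fittest solution per cell.
import numpy as np

rng = np.random.default_rng(0)
GRID = 10  # cells per descriptor axis (a 2D descriptor is assumed)
archive_fit = np.full((GRID, GRID), -np.inf)
archive_sol = np.zeros((GRID, GRID, 8))  # 8-dim solutions, illustrative

def evaluate(x):
    fitness = -np.sum(x ** 2)                            # toy objective
    descriptor = np.clip((x[:2] + 1) / 2, 0, 1 - 1e-9)   # map to [0, 1)
    return fitness, descriptor

for step in range(10_000):
    if np.isfinite(archive_fit).any() and rng.random() < 0.9:
        # select a random elite from the archive and mutate it
        idx = rng.choice(np.flatnonzero(np.isfinite(archive_fit)))
        parent = archive_sol.reshape(-1, 8)[idx]
        child = parent + 0.1 * rng.standard_normal(8)
    else:
        child = rng.uniform(-1, 1, 8)      # random bootstrap solution
    fit, desc = evaluate(child)
    i, j = (desc * GRID).astype(int)       # descriptor -> archive cell
    if fit > archive_fit[i, j]:            # keep only the best per cell
        archive_fit[i, j], archive_sol[i, j] = fit, child
```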
[ { "version": "v1", "created": "Mon, 7 Aug 2023 15:29:44 GMT" } ]
2023-08-08T00:00:00
[ [ "Chalumeau", "Felix", "" ], [ "Lim", "Bryan", "" ], [ "Boige", "Raphael", "" ], [ "Allard", "Maxime", "" ], [ "Grillotti", "Luca", "" ], [ "Flageat", "Manon", "" ], [ "Macé", "Valentin", "" ], [ "Flajolet", "Arthur", "" ], [ "Pierrot", "Thomas", "" ], [ "Cully", "Antoine", "" ] ]
new_dataset
0.968428