field           dtype           range
id              stringlengths   9-10
submitter       stringlengths   2-52
authors         stringlengths   4-6.51k
title           stringlengths   4-246
comments        stringlengths   1-523
journal-ref     stringlengths   4-345
doi             stringlengths   11-120
report-no       stringlengths   2-243
categories      stringlengths   5-98
license         stringclasses   9 values
abstract        stringlengths   33-3.33k
versions        list
update_date     timestamp[s]
authors_parsed  list
prediction      stringclasses   1 value
probability     float64         0.95-1
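Each record below repeats these fields in order, one value per line, with null for empty fields. A minimal sketch of loading such a snapshot and filtering on the classifier columns, assuming the records have been exported as JSON Lines (the file name arxiv_new_dataset.jsonl is hypothetical):

```python
import json

# Hypothetical local export of this table, one JSON object per line,
# with keys matching the schema above (id, title, categories, ...).
records = []
with open("arxiv_new_dataset.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

# Every row shown here carries prediction == "new_dataset"; keep only
# the most confident rows and print a compact view of each.
for rec in records:
    if rec["prediction"] == "new_dataset" and rec["probability"] >= 0.99:
        print(rec["id"], rec["title"], rec["categories"], sep=" | ")
```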
2308.06719
Yiding Qiu
Yiding Qiu, Henrik I. Christensen
3D Scene Graph Prediction on Point Clouds Using Knowledge Graphs
accepted at CASE 2023
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
3D scene graph prediction is a task that aims to concurrently predict object classes and their relationships within a 3D environment. As these environments are primarily designed by and for humans, incorporating commonsense knowledge regarding objects and their relationships can significantly constrain and enhance the prediction of the scene graph. In this paper, we investigate the application of commonsense knowledge graphs for 3D scene graph prediction on point clouds of indoor scenes. Through experiments conducted on a real-world indoor dataset, we demonstrate that integrating commonsense knowledge via the message-passing method improves scene graph prediction accuracy by 15.0% with external knowledge and by 7.96% with internal knowledge, compared to state-of-the-art algorithms. We also test the system in the real world at 10 frames per second for scene graph generation, demonstrating the model's usability in a realistic robotics setting.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 08:20:17 GMT" } ]
2023-08-15T00:00:00
[ [ "Qiu", "Yiding", "" ], [ "Christensen", "Henrik I.", "" ] ]
new_dataset
0.973554
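The record above illustrates the two author encodings each row carries: authors is a display string, while authors_parsed is a list of [last, first, suffix] triples. A small sketch, assuming that triple layout, of rebuilding display names from the parsed form:

```python
# [last, first, suffix] triples copied from the record above (2308.06719).
authors_parsed = [["Qiu", "Yiding", ""], ["Christensen", "Henrik I.", ""]]

def display_name(triple):
    last, first, suffix = triple
    # Join only the non-empty parts, e.g. ["Qiu", "Yiding", ""] -> "Yiding Qiu".
    return " ".join(part for part in (first, last, suffix) if part)

print(", ".join(display_name(t) for t in authors_parsed))
# -> Yiding Qiu, Henrik I. Christensen
```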
2308.06732
Zhiqing Wei
Yingying Zou, Zhiqing Wei, Yanpeng Cui, Xinyi Liu, and Zhiyong Feng
UD-MAC: Delay Tolerant Multiple Access Control Protocol for Unmanned Aerial Vehicle Networks
null
null
null
null
cs.NI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In unmanned aerial vehicle (UAV) networks, high-capacity data transmission is of utmost importance for applications such as intelligent transportation, smart cities, and forest monitoring, which rely on the mobility of UAVs to collect and transmit large amounts of data, including video and image data. Due to the short flight time of UAVs, network capacity is reduced when they return to the ground unit for charging. Hence, we suggest that UAVs apply a store-carry-and-forward (SCF) transmission mode to carry packets on their way back to the ground unit, improving network throughput. In this paper, we propose a novel protocol, named UAV delay-tolerant multiple access control (UD-MAC), which can support different transmission modes in UAV networks. We set a higher priority for SCF transmission and analyze the probability of being in SCF mode to derive the network throughput. Simulation results show that the network throughput of UD-MAC is improved by 57% to 83% compared to VeMAC.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 09:49:59 GMT" } ]
2023-08-15T00:00:00
[ [ "Zou", "Yingying", "" ], [ "Wei", "Zhiqing", "" ], [ "Cui", "Yanpeng", "" ], [ "Liu", "Xinyi", "" ], [ "Feng", "Zhiyong", "" ] ]
new_dataset
0.998278
2308.06782
Gelei Deng
Gelei Deng, Yi Liu, V\'ictor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, Stefan Rass
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
null
null
null
null
cs.SE cs.CR
http://creativecommons.org/licenses/by/4.0/
Penetration testing, a crucial industrial practice for ensuring system security, has traditionally resisted automation due to the extensive expertise required of human professionals. Large Language Models (LLMs) have shown significant advancements in various domains, and their emergent abilities suggest their potential to revolutionize industries. In this research, we evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines across platforms. Our findings reveal that while LLMs demonstrate proficiency in specific sub-tasks within the penetration testing process, such as using testing tools, interpreting outputs, and proposing subsequent actions, they also encounter difficulty in maintaining an integrated understanding of the overall testing scenario. In response to these insights, we introduce PentestGPT, an LLM-empowered automatic penetration testing tool that leverages the abundant domain knowledge inherent in LLMs. PentestGPT is meticulously designed with three self-interacting modules, each addressing an individual sub-task of penetration testing, to mitigate the challenges related to context loss. Our evaluation shows that PentestGPT not only outperforms LLMs with a task-completion increase of 228.6% compared to the GPT-3.5 model among the benchmark targets but also proves effective in tackling real-world penetration testing challenges. Having been open-sourced on GitHub, PentestGPT has garnered over 4,700 stars and fostered active community engagement, attesting to its value and impact in both the academic and industrial spheres.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 14:35:50 GMT" } ]
2023-08-15T00:00:00
[ [ "Deng", "Gelei", "" ], [ "Liu", "Yi", "" ], [ "Mayoral-Vilches", "Víctor", "" ], [ "Liu", "Peng", "" ], [ "Li", "Yuekang", "" ], [ "Xu", "Yuan", "" ], [ "Zhang", "Tianwei", "" ], [ "Liu", "Yang", "" ], [ "Pinzger", "Martin", "" ], [ "Rass", "Stefan", "" ] ]
new_dataset
0.999321
2308.06802
Xiangliang Kong
Xiangliang Kong
Locally repairable convertible codes with optimal access costs
25 pages
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Modern large-scale distributed storage systems use erasure codes to protect against node failures with low storage overhead. In practice, the failure rate and other characteristics of the storage devices in the system may vary significantly over time, leading to changes in the ideal code parameters. Maintaining storage efficiency therefore requires the system to adjust the parameters of the codes in use. The process of changing code parameters on encoded data is called code conversion. As an important class of storage codes, locally repairable codes (LRCs) can repair any codeword symbol using a small number of other symbols. This feature makes LRCs highly efficient for addressing single-node failures in storage systems. In this paper, we investigate code conversions for locally repairable codes in the merge regime. We establish a lower bound on the access cost of code conversion for general LRCs and propose a general construction of LRCs that can perform code conversions with access cost matching this bound. This construction provides a family of LRCs together with an optimal conversion process over a field whose size is linear in the code length.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 16:09:12 GMT" } ]
2023-08-15T00:00:00
[ [ "Kong", "Xiangliang", "" ] ]
new_dataset
0.99474
2308.06811
Nouran Abdalazim
Nouran Abdalazim, Leonardo Alchieri, Lidia Alecci, Silvia Santini
BiHeartS: Bilateral Heart Rate from multiple devices and body positions for Sleep measurement Dataset
5 pages
null
null
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Sleep is the primary means of recovery from accumulated fatigue and thus plays a crucial role in fostering people's mental and physical well-being. Sleep quality monitoring systems are often implemented using wearables that leverage their sensing capabilities to provide sleep behaviour insights and recommendations to users. Building models to estimate sleep quality from sensor data is a challenging task, due to the variability across users of physiological data, of the perception of sleep quality, and of daily routines. This challenge underscores the need for a comprehensive dataset that includes information about the daily behaviour of users, their physiological signals, and their perceived sleep quality. In this paper, we narrow this gap by proposing the Bilateral Heart rate from multiple devices and body positions for Sleep measurement (BiHeartS) dataset. The dataset is collected in the wild from 10 participants over 30 consecutive nights. Both research-grade and commercial wearable devices are included in the data collection campaign. Comprehensive self-reports about sleep quality and daily routine are also collected.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 16:53:09 GMT" } ]
2023-08-15T00:00:00
[ [ "Abdalazim", "Nouran", "" ], [ "Alchieri", "Leonardo", "" ], [ "Alecci", "Lidia", "" ], [ "Santini", "Silvia", "" ] ]
new_dataset
0.992282
2308.06819
Jo\~ao Vitorino
Jo\~ao Vitorino, Isabel Pra\c{c}a, Eva Maia
SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection
31 pages, 3 tables, 6 figures, Computers and Security journal
null
null
null
cs.CR cs.LG cs.NI
http://creativecommons.org/licenses/by/4.0/
Machine Learning (ML) can be incredibly valuable to automate anomaly detection and cyber-attack classification, improving the way that Network Intrusion Detection (NID) is performed. However, despite the benefits of ML models, they are highly susceptible to adversarial cyber-attack examples specifically crafted to exploit them. A wide range of adversarial attacks have been created and researchers have worked on various defense strategies to safeguard ML models, but most were not intended for the specific constraints of a communication network and its communication protocols, so they may lead to unrealistic examples in the NID domain. This Systematization of Knowledge (SoK) consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples and could be used in real ML development and deployment scenarios with real network traffic flows. This SoK also describes the open challenges regarding the use of adversarial ML in the NID domain, defines the fundamental properties that are required for an adversarial example to be realistic, and provides guidelines for researchers to ensure that their future experiments are adequate for a real communication network.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 17:23:36 GMT" } ]
2023-08-15T00:00:00
[ [ "Vitorino", "João", "" ], [ "Praça", "Isabel", "" ], [ "Maia", "Eva", "" ] ]
new_dataset
0.994414
2308.06829
Hao Xu
Hao Xu, Yunqing Sun, Xiaoshuai Zhang, Erwu Liu and Chih-Lin I
When Web 3.0 Meets Reality: A Hyperdimensional Fractal Polytope P2P Ecosystems
null
null
null
null
cs.NI cs.AR cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
Web 3.0 opens up a new mode of existence, the crypto-network-entity, defined independently by public key pairs for entities and by the connection to the Web 3.0 cyberspace. In this paper, we first present a spacetime coordinate system based on fractal polytopes in arbitrary dimensions, with discrete time offered by blockchain and consensus. Second, novel network entities and functions are defined to make use of hyperdimensional deterministic switching and routing protocols and blockchain-enabled mutual authentication. In addition to the spacetime network architecture, we also define a multi-tier identity scheme that extends the native Web 3.0 crypto-network-entity to the outer cyber and physical worlds, offering legally compliant anonymity and linkability for all derived identifiers of entities. In this way, we unify the holistic Web 3.0 network based on persistent spacetime and its entity extension to our cyber and physical world.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 18:14:45 GMT" } ]
2023-08-15T00:00:00
[ [ "Xu", "Hao", "" ], [ "Sun", "Yunqing", "" ], [ "Zhang", "Xiaoshuai", "" ], [ "Liu", "Erwu", "" ], [ "I", "Chih-Lin", "" ] ]
new_dataset
0.998011
2308.06850
Laurie Williams
William Enck, Yasemin Acar, Michel Cukier, Alexandros Kapravelos, Christian K\"astner, Laurie Williams
S3C2 Summit 2023-06: Government Secure Supply Chain Summit
arXiv admin note: text overlap with arXiv:2307.16557, arXiv:2307.15642
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent years have seen increased cyber attacks targeting less secure elements in the software supply chain and causing severe damage to businesses and organizations. Well-known examples of software supply chain attacks are the SolarWinds and Log4j incidents, which have affected thousands of customers and businesses. The US government and industry are equally interested in enhancing software supply chain security. On June 7, 2023, researchers from the NSF-supported Secure Software Supply Chain Center (S3C2) conducted a Secure Software Supply Chain Summit with a diverse set of 17 practitioners from 13 government agencies. The goal of the Summit was two-fold: (1) to share our observations from our previous two summits with industry, and (2) to enable sharing between individuals at the government agencies regarding practical experiences and challenges with software supply chain security. For each discussion topic, we presented our observations and take-aways from the industry summits to spur conversation. We specifically focused on Executive Order 14028, software bills of materials (SBOMs), choosing new dependencies, provenance and self-attestation, and large language models. The open discussions enabled mutual sharing and shed light on common challenges that government agencies see as impacting government and industry practitioners when securing their software supply chain. In this paper, we provide a summary of the Summit.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 21:51:28 GMT" } ]
2023-08-15T00:00:00
[ [ "Enck", "William", "" ], [ "Acar", "Yasemin", "" ], [ "Cukier", "Michel", "" ], [ "Kapravelos", "Alexandros", "" ], [ "Kästner", "Christian", "" ], [ "Williams", "Laurie", "" ] ]
new_dataset
0.999465
2308.06861
Fahimeh Fooladgar
Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi
Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep neural networks have proven to be highly effective when large amounts of data with clean labels are available. However, their performance degrades when training data contains noisy labels, leading to poor generalization on the test set. Real-world datasets contain noisy label samples that either have similar visual semantics to other classes in the dataset (in-distribution, ID) or have no semantic relevance to any class in the dataset (out-of-distribution, OOD). Most state-of-the-art methods leverage ID noisy samples as unlabeled data for semi-supervised learning, but OOD noisy samples cannot be used in this way because they do not belong to any class within the dataset. Hence, in this paper, we propose incorporating the information from all the training data by leveraging the benefits of self-supervised training. Our method aims to extract a meaningful and generalizable embedding space for each sample regardless of its label. Then, we employ a simple yet effective K-nearest-neighbor method to remove portions of out-of-distribution samples. By discarding these samples, we propose an iterative "Manifold DivideMix" algorithm to find clean and noisy samples and train our model in a semi-supervised way. In addition, we propose "MixEMatch", a new algorithm for the semi-supervised step that involves mixup augmentation at the input and final hidden representations of the model. This extracts better representations by interpolating in both the input and manifold spaces. Extensive experiments on multiple synthetic-noise image benchmarks and real-world web-crawled datasets demonstrate the effectiveness of our proposed framework. Code is available at https://github.com/Fahim-F/ManifoldDivideMix.
[ { "version": "v1", "created": "Sun, 13 Aug 2023 23:33:33 GMT" } ]
2023-08-15T00:00:00
[ [ "Fooladgar", "Fahimeh", "" ], [ "To", "Minh Nguyen Nhat", "" ], [ "Mousavi", "Parvin", "" ], [ "Abolmaesumi", "Purang", "" ] ]
new_dataset
0.995369
2308.06869
Shenyuan Liang
Shenyuan Liang, Mauricio Pamplona Segundo, Sathyanarayanan N. Aakur, Sudeep Sarkar, Anuj Srivastava
Shape-Graph Matching Network (SGM-net): Registration for Statistical Shape Analysis
null
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on the statistical analysis of shapes of data objects called shape graphs, a set of nodes connected by articulated curves with arbitrary shapes. A critical need here is a constrained registration of points (nodes to nodes, edges to edges) across objects. This, in turn, requires optimization over the permutation group, made challenging by differences in nodes (in terms of numbers, locations) and edges (in terms of shapes, placements, and sizes) across objects. This paper tackles this registration problem using a novel neural-network architecture and involves an unsupervised loss function developed using the elastic shape metric for curves. This architecture results in (1) state-of-the-art matching performance and (2) an order of magnitude reduction in the computational cost relative to baseline approaches. We demonstrate the effectiveness of the proposed approach using both simulated data and real-world 2D and 3D shape graphs. Code and data will be made publicly available after review to foster research.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 00:42:03 GMT" } ]
2023-08-15T00:00:00
[ [ "Liang", "Shenyuan", "" ], [ "Segundo", "Mauricio Pamplona", "" ], [ "Aakur", "Sathyanarayanan N.", "" ], [ "Sarkar", "Sudeep", "" ], [ "Srivastava", "Anuj", "" ] ]
new_dataset
0.991381
2308.06878
Jiahao Liu
Sijia Liu, Jiahao Liu, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu
AutoSeqRec: Autoencoder for Efficient Sequential Recommendation
10 pages, accepted by CIKM 2023
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users. Traditional methods typically treat users as sequences of items, overlooking the collaborative relationships among them. Graph-based methods incorporate collaborative information by utilizing the user-item interaction graph. However, these methods sometimes face challenges in terms of time complexity and computational efficiency. To address these limitations, this paper presents AutoSeqRec, an incremental recommendation model specifically designed for sequential recommendation tasks. AutoSeqRec is based on autoencoders and consists of an encoder and three decoders within the autoencoder architecture. These components consider both the user-item interaction matrix and the rows and columns of the item transition matrix. The reconstruction of the user-item interaction matrix captures user long-term preferences through collaborative filtering. In addition, the rows and columns of the item transition matrix represent the item out-degree and in-degree hopping behavior, which allows for modeling the user's short-term interests. When making incremental recommendations, only the input matrices need to be updated, without the need to update parameters, which makes AutoSeqRec very efficient. Comprehensive evaluations demonstrate that AutoSeqRec outperforms existing methods in terms of accuracy, while showcasing its robustness and efficiency.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 01:23:37 GMT" } ]
2023-08-15T00:00:00
[ [ "Liu", "Sijia", "" ], [ "Liu", "Jiahao", "" ], [ "Gu", "Hansu", "" ], [ "Li", "Dongsheng", "" ], [ "Lu", "Tun", "" ], [ "Zhang", "Peng", "" ], [ "Gu", "Ning", "" ] ]
new_dataset
0.969647
2308.06891
Chunhao Peng
Chunhao Peng, Dapeng Yang, Ming Cheng, Jinghui Dai, Deyu Zhao, Li Jiang
Viia-hand: a Reach-and-grasp Restoration System Integrating Voice interaction, Computer vision and Auditory feedback for Blind Amputees
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual feedback plays a crucial role when amputation patients perform grasping with a prosthesis. However, for blind and visually impaired (BVI) amputees, the loss of both visual and grasping abilities makes the "easy" reach-and-grasp task a formidable challenge. In this paper, we propose a novel multi-sensory prosthesis system that helps BVI amputees with sensing, navigation, and grasp operations. It combines modules for voice interaction, environmental perception, grasp guidance, collaborative control, and auditory/tactile feedback. In particular, the voice interaction module receives user instructions and invokes the other functional modules accordingly. The environmental perception and grasp guidance module obtains environmental information through computer vision and feeds the information back to the user through auditory feedback modules (voice prompts and spatial sound sources) and tactile feedback modules (vibration stimulation). The prosthesis collaborative control module obtains the context information of the grasp guidance process and completes the collaborative control of grasp gestures and wrist angles of the prosthesis in conjunction with the user's control intention, in order to achieve a stable grasp of various objects. This paper details a prototype design (named viia-hand) and presents its preliminary experimental verification on healthy subjects completing specific reach-and-grasp tasks. Our results show that, with the help of our new design, the subjects were able to achieve a precise reach and reliable grasp of target objects in a relatively cluttered environment. Additionally, the system is extremely user-friendly, as users can quickly adapt to it with minimal training.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 02:09:31 GMT" } ]
2023-08-15T00:00:00
[ [ "Peng", "Chunhao", "" ], [ "Yang", "Dapeng", "" ], [ "Cheng", "Ming", "" ], [ "Dai", "Jinghui", "" ], [ "Zhao", "Deyu", "" ], [ "Jiang", "Li", "" ] ]
new_dataset
0.99685
2308.06911
Pengfei Liu
Pengfei Liu, Yiming Ren and Zhixiang Ren
GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text
16 pages, 5 figures
null
null
null
cs.LG cs.CL q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Large language models have made significant strides in natural language processing, paving the way for innovative applications including molecular representation and generation. However, most existing single-modality approaches cannot capture the abundant and complex information in molecular data. Here, we introduce GIT-Mol, a multi-modal large language model that integrates the structure Graph, Image, and Text information, including the Simplified Molecular Input Line Entry System (SMILES) and molecular captions. To facilitate the integration of multi-modal molecular data, we propose GIT-Former, a novel architecture capable of mapping all modalities into a unified latent space. Our study develops an innovative any-to-language molecular translation strategy and achieves a 10%-15% improvement in molecular captioning, a 5%-10% accuracy increase in property prediction, and a 20% boost in molecule generation validity compared to baseline or single-modality models.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 03:12:29 GMT" } ]
2023-08-15T00:00:00
[ [ "Liu", "Pengfei", "" ], [ "Ren", "Yiming", "" ], [ "Ren", "Zhixiang", "" ] ]
new_dataset
0.999735
2308.06917
Carter Butts
Selena M. Livas, Scott Leo Renshaw, and Carter T. Butts
Calling The Dead: Resilience In The WTC Communication Networks
null
null
null
null
cs.SI nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organizations in emergency settings must cope with various sources of disruption, most notably personnel loss. Death, incapacitation, or isolation of individuals within an organizational communication network can impair information passing, coordination, and connectivity, and may drive maladaptive responses such as repeated attempts to contact lost personnel (``calling the dead'') that themselves consume scarce resources. At the same time, organizations may respond to such disruption by reorganizing to restore function, a behavior that is fundamental to organizational resilience. Here, we use empirically calibrated models of communication for 17 groups of responders to the World Trade Center Disaster to examine the impact of exogenous removal of personnel on communication activity and network resilience. We find that removal of high-degree personnel and those in institutionally coordinative roles is particularly damaging to these organizations, with specialist responders being slower to adapt to losses. However, all organizations show adaptations to disruption, in some cases becoming better connected and making more complete use of personnel relative to control after experiencing losses.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 03:29:02 GMT" } ]
2023-08-15T00:00:00
[ [ "Livas", "Selena M.", "" ], [ "Renshaw", "Scott Leo", "" ], [ "Butts", "Carter T.", "" ] ]
new_dataset
0.998561
2308.06971
EPTCS
Brent A. Yorgey (Hendrix College)
Disco: A Functional Programming Language for Discrete Mathematics
In Proceedings TFPIE 2023, arXiv:2308.06110
EPTCS 382, 2023, pp. 64-81
10.4204/EPTCS.382.4
null
cs.PL cs.DM
http://creativecommons.org/licenses/by/4.0/
Disco is a pure, strict, statically typed functional programming language designed to be used in the setting of a discrete mathematics course. The goals of the language are to introduce students to functional programming concepts early, and to enhance their learning of mathematics by providing a computational platform for them to play with. It features mathematically-inspired notation, property-based testing, equirecursive algebraic types, subtyping, built-in list, bag, and finite set types, a REPL, and student-focused documentation. Disco is implemented in Haskell, with source code available on GitHub [https://github.com/disco-lang/disco], and interactive web-based REPL available through replit [https://replit.com/@BrentYorgey/Disco#README.md].
[ { "version": "v1", "created": "Mon, 14 Aug 2023 07:09:15 GMT" } ]
2023-08-15T00:00:00
[ [ "Yorgey", "Brent A.", "", "Hendrix College" ] ]
new_dataset
0.999901
2308.06974
Xiangchao Gan
Jiexiong Xu, Weikun Zhao, Zhiyan Tang and Xiangchao Gan
A One Stop 3D Target Reconstruction and multilevel Segmentation Method
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
3D object reconstruction and multilevel segmentation are fundamental to computer vision research. Existing algorithms usually perform 3D scene reconstruction and target object segmentation independently, and performance is not fully guaranteed due to the challenge of 3D segmentation. Here we propose an open-source one-stop 3D target reconstruction and multilevel segmentation framework (OSTRA), which performs segmentation on 2D images, tracks multiple instances with segmentation labels in the image sequence, and then reconstructs labelled 3D objects or multiple parts with Multi-View Stereo (MVS) or RGBD-based 3D reconstruction methods. We extend object tracking and 3D reconstruction algorithms to support continuous segmentation labels, leveraging advances in 2D image segmentation, especially the Segment Anything Model (SAM), which can be applied to new scenes with a pretrained neural network and no additional training, for 3D object segmentation. OSTRA supports most popular 3D object models, including point clouds, meshes, and voxels, and achieves high performance for semantic segmentation, instance segmentation, and part segmentation on several 3D datasets. It even surpasses manual segmentation in scenes with complex structures and occlusions. Our method opens up a new avenue for reconstructing 3D targets embedded with rich multi-scale segmentation information in complex scenes. OSTRA is available at https://github.com/ganlab/OSTRA.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 07:12:31 GMT" } ]
2023-08-15T00:00:00
[ [ "Xu", "Jiexiong", "" ], [ "Zhao", "Weikun", "" ], [ "Tang", "Zhiyan", "" ], [ "Gan", "Xiangchao", "" ] ]
new_dataset
0.993659
2308.06985
Oren Shrout
Oren Shrout, Ori Nitzan, Yizhak Ben-Shabat, Ayellet Tal
PatchContrast: Self-Supervised Pre-training for 3D Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Accurately detecting objects in the environment is a key challenge for autonomous vehicles. However, obtaining annotated data for detection is expensive and time-consuming. We introduce PatchContrast, a novel self-supervised point cloud pre-training framework for 3D object detection. We propose to utilize two levels of abstraction to learn discriminative representation from unlabeled data: proposal-level and patch-level. The proposal-level aims at localizing objects in relation to their surroundings, whereas the patch-level adds information about the internal connections between the object's components, hence distinguishing between different objects based on their individual components. We demonstrate how these levels can be integrated into self-supervised pre-training for various backbones to enhance the downstream 3D detection task. We show that our method outperforms existing state-of-the-art models on three commonly-used 3D detection datasets.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 07:45:54 GMT" } ]
2023-08-15T00:00:00
[ [ "Shrout", "Oren", "" ], [ "Nitzan", "Ori", "" ], [ "Ben-Shabat", "Yizhak", "" ], [ "Tal", "Ayellet", "" ] ]
new_dataset
0.984242
2308.07024
Jui-Min Hsu
Yu-Ting Li, Ching-Te Chiu, An-Ting Hsieh, Mao-Hsiu Hsu, Long Wenyong, Jui-Min Hsu
PGT-Net: Progressive Guided Multi-task Neural Network for Small-area Wet Fingerprint Denoising and Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fingerprint recognition on mobile devices is an important method for identity verification. However, real fingerprints usually contain sweat and moisture, which leads to poor recognition performance. In addition, to roll out slimmer and thinner phones, technology companies reduce the size of recognition sensors by embedding them in the power button. The limited size of fingerprint data therefore also increases the difficulty of recognition. Denoising small-area wet fingerprint images into clean ones thus becomes crucial for improving recognition performance. In this paper, we propose an end-to-end trainable progressive guided multi-task neural network (PGT-Net). PGT-Net includes a shared stage and specific multi-task stages, enabling the network to train binary and non-binary fingerprints sequentially. The binary information is regarded as guidance for output enhancement, which is enriched with ridge and valley details. Moreover, a novel residual scaling mechanism is introduced to stabilize the training process. Experimental results on the FW9395 and FT-lightnoised datasets provided by FocalTech show that PGT-Net has promising performance on wet-fingerprint denoising and significantly reduces the false rejection rate (FRR). On the FT-lightnoised dataset, the FRR of fingerprint recognition drops from 17.75% to 4.47%. On the FW9395 dataset, the FRR of fingerprint recognition drops from 9.45% to 1.09%.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 09:19:26 GMT" } ]
2023-08-15T00:00:00
[ [ "Li", "Yu-Ting", "" ], [ "Chiu", "Ching-Te", "" ], [ "Hsieh", "An-Ting", "" ], [ "Hsu", "Mao-Hsiu", "" ], [ "Wenyong", "Long", "" ], [ "Hsu", "Jui-Min", "" ] ]
new_dataset
0.998606
2308.07026
Ziqi Zhou
Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, Hai Jin
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
This paper has been accepted by the ACM International Conference on Multimedia (ACM MM '23, October 29-November 3, 2023, Ottawa, ON, Canada)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal contrastive learning aims to train a general-purpose feature extractor, such as CLIP, on vast amounts of raw, unlabeled paired image-text data. This can greatly benefit various complex downstream tasks, including cross-modal image-text retrieval and image classification. Despite its promising prospects, the security of cross-modal pre-trained encoders has not been fully explored yet, especially when such encoders are publicly available for commercial use. In this work, we propose AdvCLIP, the first attack framework for generating downstream-agnostic adversarial examples based on cross-modal pre-trained encoders. AdvCLIP aims to construct a universal adversarial patch for a set of natural images that can fool all the downstream tasks inheriting the victim cross-modal pre-trained encoder. To address the challenges of heterogeneity between different modalities and unknown downstream tasks, we first build a topological graph structure to capture the relevant positions between target samples and their neighbors. Then, we design a topology-deviation-based generative adversarial network to generate a universal adversarial patch. By adding the patch to images, we minimize the similarity between their embeddings and those of the other modality, and perturb the sample distribution in the feature space, achieving universal non-targeted attacks. Our results demonstrate the excellent attack performance of AdvCLIP on two types of downstream tasks across eight datasets. We also tailor three popular defenses to mitigate AdvCLIP, highlighting the need for new defense mechanisms to protect cross-modal pre-trained encoders.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 09:29:22 GMT" } ]
2023-08-15T00:00:00
[ [ "Zhou", "Ziqi", "" ], [ "Hu", "Shengshan", "" ], [ "Li", "Minghui", "" ], [ "Zhang", "Hangtao", "" ], [ "Zhang", "Yechao", "" ], [ "Jin", "Hai", "" ] ]
new_dataset
0.998693
2308.07081
Jivnesh Sandhan
Jivnesh Sandhan, Amruta Barbadikar, Malay Maity, Pavankumar Satuluri, Tushar Sandhan, Ravi M. Gupta, Pawan Goyal and Laxmidhar Behera
Aesthetics of Sanskrit Poetry from the Perspective of Computational Linguistics: A Case Study Analysis on Siksastaka
15 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Sanskrit poetry has played a significant role in shaping the literary and cultural landscape of the Indian subcontinent for centuries. However, not much attention has been devoted to uncovering the hidden beauty of Sanskrit poetry in computational linguistics. This article explores the intersection of Sanskrit poetry and computational linguistics by proposing a roadmap of an interpretable framework to analyze and classify the qualities and characteristics of fine Sanskrit poetry. We discuss the rich tradition of Sanskrit poetry and the significance of computational linguistics in automatically identifying the characteristics of fine poetry. The proposed framework involves a human-in-the-loop approach that combines deterministic aspects delegated to machines and deep semantics left to human experts. We provide a deep analysis of Siksastaka, a Sanskrit poem, from the perspective of 6 prominent kavyashastra schools, to illustrate the proposed framework. Additionally, we provide compound, dependency, anvaya (prose order linearised form), meter, rasa (mood), alankar (figure of speech), and riti (writing style) annotations for Siksastaka and a web application to illustrate the poem's analysis and annotations. Our key contributions include the proposed framework, the analysis of Siksastaka, the annotations and the web application for future research. Link for interactive analysis: https://sanskritshala.github.io/shikshastakam/
[ { "version": "v1", "created": "Mon, 14 Aug 2023 11:26:25 GMT" } ]
2023-08-15T00:00:00
[ [ "Sandhan", "Jivnesh", "" ], [ "Barbadikar", "Amruta", "" ], [ "Maity", "Malay", "" ], [ "Satuluri", "Pavankumar", "" ], [ "Sandhan", "Tushar", "" ], [ "Gupta", "Ravi M.", "" ], [ "Goyal", "Pawan", "" ], [ "Behera", "Laxmidhar", "" ] ]
new_dataset
0.999158
2308.07124
Niklas Muennighoff
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, Shayne Longpre
OctoPack: Instruction Tuning Code Large Language Models
57 pages (9 main), 39 figures, 16 tables
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 13:53:54 GMT" } ]
2023-08-15T00:00:00
[ [ "Muennighoff", "Niklas", "" ], [ "Liu", "Qian", "" ], [ "Zebaze", "Armel", "" ], [ "Zheng", "Qinkai", "" ], [ "Hui", "Binyuan", "" ], [ "Zhuo", "Terry Yue", "" ], [ "Singh", "Swayam", "" ], [ "Tang", "Xiangru", "" ], [ "von Werra", "Leandro", "" ], [ "Longpre", "Shayne", "" ] ]
new_dataset
0.997854
2308.07153
Sk Aziz Ali
Sk Aziz Ali, Djamila Aouada, Gerd Reis, Didier Stricker
DELO: Deep Evidential LiDAR Odometry using Partial Optimal Transport
Accepted in ICCV 2023 Workshop
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate, robust, and real-time LiDAR-based odometry (LO) is imperative for many applications such as robot navigation, globally consistent 3D scene map reconstruction, and safe motion planning. Though the LiDAR sensor is known for its precise range measurement, the non-uniform and uncertain point sampling density induces structural inconsistencies. Hence, existing supervised and unsupervised point set registration methods fail to establish one-to-one matching correspondences between LiDAR frames. We introduce a novel deep-learning-based real-time (approx. 35-40 ms per frame) LO method that jointly learns accurate frame-to-frame correspondences and the model's predictive uncertainty (PU) as evidence to safeguard LO predictions. In this work, we propose (i) partial optimal transport of LiDAR feature descriptors for robust LO estimation, (ii) joint learning of predictive uncertainty while learning odometry over driving sequences, and (iii) a demonstration of how PU can serve as evidence for necessary pose-graph optimization when the LO network is either under- or over-confident. We evaluate our method on the KITTI dataset and show competitive performance, and even superior generalization ability, over recent state-of-the-art approaches. Source code is available.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 14:06:21 GMT" } ]
2023-08-15T00:00:00
[ [ "Ali", "Sk Aziz", "" ], [ "Aouada", "Djamila", "" ], [ "Reis", "Gerd", "" ], [ "Stricker", "Didier", "" ] ]
new_dataset
0.988627
2308.07170
Jeremy Cochoy
Jeremy Cochoy
PitchNet: A Fully Convolutional Neural Network for Pitch Estimation
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
In the domain of music and sound processing, pitch extraction plays a pivotal role. This research introduces "PitchNet", a convolutional neural network tailored for pitch extraction from the human singing voice, including a cappella performances. Integrating autocorrelation with deep learning techniques, PitchNet aims to optimize the accuracy of pitch detection. Evaluation across datasets comprising synthetic sounds, opera recordings, and time-stretched vowels demonstrates its efficacy. This work paves the way for enhanced pitch extraction in both music and voice settings.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 14:26:52 GMT" } ]
2023-08-15T00:00:00
[ [ "Cochoy", "Jeremy", "" ] ]
new_dataset
0.998627
2308.07266
Chandan Kumar Sheemar
Chandan Kumar Sheemar, Sourabh Solanki, Eva Lagunas, Jorge Querol, Symeon Chatzinotas, and Bj\"orn Ottersten
Full Duplex Joint Communications and Sensing for 6G: Opportunities and Challenges
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
The paradigm of joint communications and sensing (JCAS) envisions a revolutionary integration of communication and radar functionalities within a unified hardware platform. This novel concept not only opens up unprecedented possibilities, but also presents unique challenges. Its success is highly dependent on efficient full-duplex (FD) operation, which has the potential to enable simultaneous transmission and reception within the same frequency band. While ongoing research explores the potential of JCAS, there are related avenues of investigation that hold tremendous potential to profoundly transform the sixth generation (6G) and beyond cellular networks. This article sheds light on the new opportunities and challenges presented by JCAS by taking into account the key technical challenges of FD systems. Unlike simplified JCAS scenarios, we delve into the most comprehensive configuration, encompassing uplink (UL) and downlink (DL) users, as well as monostatic and bistatic radars, all harmoniously coexisting to jointly push the boundaries of both the communications and sensing performance. The performance improvements introduced by this advancement bring forth numerous new challenges, each meticulously examined and expounded upon.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 16:50:12 GMT" } ]
2023-08-15T00:00:00
[ [ "Sheemar", "Chandan Kumar", "" ], [ "Solanki", "Sourabh", "" ], [ "Lagunas", "Eva", "" ], [ "Querol", "Jorge", "" ], [ "Chatzinotas", "Symeon", "" ], [ "Ottersten", "Björn", "" ] ]
new_dataset
0.998022
2308.07267
Kejia Zhang
Kejia Zhang, Mingyu Yang, Stephen D. J. Lang, Alistair M. McInnes, Richard B. Sherley, Tilo Burghardt
Diving with Penguins: Detecting Penguins and their Prey in Animal-borne Underwater Videos via Deep Learning
5 pages, 5 figures, 4 Tables, "3rd International Workshop on Camera traps, AI, and Ecology (CamTrapAI)"
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
African penguins (Spheniscus demersus) are an endangered species. Little is known regarding their underwater hunting strategies and associated predation success rates, yet this is essential for guiding conservation. Modern bio-logging technology has the potential to provide valuable insights, but manually analysing large amounts of data from animal-borne video recorders (AVRs) is time-consuming. In this paper, we publish an animal-borne underwater video dataset of penguins and introduce a ready-to-deploy deep learning system capable of robustly detecting penguins (mAP@…%) and also instances of fish (mAP@…%). We note that the detectors benefit explicitly from air-bubble learning to improve accuracy. Extending this detector towards a dual-stream behaviour recognition network, we also provide the first results for identifying predation behaviour in penguin underwater videos. Whilst results are promising, further work is required before predation behaviour detection is usefully applicable in field scenarios. In summary, we provide a highly reliable underwater penguin detector, a fish detector, and a valuable first attempt towards automated visual detection of complex behaviours in a marine predator. We publish the networks, the DivingWithPenguins video dataset, annotations, splits, and weights for full reproducibility and immediate usability by practitioners.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 16:50:27 GMT" } ]
2023-08-15T00:00:00
[ [ "Zhang", "Kejia", "" ], [ "Yang", "Mingyu", "" ], [ "Lang", "Stephen D. J.", "" ], [ "McInnes", "Alistair M.", "" ], [ "Sherley", "Richard B.", "" ], [ "Burghardt", "Tilo", "" ] ]
new_dataset
0.997043
2308.07301
Esteve Valls Mascar\'o
Esteve Valls Mascaro, Hyemin Ahn, Dongheui Lee
A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis
null
null
null
null
cs.CV cs.GR cs.RO
http://creativecommons.org/licenses/by/4.0/
The synthesis of human motion has traditionally been addressed through task-dependent models that focus on specific challenges, such as predicting future motions or filling in intermediate poses conditioned on known key-poses. In this paper, we present a novel task-independent model called UNIMASK-M, which can effectively address these challenges using a unified architecture. Our model achieves performance comparable to or better than the state of the art in each field. Inspired by Vision Transformers (ViTs), our UNIMASK-M model decomposes a human pose into body parts to leverage the spatio-temporal relationships existing in human motion. Moreover, we reformulate various pose-conditioned motion synthesis tasks as a reconstruction problem with different masking patterns given as input. By explicitly informing our model about the masked joints, UNIMASK-M becomes more robust to occlusions. Experimental results show that our model successfully forecasts human motion on the Human3.6M dataset. Moreover, it achieves state-of-the-art results in motion inbetweening on the LaFAN1 dataset, particularly for long transition periods. More information can be found on the project website https://sites.google.com/view/estevevallsmascaro/publications/unimask-m.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 17:39:44 GMT" } ]
2023-08-15T00:00:00
[ [ "Mascaro", "Esteve Valls", "" ], [ "Ahn", "Hyemin", "" ], [ "Lee", "Dongheui", "" ] ]
new_dataset
0.998999
2308.07307
Yuhe Nie
Yuhe Nie, Shaoming Zheng, Zhan Zhuang, Xuan Song
Extend Wave Function Collapse to Large-Scale Content Generation
This paper is accepted by IEEE Conference on Games 2023 (nomination of the Best Paper Award)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wave Function Collapse (WFC) is a widely used tile-based algorithm in procedural content generation, including textures, objects, and scenes. However, the current WFC algorithm and related research lack the ability to generate commercial-grade large-scale or infinite content, due to constraint conflicts and time-complexity costs. This paper proposes a Nested WFC (N-WFC) algorithmic framework to reduce time complexity. To avoid conflict and backtracking problems, we offer a complete and sub-complete tileset preparation strategy, which requires only a small number of tiles to generate aperiodic, deterministic infinite content. We also introduce a weight-brush system that combines N-WFC and the sub-complete tileset, proving its suitability for game design. Our contribution addresses WFC's challenge in massive content generation and provides a theoretical basis for implementing concrete games.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 17:50:38 GMT" } ]
2023-08-15T00:00:00
[ [ "Nie", "Yuhe", "" ], [ "Zheng", "Shaoming", "" ], [ "Zhuang", "Zhan", "" ], [ "Song", "Xuan", "" ] ]
new_dataset
0.994153
2308.07316
Alexander Martin
Alexander Martin and Haitian Zheng and Jie An and Jiebo Luo
Jurassic World Remake: Bringing Ancient Fossils Back to Life via Zero-Shot Long Image-to-Image Translation
9 pages, 10 figures, ACM Multimedia 2023
null
10.1145/3581783.3612708
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
With a strong understanding of the target domain from natural language, we produce promising results in translating across large domain gaps and bringing skeletons back to life. In this work, we use text-guided latent diffusion models for zero-shot image-to-image translation (I2I) across large domain gaps (longI2I), where large amounts of new visual features and new geometry need to be generated to enter the target domain. Being able to perform translations across large domain gaps has a wide variety of real-world applications in criminology, astrology, environmental conservation, and paleontology. In this work, we introduce a new task Skull2Animal for translating between skulls and living animals. On this task, we find that unguided Generative Adversarial Networks (GANs) are not capable of translating across large domain gaps. Instead of these traditional I2I methods, we explore the use of guided diffusion and image editing models and provide a new benchmark model, Revive-2I, capable of performing zero-shot I2I via text-prompting latent diffusion models. We find that guidance is necessary for longI2I because, to bridge the large domain gap, prior knowledge about the target domain is needed. In addition, we find that prompting provides the best and most scalable information about the target domain as classifier-guided diffusion models require retraining for specific use cases and lack stronger constraints on the target domain because of the wide variety of images they are trained on.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 17:59:31 GMT" } ]
2023-08-15T00:00:00
[ [ "Martin", "Alexander", "" ], [ "Zheng", "Haitian", "" ], [ "An", "Jie", "" ], [ "Luo", "Jiebo", "" ] ]
new_dataset
0.956147
2308.07317
Ariel N. Lee
Ariel N. Lee, Cole J. Hunter, Nataniel Ruiz
Platypus: Quick, Cheap, and Powerful Refinement of LLMs
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present $\textbf{Platypus}$, a family of fine-tuned and merged Large Language Models (LLMs) that achieves the strongest performance and currently stands at first place in HuggingFace's Open LLM Leaderboard as of the release date of this work. In this work we describe (1) our curated dataset $\textbf{Open-Platypus}$, which is a subset of other open datasets and which $\textit{we release to the public}$, (2) our process of fine-tuning and merging LoRA modules in order to conserve the strong prior of pretrained LLMs, while bringing specific domain knowledge to the surface, and (3) our efforts in checking for test data leaks and contamination in the training data, which can inform future research. Specifically, the Platypus family achieves strong performance in quantitative LLM metrics across model sizes, topping the global Open LLM leaderboard while using just a fraction of the fine-tuning data and overall compute that are required for other state-of-the-art fine-tuned LLMs. In particular, a 13B Platypus model can be trained on $\textit{a single}$ A100 GPU using 25k questions in 5 hours. This is a testament to the quality of our Open-Platypus dataset, and opens opportunities for more improvements in the field. Project page: https://platypus-llm.github.io
[ { "version": "v1", "created": "Mon, 14 Aug 2023 17:59:56 GMT" } ]
2023-08-15T00:00:00
[ [ "Lee", "Ariel N.", "" ], [ "Hunter", "Cole J.", "" ], [ "Ruiz", "Nataniel", "" ] ]
new_dataset
0.998222
2108.04486
Thomas Studer
Atefeh Rohani and Thomas Studer
Explicit non-normal modal logic
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Faroldi argues that deontic modals are hyperintensional and thus traditional modal logic cannot provide an appropriate formalization of deontic situations. To overcome this issue, we introduce novel justification logics as hyperintensional analogues to non-normal modal logics. We establish soundness and completeness with respect to various models and we study the problem of realization.
[ { "version": "v1", "created": "Tue, 10 Aug 2021 07:42:56 GMT" }, { "version": "v2", "created": "Fri, 28 Jan 2022 12:06:30 GMT" }, { "version": "v3", "created": "Fri, 11 Aug 2023 07:39:23 GMT" } ]
2023-08-14T00:00:00
[ [ "Rohani", "Atefeh", "" ], [ "Studer", "Thomas", "" ] ]
new_dataset
0.995932
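Revised submissions such as the record above carry several entries in versions. A short sketch, assuming the created strings keep the RFC 2822-style GMT format shown, of recovering first-submission and latest-revision dates:

```python
from email.utils import parsedate_to_datetime

# Three-version history copied from the record above (2108.04486).
versions = [
    {"version": "v1", "created": "Tue, 10 Aug 2021 07:42:56 GMT"},
    {"version": "v2", "created": "Fri, 28 Jan 2022 12:06:30 GMT"},
    {"version": "v3", "created": "Fri, 11 Aug 2023 07:39:23 GMT"},
]

dates = [parsedate_to_datetime(v["created"]) for v in versions]
print("first submitted:", min(dates).date())  # 2021-08-10
print("last revised:  ", max(dates).date())   # 2023-08-11
```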
2201.09201
Ming Dai
Ming Dai and Enhui Zheng and Zhenhua Feng and Jiedong Zhuang and Wankou Yang
Vision-Based UAV Self-Positioning in Low-Altitude Urban Environments
13 pages,8 figures
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Unmanned Aerial Vehicles (UAVs) rely on satellite systems for stable positioning. However, due to limited satellite coverage or communication disruptions, UAVs may lose signals from satellite-based positioning systems. In such situations, vision-based techniques can serve as an alternative, ensuring the self-positioning capability of UAVs. However, most of the existing datasets are developed for the geo-localization tasks of the objects identified by UAVs, rather than the self-positioning task of UAVs. Furthermore, the current UAV datasets use discrete sampling on synthetic data, such as Google Maps, thereby neglecting the crucial aspects of dense sampling and the uncertainties commonly experienced in real-world scenarios. To address these issues, this paper presents a new dataset, DenseUAV, which is the first publicly available dataset designed for the UAV self-positioning task. DenseUAV adopts dense sampling on UAV images obtained in low-altitude urban settings. In total, over 27K UAV-view and satellite-view images of 14 university campuses are collected and annotated, establishing a new benchmark. In terms of model development, we first verify the superiority of Transformers over CNNs in this task. Then, we incorporate metric learning into representation learning to enhance the discriminative capacity of the model and to lessen the modality discrepancy. Besides, to facilitate joint learning from both perspectives, we propose a mutually supervised learning approach. Last, we enhance the Recall@K metric and introduce a new measurement, SDM@K, to evaluate the performance of a trained model from both the retrieval and localization perspectives simultaneously. As a result, the proposed baseline method achieves a remarkable Recall@1 score of 83.05% and an SDM@1 score of 86.24% on DenseUAV. The dataset and code will be made publicly available on https://github.com/Dmmm1997/DenseUAV.
[ { "version": "v1", "created": "Sun, 23 Jan 2022 07:18:55 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 18:34:17 GMT" } ]
2023-08-14T00:00:00
[ [ "Dai", "Ming", "" ], [ "Zheng", "Enhui", "" ], [ "Feng", "Zhenhua", "" ], [ "Zhuang", "Jiedong", "" ], [ "Yang", "Wankou", "" ] ]
new_dataset
0.99221
2206.15157
Tim Broedermann
Tim Broedermann (1), Christos Sakaridis (1), Dengxin Dai (2) and Luc Van Gool (1 and 3) ((1) ETH Zurich, (2) MPI for Informatics, (3) KU Leuven)
HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection
IEEE International Conference on Intelligent Transportation Systems (ITSC) 2023
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Besides standard cameras, autonomous vehicles typically include multiple additional sensors, such as lidars and radars, which help acquire richer information for perceiving the content of the driving scene. While several recent works focus on fusing certain pairs of sensors - such as camera with lidar or radar - by using architectural components specific to the examined setting, a generic and modular sensor fusion architecture is missing from the literature. In this work, we propose HRFuser, a modular architecture for multi-modal 2D object detection. It fuses multiple sensors in a multi-resolution fashion and scales to an arbitrary number of input modalities. The design of HRFuser is based on state-of-the-art high-resolution networks for image-only dense prediction and incorporates a novel multi-window cross-attention block as the means to perform fusion of multiple modalities at multiple resolutions. We demonstrate via extensive experiments on nuScenes and the adverse conditions DENSE datasets that our model effectively leverages complementary features from additional modalities, substantially improving upon camera-only performance and consistently outperforming state-of-the-art 3D and 2D fusion methods evaluated on 2D object detection metrics. The source code is publicly available.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 09:40:05 GMT" }, { "version": "v2", "created": "Thu, 15 Jun 2023 08:38:57 GMT" }, { "version": "v3", "created": "Fri, 11 Aug 2023 11:06:09 GMT" } ]
2023-08-14T00:00:00
[ [ "Broedermann", "Tim", "", "ETH Zurich" ], [ "Sakaridis", "Christos", "", "ETH Zurich" ], [ "Dai", "Dengxin", "", "MPI for Informatics" ], [ "Van Gool", "Luc", "", "1 and 3" ] ]
new_dataset
0.998083
2208.02993
Jing Tao Tang
Jingtao Tang, Yuan Gao, Tin Lun Lam
Learning to Coordinate for a Worker-Station Multi-robot System in Planar Coverage Tasks
null
IEEE Robotics and Automation Letters, 7(4), 12315-12322, 2022
10.1109/LRA.2022.3214446
null
cs.RO cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For massive large-scale tasks, a multi-robot system (MRS) can effectively improve efficiency by utilizing each robot's different capabilities, mobility, and functionality. In this paper, we focus on the multi-robot coverage path planning (mCPP) problem in large-scale planar areas with random dynamic interferers in the environment, where the robots have limited resources. We introduce a worker-station MRS consisting of multiple workers with limited resources for actual work, and one station with enough resources for resource replenishment. We aim to solve the mCPP problem for the worker-station MRS by formulating it as a fully cooperative multi-agent reinforcement learning problem. Then we propose an end-to-end decentralized online planning method, which simultaneously solves coverage planning for workers and rendezvous planning for station. Our method manages to reduce the influence of random dynamic interferers on planning, while the robots can avoid collisions with them. We conduct simulation and real robot experiments, and the comparison results show that our method has competitive performance in solving the mCPP problem for worker-station MRS in metric of task finish time.
[ { "version": "v1", "created": "Fri, 5 Aug 2022 05:36:42 GMT" }, { "version": "v2", "created": "Wed, 24 Aug 2022 08:11:29 GMT" } ]
2023-08-14T00:00:00
[ [ "Tang", "Jingtao", "" ], [ "Gao", "Yuan", "" ], [ "Lam", "Tin Lun", "" ] ]
new_dataset
0.986277
2208.12356
Nikolay Mikhaylovskiy
Eduard Zubchuk, Mikhail Arhipkin, Dmitry Menshikov, Aleksandr Karaush, Nikolay Mikhaylovskiy
Lib-SibGMU -- A University Library Circulation Dataset for Recommender Systems Development
Dataset copyright discussion
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We open-source Lib-SibGMU, a university library circulation dataset, under the CC BY 4.0 license for the wider research community, and benchmark major recommender-system algorithms on this dataset. For a recommender architecture consisting of a vectorizer that turns the history of borrowed books into a vector and a neighborhood-based recommender, trained separately, we show that using the fastText model as the vectorizer delivers competitive results.
[ { "version": "v1", "created": "Thu, 25 Aug 2022 22:10:18 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 16:15:52 GMT" } ]
2023-08-14T00:00:00
[ [ "Zubchuk", "Eduard", "" ], [ "Arhipkin", "Mikhail", "" ], [ "Menshikov", "Dmitry", "" ], [ "Karaush", "Aleksandr", "" ], [ "Mikhaylovskiy", "Nikolay", "" ] ]
new_dataset
0.999338
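The vectorizer-plus-neighborhood architecture benchmarked in the Lib-SibGMU record above lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: a TF-IDF bag-of-books vectorizer stands in for the fastText model they report, and the borrowing histories, identifiers, and helper names are all hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical borrowing histories: each user's history is a space-separated
# sequence of book identifiers.
histories = [
    "b12 b7 b7 b93",   # user 0
    "b7 b93 b41",      # user 1
    "b5 b5 b18",       # user 2
]

vectorizer = TfidfVectorizer(token_pattern=r"\S+")  # each book id is one token
X = vectorizer.fit_transform(histories)             # user-history vectors

# Neighborhood-based recommender: find users with similar histories, then
# recommend books they borrowed that the query user has not seen.
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

def recommend(query_history, k=2):
    q = vectorizer.transform([query_history])
    _, idx = knn.kneighbors(q, n_neighbors=k)
    seen = set(query_history.split())
    candidates = []
    for i in idx[0]:
        candidates += [b for b in histories[i].split() if b not in seen]
    return list(dict.fromkeys(candidates))          # de-duplicate, keep order

print(recommend("b7 b12"))                          # e.g. ['b93', 'b41']
```

Swapping the TF-IDF vectorizer for an embedding-based one (as the paper does with fastText) changes only the vectorization step; the neighborhood search is unchanged.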
2211.00732
Haojie Pan
Haojie Pan, Zepeng Zhai, Yuzhou Zhang, Ruiji Fu, Ming Liu, Yangqiu Song, Zhongyuan Wang and Bing Qin
Kuaipedia: a Large-scale Multi-modal Short-video Encyclopedia
null
null
null
null
cs.IR cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online encyclopedias, such as Wikipedia, have been well-developed and researched in the last two decades. One can find any attributes or other information of a wiki item on a wiki page edited by a community of volunteers. However, traditional text, images and tables can hardly express some aspects of a wiki item. For example, when we talk about ``Shiba Inu'', one may care more about ``How to feed it'' or ``How to train it not to protect its food''. Currently, short-video platforms have become a hallmark of the online world. Whether on TikTok, Instagram, Kuaishou, or YouTube Shorts, short-video apps have changed how we consume and create content today. Beyond producing short videos for entertainment, more and more authors share insightful knowledge widely across all walks of life. These short videos, which we call knowledge videos, can easily express any aspect (e.g. hair or how-to-feed) consumers want to know about an item (e.g. Shiba Inu), and they can be systematically analyzed and organized like an online encyclopedia. In this paper, we propose Kuaipedia, a large-scale multi-modal encyclopedia consisting of items, aspects, and short videos linked to them, which was extracted from billions of videos of Kuaishou (Kwai), a well-known short-video platform in China. We first collected items from multiple sources and mined user-centered aspects from millions of users' queries to build an item-aspect tree. Then we propose a new task called ``multi-modal item-aspect linking'' as an expansion of ``entity linking'' to link short videos to item-aspect pairs and build the whole short-video encyclopedia. Intrinsic evaluations show that our encyclopedia is of large scale and highly accurate. We also conduct sufficient extrinsic experiments to show how Kuaipedia can help fundamental applications such as entity typing and entity linking.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 12:54:30 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 08:51:53 GMT" }, { "version": "v3", "created": "Fri, 11 Aug 2023 04:00:59 GMT" } ]
2023-08-14T00:00:00
[ [ "Pan", "Haojie", "" ], [ "Zhai", "Zepeng", "" ], [ "Zhang", "Yuzhou", "" ], [ "Fu", "Ruiji", "" ], [ "Liu", "Ming", "" ], [ "Song", "Yangqiu", "" ], [ "Wang", "Zhongyuan", "" ], [ "Qin", "Bing", "" ] ]
new_dataset
0.999684
2211.14260
Yiyu Wang
Yiyu Wang, Jiaqi Ge, Alexis Comber
An agent-based simulation model of pedestrian evacuation based on Bayesian Nash Equilibrium
null
null
10.18564/jasss.5037
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This research incorporates Bayesian game theory into pedestrian evacuation in an agent-based model. Three pedestrian behaviours were compared: Random Follow, Shortest Route and Bayesian Nash Equilibrium (BNE), as well as combinations of these. The results showed that BNE pedestrians were able to evacuate more quickly as they predict congestion levels in their next step and adjust their directions to avoid congestion, closely matching the behaviours of evacuating pedestrians in reality. A series of simulation experiments were conducted to evaluate whether and how BNE affects pedestrian evacuation procedures. The results showed that: 1) BNE has a large impact on reducing evacuation time; 2) BNE pedestrians displayed more intelligent and efficient evacuating behaviours; 3) as the proportion of BNE users rises, average evacuation time decreases and average comfort level increases. A detailed description of the model and relevant experimental results is provided in this paper. Several limitations and directions for further work are also identified.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 17:41:03 GMT" } ]
2023-08-14T00:00:00
[ [ "Wang", "Yiyu", "" ], [ "Ge", "Jiaqi", "" ], [ "Comber", "Alexis", "" ] ]
new_dataset
0.991284
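The congestion-aware step choice that distinguishes BNE pedestrians from Shortest Route pedestrians in the record above can be sketched on a toy grid. This is an illustrative simplification, not the paper's model: the grid size, cost weights, start positions, and synchronous update scheme are all assumptions, and setting `congestion_weight = 0` recovers pure shortest-route behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 20
EXIT = np.array([0, 0])
peds = rng.integers(5, GRID, size=(50, 2))        # hypothetical start positions

def occupancy(positions):
    occ = np.zeros((GRID, GRID), dtype=int)
    for x, y in positions:
        occ[x, y] += 1
    return occ

def next_step(p, occ, congestion_weight=1.0):
    # Candidate moves: stay, or step in one of four directions, inside the grid.
    moves = [p + d for d in ([0, 0], [1, 0], [-1, 0], [0, 1], [0, -1])]
    moves = [m for m in moves if 0 <= m[0] < GRID and 0 <= m[1] < GRID]
    # Distance-to-exit term plus an expected-congestion penalty; with
    # congestion_weight = 0 this reduces to pure Shortest Route behaviour.
    cost = [np.abs(m - EXIT).sum() + congestion_weight * occ[m[0], m[1]]
            for m in moves]
    return moves[int(np.argmin(cost))]

for t in range(100):
    occ = occupancy(peds)
    peds = np.array([next_step(p, occ) for p in peds])

print("pedestrians at the exit after 100 steps:",
      int((peds == EXIT).all(axis=1).sum()))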
2212.06817
Tianhe Yu
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich
RT-1: Robotics Transformer for Real-World Control at Scale
See website at robotics-transformer1.github.io
null
null
null
cs.RO cs.AI cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer1.github.io
[ { "version": "v1", "created": "Tue, 13 Dec 2022 18:55:15 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 17:45:27 GMT" } ]
2023-08-14T00:00:00
[ [ "Brohan", "Anthony", "" ], [ "Brown", "Noah", "" ], [ "Carbajal", "Justice", "" ], [ "Chebotar", "Yevgen", "" ], [ "Dabis", "Joseph", "" ], [ "Finn", "Chelsea", "" ], [ "Gopalakrishnan", "Keerthana", "" ], [ "Hausman", "Karol", "" ], [ "Herzog", "Alex", "" ], [ "Hsu", "Jasmine", "" ], [ "Ibarz", "Julian", "" ], [ "Ichter", "Brian", "" ], [ "Irpan", "Alex", "" ], [ "Jackson", "Tomas", "" ], [ "Jesmonth", "Sally", "" ], [ "Joshi", "Nikhil J", "" ], [ "Julian", "Ryan", "" ], [ "Kalashnikov", "Dmitry", "" ], [ "Kuang", "Yuheng", "" ], [ "Leal", "Isabel", "" ], [ "Lee", "Kuang-Huei", "" ], [ "Levine", "Sergey", "" ], [ "Lu", "Yao", "" ], [ "Malla", "Utsav", "" ], [ "Manjunath", "Deeksha", "" ], [ "Mordatch", "Igor", "" ], [ "Nachum", "Ofir", "" ], [ "Parada", "Carolina", "" ], [ "Peralta", "Jodilyn", "" ], [ "Perez", "Emily", "" ], [ "Pertsch", "Karl", "" ], [ "Quiambao", "Jornell", "" ], [ "Rao", "Kanishka", "" ], [ "Ryoo", "Michael", "" ], [ "Salazar", "Grecia", "" ], [ "Sanketi", "Pannag", "" ], [ "Sayed", "Kevin", "" ], [ "Singh", "Jaspiar", "" ], [ "Sontakke", "Sumedh", "" ], [ "Stone", "Austin", "" ], [ "Tan", "Clayton", "" ], [ "Tran", "Huong", "" ], [ "Vanhoucke", "Vincent", "" ], [ "Vega", "Steve", "" ], [ "Vuong", "Quan", "" ], [ "Xia", "Fei", "" ], [ "Xiao", "Ted", "" ], [ "Xu", "Peng", "" ], [ "Xu", "Sichun", "" ], [ "Yu", "Tianhe", "" ], [ "Zitkovich", "Brianna", "" ] ]
new_dataset
0.998808
2303.14029
Murali Sridharan
Murali Sridharan, Leevi Rantala, Mika M\"antyl\"a
PENTACET data -- 23 Million Contextual Code Comments and 250,000 SATD comments
Accepted in MSR 2023 Tools and Data Showcase
null
null
null
cs.SE cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Most Self-Admitted Technical Debt (SATD) research utilizes explicit SATD features such as 'TODO' and 'FIXME' for SATD detection. A closer look reveals that several SATD studies use simple SATD ('Easy to Find') code comments without the contextual data (preceding and succeeding source code context). This work addresses this gap through the PENTACET (or 5C dataset) data. PENTACET is a large curated dataset of contextual code comments per contributor and the most extensive SATD data to date. We mine 9,096 Open Source Software Java projects with a total of 435 million LOC. The outcome is a dataset with 23 million code comments, preceding and succeeding source code context for each comment, and more than 250,000 comments labeled as SATD, including both 'Easy to Find' and 'Hard to Find' SATD. We believe PENTACET data will further SATD research using Artificial Intelligence techniques.
[ { "version": "v1", "created": "Fri, 24 Mar 2023 14:42:42 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 13:40:46 GMT" } ]
2023-08-14T00:00:00
[ [ "Sridharan", "Murali", "" ], [ "Rantala", "Leevi", "" ], [ "Mäntylä", "Mika", "" ] ]
new_dataset
0.999836
2305.10615
Jiatong Shi
Jiatong Shi, Dan Berrebbi, William Chen, Ho-Lam Chung, En-Pei Hu, Wei Ping Huang, Xuankai Chang, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe
ML-SUPERB: Multilingual Speech Universal PERformance Benchmark
Accepted by Interspeech
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by/4.0/
Speech processing Universal PERformance Benchmark (SUPERB) is a leaderboard to benchmark the performance of Self-Supervised Learning (SSL) models on various speech processing tasks. However, SUPERB largely considers English speech in its evaluation. This paper presents multilingual SUPERB (ML-SUPERB), covering 143 languages (ranging from high-resource to endangered), and considering both automatic speech recognition and language identification. Following the concept of SUPERB, ML-SUPERB utilizes frozen SSL features and employs a simple framework for multilingual tasks by learning a shallow downstream model. Similar to the SUPERB benchmark, we find speech SSL models can significantly improve performance compared to FBANK features. Furthermore, we find that multilingual models do not always perform better than their monolingual counterparts. We will release ML-SUPERB as a challenge with organized datasets and reproducible training scripts for future multilingual representation research.
[ { "version": "v1", "created": "Thu, 18 May 2023 00:01:27 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 17:39:21 GMT" } ]
2023-08-14T00:00:00
[ [ "Shi", "Jiatong", "" ], [ "Berrebbi", "Dan", "" ], [ "Chen", "William", "" ], [ "Chung", "Ho-Lam", "" ], [ "Hu", "En-Pei", "" ], [ "Huang", "Wei Ping", "" ], [ "Chang", "Xuankai", "" ], [ "Li", "Shang-Wen", "" ], [ "Mohamed", "Abdelrahman", "" ], [ "Lee", "Hung-yi", "" ], [ "Watanabe", "Shinji", "" ] ]
new_dataset
0.998291
2306.02649
Paul Jungeblut
Paul Jungeblut
On the Complexity of Lombardi Graph Drawing
Appears in the Proceedings of the 31st International Symposium on Graph Drawing and Network Visualization (GD 2023)
null
null
null
cs.CG cs.CC math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a Lombardi drawing of a graph the vertices are drawn as points and the edges are drawn as circular arcs connecting their respective endpoints. Additionally, all vertices have perfect angular resolution, i.e., all angles incident to a vertex $v$ have size $2\pi/\mathrm{deg}(v)$. We prove that it is $\exists\mathbb{R}$-complete to determine whether a given graph admits a Lombardi drawing respecting a fixed cyclic ordering of the incident edges around each vertex. In particular, this implies NP-hardness. While most previous work studied the (non-)existence of Lombardi drawings for different graph classes, our result is the first on the computational complexity of finding Lombardi drawings of general graphs.
[ { "version": "v1", "created": "Mon, 5 Jun 2023 07:33:08 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 09:38:37 GMT" } ]
2023-08-14T00:00:00
[ [ "Jungeblut", "Paul", "" ] ]
new_dataset
0.987599
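The perfect-angular-resolution condition at the heart of Lombardi drawings in the record above — every gap between consecutive incident edge tangents at a vertex $v$ equals $2\pi/\mathrm{deg}(v)$ — is easy to check numerically. The helper below is an illustrative sketch (the function name and tolerance are assumptions), not part of the paper's hardness construction.

```python
# Minimal check of perfect angular resolution at a vertex: all consecutive
# angular gaps between incident edge tangents must equal 2*pi/deg(v).
import math

def has_perfect_angular_resolution(tangent_angles, tol=1e-9):
    """tangent_angles: directions (radians) of the incident edge tangents."""
    deg = len(tangent_angles)
    target = 2 * math.pi / deg
    angles = sorted(a % (2 * math.pi) for a in tangent_angles)
    gaps = [(angles[(i + 1) % deg] - angles[i]) % (2 * math.pi)
            for i in range(deg)]
    return all(abs(g - target) <= tol for g in gaps)

# A degree-4 vertex with tangents every 90 degrees passes; a skewed one fails.
print(has_perfect_angular_resolution([0, math.pi / 2, math.pi, 3 * math.pi / 2]))  # True
print(has_perfect_angular_resolution([0, 0.5, math.pi, 3 * math.pi / 2]))          # False
```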
2307.03903
Huafeng Li
Huafeng Li, Le Xu, Yafei Zhang, Dapeng Tao, Zhengtao Yu
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification
11 pages,8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In visible-infrared video person re-identification (re-ID), extracting features that are unaffected by changes in complex scenes (such as modality, camera view, pedestrian pose, and background), together with mining and utilizing motion information, are the keys to solving cross-modal pedestrian identity matching. To this end, this paper proposes a new visible-infrared video person re-ID method from a novel perspective, i.e., adversarial self-attack defense and spatial-temporal relation mining. In this work, changes in view, posture, and background, together with modal discrepancy, are considered the main factors that perturb person identity features. Such interference information contained in the training samples is used as an adversarial perturbation. It performs adversarial attacks on the re-ID model during training to make the model more robust to these unfavorable factors. The attack from the adversarial perturbation is introduced by activating the interference information contained in the input samples without generating adversarial samples, and can thus be called an adversarial self-attack. This design allows adversarial attack and defense to be integrated into one framework. This paper further proposes a spatial-temporal information-guided feature representation network that uses the information in video sequences. The network can not only extract the information contained in the video-frame sequences but also use the relations among local information in space to guide the network to extract more robust features. The proposed method exhibits compelling performance on large-scale cross-modality video datasets. The source code of the proposed method will be released at https://github.com/lhf12278/xxx.
[ { "version": "v1", "created": "Sat, 8 Jul 2023 05:03:10 GMT" }, { "version": "v2", "created": "Mon, 17 Jul 2023 09:08:49 GMT" }, { "version": "v3", "created": "Fri, 11 Aug 2023 09:15:27 GMT" } ]
2023-08-14T00:00:00
[ [ "Li", "Huafeng", "" ], [ "Xu", "Le", "" ], [ "Zhang", "Yafei", "" ], [ "Tao", "Dapeng", "" ], [ "Yu", "Zhengtao", "" ] ]
new_dataset
0.998953
2307.15483
Max Franke
Max Franke, Steffen Koch
Compact Phase Histograms for Guided Exploration of Periodicity
IEEE VIS 2023 Short Paper
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
Periodically occurring accumulations of events or measured values are present in many time-dependent datasets and can be of interest for analyses. The frequency of such periodic behavior is often not known in advance, making it difficult to detect and tedious to explore. Automated analysis methods exist but can be too costly for smooth, interactive analysis. We propose a compact visual representation that reveals periodicity by showing a phase histogram for a given period length, which can be used standalone or in combination with other linked visualizations. Our approach supports guided, interactive analyses by suggesting other period lengths to explore, which are ranked based on two quality measures. We further describe how the phase can be mapped to visual representations in other views to reveal periodicity there.
[ { "version": "v1", "created": "Fri, 28 Jul 2023 11:16:28 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 09:33:42 GMT" } ]
2023-08-14T00:00:00
[ [ "Franke", "Max", "" ], [ "Koch", "Steffen", "" ] ]
new_dataset
0.959075
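The core object in the record above, a phase histogram for a given period length, reduces to binning event timestamps by their phase modulo the candidate period. The sketch below illustrates this, with a simple peakedness score standing in for the paper's two quality measures (which are not specified in the abstract); the bin count, score, and toy event stream are assumptions.

```python
import numpy as np

def phase_histogram(timestamps, period, bins=24):
    phases = (np.asarray(timestamps) % period) / period   # phase in [0, 1)
    hist, _ = np.histogram(phases, bins=bins, range=(0.0, 1.0))
    return hist

def peakedness(hist):
    # Higher when events concentrate in few phase bins; 1.0 means uniform.
    p = hist / hist.sum()
    return float((p ** 2).sum() * len(p))

rng = np.random.default_rng(1)
events = np.concatenate([np.arange(0, 1000, 7.0) + rng.normal(0, 0.2, 143),
                         rng.uniform(0, 1000, 50)])       # 7-unit cycle + noise
for T in (5.0, 7.0, 11.0):
    print(T, round(peakedness(phase_histogram(events, T)), 2))
# The candidate period T = 7.0 should score clearly highest.
```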
2308.00282
Hyeon Jeon
Hyeon Jeon, Aeri Cho, Jinhwa Jang, Soohyun Lee, Jake Hyun, Hyung-Kwon Ko, Jaemin Jo, Jinwook Seo
ZADU: A Python Library for Evaluating the Reliability of Dimensionality Reduction Embeddings
2023 IEEE Visualization and Visual Analytics (IEEE VIS 2023) Short paper
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dimensionality reduction (DR) techniques inherently distort the original structure of input high-dimensional data, producing imperfect low-dimensional embeddings. Diverse distortion measures have thus been proposed to evaluate the reliability of DR embeddings. However, implementing and executing distortion measures in practice has so far been time-consuming and tedious. To address this issue, we present ZADU, a Python library that provides distortion measures. ZADU is not only easy to install and execute but also enables comprehensive evaluation of DR embeddings through three key features. First, the library covers a wide range of distortion measures. Second, it automatically optimizes the execution of distortion measures, substantially reducing the running time required to execute multiple measures. Last, the library informs how individual points contribute to the overall distortions, facilitating the detailed analysis of DR embeddings. By simulating a real-world scenario of optimizing DR embeddings, we verify that our optimization scheme substantially reduces the time required to execute distortion measures. Finally, as an application of ZADU, we present another library called ZADUVis that allows users to easily create distortion visualizations that depict the extent to which each region of an embedding suffers from distortions.
[ { "version": "v1", "created": "Tue, 1 Aug 2023 04:38:15 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 04:39:33 GMT" } ]
2023-08-14T00:00:00
[ [ "Jeon", "Hyeon", "" ], [ "Cho", "Aeri", "" ], [ "Jang", "Jinhwa", "" ], [ "Lee", "Soohyun", "" ], [ "Hyun", "Jake", "" ], [ "Ko", "Hyung-Kwon", "" ], [ "Jo", "Jaemin", "" ], [ "Seo", "Jinwook", "" ] ]
new_dataset
0.979811
2308.03043
Fatemah Almeman
Fatemah Almeman, Hadi Sheikhi, Luis Espinosa-Anke
3D-EX : A Unified Dataset of Definitions and Dictionary Examples
11 pages (including references pages), 9 tables, and 1 figure. This paper is submitted to RANLP2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Definitions are a fundamental building block in lexicography, linguistics and computational semantics. In NLP, they have been used for retrofitting word embeddings or augmenting contextual representations in language models. However, lexical resources containing definitions exhibit a wide range of properties, which has implications in the behaviour of models trained and evaluated on them. In this paper, we introduce 3D-EX, a dataset that aims to fill this gap by combining well-known English resources into one centralized knowledge repository in the form of <term, definition, example> triples. 3D-EX is a unified evaluation framework with carefully pre-computed train/validation/test splits to prevent memorization. We report experimental results that suggest that this dataset could be effectively leveraged in downstream NLP tasks. Code and data are available at https://github.com/F-Almeman/3D-EX.
[ { "version": "v1", "created": "Sun, 6 Aug 2023 07:59:12 GMT" }, { "version": "v2", "created": "Fri, 11 Aug 2023 12:07:52 GMT" } ]
2023-08-14T00:00:00
[ [ "Almeman", "Fatemah", "" ], [ "Sheikhi", "Hadi", "" ], [ "Espinosa-Anke", "Luis", "" ] ]
new_dataset
0.999674
2308.04452
Faisal Haque Bappy
Mirza Kamrul Bashar Shuhan, Tariqul Islam, Enam Ahmed Shuvo, Faisal Haque Bappy, Kamrul Hasan, Carlos Caicedo
Quarks: A Secure and Decentralized Blockchain-Based Messaging Network
null
null
10.1109/CSCloud-EdgeCom58631.2023.00053
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last two decades, messaging systems have gained widespread popularity in both the enterprise and consumer sectors. Many of these systems use secure protocols like end-to-end encryption to ensure strong security in one-to-one communication. However, the majority of them rely on centralized servers, which allows providers to use their users' personal data and allows governments to track and regulate their citizens' activities, posing significant threats to "digital freedom". Moreover, these systems have failed to achieve security attributes such as confidentiality, integrity, and privacy for group communications. In this paper, we present a novel blockchain-based secure messaging system named Quarks that overcomes the security pitfalls of existing systems and eliminates centralized control. We analyze our architecture with security models to demonstrate the system's reliability and usability. We developed a Proof of Concept (PoC) of the Quarks system leveraging Distributed Ledger Technology (DLT) and conducted load testing on it. We observed that our PoC system achieves all the desired attributes prevalent in traditional centralized messaging schemes despite the limited capacity of the development and testing environment. This assures us of the applicability of such systems in the near future if scaled up properly.
[ { "version": "v1", "created": "Sat, 5 Aug 2023 02:24:18 GMT" } ]
2023-08-14T00:00:00
[ [ "Shuhan", "Mirza Kamrul Bashar", "" ], [ "Islam", "Tariqul", "" ], [ "Shuvo", "Enam Ahmed", "" ], [ "Bappy", "Faisal Haque", "" ], [ "Hasan", "Kamrul", "" ], [ "Caicedo", "Carlos", "" ] ]
new_dataset
0.998276
2308.05750
Alireza Shafizadeh
Alireza Shafizadeh, Hossein Shahbeik, Mohammad Hossein Nadian, Vijai Kumar Gupta, Abdul-Sattar Nizami, Su Shiung Lam, Wanxi Peng, Junting Pan, Meisam Tabatabaei, Mortaza Aghbashlo
Turning hazardous volatile matter compounds into fuel by catalytic steam reforming: An evolutionary machine learning approach
null
null
10.1016/j.jclepro.2023.137329
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Chemical and biomass processing systems release volatile matter compounds into the environment daily. Catalytic reforming can convert these compounds into valuable fuels, but developing stable and efficient catalysts is challenging. Machine learning can handle complex relationships in big data and optimize reaction conditions, making it an effective solution for addressing these issues. This study is the first to develop a machine-learning-based research framework for modeling, understanding, and optimizing the catalytic steam reforming of volatile matter compounds. Toluene catalytic steam reforming is used as a case study to show how chemical/textural analyses (e.g., X-ray diffraction analysis) can be used to obtain input features for machine learning models. Literature is used to compile a database covering a variety of catalyst characteristics and reaction conditions. The process is thoroughly analyzed, mechanistically discussed, modeled by six machine learning models, and optimized using the particle swarm optimization algorithm. Ensemble machine learning provides the best prediction performance (R2 > 0.976) for toluene conversion and product distribution. The optimal tar conversion (higher than 77.2%) is obtained at temperatures between 637.44 and 725.62 °C, with a steam-to-carbon molar ratio of 5.81-7.15 and a catalyst BET surface area of 476.03-638.55 m2/g. The feature importance analysis satisfactorily reveals the effects of input descriptors on model prediction. Operating conditions (50.9%) and catalyst properties (49.1%) are equally important in modeling. The developed framework can expedite the search for optimal catalyst characteristics and reaction conditions, not only for catalytic chemical processing but also for related research areas.
[ { "version": "v1", "created": "Tue, 25 Jul 2023 16:29:07 GMT" } ]
2023-08-14T00:00:00
[ [ "Shafizadeh", "Alireza", "" ], [ "Shahbeik", "Hossein", "" ], [ "Nadian", "Mohammad Hossein", "" ], [ "Gupta", "Vijai Kumar", "" ], [ "Nizami", "Abdul-Sattar", "" ], [ "Lam", "Su Shiung", "" ], [ "Peng", "Wanxi", "" ], [ "Pan", "Junting", "" ], [ "Tabatabaei", "Meisam", "" ], [ "Aghbashlo", "Mortaza", "" ] ]
new_dataset
0.952544
2308.05818
Unay Dorken Gallastegi
Unay Dorken Gallastegi, Hoover Rueda-Chacon, Martin J. Stevens, and Vivek K Goyal
Absorption-Based, Passive Range Imaging from Hyperspectral Thermal Measurements
15 pages, 14 figures
null
null
null
cs.CV eess.SP
http://creativecommons.org/licenses/by/4.0/
Passive hyperspectral long-wave infrared measurements are remarkably informative about the surroundings, such as remote object material composition, temperature, and range; and air temperature and gas concentrations. Remote object material and temperature determine the spectrum of thermal radiance, and range, air temperature, and gas concentrations determine how this spectrum is modified by propagation to the sensor. We computationally separate these phenomena, introducing a novel passive range imaging method based on atmospheric absorption of ambient thermal radiance. Previously demonstrated passive absorption-based ranging methods assume hot and highly emitting objects. However, the temperature variation in natural scenes is usually low, making range imaging challenging. Our method benefits from explicit consideration of air emission and parametric modeling of atmospheric absorption. To mitigate noise in low-contrast scenarios, we jointly estimate range and intrinsic object properties by exploiting a variety of absorption lines spread over the infrared spectrum. Along with Monte Carlo simulations that demonstrate the importance of regularization, temperature differentials, and availability of many spectral bands, we apply this method to long-wave infrared (8--13 $\mu$m) hyperspectral image data acquired from natural scenes with no active illumination. Range features from 15m to 150m are recovered, with good qualitative match to unaligned lidar data.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 18:35:22 GMT" } ]
2023-08-14T00:00:00
[ [ "Gallastegi", "Unay Dorken", "" ], [ "Rueda-Chacon", "Hoover", "" ], [ "Stevens", "Martin J.", "" ], [ "Goyal", "Vivek K", "" ] ]
new_dataset
0.9963
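The physics behind the passive ranging record above can be illustrated with the standard single-path radiative-transfer relation: the band-i radiance at the sensor is L_i = L_obj * exp(-kappa_i * r) + L_air * (1 - exp(-kappa_i * r)), so the range r is recoverable from multi-band measurements whenever the object and air radiances differ. The sketch below fits r by least squares on synthetic data; the absorption coefficients, radiances, and noise level are hypothetical, and the actual method jointly estimates much more (object spectrum, air temperature, gas concentrations) with regularization.

```python
import numpy as np
from scipy.optimize import curve_fit

kappa = np.array([0.001, 0.005, 0.02, 0.05, 0.1])  # assumed band absorption [1/m]
L_air = 1.0                                         # assumed air-path radiance

def model(kappa_bands, r, L_obj):
    t = np.exp(-kappa_bands * r)                    # per-band transmittance
    return L_obj * t + L_air * (1.0 - t)

rng = np.random.default_rng(0)
r_true, L_obj_true = 80.0, 1.3                      # 80 m range, warmer object
meas = model(kappa, r_true, L_obj_true) + rng.normal(0, 1e-3, kappa.size)

(r_hat, L_obj_hat), _ = curve_fit(model, kappa, meas, p0=(50.0, 1.2))
print(f"estimated range {r_hat:.1f} m, object radiance {L_obj_hat:.3f}")
```

Note that when L_obj equals L_air the model is constant in r, which reflects the abstract's point that low temperature contrast makes passive ranging challenging.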
2308.05820
Filipe Cordeiro
Daniel Rosa, Filipe R. Cordeiro, Ruan Carvalho, Everton Souza, Sergio Chevtchenko, Luiz Rodrigues, Marcelo Marinho, Thales Vieira and Valmir Macario
Recognizing Handwritten Mathematical Expressions of Vertical Addition and Subtraction
Paper accepted at SIBGRAPI 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Handwritten Mathematical Expression Recognition (HMER) is a challenging task with many educational applications. Recent methods for HMER have been developed for complex mathematical expressions in standard horizontal format. However, solutions for elementary mathematical expressions, such as vertical addition and subtraction, have not been explored in the literature. This work proposes a new handwritten elementary mathematical expression dataset composed of addition and subtraction expressions in a vertical format. We also extended the MNIST dataset to generate artificial images with this structure. Furthermore, we propose a solution for offline HMER able to recognize vertical addition and subtraction expressions. Our analysis evaluated the object detection algorithms YOLO v7, YOLO v8, YOLO-NAS, NanoDet and FCOS for identifying the mathematical symbols. We also propose a transcription method to map the bounding boxes from the object detection stage to a mathematical expression in LaTeX markup. Results show that our approach is efficient, achieving a high expression recognition rate. The code and dataset are available at https://github.com/Danielgol/HME-VAS
[ { "version": "v1", "created": "Thu, 10 Aug 2023 18:39:35 GMT" } ]
2023-08-14T00:00:00
[ [ "Rosa", "Daniel", "" ], [ "Cordeiro", "Filipe R.", "" ], [ "Carvalho", "Ruan", "" ], [ "Souza", "Everton", "" ], [ "Chevtchenko", "Sergio", "" ], [ "Rodrigues", "Luiz", "" ], [ "Marinho", "Marcelo", "" ], [ "Vieira", "Thales", "" ], [ "Macario", "Valmir", "" ] ]
new_dataset
0.999459
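The transcription step described in the record above — mapping detected symbol bounding boxes to an expression in a vertical layout — can be sketched as row grouping by vertical centre followed by left-to-right reading. This is an illustrative simplification of the paper's method: the box format `(label, x, y, w, h)`, the row tolerance, and the toy detections are assumptions.

```python
def transcribe(boxes, row_tol=20):
    # Sort boxes by vertical centre, then greedily group them into rows.
    boxes = sorted(boxes, key=lambda b: b[2] + b[4] / 2)
    rows, current = [], [boxes[0]]
    for b in boxes[1:]:
        ref = current[0]
        if abs((b[2] + b[4] / 2) - (ref[2] + ref[4] / 2)) < row_tol:
            current.append(b)
        else:
            rows.append(current)
            current = [b]
    rows.append(current)
    # Read each row left to right.
    return [" ".join(lbl for lbl, *_ in sorted(r, key=lambda b: b[1]))
            for r in rows]

# Hypothetical detections for "23 + 45" written vertically, result row "68".
boxes = [("2", 10, 0, 18, 24), ("3", 32, 1, 18, 24),
         ("+", -14, 30, 14, 14), ("4", 10, 30, 18, 24), ("5", 32, 31, 18, 24),
         ("6", 10, 70, 18, 24), ("8", 32, 70, 18, 24)]
for row in transcribe(boxes):
    print(row)      # "2 3" / "+ 4 5" / "6 8"
```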
2308.05821
Houjian Yu
Houjian Yu, Xibai Lou, Yang Yang, and Changhyun Choi
IOSG: Image-driven Object Searching and Grasping
Accepted to IEEE/RSJ International Conference on Intelligent Robots (IROS 2023). Project page: https://sites.google.com/umn.edu/iosg
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When robots retrieve specific objects from cluttered scenes, such as home and warehouse environments, the target objects are often partially occluded or completely hidden. Robots are thus required to search, identify a target object, and successfully grasp it. Preceding works have relied on pre-trained object recognition or segmentation models to find the target object. However, such methods require laborious manual annotations to train the models and even fail to find novel target objects. In this paper, we propose an Image-driven Object Searching and Grasping (IOSG) approach where a robot is provided with the reference image of a novel target object and tasked to find and retrieve it. We design a Target Similarity Network that generates a probability map to infer the location of the novel target. IOSG learns a hierarchical policy; the high-level policy predicts the subtask type, whereas the low-level policies, explorer and coordinator, generate effective push and grasp actions. The explorer is responsible for searching the target object when it is hidden or occluded by other objects. Once the target object is found, the coordinator conducts target-oriented pushing and grasping to retrieve the target from the clutter. The proposed pipeline is trained with full self-supervision in simulation and applied to a real environment. Our model achieves a 96.0% and 94.5% task success rate on coordination and exploration tasks in simulation respectively, and 85.0% success rate on a real robot for the search-and-grasp task.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 18:41:24 GMT" } ]
2023-08-14T00:00:00
[ [ "Yu", "Houjian", "" ], [ "Lou", "Xibai", "" ], [ "Yang", "Yang", "" ], [ "Choi", "Changhyun", "" ] ]
new_dataset
0.960093
2308.05882
Christophe Bonneville
Christophe Bonneville, Youngsoo Choi, Debojyoti Ghosh, Jonathan L. Belof
GPLaSDI: Gaussian Process-based Interpretable Latent Space Dynamics Identification through Deep Autoencoder
null
null
null
null
cs.CE cs.LG cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
Numerically solving partial differential equations (PDEs) can be challenging and computationally expensive. This has led to the development of reduced-order models (ROMs) that are accurate but faster than full order models (FOMs). Recently, machine learning advances have enabled the creation of non-linear projection methods, such as Latent Space Dynamics Identification (LaSDI). LaSDI maps full-order PDE solutions to a latent space using autoencoders and learns the system of ODEs governing the latent space dynamics. By interpolating and solving the ODE system in the reduced latent space, fast and accurate ROM predictions can be made by feeding the predicted latent space dynamics into the decoder. In this paper, we introduce GPLaSDI, a novel LaSDI-based framework that relies on Gaussian process (GP) for latent space ODE interpolations. Using GPs offers two significant advantages. First, it enables the quantification of uncertainty over the ROM predictions. Second, leveraging this prediction uncertainty allows for efficient adaptive training through a greedy selection of additional training data points. This approach does not require prior knowledge of the underlying PDEs. Consequently, GPLaSDI is inherently non-intrusive and can be applied to problems without a known PDE or its residual. We demonstrate the effectiveness of our approach on the Burgers equation, Vlasov equation for plasma physics, and a rising thermal bubble problem. Our proposed method achieves between 200 and 100,000 times speed-up, with up to 7% relative error.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 23:54:12 GMT" } ]
2023-08-14T00:00:00
[ [ "Bonneville", "Christophe", "" ], [ "Choi", "Youngsoo", "" ], [ "Ghosh", "Debojyoti", "" ], [ "Belof", "Jonathan L.", "" ] ]
new_dataset
0.986859
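The GP interpolation at the core of the GPLaSDI record above can be illustrated in a few lines: fit a Gaussian process to latent ODE coefficients identified at a handful of training parameters, then use the predictive standard deviation to greedily pick the next full-order simulation. The sketch below is a toy with one scalar coefficient and a one-dimensional parameter (real setups interpolate a vector of ODE coefficients); the kernel, grid, and stand-in coefficient function are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical: at each trained parameter mu, LaSDI identified one latent ODE
# coefficient c(mu); real setups carry a vector of coefficients per parameter.
mu_train = np.array([[0.1], [0.5], [0.9]])
c_train = np.sin(3.0 * mu_train).ravel()     # stand-in for identified coefficients

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(mu_train, c_train)

mu_grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
c_mean, c_std = gp.predict(mu_grid, return_std=True)

# Greedy adaptive sampling: run the next full-order simulation where the GP
# is least certain about the latent dynamics.
mu_next = float(mu_grid[np.argmax(c_std), 0])
print(f"next training parameter: mu = {mu_next:.2f}")
```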
2308.05884
Alpin Dale
Tear Gosling, Alpin Dale, Yinhe Zheng
PIPPA: A Partially Synthetic Conversational Dataset
13 pages, 5 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
With the emergence of increasingly powerful large language models, there is a burgeoning interest in leveraging these models for casual conversation and role-play applications. However, existing conversational and role-playing datasets often fail to capture the diverse and nuanced interactions typically exhibited by real-world role-play participants. To address this limitation and contribute to the rapidly growing field, we introduce a partially-synthetic dataset named PIPPA (Personal Interaction Pairs between People and AI). PIPPA is a result of a community-driven crowdsourcing effort involving a group of role-play enthusiasts. The dataset comprises over 1 million utterances that are distributed across 26,000 conversation sessions and provides a rich resource for researchers and AI developers to explore and refine conversational AI systems in the context of role-play scenarios.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 00:33:26 GMT" } ]
2023-08-14T00:00:00
[ [ "Gosling", "Tear", "" ], [ "Dale", "Alpin", "" ], [ "Zheng", "Yinhe", "" ] ]
new_dataset
0.999641
2308.05921
Ryugo Morita
Ryugo Morita, Zhiqiang Zhang, Jinjia Zhou
BATINet: Background-Aware Text to Image Synthesis and Manipulation Network
Accepted to ICIP2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background-Induced Text2Image (BIT2I) aims to generate foreground content according to the text on a given background image. Most studies focus on generating high-quality foreground content but ignore the relationship between the two contents. In this study, we analyze a novel Background-Aware Text2Image (BAT2I) task in which the generated content matches the input background. We propose a Background-Aware Text to Image synthesis and manipulation Network (BATINet), which contains two key components: a Position Detect Network (PDN) and a Harmonize Network (HN). The PDN detects the most plausible position of the text-relevant object in the background image. The HN harmonizes the generated content with reference to background style information. Finally, we reconstruct the generation network, which consists of a multi-GAN and an attention module, to match more user preferences. Moreover, we can apply BATINet to text-guided image manipulation, where it solves the most challenging task of manipulating the shape of an object. We demonstrate through qualitative and quantitative evaluations on the CUB dataset that the proposed model outperforms other state-of-the-art methods.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 03:22:33 GMT" } ]
2023-08-14T00:00:00
[ [ "Morita", "Ryugo", "" ], [ "Zhang", "Zhiqiang", "" ], [ "Zhou", "Jinjia", "" ] ]
new_dataset
0.999721
2308.05938
Xing Lan
Xing Lan, Jiayi Lyu, Hanyu Jiang, Kun Dong, Zehai Niu, Yi Zhang, Jian Xue
FoodSAM: Any Food Segmentation
Code is available at https://github.com/jamesjg/FoodSAM
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we explore the zero-shot capability of the Segment Anything Model (SAM) for food image segmentation. To address the lack of class-specific information in SAM-generated masks, we propose a novel framework, called FoodSAM. This innovative approach integrates the coarse semantic mask with SAM-generated masks to enhance semantic segmentation quality. Besides, we recognize that the ingredients in food can be regarded as independent individuals, which motivated us to perform instance segmentation on food images. Furthermore, FoodSAM extends its zero-shot capability to encompass panoptic segmentation by incorporating an object detector, which enables FoodSAM to effectively capture non-food object information. Drawing inspiration from the recent success of promptable segmentation, we also extend FoodSAM to promptable segmentation, supporting various prompt variants. Consequently, FoodSAM emerges as an all-encompassing solution capable of segmenting food items at multiple levels of granularity. Remarkably, this pioneering framework stands as the first-ever work to achieve instance, panoptic, and promptable segmentation on food images. Extensive experiments demonstrate the feasibility and impressive performance of FoodSAM, validating SAM's potential as a prominent and influential tool within the domain of food image segmentation. We release our code at https://github.com/jamesjg/FoodSAM.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 04:42:10 GMT" } ]
2023-08-14T00:00:00
[ [ "Lan", "Xing", "" ], [ "Lyu", "Jiayi", "" ], [ "Jiang", "Hanyu", "" ], [ "Dong", "Kun", "" ], [ "Niu", "Zehai", "" ], [ "Zhang", "Yi", "" ], [ "Xue", "Jian", "" ] ]
new_dataset
0.999314
2308.05939
Dominic Maggio
Dominic Maggio, Courtney Mario, Luca Carlone
VERF: Runtime Monitoring of Pose Estimation with Neural Radiance Fields
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We present VERF, a collection of two methods (VERF-PnP and VERF-Light) for providing runtime assurance on the correctness of a camera pose estimate of a monocular camera without relying on direct depth measurements. We leverage the ability of NeRF (Neural Radiance Fields) to render novel RGB perspectives of a scene. We only require as input the camera image whose pose is being estimated, an estimate of the camera pose we want to monitor, and a NeRF model containing the scene pictured by the camera. We can then predict if the pose estimate is within a desired distance from the ground truth and justify our prediction with a level of confidence. VERF-Light does this by rendering a viewpoint with NeRF at the estimated pose and estimating its relative offset to the sensor image up to scale. Since scene scale is unknown, the approach renders another auxiliary image and reasons over the consistency of the optical flows across the three images. VERF-PnP takes a different approach by rendering a stereo pair of images with NeRF and utilizing the Perspective-n-Point (PnP) algorithm. We evaluate both methods on the LLFF dataset, on data from a Unitree A1 quadruped robot, and on data collected from Blue Origin's sub-orbital New Shepard rocket to demonstrate the effectiveness of the proposed pose monitoring method across a range of scene scales. We also show monitoring can be completed in under half a second on a 3090 GPU.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 04:43:31 GMT" } ]
2023-08-14T00:00:00
[ [ "Maggio", "Dominic", "" ], [ "Mario", "Courtney", "" ], [ "Carlone", "Luca", "" ] ]
new_dataset
0.996696
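The VERF-PnP idea in the record above — recover the monitored camera pose from correspondences between the sensor image and a NeRF-rendered stereo pair, then compare it against the estimate under scrutiny — rests on the classical Perspective-n-Point step. The sketch below shows only that step with synthetic correspondences and intrinsics (NeRF rendering and feature matching are omitted), using OpenCV's solvePnP; all point and camera values are fabricated for illustration.

```python
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                   # assume no lens distortion

# Synthetic 3D landmarks (in VERF-PnP these come from NeRF-rendered views).
pts3d = np.array([[0, 0, 5], [1, 0, 6], [0, 1, 5.5], [1, 1, 7],
                  [-1, 0.5, 6], [0.5, -1, 5]], dtype=np.float64)

# Project the landmarks with a known ground-truth pose to fabricate the
# 2D observations, then recover the pose with PnP.
rvec_true = np.array([[0.05], [-0.02], [0.01]])
tvec_true = np.array([[0.10], [-0.20], [0.30]])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, dist)
print(ok, np.round(tvec.ravel(), 3))                 # ~ [0.1, -0.2, 0.3]
```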
2308.05950
Sumit Patel
Ras Dwivedi, Sumit Patel, Prof. Sandeep Shukla
Blockchain-Based Transferable Digital Rights of Land
5 pages, Paper presented in https://easychair.org/cfp/ICSF2023
null
null
null
cs.DC cs.CR
http://creativecommons.org/licenses/by/4.0/
Land, being a scarce and valuable resource, is in high demand, especially in densely populated areas of older cities. Development authorities require land for infrastructure projects and other amenities, while landowners hold onto their land for both its usage and its financial value. Transferable Development Rights (TDRs) serve as a mechanism to separate the development rights associated with the land from the physical land itself. Development authorities acquire the land by offering compensation in the form of TDRs, which hold monetary value. In this paper, we present the tokenization of development rights, focusing on the implementation in collaboration with a development authority. While there have been previous implementations of land tokenization, we believe our approach is the first to tokenize development rights specifically. Our implementation addresses practical challenges related to record-keeping, ground verification of land, and the unique identification of stakeholders. We ensure the accurate evaluation of development rights by incorporating publicly available circle rates, which consider the ground development of the land and its surrounding areas.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 05:50:40 GMT" } ]
2023-08-14T00:00:00
[ [ "Dwivedi", "Ras", "" ], [ "Patel", "Sumit", "" ], [ "Shukla", "Prof. Sandeep", "" ] ]
new_dataset
0.998695
2308.05960
Zhiwei Liu
Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents
Preprint
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as observations and actions. Since the investigation of LAA is still very recent, limited explorations are available. Therefore, we provide a comprehensive comparison of LAA in terms of both agent architectures and LLM backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs such that each labor LAA focuses on one type of action, i.e., BOLAA, where a controller manages the communication among multiple agents. We conduct simulations on both decision-making and multi-step reasoning environments, which comprehensively justify the capacity of LAAs. Our performance results provide quantitative suggestions for designing LAA architectures and the optimal choice of LLMs, as well as the compatibility of both. We release our implementation code of LAAs to the public at https://github.com/salesforce/BOLAA.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 06:37:54 GMT" } ]
2023-08-14T00:00:00
[ [ "Liu", "Zhiwei", "" ], [ "Yao", "Weiran", "" ], [ "Zhang", "Jianguo", "" ], [ "Xue", "Le", "" ], [ "Heinecke", "Shelby", "" ], [ "Murthy", "Rithesh", "" ], [ "Feng", "Yihao", "" ], [ "Chen", "Zeyuan", "" ], [ "Niebles", "Juan Carlos", "" ], [ "Arpit", "Devansh", "" ], [ "Xu", "Ran", "" ], [ "Mui", "Phil", "" ], [ "Wang", "Huan", "" ], [ "Xiong", "Caiming", "" ], [ "Savarese", "Silvio", "" ] ]
new_dataset
0.999716
2308.05992
Inhyuk Oh
In Hyuk Oh, Ju Won Seo, Jin Sung Kim, and Chung Choo Chung
Reachable Set-based Path Planning for Automated Vertical Parking System
8 pages, 10 figures, conference. This is the Accepted Manuscript version of an article accepted for publication in [IEEE International Conference on Intelligent Transportation Systems ITSC 2023]. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. No information about DOI has been posted yet
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a local path planning method with a reachable set for Automated vertical Parking Systems (APS). First, given a parking lot layout with a goal position, we define an intermediate pose for the APS to accomplish reverse parking with a single maneuver, i.e., without changing the gear shift. Then, we introduce a reachable set, which is a set of points consisting of the grid points of all possible intermediate poses. Once the APS approaches the goal position, it must select an intermediate pose in the reachable set. A minimization problem was formulated and solved to choose the intermediate pose. We evaluated various scenarios with different parking lot conditions. We used the Hybrid-A* algorithm for global path planning to move the vehicle from the starting pose to the intermediate pose and utilized clothoid-based local path planning to move from the intermediate pose to the goal pose. Additionally, we designed a controller to follow the generated path and validated its tracking performance. It was confirmed that the root-mean-square tracking error was bounded within 0.06 m for lateral position and within 0.01 rad for orientation.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 07:59:13 GMT" } ]
2023-08-14T00:00:00
[ [ "Oh", "In Hyuk", "" ], [ "Seo", "Ju Won", "" ], [ "Kim", "Jin Sung", "" ], [ "Chung", "Chung Choo", "" ] ]
new_dataset
0.975864
2308.06007
Soumya Prakash Dash
Soumya P. Dash and Aryan Kaushik
RIS-Assisted 6G Wireless Communications: A Novel Statistical Framework in the Presence of Direct Channel
5 pages
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A RIS-assisted wireless communication system in the presence of a direct communication path between the transceiver pair is considered in this paper. The transmitter-RIS and RIS-receiver channels follow independent Nakagami-m distributions, and the direct channel between the transceiver pair follows a Rayleigh distribution. Considering this system model, the statistics of the composite channel for the RIS-assisted communication system are derived, yielding novel expressions for the probability density functions of the magnitude and phase of the communication channel. The correctness of the analytical framework is verified via Monte Carlo simulations, and the effects of the shape parameters of the channels and the number of reflecting elements in the RIS on the randomness of the composite channel are studied via numerical results.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 08:26:48 GMT" } ]
2023-08-14T00:00:00
[ [ "Dash", "Soumya P.", "" ], [ "Kaushik", "Aryan", "" ] ]
new_dataset
0.993604
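A quick way to probe the composite channel statistics described in the record above is Monte Carlo simulation: draw Nakagami-m magnitudes for both hops via the Gamma distribution (a Nakagami-m sample is the square root of a Gamma(m, Omega/m) sample), add a complex-Gaussian direct path whose magnitude is Rayleigh, and inspect the resulting composite magnitude. The element count, shape parameters, and ideal-phase-alignment assumption below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def nakagami(m, omega, size):
    # A Nakagami-m magnitude is the square root of a Gamma(m, omega/m) sample.
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

trials, N = 100_000, 16            # Monte Carlo realizations, RIS elements
m1, m2 = 2.0, 3.0                  # shape parameters of the two hops (assumed)

h = nakagami(m1, 1.0, (trials, N))             # TX -> RIS link magnitudes
g = nakagami(m2, 1.0, (trials, N))             # RIS -> RX link magnitudes
# Assume ideal phase alignment at the RIS, so the element-wise products add
# coherently; the direct path is complex Gaussian (Rayleigh magnitude).
ris_part = (h * g).sum(axis=1)
direct = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) / np.sqrt(2)
composite = ris_part + direct

mag = np.abs(composite)
print(f"E|H| = {mag.mean():.2f}, Var|H| = {mag.var():.2f}")
```

Such empirical histograms of the magnitude and phase of `composite` are exactly what the derived probability density functions would be validated against.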
2308.06076
Haoyu Wang
Haoyu Wang, Haozhe Wu, Junliang Xing, Jia Jia
Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space
Accepted by ACM MM2023
null
10.1145/3581783.3612065
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-sa/4.0/
Creating realistic 3D facial animation is crucial for various applications in the movie production and gaming industry, especially with the burgeoning demand in the metaverse. However, prevalent methods such as blendshape-based approaches and facial rigging techniques are time-consuming, labor-intensive, and lack standardized configurations, making facial animation production challenging and costly. In this paper, we propose a novel self-supervised framework, Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs. Our method has the following two main characteristics: 1) we propose an RGBD animation module to learn facial motion from raw RGBD videos by hierarchical motion dictionaries and animate RGBD images rendered from 3D facial mesh coarse-to-fine, enabling facial animation on arbitrary 3D characters regardless of their topology, textures, blendshapes, and rigs; and 2) we introduce a mesh retarget module to utilize RGBD animation to create 3D facial animation by manipulating facial mesh with controller transformations, which are estimated from dense optical flow fields and blended together with geodesic-distance-based weights. Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results, highlighting its potential as a promising solution for the cost-effective and efficient production of facial animation in the metaverse.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 11:29:01 GMT" } ]
2023-08-14T00:00:00
[ [ "Wang", "Haoyu", "" ], [ "Wu", "Haozhe", "" ], [ "Xing", "Junliang", "" ], [ "Jia", "Jia", "" ] ]
new_dataset
0.998933
2308.06082
Manish Kumar
Manish Kumar
Security of XCB and HCTR
M.Tech Dissertation. Indian Statistical Institute, Kolkata, July 2018
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tweakable Enciphering Scheme (TES) is a length-preserving scheme which provides confidentiality and admissible integrity. XCB (Extended Code Book) is a TES which was introduced in 2004. In 2007, it was modified and a security bound was provided. Later, these two versions were referred to as XCBv1 and XCBv2 respectively. XCBv2 was proposed as the IEEE-std 1619.2 2010 for encryption of sector-oriented storage media. In 2013, a security bound for XCBv1 was given for the first time, and XCBv2's security bound was improved. A constant of $2^{22}$ appears in the security bounds of XCBv1 and XCBv2. We show that this constant of $2^{22}$ can be reduced to $2^{5}$. Further, we modify the XCB scheme (MXCB) such that it gives a better security bound than the present XCB scheme. We also analyze some weak-key attacks on XCB and on a type of TES known as HCTR (proposed in 2005). We perform a distinguishing attack and a hash-key recovery attack on HCTR. Finally, we analyze the dependency between the two different keys in HCTR.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 11:45:09 GMT" } ]
2023-08-14T00:00:00
[ [ "Kumar", "Manish", "" ] ]
new_dataset
0.980525
2308.06113
Maximilian Kaul
Maximilian Kaul, Alexander K\"uchler, Christian Banse
A Uniform Representation of Classical and Quantum Source Code for Static Code Analysis
2023 IEEE International Conference on Quantum Computing and Engineering (QCE)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of quantum computing raises the question of how to identify (security-relevant) programming errors during development. However, current static code analysis tools fail to model information specific to quantum computing. In this paper, we identify this information and propose to extend classical code analysis tools accordingly. Among such tools, we identify the Code Property Graph to be very well suited for this task as it can be easily extended with quantum computing specific information. For our proof of concept, we implemented a tool which includes information from the quantum world in the graph and demonstrate its ability to analyze source code written in Qiskit and OpenQASM. Our tool brings together the information from the classical and quantum world, enabling analysis across both domains. By combining all relevant information into a single detailed analysis, this powerful tool can facilitate tackling future quantum source code analysis challenges.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 13:03:32 GMT" } ]
2023-08-14T00:00:00
[ [ "Kaul", "Maximilian", "" ], [ "Küchler", "Alexander", "" ], [ "Banse", "Christian", "" ] ]
new_dataset
0.951168
2308.06173
Amira Guesmi
Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, and Muhammed Shafique
Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook
null
null
null
null
cs.CR cs.AI cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a comprehensive survey of current trends, focusing specifically on physical adversarial attacks. We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features. Furthermore, we explore the specific requirements and challenges associated with executing attacks in the physical world. Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications, including classification, detection, face recognition, semantic segmentation and depth estimation. We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness. We examine how each technique strives to ensure the successful manipulation of DNNs while mitigating the risk of detection and withstanding real-world distortions. Lastly, we discuss the current challenges and outline potential future research directions in the field of physical adversarial attacks. We highlight the need for enhanced defense mechanisms, the exploration of novel attack strategies, the evaluation of attacks in different application domains, and the establishment of standardized benchmarks and evaluation criteria for physical adversarial attacks. Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.
[ { "version": "v1", "created": "Fri, 11 Aug 2023 15:02:19 GMT" } ]
2023-08-14T00:00:00
[ [ "Guesmi", "Amira", "" ], [ "Hanif", "Muhammad Abdullah", "" ], [ "Ouni", "Bassem", "" ], [ "Shafique", "Muhammed", "" ] ]
new_dataset
0.984806
2308.06241
Mohammad Maksood Akhter
Mohammad Maksood Akhter, Devpriya Kanojia
Covid-19 Public Sentiment Analysis for Indian Tweets Classification
null
null
null
null
cs.CL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When an extraordinary event takes place anywhere in the world, social media acts as the fastest carrier of the news, along with the consequences of that event. One can gather much information through social networks regarding the sentiments, behavior, and opinions of people. In this paper, we focus mainly on sentiment analysis of Twitter data from India comprising COVID-19 tweets. We show how Twitter data has been extracted and then how sentiment analysis queries are run on it. This is helpful for analyzing information in tweets where opinions are highly unstructured, heterogeneous, and either positive, negative, or in some cases neutral.
[ { "version": "v1", "created": "Tue, 1 Aug 2023 09:29:55 GMT" } ]
2023-08-14T00:00:00
[ [ "Akhter", "Mohammad Maksood", "" ], [ "Kanojia", "Devpriya", "" ] ]
new_dataset
0.984991
2308.06248
Robin Hesse
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Accepted at ICCV 2023. Code: https://github.com/visinf/funnybirds
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of explainable artificial intelligence (XAI) aims to uncover the inner workings of complex deep neural models. While being crucial for safety-critical domains, XAI inherently lacks ground-truth explanations, making its automatic evaluation an unsolved problem. We address this challenge by proposing a novel synthetic vision dataset, named FunnyBirds, and accompanying automatic evaluation protocols. Our dataset allows performing semantically meaningful image interventions, e.g., removing individual object parts, which has three important implications. First, it enables analyzing explanations on a part level, which is closer to human comprehension than existing methods that evaluate on a pixel level. Second, by comparing the model output for inputs with removed parts, we can estimate ground-truth part importances that should be reflected in the explanations. Third, by mapping individual explanations into a common space of part importances, we can analyze a variety of different explanation types in a single common framework. Using our tools, we report results for 24 different combinations of neural models and XAI methods, demonstrating the strengths and weaknesses of the assessed methods in a fully automatic and systematic manner.
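The part-removal idea above (estimate importance by comparing model outputs with parts removed) can be sketched generically; the toy classifier and part masks below are placeholders, not FunnyBirds assets:

```python
# Hedged sketch of the part-importance idea: remove each part, re-run the
# model, and treat the drop in the target logit as that part's importance.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32)
part_masks = {"beak": torch.zeros(1, 1, 32, 32), "wing": torch.zeros(1, 1, 32, 32)}
part_masks["beak"][..., :8, :8] = 1.0    # hypothetical part locations
part_masks["wing"][..., 16:, 16:] = 1.0

with torch.no_grad():
    target = model(image).argmax(dim=1)
    base = model(image)[0, target]
    for name, mask in part_masks.items():
        ablated = image * (1 - mask)             # "remove" the part
        drop = base - model(ablated)[0, target]  # importance estimate
        print(name, float(drop))
```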
[ { "version": "v1", "created": "Fri, 11 Aug 2023 17:29:02 GMT" } ]
2023-08-14T00:00:00
[ [ "Hesse", "Robin", "" ], [ "Schaub-Meyer", "Simone", "" ], [ "Roth", "Stefan", "" ] ]
new_dataset
0.999252
2112.02399
Longtian Qiu
Longtian Qiu, Renrui Zhang, Ziyu Guo, Ziyao Zeng, Zilu Guo, Yafeng Li, Guangnan Zhang
VT-CLIP: Enhancing Vision-Language Models with Visual-guided Texts
null
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
Contrastive Language-Image Pre-training (CLIP) has drawn increasing attention recently for its transferable visual representation learning. However, due to the semantic gap within datasets, CLIP's pre-trained image-text alignment becomes sub-optimal on downstream tasks, which severely harms its transferring performance. To better adapt the cross-modality embedding space, we propose to enhance CLIP via Visual-guided Texts, named VT-CLIP. Specifically, we guide textual features of different categories to adaptively explore informative regions on the image and aggregate visual features by attention mechanisms. In this way, the texts become visual-guided, namely, more semantically correlated with downstream images, which greatly benefits the category-wise matching process. In few-shot settings, we evaluate our VT-CLIP on 11 well-known classification datasets to demonstrate its effectiveness.
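A minimal sketch of the visual-guided text idea under assumed shapes and random features: class text embeddings attend over image patch features via cross-attention, so each class vector aggregates the regions most relevant to it. This is not the authors' implementation; nn.MultiheadAttention is used as a generic stand-in:

```python
# Sketch only: text features (one per class) attend over visual patch
# features; the attended text is then matched against the pooled image.
import torch
import torch.nn as nn

dim, n_classes, n_patches = 512, 11, 49
text = torch.randn(1, n_classes, dim)      # stand-in CLIP text features
patches = torch.randn(1, n_patches, dim)   # stand-in CLIP patch features

attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
guided_text, weights = attn(query=text, key=patches, value=patches)

image_feat = patches.mean(dim=1)                 # pooled image feature
logits = image_feat @ guided_text.squeeze(0).T   # category-wise matching
print(logits.shape)  # torch.Size([1, 11])
```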
[ { "version": "v1", "created": "Sat, 4 Dec 2021 18:34:24 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 08:23:13 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 15:31:54 GMT" } ]
2023-08-11T00:00:00
[ [ "Qiu", "Longtian", "" ], [ "Zhang", "Renrui", "" ], [ "Guo", "Ziyu", "" ], [ "Zeng", "Ziyao", "" ], [ "Guo", "Zilu", "" ], [ "Li", "Yafeng", "" ], [ "Zhang", "Guangnan", "" ] ]
new_dataset
0.969662
2201.08157
Johannes Hertrich
Fabian Altekr\"uger, Johannes Hertrich
WPPNets and WPPFlows: The Power of Wasserstein Patch Priors for Superresolution
null
SIAM Journal on Imaging Sciences, vol. 16(3), pp. 1033-1067, 2023
10.1137/22M1496542
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploiting image patches instead of whole images has proved to be a powerful approach to tackle various problems in image processing. Recently, Wasserstein patch priors (WPP), which are based on the comparison of the patch distributions of the unknown image and a reference image, were successfully used as data-driven regularizers in the variational formulation of superresolution. However, for each input image, this approach requires the solution of a non-convex minimization problem which is computationally costly. In this paper, we propose to learn two kinds of neural networks in an unsupervised way based on WPP loss functions. First, we show how convolutional neural networks (CNNs) can be incorporated. Once the network, called WPPNet, is learned, it can be very efficiently applied to any input image. Second, we incorporate conditional normalizing flows to provide a tool for uncertainty quantification. Numerical examples demonstrate the very good performance of WPPNets for superresolution in various image classes even if the forward operator is known only approximately.
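The WPP-style patch loss can be sketched with a sliced Wasserstein distance as an easy-to-compute stand-in for the paper's Wasserstein patch prior; image sizes and patch parameters are placeholders:

```python
# Sketch: compare patch distributions of a reconstruction and a reference
# image with a sliced Wasserstein distance (a stand-in, not the paper's loss).
import torch
import torch.nn.functional as F

def patches(img, size=6):
    # img: (1, C, H, W) -> (N, C*size*size) matrix of flattened patches
    return F.unfold(img, kernel_size=size, stride=2).squeeze(0).T

def sliced_wasserstein(x, y, n_proj=64):
    theta = torch.randn(x.shape[1], n_proj)
    theta = theta / theta.norm(dim=0, keepdim=True)      # random directions
    px = (x @ theta).sort(dim=0).values                   # 1-D projections
    py = (y @ theta).sort(dim=0).values
    return ((px - py) ** 2).mean()

recon = torch.rand(1, 1, 64, 64, requires_grad=True)      # network output
reference = torch.rand(1, 1, 64, 64)                      # reference image
loss = sliced_wasserstein(patches(recon), patches(reference))
loss.backward()                                           # usable as a training loss
print(float(loss))
```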
[ { "version": "v1", "created": "Thu, 20 Jan 2022 13:04:19 GMT" }, { "version": "v2", "created": "Thu, 5 May 2022 10:42:47 GMT" }, { "version": "v3", "created": "Thu, 5 Jan 2023 10:09:10 GMT" } ]
2023-08-11T00:00:00
[ [ "Altekrüger", "Fabian", "" ], [ "Hertrich", "Johannes", "" ] ]
new_dataset
0.99226
2202.02270
Jonatan Langlet
Jonatan Langlet, Ran Ben Basat, Gabriele Oliaro, Michael Mitzenmacher, Minlan Yu, Gianni Antichi
Direct Telemetry Access
As appearing in the proceedings of ACM SIGCOMM'23
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine-grained network telemetry is becoming a modern datacenter standard and is the basis of essential applications such as congestion control, load balancing, and advanced troubleshooting. As network size increases and telemetry gets more fine-grained, there is a tremendous growth in the amount of data that needs to be reported from switches to collectors to enable a network-wide view. As a consequence, it is progressively harder to scale data collection systems. We introduce Direct Telemetry Access (DTA), a solution optimized for aggregating and moving hundreds of millions of reports per second from switches into queryable data structures in collectors' memory. DTA is lightweight and greatly reduces overheads at collectors. DTA is built on top of RDMA, and we propose novel and expressive reporting primitives to allow easy integration with existing state-of-the-art telemetry mechanisms such as INT or Marple. We show that DTA significantly improves telemetry collection rates. For example, when used with INT, it can collect and aggregate over 400M reports per second with a single server, improving over the Atomic MultiLog by up to $16$x.
[ { "version": "v1", "created": "Fri, 4 Feb 2022 17:55:09 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 10:28:03 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 10:11:07 GMT" } ]
2023-08-11T00:00:00
[ [ "Langlet", "Jonatan", "" ], [ "Basat", "Ran Ben", "" ], [ "Oliaro", "Gabriele", "" ], [ "Mitzenmacher", "Michael", "" ], [ "Yu", "Minlan", "" ], [ "Antichi", "Gianni", "" ] ]
new_dataset
0.95822
2202.03026
Xiaokang Chen
Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, Jingdong Wang
Context Autoencoder for Self-Supervised Representation Learning
Accepted by International Journal of Computer Vision (IJCV)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE), for self-supervised representation pretraining. We pretrain an encoder by making predictions in the encoded representation space. The pretraining includes two tasks: masked representation prediction - predict the representations for the masked patches, and masked patch reconstruction - reconstruct the masked patches. The network is an encoder-regressor-decoder architecture: the encoder takes the visible patches as input; the regressor predicts the representations of the masked patches, which are expected to be aligned with the representations computed from the encoder, using the representations of visible patches and the positions of visible and masked patches; the decoder reconstructs the masked patches from the predicted encoded representations. The CAE design encourages the separation of learning the encoder (representation) from completing the pretraining tasks of masked representation prediction and masked patch reconstruction, and making predictions in the encoded representation space empirically benefits representation learning. We demonstrate the effectiveness of our CAE through superior transfer performance in downstream tasks: semantic segmentation, object detection and instance segmentation, and classification. The code will be available at https://github.com/Atten4Vis/CAE.
[ { "version": "v1", "created": "Mon, 7 Feb 2022 09:33:45 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 08:42:10 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 11:01:14 GMT" } ]
2023-08-11T00:00:00
[ [ "Chen", "Xiaokang", "" ], [ "Ding", "Mingyu", "" ], [ "Wang", "Xiaodi", "" ], [ "Xin", "Ying", "" ], [ "Mo", "Shentong", "" ], [ "Wang", "Yunhao", "" ], [ "Han", "Shumin", "" ], [ "Luo", "Ping", "" ], [ "Zeng", "Gang", "" ], [ "Wang", "Jingdong", "" ] ]
new_dataset
0.976985
2206.09973
Gleb Kalachev
Gleb Kalachev, Pavel Panteleev
Two-sided Robustly Testable Codes
26 pages, 3 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We show that the tensor product of two random linear codes is robustly testable with high probability. This implies that one can obtain pairs of linear codes such that their product and the product of their dual codes are simultaneously robustly testable. Such two-sided robustly testable codes (with a much weaker form of robustness) were the key ingredient in the recent constructions of asymptotically good quantum LDPC codes, which ensured their linear minimum distance. We hope that the existence of such codes with a stronger form of robustness, shown here, can be used to simplify the proofs and provide better distance bounds in these constructions. We also give new very simple examples of non-robustly testable codes. We show that if the parity-checks of two codes are mutually orthogonal, then their product is not robustly testable. In particular, this implies that the product of a code with its dual can never be robustly testable. We also study a property of a collection of linear codes called product-expansion, which can be viewed as a coboundary expansion of the cochain complex naturally associated with the product of these codes. We show that this property is related to the robust testability and the agreement testability of the products of codes.
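A codeword of the tensor product C1 ⊗ C2 is a matrix whose columns lie in C1 and whose rows lie in C2, so with parity-check matrices H1 and H2 over GF(2), membership reduces to H1 X = 0 and X H2^T = 0 (mod 2). A toy sketch with 3-bit repetition codes (illustrative only, unrelated to the random codes studied in the paper):

```python
# GF(2) membership test for the tensor product of two linear codes.
import numpy as np

H1 = np.array([[1, 1, 0], [0, 1, 1]])  # parity checks of C1 (3-bit repetition)
H2 = np.array([[1, 1, 0], [0, 1, 1]])  # parity checks of C2

def in_product_code(X):
    # columns in C1  <=>  H1 @ X = 0 (mod 2); rows in C2  <=>  X @ H2.T = 0
    return not ((H1 @ X) % 2).any() and not ((X @ H2.T) % 2).any()

print(in_product_code(np.ones((3, 3), dtype=int)))   # True: all rows/cols are 111
print(in_product_code(np.eye(3, dtype=int)))         # False
```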
[ { "version": "v1", "created": "Mon, 20 Jun 2022 19:28:57 GMT" }, { "version": "v2", "created": "Sun, 2 Jul 2023 15:58:35 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 17:12:09 GMT" } ]
2023-08-11T00:00:00
[ [ "Kalachev", "Gleb", "" ], [ "Panteleev", "Pavel", "" ] ]
new_dataset
0.99945
2207.02157
Kumar Vijay Mishra
Tong Wei, Linlong Wu, Kumar Vijay Mishra and M. R. Bhavani Shankar
Multi-IRS-Aided Doppler-Tolerant Wideband DFRC System
16 pages, 8 figures, 2 tables
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent reflecting surface (IRS) is recognized as an enabler of future dual-function radar-communications (DFRC) by improving spectral efficiency, coverage, parameter estimation, and interference suppression. Prior studies on IRS-aided DFRC focus either on narrowband processing, single-IRS deployment, static targets, non-clutter scenario, or on the under-utilized line-of-sight (LoS) and non-line-of-sight (NLoS) paths. In this paper, we address the aforementioned shortcomings by optimizing a wideband DFRC system comprising multiple IRSs and a dual-function base station that jointly processes the LoS and NLoS wideband multi-carrier signals to improve both the communications SINR and the radar SINR in the presence of a moving target and clutter. We formulate the transmit, receive and IRS beamformer design as the maximization of the worst-case radar signal-to-interference-plus-noise ratio (SINR) subject to transmit power and communications SINR. We tackle this nonconvex problem under the alternating optimization framework, where the subproblems are solved by a combination of the Dinkelbach algorithm, consensus alternating direction method of multipliers, and Riemannian steepest descent. Our numerical experiments show that the proposed multi-IRS-aided wideband DFRC provides over $4$ dB radar SINR and $31.7$\% improvement in target detection over a single-IRS system.
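Of the solvers named above, the Dinkelbach algorithm is the easiest to sketch in isolation: it maximizes a ratio f(x)/g(x) by repeatedly maximizing f(x) - λ g(x) and updating λ. The toy 1-D functions below are assumptions; the paper applies the method inside a far larger beamforming design:

```python
# Generic Dinkelbach iteration on a toy scalar ratio-maximization problem.
import numpy as np

x_grid = np.linspace(0.1, 5.0, 500)
f = lambda x: np.log1p(x)         # toy "SINR-like" numerator
g = lambda x: 1.0 + 0.5 * x       # toy cost denominator

lam = 0.0
for _ in range(30):
    # inner subproblem: maximize f(x) - lam * g(x) over the grid
    x_star = x_grid[np.argmax(f(x_grid) - lam * g(x_grid))]
    new_lam = f(x_star) / g(x_star)
    if abs(new_lam - lam) < 1e-9:   # converged to the optimal ratio
        break
    lam = new_lam
print("optimal ratio ~", lam, "attained near x =", x_star)
```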
[ { "version": "v1", "created": "Tue, 5 Jul 2022 16:22:03 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 06:03:16 GMT" } ]
2023-08-11T00:00:00
[ [ "Wei", "Tong", "" ], [ "Wu", "Linlong", "" ], [ "Mishra", "Kumar Vijay", "" ], [ "Shankar", "M. R. Bhavani", "" ] ]
new_dataset
0.996701
2209.04278
Rajitha de Silva
Rajitha de Silva, Grzegorz Cielniak, Gang Wang, Junfeng Gao
Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots
Published in Journal of Field Robotics: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22238
null
null
null
cs.CV cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomous navigation in agricultural environments is challenged by varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware such as RTK-GNSS. This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing datasets for crop row detection do not represent all the possible field variations. A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows. The proposed pipeline segments the crop rows using a deep learning-based method and employs the predicted segmentation mask for extraction of the central crop row using a novel central crop row selection algorithm. The crop row detection algorithm was tested for detection performance and for its capability to support visual servoing along a crop row. The visual servoing-based navigation was tested in a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
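A deliberately simplified stand-in for the central crop row selection step (not the paper's algorithm): score the columns of the predicted mask and pick the peak closest to the image centre; the mask here is synthetic:

```python
# Toy central-row picker on a synthetic segmentation mask.
import numpy as np

mask = np.zeros((100, 100), dtype=np.uint8)
mask[:, 18:24] = 1; mask[:, 48:54] = 1; mask[:, 78:84] = 1  # three fake crop rows

column_score = mask.sum(axis=0).astype(float)
peaks = [c for c in range(1, 99)
         if column_score[c] >= column_score[c - 1]
         and column_score[c] >= column_score[c + 1]
         and column_score[c] > 0]
central = min(peaks, key=lambda c: abs(c - mask.shape[1] // 2))
print("central crop row near column", central)   # -> 50
```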
[ { "version": "v1", "created": "Fri, 9 Sep 2022 12:47:24 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 15:19:34 GMT" } ]
2023-08-11T00:00:00
[ [ "de Silva", "Rajitha", "" ], [ "Cielniak", "Grzegorz", "" ], [ "Wang", "Gang", "" ], [ "Gao", "Junfeng", "" ] ]
new_dataset
0.99508
2209.14408
Rowan Dempster
Eddy Zhou, Alex Zhuang, Alikasim Budhwani, Rowan Dempster, Quanquan Li, Mohammad Al-Sharman, Derek Rayside, and William Melek
RALACs: Action Recognition in Autonomous Vehicles using Interaction Encoding and Optical Flow
null
null
null
null
cs.CV cs.LG cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
When applied to autonomous vehicle (AV) settings, action recognition can enhance an environment model's situational awareness. This is especially relevant in scenarios where traditional geometric descriptions and heuristics in AVs are insufficient. However, action recognition has traditionally been studied for humans, and its limited adaptability to noisy, un-clipped, un-pampered, raw RGB data has limited its application in other fields. To push for the advancement and adoption of action recognition into AVs, this work proposes a novel two-stage action recognition system, termed RALACs. RALACs formulates the problem of action recognition for road scenes, and bridges the gap between it and the established field of human action recognition. This work shows how attention layers can be useful for encoding the relations across agents, and stresses how such a scheme can be class-agnostic. Furthermore, to address the dynamic nature of agents on the road, RALACs constructs a novel approach to adapting Region of Interest (ROI) Alignment to agent tracks for downstream action classification. Finally, our scheme also considers the problem of active agent detection, and utilizes a novel application of fusing optical flow maps to discern relevant agents in a road scene. We show that our proposed scheme can outperform the baseline on the ICCV2021 Road Challenge dataset, and by deploying it on a real vehicle platform, we provide preliminary insight into the usefulness of action recognition in decision making.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 20:36:49 GMT" }, { "version": "v2", "created": "Wed, 9 Aug 2023 18:30:48 GMT" } ]
2023-08-11T00:00:00
[ [ "Zhou", "Eddy", "" ], [ "Zhuang", "Alex", "" ], [ "Budhwani", "Alikasim", "" ], [ "Dempster", "Rowan", "" ], [ "Li", "Quanquan", "" ], [ "Al-Sharman", "Mohammad", "" ], [ "Rayside", "Derek", "" ], [ "Melek", "William", "" ] ]
new_dataset
0.998025
2303.13381
Wouter Jansen
Wouter Jansen, Erik Verreycken, Anthony Schenck, Jean-Edouard Blanquart, Connor Verhulst, Nico Huebel, Jan Steckel
Cosys-AirSim: A Real-Time Simulation Framework Expanded for Complex Industrial Applications
Presented at Annual Modeling and Simulation Conference, ANNSIM 2023, https://ieeexplore.ieee.org/abstract/document/10155352
null
null
null
cs.RO eess.SP
http://creativecommons.org/licenses/by-sa/4.0/
Within academia and industry, there has been a need for expansive simulation frameworks that include model-based simulation of sensors, mobile vehicles, and the environment around them. To this end, the modular, real-time, and open-source AirSim framework has been a popular community-built system that fulfills some of those needs. However, the framework required adding systems to serve some complex industrial applications, including designing and testing new sensor modalities, Simultaneous Localization And Mapping (SLAM), autonomous navigation algorithms, and transfer learning with machine learning models. In this work, we discuss the modification and additions to our open-source version of the AirSim simulation framework, including new sensor modalities, vehicle types, and methods to generate realistic environments with changeable objects procedurally. Furthermore, we show the various applications and use cases the framework can serve.
[ { "version": "v1", "created": "Thu, 23 Mar 2023 15:48:28 GMT" }, { "version": "v2", "created": "Tue, 28 Mar 2023 13:27:16 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 11:15:48 GMT" } ]
2023-08-11T00:00:00
[ [ "Jansen", "Wouter", "" ], [ "Verreycken", "Erik", "" ], [ "Schenck", "Anthony", "" ], [ "Blanquart", "Jean-Edouard", "" ], [ "Verhulst", "Connor", "" ], [ "Huebel", "Nico", "" ], [ "Steckel", "Jan", "" ] ]
new_dataset
0.999307
2304.14520
Alexander Kyuroson
Alexander Kyuroson, Niklas Dahlquist, Nikolaos Stathoulopoulos, Vignesh Kottayam Viswanathan, Anton Koval and George Nikolakopoulos
Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration
Accepted in the 31st Mediterranean Conference on Control and Automation [MED2023]
null
10.1109/MED59994.2023.10185906
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithms for autonomous navigation in environments without Global Navigation Satellite System (GNSS) coverage mainly rely on onboard perception systems. These systems commonly incorporate sensors like cameras and Light Detection and Rangings (LiDARs), the performance of which may degrade in the presence of aerosol particles. Thus, there is a need for fusing acquired data from these sensors with data from Radio Detection and Rangings (RADARs), which can penetrate through such particles. Overall, this will improve the performance of localization and collision avoidance algorithms under such environmental conditions. This paper introduces a multimodal dataset from a harsh and unstructured underground environment with aerosol particles. A detailed description of the onboard sensors and of the environment where the dataset was collected is presented to enable full evaluation of the acquired data. Furthermore, the dataset contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format to facilitate the evaluation of navigation and localization algorithms in such environments. In contrast to the existing datasets, the focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data. Therefore, to validate the dataset, a preliminary comparison of odometry from onboard LiDARs is presented.
[ { "version": "v1", "created": "Thu, 27 Apr 2023 20:21:18 GMT" }, { "version": "v2", "created": "Wed, 21 Jun 2023 09:56:57 GMT" } ]
2023-08-11T00:00:00
[ [ "Kyuroson", "Alexander", "" ], [ "Dahlquist", "Niklas", "" ], [ "Stathoulopoulos", "Nikolaos", "" ], [ "Viswanathan", "Vignesh Kottayam", "" ], [ "Koval", "Anton", "" ], [ "Nikolakopoulos", "George", "" ] ]
new_dataset
0.999686
2306.10634
Kai Li
Kai Li, Darren Lee, Shixuan Guan
Understanding the Cryptocurrency Free Giveaway Scam Disseminated on Twitter Lists
9 pages, 5 figures
null
null
null
cs.CR cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a comprehensive analysis of the cryptocurrency free giveaway scam disseminated in a new distribution channel, Twitter lists. To collect and detect the scam in this channel, unlike existing scam detection systems that rely on manual effort, this paper develops a fully automated scam detection system, \textit{GiveawayScamHunter}, to continuously collect lists from Twitter and utilize a Nature-Language-Processing (NLP) model to automatically detect the free giveaway scam and extract the scam cryptocurrency address. By running \textit{GiveawayScamHunter} from June 2022 to June 2023, we detected 95,111 free giveaway scam lists on Twitter that were created by thousands of Twitter accounts. Through analyzing the list creator accounts, our work reveals that scammers have combined different strategies to spread the scam, including compromising popular accounts and creating spam accounts on Twitter. Our analysis result shows that 43.9\% of spam accounts still remain active as of this writing. Furthermore, we collected 327 free giveaway domains and 121 new scam cryptocurrency addresses. By tracking the transactions of the scam cryptocurrency addresses, this work uncovers that over 365 victims have been attacked by the scam, resulting in an estimated financial loss of 872K USD. Overall, this work sheds light on the tactics, scale, and impact of free giveaway scams disseminated on Twitter lists, emphasizing the urgent need for effective detection and prevention mechanisms to protect social media users from such fraudulent activity.
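A toy stand-in for the detection pipeline (GiveawayScamHunter itself uses a trained NLP model, and the training strings below are invented): a TF-IDF + logistic regression text classifier over list names, plus a regex for Ethereum-style addresses:

```python
# Sketch: scam-text classification and crypto-address extraction.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["FREE 5000 ETH giveaway send now", "claim your crypto airdrop bonus",
         "weekly ML paper reading group", "photos of my garden birds"]
labels = [1, 1, 0, 0]  # 1 = scam-like (invented toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(clf.predict(["limited giveaway: double your ETH today"]))  # -> [1]

eth_addr = re.compile(r"0x[a-fA-F0-9]{40}")          # Ethereum-style address
print(eth_addr.findall("send to 0x" + "ab" * 20 + " to claim"))
```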
[ { "version": "v1", "created": "Sun, 18 Jun 2023 20:10:54 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 17:50:57 GMT" } ]
2023-08-11T00:00:00
[ [ "Li", "Kai", "" ], [ "Lee", "Darren", "" ], [ "Guan", "Shixuan", "" ] ]
new_dataset
0.997964
2306.11029
Delong Chen
Fan Liu, Delong Chen, Zhangqingyun Guan, Xiaocong Zhou, Jiale Zhu, Jun Zhou
RemoteCLIP: A Vision Language Foundation Model for Remote Sensing
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
General-purpose foundation models have become increasingly important in the field of artificial intelligence. While self-supervised learning (SSL) and Masked Image Modeling (MIM) have led to promising results in building such foundation models for remote sensing, these models primarily learn low-level features, require annotated data for fine-tuning, and are not applicable to retrieval and zero-shot applications due to the lack of language understanding. In response to these limitations, we propose RemoteCLIP, the first vision-language foundation model for remote sensing that aims to learn robust visual features with rich semantics, as well as aligned text embeddings for seamless downstream application. To address the scarcity of pre-training data, we leverage data scaling, converting heterogeneous annotations with Box-to-Caption (B2C) and Mask-to-Box (M2B) conversion, and further incorporating UAV imagery, resulting in a 12x larger pretraining dataset. RemoteCLIP can be applied to a variety of downstream tasks, including zero-shot image classification, linear probing, k-NN classification, few-shot classification, image-text retrieval, and object counting. Evaluations on 16 datasets, including a newly introduced RemoteCount benchmark to test the object counting ability, show that RemoteCLIP consistently outperforms baseline foundation models across different model scales. Impressively, RemoteCLIP outperforms the previous SoTA by 9.14% mean recall on the RSITMD dataset and by 8.92% on the RSICD dataset. For zero-shot classification, RemoteCLIP outperforms the CLIP baseline by up to 6.39% average accuracy on 12 downstream datasets. Pretrained models are available at https://github.com/ChenDelong1999/RemoteCLIP .
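The Box-to-Caption (B2C) idea can be illustrated with a simple template; the annotation format and caption wording below are assumptions, not the paper's exact conversion rules:

```python
# Hypothetical B2C conversion: detection annotations -> caption text.
from collections import Counter

def box_to_caption(annotations):
    counts = Counter(a["category"] for a in annotations)
    parts = [f"{n} {cat}{'s' if n > 1 else ''}" for cat, n in counts.items()]
    return "A remote sensing image containing " + " and ".join(parts) + "."

anns = [{"category": "airplane", "bbox": [10, 10, 50, 40]},
        {"category": "airplane", "bbox": [80, 20, 120, 55]},
        {"category": "storage tank", "bbox": [200, 150, 260, 210]}]
print(box_to_caption(anns))
# -> "A remote sensing image containing 2 airplanes and 1 storage tank."
```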
[ { "version": "v1", "created": "Mon, 19 Jun 2023 15:46:41 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 02:05:45 GMT" } ]
2023-08-11T00:00:00
[ [ "Liu", "Fan", "" ], [ "Chen", "Delong", "" ], [ "Guan", "Zhangqingyun", "" ], [ "Zhou", "Xiaocong", "" ], [ "Zhu", "Jiale", "" ], [ "Zhou", "Jun", "" ] ]
new_dataset
0.999614
2307.14527
Thomas Manzini
Thomas Manzini, Robin Murphy
Open Problems in Computer Vision for Wilderness SAR and The Search for Patricia Wu-Murad
10 pages, 10 figures
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper details the challenges in applying two computer vision systems, an EfficientDET supervised learning model and the unsupervised RX spectral classifier, to 98.9 GB of drone imagery from the Wu-Murad wilderness search and rescue (WSAR) effort in Japan and identifies 3 directions for future research. There have been at least 19 proposed approaches and 3 datasets aimed at locating missing persons in drone imagery, but only 3 approaches (2 unsupervised and 1 of an unknown structure) are referenced in the literature as having been used in an actual WSAR operation. Of these proposed approaches, the EfficientDET architecture and the unsupervised spectral RX classifier were selected as the most appropriate for this setting. The EfficientDET model was applied to the HERIDAL dataset and despite achieving performance that is statistically equivalent to the state-of-the-art, the model fails to translate to the real world in terms of false positives (e.g., identifying tree limbs and rocks as people), and false negatives (e.g., failing to identify members of the search team). The poor results in practice for algorithms that showed good results on datasets suggest 3 areas of future research: more realistic datasets for wilderness SAR, computer vision models that are capable of seamlessly handling the variety of imagery that can be collected during actual WSAR operations, and better alignment on performance measures.
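The RX detector named above has a standard textbook form: the per-pixel Mahalanobis distance from the global background statistics. A minimal sketch on a random cube follows; the authors' operational setup is of course more involved:

```python
# Textbook RX spectral anomaly detector on a synthetic hyperspectral cube.
import numpy as np

cube = np.random.rand(64, 64, 8)               # H x W x bands stand-in image
X = cube.reshape(-1, cube.shape[-1])
mu = X.mean(axis=0)                            # background mean spectrum
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

diff = X - mu
rx_score = np.einsum("ij,jk,ik->i", diff, cov_inv, diff).reshape(cube.shape[:2])
threshold = np.percentile(rx_score, 99.9)      # flag the most anomalous pixels
print("candidate pixels:", np.argwhere(rx_score > threshold)[:5])
```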
[ { "version": "v1", "created": "Wed, 26 Jul 2023 22:09:29 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 01:46:11 GMT" } ]
2023-08-11T00:00:00
[ [ "Manzini", "Thomas", "" ], [ "Murphy", "Robin", "" ] ]
new_dataset
0.98733
2308.03463
Zhongjie Duan
Zhongjie Duan, Lizhou You, Chengyu Wang, Cen Chen, Ziheng Wu, Weining Qian, Jun Huang
DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis
9 pages, 6 figures
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
In recent years, diffusion models have emerged as the most powerful approach in image synthesis. However, applying these models directly to video synthesis presents challenges, as it often leads to noticeable flickering contents. Although recently proposed zero-shot methods can alleviate flicker to some extent, we still struggle to generate coherent videos. In this paper, we propose DiffSynth, a novel approach that aims to convert image synthesis pipelines to video synthesis pipelines. DiffSynth consists of two key components: a latent in-iteration deflickering framework and a video deflickering algorithm. The latent in-iteration deflickering framework applies video deflickering to the latent space of diffusion models, effectively preventing flicker accumulation in intermediate steps. Additionally, we propose a video deflickering algorithm, named patch blending algorithm, that remaps objects in different frames and blends them together to enhance video consistency. One of the notable advantages of DiffSynth is its general applicability to various video synthesis tasks, including text-guided video stylization, fashion video synthesis, image-guided video stylization, video restoring, and 3D rendering. In the task of text-guided video stylization, we make it possible to synthesize high-quality videos without cherry-picking. The experimental results demonstrate the effectiveness of DiffSynth. All videos can be viewed on our project page. Source codes will also be released.
[ { "version": "v1", "created": "Mon, 7 Aug 2023 10:41:52 GMT" }, { "version": "v2", "created": "Tue, 8 Aug 2023 07:54:55 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 02:26:16 GMT" } ]
2023-08-11T00:00:00
[ [ "Duan", "Zhongjie", "" ], [ "You", "Lizhou", "" ], [ "Wang", "Chengyu", "" ], [ "Chen", "Cen", "" ], [ "Wu", "Ziheng", "" ], [ "Qian", "Weining", "" ], [ "Huang", "Jun", "" ] ]
new_dataset
0.996588
2308.04313
Jan Egger
Jan Egger, Christina Gsaxner, Xiaojun Chen, Jiang Bian, Jens Kleesiek, Behrus Puladi
Apple Vision Pro for Healthcare: "The Ultimate Display"? -- Entering the Wonderland of Precision
This is a Preprint under CC BY. This work was supported by NIH/NIAID R01AI172875, NIH/NCATS UL1 TR001427, the REACT-EU project KITE and enFaced 2.0 (FWF KLI 1044). B. Puladi was funded by the Medical Faculty of the RWTH Aachen University as part of the Clinician Scientist Program. C. Gsaxner was funded by the Advanced Research Opportunities Program from the RWTH Aachen University
null
null
null
cs.AI cs.GR cs.HC
http://creativecommons.org/licenses/by/4.0/
At the Worldwide Developers Conference (WWDC) in June 2023, Apple introduced the Vision Pro. The Vision Pro is a Mixed Reality (MR) headset, more specifically it is a Virtual Reality (VR) device with an additional Video See-Through (VST) capability. The VST capability turns the Vision Pro also into an Augmented Reality (AR) device. The AR feature is enabled by streaming the real world via cameras to the (VR) screens in front of the user's eyes. This is of course not unique and similar to other devices, like the Varjo XR-3. Nevertheless, the Vision Pro has some interesting features, like an inside-out screen that can show the headset wearers' eyes to "outsiders" or a button on the top, called "Digital Crown", that allows you to seamlessly blend digital content with your physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile, compared to the Varjo XR-3. This could actually come closer to the "Ultimate Display", which Ivan Sutherland had already sketched in 1965. Not available to the public yet, like the Ultimate Display, we want to take a look into the crystal ball in this perspective to see if it can overcome some clinical challenges that - especially - AR still faces in the medical domain, but also go beyond and discuss if the Vision Pro could support clinicians in essential tasks to spend more time with their patients.
[ { "version": "v1", "created": "Tue, 8 Aug 2023 15:01:51 GMT" }, { "version": "v2", "created": "Wed, 9 Aug 2023 13:34:57 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 05:03:39 GMT" } ]
2023-08-11T00:00:00
[ [ "Egger", "Jan", "" ], [ "Gsaxner", "Christina", "" ], [ "Chen", "Xiaojun", "" ], [ "Bian", "Jiang", "" ], [ "Kleesiek", "Jens", "" ], [ "Puladi", "Behrus", "" ] ]
new_dataset
0.992309
2308.04995
Fadi Boutros
Fadi Boutros, Jonas Henry Grebe, Arjan Kuijper, Naser Damer
IDiff-Face: Synthetic-based Face Recognition through Fizzy Identity-Conditioned Diffusion Models
Accepted at ICCV2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The availability of large-scale authentic face databases has been crucial to the significant advances made in face recognition research over the past decade. However, legal and ethical concerns led to the recent retraction of many of these databases by their creators, raising questions about the continuity of future face recognition research without one of its key resources. Synthetic datasets have emerged as a promising alternative to privacy-sensitive authentic data for face recognition development. However, recent synthetic datasets that are used to train face recognition models suffer either from limitations in intra-class diversity or cross-class (identity) discrimination, leading to less optimal accuracies, far away from the accuracies achieved by models trained on authentic data. This paper targets this issue by proposing IDiff-Face, a novel approach based on conditional latent diffusion models for synthetic identity generation with realistic identity variations for face recognition training. Through extensive evaluations, our proposed synthetic-based face recognition approach pushed the limits of state-of-the-art performance, achieving, for example, 98.00% accuracy on the Labeled Faces in the Wild (LFW) benchmark, far ahead of the recent synthetic-based face recognition solutions with 95.40% and bridging the gap to authentic-based face recognition with 99.82% accuracy.
[ { "version": "v1", "created": "Wed, 9 Aug 2023 14:48:31 GMT" }, { "version": "v2", "created": "Thu, 10 Aug 2023 10:43:53 GMT" } ]
2023-08-11T00:00:00
[ [ "Boutros", "Fadi", "" ], [ "Grebe", "Jonas Henry", "" ], [ "Kuijper", "Arjan", "" ], [ "Damer", "Naser", "" ] ]
new_dataset
0.998657
2308.05179
Md Simul Hasan Talukder
Md. Simul Hasan Talukder, Mohammad Raziuddin Chowdhury, Md Sakib Ullah Sourav, Abdullah Al Rakin, Shabbir Ahmed Shuvo, Rejwan Bin Sulaiman, Musarrat Saberin Nipun, Muntarin Islam, Mst Rumpa Islam, Md Aminul Islam, Zubaer Haque
JutePestDetect: An Intelligent Approach for Jute Pest Identification Using Fine-Tuned Transfer Learning
29 Pages, 7 Tables, 7 Figures, 5 Appendix
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
In certain Asian countries, Jute is one of the primary sources of income and Gross Domestic Product (GDP) for the agricultural sector. Like many other crops, Jute is prone to pest infestations, and their identification is typically made visually in countries like Bangladesh, India, Myanmar, and China. In addition, this method is time-consuming, challenging, and somewhat imprecise, which poses a substantial financial risk. To address this issue, the study proposes a high-performing and resilient transfer learning (TL) based JutePestDetect model to identify jute pests at an early stage. Firstly, we prepared a jute pest dataset containing 17 classes with around 380 photos per pest class, which were evaluated after manual and automatic pre-processing and cleaning, such as background removal and resizing. Subsequently, five prominent pre-trained models - DenseNet201, InceptionV3, MobileNetV2, VGG19, and ResNet50 - were selected from a previous study to design the JutePestDetect model. Each model was revised by replacing the classification layer with a global average pooling layer and incorporating a dropout layer for regularization. To evaluate the models' performance, various metrics such as precision, recall, F1 score, ROC curve, and confusion matrix were employed. These analyses provided additional insights for determining the efficacy of the models. Among them, the customized regularized DenseNet201-based JutePestDetect model outperformed the others, achieving an impressive accuracy of 99%. As a result, our proposed method and strategy offer an enhanced approach to pest identification in the case of Jute, which can significantly benefit farmers worldwide.
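The head design stated above (backbone + global average pooling + dropout + 17-way classifier) maps directly onto a short Keras sketch; the input size and dropout rate are assumptions, and weights="imagenet" triggers a download:

```python
# Keras sketch of the described head: DenseNet201 + GAP + dropout + softmax.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze for the first stage; unfreeze to fine-tune

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),                     # regularization layer
    tf.keras.layers.Dense(17, activation="softmax"),  # 17 jute pest classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```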
[ { "version": "v1", "created": "Sun, 28 May 2023 15:51:35 GMT" } ]
2023-08-11T00:00:00
[ [ "Talukder", "Md. Simul Hasan", "" ], [ "Chowdhury", "Mohammad Raziuddin", "" ], [ "Sourav", "Md Sakib Ullah", "" ], [ "Rakin", "Abdullah Al", "" ], [ "Shuvo", "Shabbir Ahmed", "" ], [ "Sulaiman", "Rejwan Bin", "" ], [ "Nipun", "Musarrat Saberin", "" ], [ "Islam", "Muntarin", "" ], [ "Islam", "Mst Rumpa", "" ], [ "Islam", "Md Aminul", "" ], [ "Haque", "Zubaer", "" ] ]
new_dataset
0.998547
2308.05184
John Chung
John Joon Young Chung, Eytan Adar
PromptPaint: Steering Text-to-Image Generation Through Paint Medium-like Interactions
Accepted to UIST2023
null
10.1145/3586183.3606777
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While diffusion-based text-to-image (T2I) models provide a simple and powerful way to generate images, guiding this generation remains a challenge. For concepts that are difficult to describe through language, users may struggle to create prompts. Moreover, many of these models are built as end-to-end systems, lacking support for iterative shaping of the image. In response, we introduce PromptPaint, which combines T2I generation with interactions that model how we use colored paints. PromptPaint allows users to go beyond language to mix prompts that express challenging concepts. Just as we iteratively tune colors through layered placements of paint on a physical canvas, PromptPaint similarly allows users to apply different prompts to different canvas areas and times of the generative process. Through a set of studies, we characterize different approaches for mixing prompts, design trade-offs, and socio-technical challenges for generative models. With PromptPaint we provide insight into future steerable generative tools.
[ { "version": "v1", "created": "Wed, 9 Aug 2023 18:41:11 GMT" } ]
2023-08-11T00:00:00
[ [ "Chung", "John Joon Young", "" ], [ "Adar", "Eytan", "" ] ]
new_dataset
0.997915
2308.05207
Orestis Papadigenopoulos
Vineet Goyal, Salal Humair, Orestis Papadigenopoulos, Assaf Zeevi
MNL-Prophet: Sequential Assortment Selection under Uncertainty
null
null
null
null
cs.DS cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to numerous applications in retail and (online) advertising, the problem of assortment selection has been widely studied under many combinations of discrete choice models and feasibility constraints. In many situations, however, an assortment of products has to be constructed gradually and without accurate knowledge of all possible alternatives; in such cases, existing offline approaches become inapplicable. We consider a stochastic variant of the assortment selection problem, where the parameters that determine the revenue and (relative) demand of each item are jointly drawn from some known item-specific distribution. The items are observed sequentially in an arbitrary and unknown order; upon observing the realized parameters of each item, the decision-maker decides irrevocably whether to include it in the constructed assortment, or forfeit it forever. The objective is to maximize the expected total revenue of the constructed assortment, relative to that of an offline algorithm which foresees all the parameter realizations and computes the optimal assortment. We provide simple threshold-based online policies for the unconstrained and cardinality-constrained versions of the problem under a natural class of substitutable choice models; as we show, our policies are (worst-case) optimal under the celebrated Multinomial Logit choice model. We extend our results to the case of knapsack constraints and discuss interesting connections to the Prophet Inequality problem, which is already subsumed by our setting.
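For flavour, here is the classic single-item prophet threshold rule on a toy stream; the paper's policies operate on full assortments under MNL choice, so this is only the simplest relative of the idea:

```python
# Single-threshold prophet rule: accept the first value above tau = E[max]/2.
import numpy as np

rng = np.random.default_rng(0)
n = 20
tau = rng.exponential(1.0, size=(200_000, n)).max(axis=1).mean() / 2  # Monte Carlo

ratios = []
for _ in range(10_000):
    values = rng.exponential(1.0, size=n)            # revealed one by one
    picked = next((v for v in values if v >= tau), values[-1])
    ratios.append(picked / values.max())             # vs. the "prophet"
print("average competitive ratio ~", round(float(np.mean(ratios)), 3))
```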
[ { "version": "v1", "created": "Wed, 9 Aug 2023 20:02:59 GMT" } ]
2023-08-11T00:00:00
[ [ "Goyal", "Vineet", "" ], [ "Humair", "Salal", "" ], [ "Papadigenopoulos", "Orestis", "" ], [ "Zeevi", "Assaf", "" ] ]
new_dataset
0.989393
2308.05219
Elizabeth Hou
Elizabeth M. Hou, Gregory Castanon
Decoding Layer Saliency in Language Transformers
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a strategy for identifying textual saliency in large-scale language models applied to classification tasks. In visual networks where saliency is more well-studied, saliency is naturally localized through the convolutional layers of the network; however, the same is not true in modern transformer-stack networks used to process natural language. We adapt gradient-based saliency methods for these networks, propose a method for evaluating the degree of semantic coherence of each layer, and demonstrate consistent improvement over numerous other methods for textual saliency on multiple benchmark classification datasets. Our approach requires no additional training or access to labelled data, and is comparatively very computationally efficient.
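A self-contained sketch of gradient-times-input token saliency on a tiny randomly initialized transformer follows; the paper's contribution is adapting and evaluating this family of methods per layer of large pretrained stacks, which this sketch does not reproduce:

```python
# Gradient x input saliency at the embedding layer of a toy transformer.
import torch
import torch.nn as nn

vocab, dim, n_tokens = 100, 32, 10
emb = nn.Embedding(vocab, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2)
head = nn.Linear(dim, 2)

tokens = torch.randint(0, vocab, (1, n_tokens))
x = emb(tokens)
x.retain_grad()                        # keep gradient at the embedding layer
logits = head(encoder(x).mean(dim=1))
logits[0, logits.argmax()].backward()  # backprop the predicted-class score

saliency = (x.grad * x).sum(dim=-1).squeeze(0)   # grad x input, per token
print(saliency)
```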
[ { "version": "v1", "created": "Wed, 9 Aug 2023 20:53:22 GMT" } ]
2023-08-11T00:00:00
[ [ "Hou", "Elizabeth M.", "" ], [ "Castanon", "Gregory", "" ] ]
new_dataset
0.986865
2308.05221
Hangjie Shi
Hangjie Shi, Leslie Ball, Govind Thattai, Desheng Zhang, Lucy Hu, Qiaozi Gao, Suhaila Shakiah, Xiaofeng Gao, Aishwarya Padmakumar, Bofei Yang, Cadence Chung, Dinakar Guthy, Gaurav Sukhatme, Karthika Arumugam, Matthew Wen, Osman Ipek, Patrick Lange, Rohan Khanna, Shreyas Pansare, Vasu Sharma, Chao Zhang, Cris Flagg, Daniel Pressel, Lavina Vaz, Luke Dai, Prasoon Goyal, Sattvik Sahai, Shaohua Liu, Yao Lu, Anna Gottardi, Shui Hu, Yang Liu, Dilek Hakkani-Tur, Kate Bland, Heather Rocker, James Jeun, Yadunandana Rao, Michael Johnston, Akshaya Iyengar, Arindam Mandal, Prem Natarajan, Reza Ghanadan
Alexa, play with robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI
null
null
null
null
cs.HC cs.AI cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Alexa Prize program has empowered numerous university students to explore, experiment, and showcase their talents in building conversational agents through challenges like the SocialBot Grand Challenge and the TaskBot Challenge. As conversational agents increasingly appear in multimodal and embodied contexts, it is important to explore the affordances of conversational interaction augmented with computer vision and physical embodiment. This paper describes the SimBot Challenge, a new challenge in which university teams compete to build robot assistants that complete tasks in a simulated physical environment. This paper provides an overview of the SimBot Challenge, which included both online and offline challenge phases. We describe the infrastructure and support provided to the teams including Alexa Arena, the simulated environment, and the ML toolkit provided to teams to accelerate their building of vision and language models. We summarize the approaches the participating teams took to overcome research challenges and extract key lessons learned. Finally, we provide analysis of the performance of the competing SimBots during the competition.
[ { "version": "v1", "created": "Wed, 9 Aug 2023 20:56:56 GMT" } ]
2023-08-11T00:00:00
[ [ "Shi", "Hangjie", "" ], [ "Ball", "Leslie", "" ], [ "Thattai", "Govind", "" ], [ "Zhang", "Desheng", "" ], [ "Hu", "Lucy", "" ], [ "Gao", "Qiaozi", "" ], [ "Shakiah", "Suhaila", "" ], [ "Gao", "Xiaofeng", "" ], [ "Padmakumar", "Aishwarya", "" ], [ "Yang", "Bofei", "" ], [ "Chung", "Cadence", "" ], [ "Guthy", "Dinakar", "" ], [ "Sukhatme", "Gaurav", "" ], [ "Arumugam", "Karthika", "" ], [ "Wen", "Matthew", "" ], [ "Ipek", "Osman", "" ], [ "Lange", "Patrick", "" ], [ "Khanna", "Rohan", "" ], [ "Pansare", "Shreyas", "" ], [ "Sharma", "Vasu", "" ], [ "Zhang", "Chao", "" ], [ "Flagg", "Cris", "" ], [ "Pressel", "Daniel", "" ], [ "Vaz", "Lavina", "" ], [ "Dai", "Luke", "" ], [ "Goyal", "Prasoon", "" ], [ "Sahai", "Sattvik", "" ], [ "Liu", "Shaohua", "" ], [ "Lu", "Yao", "" ], [ "Gottardi", "Anna", "" ], [ "Hu", "Shui", "" ], [ "Liu", "Yang", "" ], [ "Hakkani-Tur", "Dilek", "" ], [ "Bland", "Kate", "" ], [ "Rocker", "Heather", "" ], [ "Jeun", "James", "" ], [ "Rao", "Yadunandana", "" ], [ "Johnston", "Michael", "" ], [ "Iyengar", "Akshaya", "" ], [ "Mandal", "Arindam", "" ], [ "Natarajan", "Prem", "" ], [ "Ghanadan", "Reza", "" ] ]
new_dataset
0.996865
2308.05264
Soumyaroop Nandi
Soumyaroop Nandi, Prem Natarajan, Wael Abd-Almageed
TrainFors: A Large Benchmark Training Dataset for Image Manipulation Detection and Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evaluation datasets and metrics for image manipulation detection and localization (IMDL) research have been standardized, but the training dataset for such a task is still nonstandard. Previous researchers have used unconventional and deviating datasets to train neural networks for detecting image forgeries and localizing pixel maps of manipulated regions. For a fair comparison, the training set, test set, and evaluation metrics should be consistent. Hence, comparing the existing methods may not seem fair, as the results depend heavily on the training datasets as well as the model architecture. Moreover, none of the previous works releases the synthetic training dataset used for the IMDL task. We propose a standardized benchmark training dataset for image splicing, copy-move forgery, removal forgery, and image enhancement forgery. Furthermore, we identify the problems with the existing IMDL datasets and propose the required modifications. We also train the state-of-the-art IMDL methods on our proposed TrainFors dataset for a fair evaluation and report the actual performance of these methods under similar conditions.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 00:26:34 GMT" } ]
2023-08-11T00:00:00
[ [ "Nandi", "Soumyaroop", "" ], [ "Natarajan", "Prem", "" ], [ "Abd-Almageed", "Wael", "" ] ]
new_dataset
0.99858
2308.05278
Jo\~ao Carneiro
Paulo Trezentos, Ricardo Capote, Tiago Teodoro, Jo\~ao Carneiro
DCM: A Developers Certification Model for Mobile Ecosystems
8 pages, 4 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article introduces a distributed model of trust for app developers in Android and iOS mobile ecosystems. The model aims to allow the co-existence of multiple app stores and distribution channels while retaining a high level of safety for mobile device users and requiring minimal changes to current mobile operating systems. The Developers Certification Model (DCM) is a trust model for Android and iOS that aims to distinguish legitimate applications from security threats to user safety by answering the question: "is the developer of this app trustable?" It proposes security by design, where safety relies on a chain of trust mapping real-world levels of trust across organizations. For the technical implementation, DCM is heavily inspired by the SSL/TLS certification protocol, a proven model that has been working for over 30 years.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 01:44:45 GMT" } ]
2023-08-11T00:00:00
[ [ "Trezentos", "Paulo", "" ], [ "Capote", "Ricardo", "" ], [ "Teodoro", "Tiago", "" ], [ "Carneiro", "João", "" ] ]
new_dataset
0.999576
2308.05334
Dabin Kim
Dabin Kim, Matthias Pezzutto, Luca Schenato, and H. Jin Kim
Visibility-Constrained Control of Multirotor via Reference Governor
8 pages, 6 figures, Accepted to 62nd IEEE Conference on Decision and Control (CDC 2023)
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
For safe vision-based control applications, perception-related constraints have to be satisfied in addition to other state constraints. In this paper, we deal with the problem where a multirotor equipped with a camera needs to maintain the visibility of a point of interest while tracking a reference given by a high-level planner. We devise a method based on reference governor that, differently from existing solutions, is able to enforce control-level visibility constraints with theoretically assured feasibility. To this end, we design a new type of reference governor for linear systems with polynomial constraints which is capable of handling time-varying references. The proposed solution is implemented online for the real-time multirotor control with visibility constraints and validated with simulations and an actual hardware experiment.
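A minimal scalar reference governor sketch follows, assuming toy stable dynamics and a box state constraint in place of the paper's visibility constraints: at each step, admit the largest fraction of the desired reference change whose predicted trajectory stays feasible:

```python
# Toy scalar reference governor: a stand-in for the constrained multirotor case.
import numpy as np

a, b = 0.9, 0.1            # stable closed-loop dynamics x+ = a*x + b*v
x, v, r, x_max = 0.0, 0.0, 5.0, 1.0   # state, governed reference, target, bound

def feasible(v_cand, x0, horizon=200):
    xk = x0
    for _ in range(horizon):           # finite-horizon constraint prediction
        xk = a * xk + b * v_cand
        if abs(xk) > x_max:
            return False
    return True

for _ in range(30):
    kappa = max(k for k in np.linspace(0, 1, 101)
                if feasible(v + k * (r - v), x))
    v += kappa * (r - v)   # kappa = 0 (keep previous reference) stays feasible
    x = a * x + b * v
print("governed reference:", round(v, 3), "state:", round(x, 3))
```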
[ { "version": "v1", "created": "Thu, 10 Aug 2023 04:48:34 GMT" } ]
2023-08-11T00:00:00
[ [ "Kim", "Dabin", "" ], [ "Pezzutto", "Matthias", "" ], [ "Schenato", "Luca", "" ], [ "Kim", "H. Jin", "" ] ]
new_dataset
0.99401
2308.05336
Mehrnoush ShamsFard
Vahide Tajalli, Fateme Kalantari and Mehrnoush Shamsfard
Developing an Informal-Formal Persian Corpus
16 pages, 1 Figure and 3 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Informal language is a style of spoken or written language frequently used in casual conversations, social media, weblogs, emails and text messages. In informal writing, a language undergoes lexical and/or syntactic changes that vary from one language to another. Persian is one of the languages with many differences between its formal and informal styles of writing; thus, developing informal language processing tools for it seems necessary. Such a converter needs a large aligned parallel corpus of colloquial-formal sentences, which can also be useful for linguists to extract a regulated grammar and orthography for colloquial Persian, as has been done for the formal language. In this paper we explain our methodology for building a parallel corpus of 50,000 sentence pairs with alignments at the word/phrase level. The sentences were selected to cover almost all kinds of lexical and syntactic changes between informal and formal Persian; therefore, both exploring and collecting from the different resources of informal scripts and following the phonological and morphological patterns of change were applied to find as many instances as possible. The resulting corpus has about 530,000 alignments and a dictionary containing 49,397 word and phrase pairs.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 04:57:34 GMT" } ]
2023-08-11T00:00:00
[ [ "Tajalli", "Vahide", "" ], [ "Kalantari", "Fateme", "" ], [ "Shamsfard", "Mehrnoush", "" ] ]
new_dataset
0.998659
2308.05344
Alvaro Fernandez-Quilez
Alvaro Fernandez-Quilez, Tobias Nordstr\"om, Fredrik J\"aderling, Svein Reidar Kjosavik and Martin Eklund
Prostate Age Gap (PAG): An MRI surrogate marker of aging for prostate cancer detection
Under review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Prostate cancer (PC) MRI-based risk calculators are commonly based on biological (e.g. PSA), MRI markers (e.g. volume), and patient age. Whilst patient age measures the number of years an individual has existed, biological age (BA) might better reflect the physiology of an individual. However, surrogates from prostate MRI and their linkage with clinically significant PC (csPC) remain to be explored. Purpose: To obtain and evaluate Prostate Age Gap (PAG) as an MRI marker tool for csPC risk. Study type: Retrospective. Population: A total of 7243 prostate MRI slices from 468 participants who had undergone prostate biopsies. A deep learning model was trained on 3223 MRI slices cropped around the gland from 81 low-grade PC (ncsPC, Gleason score <=6) and 131 negative cases and tested on the remaining 256 participants. Assessment: Chronological age was defined as the age of the participant at the time of the visit and used to train the deep learning model to predict the age of the patient. Subsequently, we obtained PAG, defined as the model-predicted age minus the patient's chronological age. Multivariate logistic regression models were used to estimate the association through the odds ratio (OR) and the predictive value of PAG, compared against PSA levels and PI-RADS>=3. Statistical tests: T-test, Mann-Whitney U test, permutation test and ROC curve analysis. Results: The multivariate adjusted model showed a significant difference in the odds of clinically significant PC (csPC, Gleason score >=7) (OR =3.78, 95% confidence interval (CI):2.32-6.16, P <.001). PAG showed better predictive ability when compared to PI-RADS>=3 and adjusted by other risk factors, including PSA levels: AUC =0.981 vs AUC =0.704, p<.001. Conclusion: PAG was significantly associated with the risk of clinically significant PC and outperformed other well-established PC risk factors.
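The marker itself is a one-line computation, PAG = model-predicted age minus chronological age, followed by a logistic model; the numbers below are simulated, not study data:

```python
# Simulated sketch of the PAG marker and its logistic association with csPC.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
chrono_age = rng.uniform(50, 80, 256)
pred_age = chrono_age + rng.normal(0, 4, 256)   # deep-model age estimates
pag = pred_age - chrono_age                     # Prostate Age Gap

# simulate outcomes whose log-odds grow with PAG (toy ground truth)
cspc = (rng.random(256) < 1 / (1 + np.exp(-(0.4 * pag - 0.5)))).astype(int)

model = LogisticRegression().fit(pag.reshape(-1, 1), cspc)
odds_ratio = np.exp(model.coef_[0, 0])          # OR per year of PAG
print("odds ratio per PAG year ~", round(odds_ratio, 2))
```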
[ { "version": "v1", "created": "Thu, 10 Aug 2023 05:20:25 GMT" } ]
2023-08-11T00:00:00
[ [ "Fernandez-Quilez", "Alvaro", "" ], [ "Nordström", "Tobias", "" ], [ "Jäderling", "Fredrik", "" ], [ "Kjosavik", "Svein Reidar", "" ], [ "Eklund", "Martin", "" ] ]
new_dataset
0.986352
2308.05355
Xinquan Yang
Xinquan Yang and Jinheng Xie and Xuechen Li and Xuguang Li and Linlin Shen and Yongqiang Deng
TCSloT: Text Guided 3D Context and Slope Aware Triple Network for Dental Implant Position Prediction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In implant prosthesis treatment, the surgical guide of the implant is used to ensure accurate implantation. However, such design heavily relies on manually locating the implant position. While deep neural networks have been proposed to assist the dentist in locating the implant position, most of them take a single slice as input, which does not fully explore 3D contextual information and ignores the influence of the implant slope. In this paper, we design a Text Guided 3D Context and Slope Aware Triple Network (TCSloT) which enables the perception of contextual information from multiple adjacent slices and awareness of the variation of implant slopes. A Texture Variation Perception (TVP) module is correspondingly elaborated to process the multiple slices and capture the texture variation among slices, and a Slope-Aware Loss (SAL) is proposed to dynamically assign varying weights for the regression head. Additionally, we design a conditional text guidance (CTG) module to integrate the text condition (i.e., left, middle and right) from the CLIP for assisting the implant position prediction. Extensive experiments on a dental implant dataset through five-fold cross-validation demonstrated that the proposed TCSloT achieves superior performance over existing methods.
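A hedged guess at the general shape of a slope-aware regression loss (the weighting function below is an assumption; the paper defines its own dynamic scheme): steeper implants receive larger per-sample weights so they are not under-penalized:

```python
# Illustrative slope-weighted regression loss, not the paper's SAL definition.
import torch

def slope_aware_loss(pred_xy, target_xy, slope_deg):
    weight = 1.0 + slope_deg / 90.0                      # assumed weighting
    per_sample = torch.nn.functional.smooth_l1_loss(
        pred_xy, target_xy, reduction="none").sum(dim=1)
    return (weight * per_sample).mean()

pred = torch.tensor([[10.0, 12.0], [30.0, 31.0]])
target = torch.tensor([[11.0, 12.5], [28.0, 30.0]])
slopes = torch.tensor([5.0, 40.0])                       # degrees, per implant
print(float(slope_aware_loss(pred, target, slopes)))
```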
[ { "version": "v1", "created": "Thu, 10 Aug 2023 05:51:21 GMT" } ]
2023-08-11T00:00:00
[ [ "Yang", "Xinquan", "" ], [ "Xie", "Jinheng", "" ], [ "Li", "Xuechen", "" ], [ "Li", "Xuguang", "" ], [ "Shen", "Linlin", "" ], [ "Deng", "Yongqiang", "" ] ]
new_dataset
0.998959
2308.05358
Guozhang Liu
Guozhang Liu, Baochai Peng, Ting Liu, Pan Zhang, Mengke Yuan, Chaoran Lu, Ningning Cao, Sen Zhang, Simin Huang, Tao Wang
Fine-grained building roof instance segmentation based on domain adapted pretraining and composite dual-backbone
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The diversity of building architecture styles of global cities situated on various landforms, the degraded optical imagery affected by clouds and shadows, and the significant inter-class imbalance of roof types pose challenges for designing a robust and accurate building roof instance segmentor. To address these issues, we propose an effective framework for semantic interpretation of individual buildings with high-resolution optical satellite imagery. Specifically, the leveraged domain-adapted pretraining strategy and composite dual-backbone greatly facilitate discriminative feature learning. Moreover, a new data augmentation pipeline, stochastic weight averaging (SWA) training, and instance-segmentation-based model ensembling at test time are utilized to acquire an additional performance boost. Experimental results show that our approach ranks first in the test phase of the 2023 IEEE GRSS Data Fusion Contest (DFC) Track 1 ($mAP_{50}$:50.6\%). Notably, we have also explored the potential of multimodal data fusion with both optical satellite imagery and SAR data.
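The SWA ingredient has a standard PyTorch API, sketched here on a placeholder network rather than the contest model:

```python
# Standard torch.optim.swa_utils usage on a toy classifier.
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
swa_model = AveragedModel(model)
swa_sched = SWALR(opt, swa_lr=0.01)

for epoch in range(20):
    x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if epoch >= 10:                        # start weight averaging late
        swa_model.update_parameters(model)
        swa_sched.step()

# torch.optim.swa_utils.update_bn would recompute BatchNorm stats if present
print(swa_model(torch.randn(1, 16)))
```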
[ { "version": "v1", "created": "Thu, 10 Aug 2023 05:54:57 GMT" } ]
2023-08-11T00:00:00
[ [ "Liu", "Guozhang", "" ], [ "Peng", "Baochai", "" ], [ "Liu", "Ting", "" ], [ "Zhang", "Pan", "" ], [ "Yuan", "Mengke", "" ], [ "Lu", "Chaoran", "" ], [ "Cao", "Ningning", "" ], [ "Zhang", "Sen", "" ], [ "Huang", "Simin", "" ], [ "Wang", "Tao", "" ] ]
new_dataset
0.95726
2308.05386
Fuqiang Zhao
Fuqiang Zhao, Bidan Huang, Mingchang Li, Mengde Li, Zhongtao Fu, Ziwei Lei, Miao Li
A novel tactile palm for robotic object manipulation
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tactile sensing is of great importance during human hand usage such as object exploration, grasping and manipulation. Different types of tactile sensors have been designed during the past decades, which are mainly focused on either the fingertips for grasping or the upper body for human-robot interaction. In this paper, a novel soft tactile sensor has been designed to mimic the functionality of the human palm and can estimate the contact state of different objects. The tactile palm mainly consists of three parts: an electrode array, a soft cover skin and a conductive sponge. The design principles are described in detail, with a number of experiments showcasing the effectiveness of the proposed design.
[ { "version": "v1", "created": "Thu, 10 Aug 2023 07:03:15 GMT" } ]
2023-08-11T00:00:00
[ [ "Zhao", "Fuqiang", "" ], [ "Huang", "Bidan", "" ], [ "Li", "Mingchang", "" ], [ "Li", "Mengde", "" ], [ "Fu", "Zhongtao", "" ], [ "Lei", "Ziwei", "" ], [ "Li", "Miao", "" ] ]
new_dataset
0.999314
2308.05387
Guozhang Liu
Chaoran Lu, Ningning Cao, Pan Zhang, Ting Liu, Baochai Peng, Guozhang Liu, Mengke Yuan, Sen Zhang, Simin Huang, Tao Wang
HGDNet: A Height-Hierarchy Guided Dual-Decoder Network for Single View Building Extraction and Height Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unifying the correlated single-view satellite image building extraction and height estimation tasks indicates a promising way to share representations and acquire a generalist model for large-scale urban 3D reconstruction. However, the common spatial misalignment between building footprints and stereo-reconstructed nDSM height labels incurs degraded performance on both tasks. To address this issue, we propose a Height-hierarchy Guided Dual-decoder Network (HGDNet) to estimate building height. Under the guidance of a synthesized discrete height-hierarchy nDSM, an auxiliary height-hierarchical building extraction branch enhances the height estimation branch with implicit constraints, yielding an accuracy improvement of more than 6% on the DFC 2023 Track 2 dataset. An additional two-stage cascade architecture is adopted to achieve more accurate building extraction. Experiments on the DFC 2023 Track 2 dataset show the superiority of the proposed method in building height estimation ({\delta}1: 0.8012) and instance extraction (AP50: 0.7730); the final average score of 0.7871 ranks first in the test phase.
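To make the dual-decoder idea concrete, here is a minimal sketch: a shared encoder feeds a continuous height head and an auxiliary discrete height-hierarchy head whose classification loss implicitly constrains the regression. All layer widths, the number of height bins, and the loss weight are placeholder assumptions, not HGDNet's actual design.

# A minimal dual-decoder sketch in PyTorch with an auxiliary hierarchy loss.
import torch
import torch.nn as nn

class DualDecoderSketch(nn.Module):
    def __init__(self, num_height_bins=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.height_head = nn.Conv2d(32, 1, 1)                   # continuous nDSM
        self.hierarchy_head = nn.Conv2d(32, num_height_bins, 1)  # discrete bins

    def forward(self, x):
        feats = self.encoder(x)
        return self.height_head(feats), self.hierarchy_head(feats)

model = DualDecoderSketch()
img = torch.randn(2, 3, 64, 64)
height, bins = model(img)
h_gt = torch.rand(2, 1, 64, 64)
bin_gt = torch.randint(0, 8, (2, 64, 64))
loss = nn.functional.l1_loss(height, h_gt) \
     + 0.5 * nn.functional.cross_entropy(bins, bin_gt)  # auxiliary guidance
print(float(loss))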
[ { "version": "v1", "created": "Thu, 10 Aug 2023 07:03:32 GMT" } ]
2023-08-11T00:00:00
[ [ "Lu", "Chaoran", "" ], [ "Cao", "Ningning", "" ], [ "Zhang", "Pan", "" ], [ "Liu", "Ting", "" ], [ "Peng", "Baochai", "" ], [ "Liu", "Guozhang", "" ], [ "Yuan", "Mengke", "" ], [ "Zhang", "Sen", "" ], [ "Huang", "Simin", "" ], [ "Wang", "Tao", "" ] ]
new_dataset
0.952959
2308.05441
Hao Liang
Hao Liang, Pietro Perona and Guha Balakrishnan
Benchmarking Algorithmic Bias in Face Recognition: An Experimental Approach Using Synthetic Faces and Human Evaluation
accepted to iccv2023; 18 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an experimental method for measuring bias in face recognition systems. Existing methods to measure bias depend on benchmark datasets that are collected in the wild and annotated for protected (e.g., race, gender) and non-protected (e.g., pose, lighting) attributes. Such observational datasets only permit correlational conclusions, e.g., "Algorithm A's accuracy is different on female and male faces in dataset X." By contrast, experimental methods manipulate attributes individually and thus permit causal conclusions, e.g., "Algorithm A's accuracy is affected by gender and skin color." Our method is based on generating synthetic faces using a neural face generator, where each attribute of interest is modified independently while leaving all other attributes constant. Human observers crucially provide the ground truth on perceptual identity similarity between synthetic image pairs. We validate our method quantitatively by evaluating race and gender biases of three research-grade face recognition models. Our synthetic pipeline reveals that for these algorithms, accuracy is lower for Black and East Asian population subgroups. Our method can also quantify how perceptual changes in attributes affect face identity distances reported by these models. Our large synthetic dataset, consisting of 48,000 synthetic face image pairs (10,200 unique synthetic faces) and 555,000 human annotations (individual attributes and pairwise identity comparisons) is available to researchers in this important area.
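The core measurement loop is simple to sketch: compare model identity distances on synthetic pairs against human judgments, grouped by the attribute that was varied. The embeddings, labels, and decision threshold below are random placeholders standing in for the face model's outputs and the human annotations.

# A minimal sketch of per-subgroup verification accuracy from pairwise
# embedding distances. Real inputs would come from a face recognition model
# and the human-annotated identity-similarity ground truth.
import numpy as np

def subgroup_accuracy(emb_a, emb_b, same_identity, groups, thresh=1.0):
    """emb_a, emb_b: (N, D) embeddings of each image pair; same_identity: (N,)
    bool human-derived ground truth; groups: (N,) subgroup label per pair."""
    dists = np.linalg.norm(emb_a - emb_b, axis=1)
    pred_same = dists < thresh
    return {g: float((pred_same[groups == g] == same_identity[groups == g]).mean())
            for g in np.unique(groups)}

rng = np.random.default_rng(0)
a, b = rng.normal(size=(100, 128)), rng.normal(size=(100, 128))
print(subgroup_accuracy(a, b, rng.random(100) < 0.5,
                        rng.choice(["groupA", "groupB"], 100)))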
[ { "version": "v1", "created": "Thu, 10 Aug 2023 08:57:31 GMT" } ]
2023-08-11T00:00:00
[ [ "Liang", "Hao", "" ], [ "Perona", "Pietro", "" ], [ "Balakrishnan", "Guha", "" ] ]
new_dataset
0.957962
2308.05459
Changkun Liu
Changkun Liu, Yukun Zhao, Tristan Braud
KS-APR: Keyframe Selection for Robust Absolute Pose Regression
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Markerless Mobile Augmented Reality (AR) aims to anchor digital content in the physical world without using specific 2D or 3D objects. Absolute Pose Regressors (APR) are end-to-end machine learning solutions that infer the device's pose from a single monocular image. Thanks to their low computation cost, they can be directly executed on the constrained hardware of mobile AR devices. However, APR methods tend to yield significant inaccuracies for input images that are too distant from the training set. This paper introduces KS-APR, a pipeline that assesses the reliability of an estimated pose with minimal overhead by combining the inference results of the APR and the prior images in the training set. Mobile AR systems tend to rely upon visual-inertial odometry to track the relative pose of the device during the experience. As such, KS-APR favours reliability over frequency, discarding unreliable poses. This pipeline can integrate most existing APR methods to improve accuracy by filtering unreliable images with their pose estimates. We implement the pipeline on three types of APR models on indoor and outdoor datasets. The median error on position and orientation is reduced for all models, and the proportion of large errors is minimized across datasets. Our method enables state-of-the-art APRs such as DFNetdm to outperform single-image and sequential APR methods. These results demonstrate the scalability and effectiveness of KS-APR for visual localization tasks that do not require one-shot decisions.
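The following sketches the keyframe-based reliability test in its simplest form: an APR pose is kept only if the query image lies close to some training-set keyframe in a retrieval feature space. The descriptor, similarity measure, and threshold are placeholder assumptions; the paper's exact criterion is not reproduced here.

# A minimal sketch of pose filtering by keyframe similarity.
import numpy as np

def filter_pose(query_feat, keyframe_feats, pose, sim_thresh=0.8):
    """query_feat: (D,) global descriptor of the query image;
    keyframe_feats: (K, D) descriptors of training-set keyframes."""
    sims = keyframe_feats @ query_feat / (
        np.linalg.norm(keyframe_feats, axis=1) * np.linalg.norm(query_feat))
    if sims.max() >= sim_thresh:
        return pose        # reliable: query is close to the training distribution
    return None            # unreliable: discard and let VIO keep tracking

kf = np.random.randn(50, 256)
q = kf[3] + 0.01 * np.random.randn(256)   # near-duplicate of a keyframe
print(filter_pose(q, kf, pose=(0.0, 0.0, 0.0)))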
[ { "version": "v1", "created": "Thu, 10 Aug 2023 09:32:20 GMT" } ]
2023-08-11T00:00:00
[ [ "Liu", "Changkun", "" ], [ "Zhao", "Yukun", "" ], [ "Braud", "Tristan", "" ] ]
new_dataset
0.987144
2308.05472
Mengfan Zheng
Mengfan Zheng and Cong Ling
PAC Codes for Source and Joint Source-Channel Coding
6 pages, 6 figures. Submitted to GC 2023 Workshop - Channel Coding Beyond 5G
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Polarization-adjusted convolutional (PAC) codes, a concatenated coding scheme based on polar codes, are able to approach the finite-length bound of the binary-input AWGN channel at short blocklengths. In this paper, we extend PAC codes to the fields of source coding and joint source-channel coding and show that they can also approach the corresponding finite-length bounds at short blocklengths.
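For readers unfamiliar with PAC codes, the channel-coding building block is rate profiling followed by convolutional precoding and the polar transform. The sketch below shows this standard encoder; the toy rate profile is an illustrative choice (the 0o133 convolution polynomial is the one commonly used in the PAC literature), not the paper's source-coding construction.

# A minimal PAC encoder sketch over GF(2): profile -> convolve -> polar transform.
import numpy as np

def polar_transform(u):
    """Apply the n-fold Kronecker power of [[1,0],[1,1]] over GF(2)."""
    x = u.copy()
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def pac_encode(bits, profile, conv=(1, 0, 1, 1, 0, 1, 1)):
    """profile: boolean mask of length N marking information positions."""
    v = np.zeros(len(profile), dtype=np.uint8)
    v[profile] = bits                      # rate profiling
    u = np.zeros_like(v)
    for i in range(len(v)):                # convolutional precoding
        for j, c in enumerate(conv):
            if c and i - j >= 0:
                u[i] ^= v[i - j]
    return polar_transform(u)

profile = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=bool)  # toy N=8, K=4
print(pac_encode(np.array([1, 0, 1, 1], dtype=np.uint8), profile))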
[ { "version": "v1", "created": "Thu, 10 Aug 2023 09:55:56 GMT" } ]
2023-08-11T00:00:00
[ [ "Zheng", "Mengfan", "" ], [ "Ling", "Cong", "" ] ]
new_dataset
0.999662
2308.05480
Yuming Chen
Yuming Chen, Xinbin Yuan, Ruiqi Wu, Jiabao Wang, Qibin Hou, Ming-Ming Cheng
YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We aim to provide the object detection community with an efficient and performant object detector, termed YOLO-MS. The core design is based on a series of investigations into how convolutions with different kernel sizes affect the detection performance of objects at different scales. The outcome is a new strategy that can strongly enhance the multi-scale feature representations of real-time object detectors. To verify the effectiveness of our strategy, we build a network architecture, termed YOLO-MS. We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets, like ImageNet, or pre-trained weights. Without bells and whistles, our YOLO-MS outperforms recent state-of-the-art real-time object detectors, including YOLOv7 and RTMDet, when using a comparable number of parameters and FLOPs. Taking the XS version of YOLO-MS as an example: with only 4.5M learnable parameters and 8.7G FLOPs, it achieves an AP score of 43%+ on MS COCO, which is about 2% higher than RTMDet with the same model size. Moreover, our work can also be used as a plug-and-play module for other YOLO models. Notably, our method significantly improves the AP of YOLOv8 from 37%+ to 40%+ with even fewer parameters and FLOPs. Code is available at https://github.com/FishAndWasabi/YOLO-MS.
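To illustrate the kernel-size investigation at the heart of the paper, here is a generic block that runs parallel depthwise convolutions with different kernel sizes and fuses them pointwise. This is not the actual MS-Block of YOLO-MS; the kernel set and widths are illustrative assumptions.

# A minimal multi-kernel feature block sketch in PyTorch.
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    def __init__(self, ch, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch)  # depthwise conv
            for k in kernels)
        self.fuse = nn.Conv2d(ch * len(kernels), ch, 1)      # pointwise fusion

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiKernelBlock(32)
print(block(torch.randn(1, 32, 40, 40)).shape)  # torch.Size([1, 32, 40, 40])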
[ { "version": "v1", "created": "Thu, 10 Aug 2023 10:12:27 GMT" } ]
2023-08-11T00:00:00
[ [ "Chen", "Yuming", "" ], [ "Yuan", "Xinbin", "" ], [ "Wu", "Ruiqi", "" ], [ "Wang", "Jiabao", "" ], [ "Hou", "Qibin", "" ], [ "Cheng", "Ming-Ming", "" ] ]
new_dataset
0.989274
2308.05515
Udugama Vithanage Bavantha Lakshan Udugama
U.V.B.L. Udugama, G. Vosselman, F. Nex
Mono-hydra: Real-time 3D scene graph construction from monocular camera input with IMU
7 pages, 5 figures, GSW 2023 conference paper
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
The ability of robots to autonomously navigate through 3D environments depends on their comprehension of spatial concepts, ranging from low-level geometry to high-level semantics, such as objects, places, and buildings. To enable such comprehension, 3D scene graphs have emerged as a robust tool for representing the environment as a layered graph of concepts and their relationships. However, building these representations using monocular vision systems in real time remains a difficult task that has not been explored in depth. This paper puts forth a real-time spatial perception system, Mono-Hydra, combining a monocular camera and an IMU sensor setup, focusing on indoor scenarios; the approach is nonetheless adaptable to outdoor applications, offering flexibility in its potential uses. The system employs a suite of deep learning algorithms to derive depth and semantics. It uses a robocentric visual-inertial odometry (VIO) algorithm based on square-root information, thereby ensuring consistent visual odometry with an IMU and a monocular camera. The system achieves sub-20 cm error in real-time processing at 15 fps, enabling real-time 3D scene graph construction using a laptop GPU (NVIDIA 3080). This enhances decision-making efficiency and effectiveness in simple camera setups, augmenting robotic system agility. We make Mono-Hydra publicly available at: https://github.com/UAV-Centre-ITC/Mono_Hydra
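As a high-level sketch of the per-frame loop the abstract describes (VIO for pose, learned depth and semantics, incremental scene-graph update), consider the following. Every name here is a placeholder; Mono-Hydra's actual interfaces are in the linked repository, not reproduced here.

# A hypothetical per-frame pipeline sketch; all objects are placeholders.
def process_frame(frame, imu_batch, vio, depth_net, seg_net, scene_graph):
    pose = vio.update(frame.image, imu_batch)      # square-root VIO state update
    depth = depth_net(frame.image)                 # monocular depth estimate
    semantics = seg_net(frame.image)               # per-pixel semantic classes
    scene_graph.integrate(pose, depth, semantics)  # layered 3D scene graph update
    return scene_graph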
[ { "version": "v1", "created": "Thu, 10 Aug 2023 11:58:38 GMT" } ]
2023-08-11T00:00:00
[ [ "Udugama", "U. V. B. L.", "" ], [ "Vosselman", "G.", "" ], [ "Nex", "F.", "" ] ]
new_dataset
0.999524
2308.05521
Christian Dietrich
Christian Dietrich and Tim-Marek Thomas and Matthias Mnich
Checkpoint Placement for Systematic Fault-Injection Campaigns
Preprint for accepted version at ICCAD'23
null
null
null
cs.AR
http://creativecommons.org/licenses/by-sa/4.0/
Shrinking hardware structures and decreasing operating voltages lead to an increasing number of transient hardware faults, which thus become a core problem to consider for safety-critical systems. Here, systematic fault injection (FI), where one program-under-test is systematically stressed with faults, provides an in-depth resilience analysis in the presence of faults. However, FI campaigns require many independent injection experiments and, combined, long run times, especially if we aim for high coverage of the fault space. One cost factor is the forwarding phase, which is the time required to bring the system-under-test into the fault-free state at injection time. One common technique to speed up the forwarding is to take checkpoints of the fault-free system state at fixed points in time. In this paper, we show that the placement of checkpoints has a significant influence on the required forwarding cycles, especially if we place faults non-uniformly on the time axis. For this, we discuss the checkpoint-selection problem in general, formalize it as a maximum-weight reward-path problem in graphs, propose an ILP formulation and a dynamic programming algorithm that find the optimal solution, and provide a heuristic checkpoint-selection method based on a genetic algorithm. Applied to the MiBench benchmark suite, our approach consistently reduces the forward-phase cycles by at least 88 percent and by up to 99.934 percent when placing 16 checkpoints.
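To make the optimization problem concrete, here is a small dynamic program for a simplified variant: given fault-injection times and a budget of k checkpoints, choose positions minimizing total forwarding cycles, where each injection forwards from the latest checkpoint at or before it. This sketch is far simpler than the paper's reward-path formulation and ILP; it only illustrates the cost structure being optimized.

# A minimal checkpoint-placement DP sketch (simplified cost model).
from functools import lru_cache

def place_checkpoints(injection_times, k):
    ts = tuple(sorted(injection_times))
    n = len(ts)

    @lru_cache(maxsize=None)
    def dp(lo, budget, c):
        """Min forwarding cycles for injections ts[lo:], with the latest
        checkpoint at time c and `budget` checkpoints still available."""
        if lo == n:
            return 0, ()
        no_more = (sum(t - c for t in ts[lo:]), ())
        if budget == 0:
            return no_more
        best = no_more
        for nxt in range(lo, n):           # try the next checkpoint at ts[nxt]
            pre = sum(t - c for t in ts[lo:nxt])   # injections before it
            sub, rest = dp(nxt, budget - 1, ts[nxt])
            if pre + sub < best[0]:
                best = (pre + sub, (ts[nxt],) + rest)
        return best

    return dp(0, k, 0)                     # the run starts at time 0

print(place_checkpoints([5, 7, 20, 22, 90, 95], k=2))  # -> (19, (20, 90))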
[ { "version": "v1", "created": "Thu, 10 Aug 2023 12:03:54 GMT" } ]
2023-08-11T00:00:00
[ [ "Dietrich", "Christian", "" ], [ "Thomas", "Tim-Marek", "" ], [ "Mnich", "Matthias", "" ] ]
new_dataset
0.985516