Dataset schema (column, dtype, observed range):

  id              string     length 9 to 10
  submitter       string     length 2 to 52
  authors         string     length 4 to 6.51k
  title           string     length 4 to 246
  comments        string     length 1 to 523
  journal-ref     string     length 4 to 345
  doi             string     length 11 to 120
  report-no       string     length 2 to 243
  categories      string     length 5 to 98
  license         string     9 distinct values
  abstract        string     length 33 to 3.33k
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string     1 distinct value
  probability     float64    0.95 to 1
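The records below follow this schema, one field per line. As a quick orientation, here is a minimal Python sketch for loading and filtering such records, assuming the dump has been exported to a JSON Lines file (the filename and the export step are assumptions, not part of the dataset):

```python
import json

# Assumed export: one JSON object per line with the fields listed above.
def load_records(path="arxiv_sample.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Example: keep high-confidence "new_dataset" predictions in cs.CV.
hits = [
    r for r in load_records()
    if r.get("prediction") == "new_dataset"
    and r.get("probability", 0.0) >= 0.99
    and "cs.CV" in (r.get("categories") or "")
]
for r in hits:
    print(r["id"], "-", r["title"])
```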
2308.13307
Doreen Jirak
Jasmin Bernotat, Doreen Jirak, Eduardo Benitez Sandoval, Francisco Cruz, Alessandra Sciutti
Asch Meets HRI: Human Conformity to Robot Groups
5 pages, 2 figures
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
We present a research outline that aims at investigating group dynamics and peer pressure in the context of industrial robots. Our research plan was motivated by the fact that industrial robots have already become an integral part of human-robot co-working. However, industrial robots have been sparsely integrated into research on robot credibility, group dynamics, and potential users' tendency to follow a robot's indication. Therefore, we aim to transfer the classic Asch experiment (see \cite{Asch_51}) into HRI with industrial robots. More precisely, we will test to what extent participants follow a robot's response when confronted with a group of (vs. an individual) industrial robot arm (vs. human) peers who give a false response. We are interested in highlighting the effects of group size, perceived robot credibility, psychological stress, and peer pressure in the context of industrial robots. With the results of this research, we hope to highlight group dynamics that might underlie HRI in industrial settings in which numerous robots already work closely together with humans in shared environments.
[ { "version": "v1", "created": "Fri, 25 Aug 2023 11:14:24 GMT" } ]
2023-08-31T00:00:00
[ [ "Bernotat", "Jasmin", "" ], [ "Jirak", "Doreen", "" ], [ "Sandoval", "Eduardo Benitez", "" ], [ "Cruz", "Francisco", "" ], [ "Sciutti", "Alessandra", "" ] ]
new_dataset
0.992217
2308.15214
Neeraj Cherakara
Neeraj Cherakara, Finny Varghese, Sheena Shabana, Nivan Nelson, Abhiram Karukayil, Rohith Kulothungan, Mohammed Afil Farhan, Birthe Nesset, Meriam Moujahid, Tanvi Dinkar, Verena Rieser, Oliver Lemon
FurChat: An Embodied Conversational Agent using LLMs, Combining Open and Closed-Domain Dialogue with Facial Expressions
5 pages, 2 figures, Accepted at SIGDIAL 2023 (24th Meeting of the Special Interest Group on Discourse and Dialogue), for the demo video, see https://youtu.be/fwtUl1kl22s
null
null
null
cs.CL cs.AI cs.HC cs.RO
http://creativecommons.org/licenses/by/4.0/
We demonstrate an embodied conversational agent that can function as a receptionist and generate a mixture of open and closed-domain dialogue along with facial expressions, by using a large language model (LLM) to develop an engaging conversation. We deployed the system onto a Furhat robot, which is highly expressive and capable of using both verbal and nonverbal cues during interaction. The system was designed specifically for the National Robotarium to interact with visitors through natural conversations, providing them with information about the facilities, research, news, upcoming events, etc. The system utilises the state-of-the-art GPT-3.5 model to generate such information along with domain-general conversations and facial expressions based on prompt engineering.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 11:08:40 GMT" }, { "version": "v2", "created": "Wed, 30 Aug 2023 13:13:19 GMT" } ]
2023-08-31T00:00:00
[ [ "Cherakara", "Neeraj", "" ], [ "Varghese", "Finny", "" ], [ "Shabana", "Sheena", "" ], [ "Nelson", "Nivan", "" ], [ "Karukayil", "Abhiram", "" ], [ "Kulothungan", "Rohith", "" ], [ "Farhan", "Mohammed Afil", "" ], [ "Nesset", "Birthe", "" ], [ "Moujahid", "Meriam", "" ], [ "Dinkar", "Tanvi", "" ], [ "Rieser", "Verena", "" ], [ "Lemon", "Oliver", "" ] ]
new_dataset
0.999695
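The FurChat abstract above hinges on prompt engineering to get both dialogue and an expression cue out of one LLM call. A hedged sketch of that pattern follows; the prompt wording, the expression-tag convention, and the `llm` callable are illustrative assumptions, not the authors' actual system:

```python
# Hypothetical illustration of prompting an LLM for a reply plus a facial
# expression tag; `llm` stands in for any text-completion backend.
def build_prompt(user_utterance: str, facts: str) -> str:
    return (
        "You are a receptionist robot at the National Robotarium.\n"
        f"Known facts: {facts}\n"
        "Answer the visitor, prefixing the reply with one facial-expression "
        "tag from [smile|surprise|neutral].\n"
        f"Visitor: {user_utterance}\nRobot:"
    )

def respond(llm, user_utterance: str, facts: str) -> tuple[str, str]:
    reply = llm(build_prompt(user_utterance, facts))  # e.g. "[smile] Welcome!"
    tag, _, text = reply.partition("]")
    return tag.strip("[ "), text.strip()
```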
2308.15491
Hung-Hsuan Chen
Ruei-Yuan Wang, Hung-Hsuan Chen
Detecting Inactive Cyberwarriors from Online Forums
null
null
null
null
cs.SI cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The proliferation of misinformation has emerged as a new form of warfare in the information age. This type of warfare involves cyberwarriors, who deliberately propagate messages aimed at defaming opponents or fostering unity among allies. In this study, we investigate the level of activity exhibited by cyberwarriors within a large online forum, and remarkably, we discover that only a minute fraction of cyberwarriors are active users. Surprisingly, despite their expected role of actively disseminating misinformation, cyberwarriors remain predominantly silent during peacetime and only spring into action when necessary. Moreover, we analyze the challenges associated with identifying cyberwarriors and provide evidence that detecting inactive cyberwarriors is considerably more challenging than identifying their active counterparts. Finally, we discuss potential methodologies to more effectively identify cyberwarriors during their inactive phases, offering insights into better capturing their presence and actions. The experimental code is released for reproducibility: \url{https://github.com/Ryaninthegame/Detect-Inactive-Spammers-on-PTT}.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 01:55:44 GMT" } ]
2023-08-31T00:00:00
[ [ "Wang", "Ruei-Yuan", "" ], [ "Chen", "Hung-Hsuan", "" ] ]
new_dataset
0.957218
2308.15563
Rachel Yun Zhang
Irit Dinur, Siqi Liu, Rachel Yun Zhang
New Codes on High Dimensional Expanders
null
null
null
null
cs.IT cs.CC math.GR math.IT
http://creativecommons.org/licenses/by/4.0/
We describe a new parameterized family of symmetric error-correcting codes with low-density parity-check matrices (LDPC). Our codes can be described in two seemingly different ways. First, in relation to Reed-Muller codes: our codes are functions on a subset of $\mathbb{F}^n$ whose restrictions to a prescribed set of affine lines have low degree. Alternatively, they are Tanner codes on high dimensional expanders, where the coordinates of the codeword correspond to triangles of a $2$-dimensional expander, such that around every edge the local view forms a Reed-Solomon codeword. For some range of parameters our codes are provably locally testable, and their dimension is some fixed power of the block length. For another range of parameters our codes have distance and dimension that are both linear in the block length, but we do not know if they are locally testable. The codes also have the multiplication property: the coordinate-wise product of two codewords is a codeword in a related code. The definition of the codes relies on the construction of a specific family of simplicial complexes which is a slight variant on the coset complexes of Kaufman and Oppenheim. We show a novel way to embed the triangles of these complexes into $\mathbb{F}^n$, with the property that links of edges embed as affine lines in $\mathbb{F}^n$. We rely on this embedding to lower bound the rate of these codes in a way that avoids constraint-counting and thereby achieves non-trivial rate even when the local codes themselves have arbitrarily small rate, and in particular below $1/2$.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 18:34:46 GMT" } ]
2023-08-31T00:00:00
[ [ "Dinur", "Irit", "" ], [ "Liu", "Siqi", "" ], [ "Zhang", "Rachel Yun", "" ] ]
new_dataset
0.999321
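The first description in the abstract above (restrictions to prescribed affine lines have low degree) admits a compact schematic form; the notation here is ours, and the precise evaluation set and line set are specified in the paper:

```latex
\mathcal{C} \;=\; \bigl\{\, f : S \to \mathbb{F} \;\bigm|\; \deg\!\bigl(f|_{\ell}\bigr) \le d \ \text{for all } \ell \in \mathcal{L} \,\bigr\},
```

where $S \subseteq \mathbb{F}^n$ is a fixed evaluation set and $\mathcal{L}$ is the prescribed set of affine lines.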
2308.15614
Haoran Liu
Haoran Liu, Bokun Wang, Jianling Wang, Xiangjue Dong, Tianbao Yang, James Caverlee
Everything Perturbed All at Once: Enabling Differentiable Graph Attacks
null
null
null
null
cs.LG cs.CR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As powerful tools for representation learning on graphs, graph neural networks (GNNs) have played an important role in applications including social networks, recommendation systems, and online web services. However, GNNs have been shown to be vulnerable to adversarial attacks, which can significantly degrade their effectiveness. Recent state-of-the-art approaches in adversarial attacks rely on gradient-based meta-learning to selectively perturb a single edge with the highest attack score until they reach the budget constraint. While effective in identifying vulnerable links, these methods are plagued by high computational costs. By leveraging continuous relaxation and parameterization of the graph structure, we propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks while eliminating the need for costly retraining. Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint on different benchmark datasets. Additionally, we provide extensive experimental analyses of the transferability of the DGA among different graph models, as well as its robustness against widely-used defense mechanisms.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 20:14:42 GMT" } ]
2023-08-31T00:00:00
[ [ "Liu", "Haoran", "" ], [ "Wang", "Bokun", "" ], [ "Wang", "Jianling", "" ], [ "Dong", "Xiangjue", "" ], [ "Yang", "Tianbao", "" ], [ "Caverlee", "James", "" ] ]
new_dataset
0.995386
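The core move in the DGA abstract above is to replace discrete single-edge flips with a continuous, differentiable parameterization of the graph so that gradient methods apply directly. A minimal sketch of that relaxation follows, with a toy placeholder objective (the real attack optimizes a GNN loss, and all names here are assumptions):

```python
import torch

# Relax the binary adjacency: sigmoid(theta) softly flips each edge,
# moving entries of A toward 1 - A in a differentiable way.
n = 5
A = (torch.rand(n, n) > 0.7).float().triu(1)
A = A + A.T
theta = torch.zeros(n, n, requires_grad=True)

def perturbed_adjacency(A, theta):
    return A + (1.0 - 2.0 * A) * torch.sigmoid(theta)

def surrogate_loss(A_hat):
    # Placeholder objective only; a real attack maximizes a GNN training loss.
    return -A_hat.sum()

opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(10):
    opt.zero_grad()
    surrogate_loss(perturbed_adjacency(A, theta)).backward()
    opt.step()
```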
2308.15710
Rafael Mosquera
Rafael Mosquera G\'omez, Juli\'an Eusse, Juan Ciro, Daniel Galvez, Ryan Hileman, Kurt Bollacker, David Kanter
Speech Wikimedia: A 77 Language Multilingual Speech Dataset
Data-Centric Machine Learning Workshop at the International Machine Learning Conference 2023 (ICML)
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The Speech Wikimedia Dataset is a publicly available compilation of audio with transcriptions extracted from Wikimedia Commons. It includes 1780 hours (195 GB) of CC-BY-SA licensed transcribed speech from a diverse set of scenarios and speakers, in 77 different languages. Each audio file has one or more transcriptions in different languages, making this dataset suitable for training speech recognition, speech translation, and machine translation models.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 02:14:49 GMT" } ]
2023-08-31T00:00:00
[ [ "Gómez", "Rafael Mosquera", "" ], [ "Eusse", "Julián", "" ], [ "Ciro", "Juan", "" ], [ "Galvez", "Daniel", "" ], [ "Hileman", "Ryan", "" ], [ "Bollacker", "Kurt", "" ], [ "Kanter", "David", "" ] ]
new_dataset
0.99985
2308.15726
Fei Yu
Nan Che and Chenrui Liu and Fei Yu
AGS: A Dataset and Taxonomy for Domestic Scene Sound Event Recognition
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Environmental sound scene and sound event recognition is important for detecting suspicious events in indoor and outdoor environments (such as nurseries, smart homes, and nursing homes) and is a fundamental task in many audio surveillance applications. In particular, there is no common public data set for research on sound event recognition in indoor environmental sound scenes. Therefore, this paper proposes a data set (called AGS) for domestic environment sounds. This data set accounts for various types of overlapping audio in a scene as well as background noise. Moreover, based on the proposed data set, this paper compares and analyzes advanced methods for sound event recognition, illustrates the reliability of the proposed data set, and studies the challenges it raises. Our proposed AGS and the source code of the corresponding baselines are available at https://github.com/taolunzu11/AGS .
[ { "version": "v1", "created": "Wed, 30 Aug 2023 03:03:47 GMT" } ]
2023-08-31T00:00:00
[ [ "Che", "Nan", "" ], [ "Liu", "Chenrui", "" ], [ "Yu", "Fei", "" ] ]
new_dataset
0.99659
2308.15784
Roman Jacome
Roman Jacome, Kumar Vijay Mishra, Brian M. Sadler and Henry Arguello
Octonion Phase Retrieval
13 pages, 3 figures
null
null
null
cs.IT eess.IV math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Signal processing over hypercomplex numbers arises in many optical imaging applications. In particular, spectral image or color stereo data are often processed using octonion algebra. Recently, the eight-band multispectral image phase recovery has gained salience, wherein it is desired to recover the eight bands from the phaseless measurements. In this paper, we tackle this hitherto unaddressed hypercomplex variant of the popular phase retrieval (PR) problem. We propose octonion Wirtinger flow (OWF) to recover an octonion signal from its intensity-only observation. However, contrary to the complex-valued Wirtinger flow, the non-associative nature of octonion algebra and the consequent lack of octonion derivatives make the extension to OWF non-trivial. We resolve this using the pseudo-real-matrix representation of octonion to perform the derivatives in each OWF update. We demonstrate that our approach recovers the octonion signal up to a right-octonion phase factor. Numerical experiments validate OWF-based PR with high accuracy under both noiseless and noisy measurements.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 06:32:31 GMT" } ]
2023-08-31T00:00:00
[ [ "Jacome", "Roman", "" ], [ "Mishra", "Kumar Vijay", "" ], [ "Sadler", "Brian M.", "" ], [ "Arguello", "Henry", "" ] ]
new_dataset
0.957523
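For orientation, the complex-valued Wirtinger-flow template that OWF generalizes is the gradient iteration below (a standard formulation, not the paper's octonion-specific derivation, which replaces the gradient with pseudo-real-matrix derivatives):

```latex
f(\mathbf{x}) = \frac{1}{2m} \sum_{i=1}^{m} \bigl( \lvert \langle \mathbf{a}_i, \mathbf{x} \rangle \rvert^2 - y_i \bigr)^2,
\qquad
\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \mu \, \nabla f\bigl(\mathbf{x}^{(k)}\bigr).
```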
2308.15819
Tuukka Korhonen
Tuukka Korhonen, Matti J\"arvisalo
SharpSAT-TD in Model Counting Competitions 2021-2023
3 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe SharpSAT-TD, our submission to the unweighted and weighted tracks of the Model Counting Competition in 2021-2023, which has won in total $6$ first places in different tracks of the competition. SharpSAT-TD is based on SharpSAT [Thurley, SAT 2006], with the primary novel modification being the use of tree decompositions in the variable selection heuristic as introduced by the authors in [CP 2021]. Unlike the version of SharpSAT-TD evaluated in [CP 2021], the current version, available at https://github.com/Laakeri/sharpsat-td, also features other significant modifications compared to the original SharpSAT, for example, a new preprocessor.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 07:43:12 GMT" } ]
2023-08-31T00:00:00
[ [ "Korhonen", "Tuukka", "" ], [ "Järvisalo", "Matti", "" ] ]
new_dataset
0.986857
2308.15823
Jianghong Ma
Kangzhe Liu, Jianghong Ma, Shanshan Feng, Haijun Zhang, Zhao Zhang
DRGame: Diversified Recommendation for Multi-category Video Games with Balanced Implicit Preferences
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing popularity of subscription services in video game consumption has emphasized the importance of offering diversified recommendations. Providing users with a diverse range of games is essential for ensuring continued engagement and fostering long-term subscriptions. However, existing recommendation models face challenges in effectively handling highly imbalanced implicit feedback in gaming interactions. Additionally, they struggle to take into account the distinctive characteristics of multiple categories and the latent user interests associated with these categories. In response to these challenges, we propose a novel framework, named DRGame, to obtain diversified recommendations. It is centered on multi-category video games and consists of two components: Balance-driven Implicit Preferences Learning for data pre-processing and a Clustering-based Diversified Recommendation Module for final prediction. The first module aims to achieve a balanced representation of implicit feedback in game time, thereby discovering a comprehensive view of player interests across different categories. The second module adopts category-aware representation learning to cluster and select players and games based on balanced implicit preferences, and then employs asymmetric neighbor aggregation to achieve diversified recommendations. Experimental results on a real-world dataset demonstrate the superiority of our proposed method over existing approaches in terms of the diversity of game recommendations.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 07:53:27 GMT" } ]
2023-08-31T00:00:00
[ [ "Liu", "Kangzhe", "" ], [ "Ma", "Jianghong", "" ], [ "Feng", "Shanshan", "" ], [ "Zhang", "Haijun", "" ], [ "Zhang", "Zhao", "" ] ]
new_dataset
0.956053
2308.15841
Johannes Zirngibl
Johannes Zirngibl, Florian Gebauer, Patrick Sattler, Markus Sosnowski, Georg Carle
QUIC Library Hunter: Identifying Server Libraries Across the Internet
preprint
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The new QUIC protocol can be implemented in user space, and various implementations already exist. While they follow the same specification and are generally interoperable, differences in performance, functionality, and also security (e.g., due to bugs) can be expected. Therefore, knowledge about the implementation of an endpoint on the Internet can help researchers, operators, and users to better analyze connections, evaluations, and findings. We provide an approach to identify the libraries used by QUIC servers based on CONNECTION_CLOSE frames and transport parameter orders. We apply our methodology to Internet-wide scans and identify at least one deployment for 18 QUIC libraries. In total, we can identify the library of 8.8 M IPv4 and 2.5 M IPv6 addresses.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 08:22:05 GMT" } ]
2023-08-31T00:00:00
[ [ "Zirngibl", "Johannes", "" ], [ "Gebauer", "Florian", "" ], [ "Sattler", "Patrick", "" ], [ "Sosnowski", "Markus", "" ], [ "Carle", "Georg", "" ] ]
new_dataset
0.995114
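The identification signal described in the abstract above, CONNECTION_CLOSE behaviour plus the order of transport parameters, amounts to fingerprint matching. A schematic sketch follows; the signatures here are made up, whereas the real ones must be learned from controlled scans of servers running known libraries:

```python
# Hypothetical fingerprints: library -> leading transport-parameter order.
SIGNATURES = {
    "quiche": ("max_idle_timeout", "initial_max_data", "initial_max_streams_bidi"),
    "msquic": ("initial_max_data", "max_idle_timeout", "initial_max_streams_bidi"),
}

def identify(observed_order: tuple[str, ...]) -> str:
    for library, order in SIGNATURES.items():
        if observed_order[: len(order)] == order:
            return library
    return "unknown"

print(identify(("max_idle_timeout", "initial_max_data",
                "initial_max_streams_bidi", "ack_delay_exponent")))  # quiche
```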
2308.15842
Sujoy Bhore
Sayan Bandyapadhyay, Aritra Banik, Sujoy Bhore
On Colorful Vertex and Edge Cover Problems
null
null
null
null
cs.DS cs.CG
http://creativecommons.org/licenses/by/4.0/
In this paper, we study two generalizations of Vertex Cover and Edge Cover, namely Colorful Vertex Cover and Colorful Edge Cover. In the Colorful Vertex Cover problem, given an $n$-vertex edge-colored graph $G$ with colors from $\{1, \ldots, \omega\}$ and coverage requirements $r_1, r_2, \ldots, r_\omega$, the goal is to find a minimum-sized set of vertices that are incident on at least $r_i$ edges of color $i$, for each $1 \le i \le \omega$, i.e., we need to cover at least $r_i$ edges of color $i$. Colorful Edge Cover is similar to Colorful Vertex Cover, except here we are given a vertex-colored graph and the goal is to cover at least $r_i$ vertices of color $i$, for each $1 \le i \le \omega$, by a minimum-sized set of edges. These problems have several applications in fair covering and hitting of geometric set systems involving points and lines that are divided into multiple groups. Here, fairness ensures that the coverage (resp. hitting) requirement of every group is fully satisfied. We obtain a $(2+\epsilon)$-approximation for the Colorful Vertex Cover problem in time $n^{O(\omega/\epsilon)}$. Thus, for a constant number of colors, the problem admits a $(2+\epsilon)$-approximation in polynomial time. Next, for the Colorful Edge Cover problem, we design an $O(\omega n^3)$ time exact algorithm, via a chain of reductions to a matching problem. For all intermediate problems in this chain of reductions, we design polynomial-time algorithms, which might be of independent interest.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 08:27:09 GMT" } ]
2023-08-31T00:00:00
[ [ "Bandyapadhyay", "Sayan", "" ], [ "Banik", "Aritra", "" ], [ "Bhore", "Sujoy", "" ] ]
new_dataset
0.984031
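The coverage requirement in the abstract above can be written as a small covering program; this integer-program view uses our notation and is only a restatement of the problem, not the paper's algorithm:

```latex
\min \sum_{v \in V} x_v
\quad \text{s.t.} \quad
\sum_{e \in E_i} z_e \ge r_i \ \ (1 \le i \le \omega), \qquad
z_e \le \sum_{v \in e} x_v \ \ (e \in E), \qquad
x_v, z_e \in \{0, 1\},
```

where $E_i$ is the set of edges of color $i$ and $z_e$ indicates that edge $e$ is covered by a chosen endpoint.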
2308.15846
Yifan Xu
Yifan Xu, Mengdan Zhang, Xiaoshan Yang, Changsheng Xu
Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we explore, for the first time, helpful multi-modal contextual knowledge for understanding novel categories in open-vocabulary object detection (OVD). The multi-modal contextual knowledge stands for the joint relationship across regions and words. However, it is challenging to incorporate such multi-modal contextual knowledge into OVD. The reason is that previous detection frameworks fail to jointly model multi-modal contextual knowledge, as object detectors only support vision inputs and no caption description is provided at test time. To this end, we propose a multi-modal contextual knowledge distillation framework, MMC-Det, to transfer the learned contextual knowledge from a teacher fusion transformer with diverse multi-modal masked language modeling (D-MLM) to a student detector. The diverse multi-modal masked language modeling is realized by an object divergence constraint upon traditional multi-modal masked language modeling (MLM), in order to extract fine-grained region-level visual contexts, which are vital to object detection. Extensive experiments performed upon various detection datasets show the effectiveness of our multi-modal context learning strategy, where our approach well outperforms the recent state-of-the-art methods.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 08:33:13 GMT" } ]
2023-08-31T00:00:00
[ [ "Xu", "Yifan", "" ], [ "Zhang", "Mengdan", "" ], [ "Yang", "Xiaoshan", "" ], [ "Xu", "Changsheng", "" ] ]
new_dataset
0.964102
2308.15870
EPTCS
Christian Hatschka (TU Vienna), Agata Ciabattoni (TU Vienna), Thomas Eiter (TU Vienna)
Deontic Paradoxes in ASP with Weak Constraints
In Proceedings ICLP 2023, arXiv:2308.14898
EPTCS 385, 2023, pp. 367-380
10.4204/EPTCS.385.39
null
cs.LO cs.AI cs.CY cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of powerful AI technology for a range of applications that are sensitive to legal, social, and ethical norms demands decision-making support in presence of norms and regulations. Normative reasoning is the realm of deontic logics, that are challenged by well-known benchmark problems (deontic paradoxes), and lack efficient computational tools. In this paper, we use Answer Set Programming (ASP) for addressing these shortcomings and showcase how to encode and resolve several well-known deontic paradoxes utilizing weak constraints. By abstracting and generalizing this encoding, we present a methodology for translating normative systems in ASP with weak constraints. This methodology is applied to "ethical" versions of Pac-man, where we obtain a comparable performance with related works, but ethically preferable results.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 08:56:54 GMT" } ]
2023-08-31T00:00:00
[ [ "Hatschka", "Christian", "", "TU Vienna" ], [ "Ciabattoni", "Agata", "", "TU Vienna" ], [ "Eiter", "Thomas", "", "TU Vienna" ] ]
new_dataset
0.974035
2308.15893
EPTCS
Theresa Swift (Johns Hopkins Applied Physics Lab), Carl Andersen
The Janus System: Multi-paradigm Programming in Prolog and Python
In Proceedings ICLP 2023, arXiv:2308.14898
EPTCS 385, 2023, pp. 241-255
10.4204/EPTCS.385.24
null
cs.PL cs.LO
http://creativecommons.org/licenses/by/4.0/
Python and Prolog express different programming paradigms, with different strengths. Python is wildly popular because it is well-structured, easy to use, and mixes well with thousands of scientific and machine learning programs written in C. Prolog's logic-based approach provides powerful reasoning capabilities, especially when combined with constraint evaluation, probabilistic reasoning, well-founded negation, and other advances. Both languages have commonalities as well: both are usually written in C, both are dynamically typed, and both use data structures based on a small number of recursive types. This paper describes the design and implementation of Janus, a system that tightly combines Prolog and Python into a single process. Janus bi-translates data structures and offers performance of many hundreds of thousands of round-trip inter-language calls per second. Although Janus is still new, it has been used in commercial applications including natural language processing, visual query answering and robotic automation. Janus was developed for XSB, but porting Janus code to a second Prolog has been straightforward, indicating that Janus is a tool that other Prologs may easily adopt.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 09:07:05 GMT" } ]
2023-08-31T00:00:00
[ [ "Swift", "Theresa", "", "Johns Hopkins Applied Physics Lab" ], [ "Andersen", "Carl", "" ] ]
new_dataset
0.952637
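A small usage sketch of the tight Prolog-Python coupling the abstract above describes, written against the SWI-Prolog port of Janus (`janus_swi`); the paper's original target is XSB, so exact function names may differ there:

```python
import janus_swi as janus  # SWI-Prolog port of the Janus interface

# Load a tiny knowledge base from a string, then query it from Python;
# Janus bi-translates Prolog bindings into Python dicts.
janus.consult("kb", "parent(tom, bob). parent(bob, ann).")
row = janus.query_once("parent(tom, Who)")
print(row["Who"])  # -> bob

for row in janus.query("parent(X, Y)"):  # iterate over all solutions
    print(row["X"], "->", row["Y"])
```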
2308.15917
Konstantin Shibin
Konstantin Shibin, Maksim Jenihhin, Artur Jutman, Sergei Devadze, Anton Tsertov
On-Chip Sensors Data Collection and Analysis for SoC Health Management
6 pages, 3 figures. This paper is accepted at the 36th IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT) 2023
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data produced by on-chip sensors in modern SoCs contains a large amount of information such as occurring faults, aging status, accumulated radiation dose, performance characteristics, and environmental and other operational parameters. Such information provides insight into the overall health of a system's hardware as well as the operability of individual modules. This gives a chance to mitigate faults and avoid using faulty units, thus enabling hardware health management. Raw data from embedded sensors cannot be immediately used to perform health management tasks. In most cases, the information about faults that have occurred needs to be analyzed taking into account the history of previously reported fault events and other collected statistics. For this purpose, we propose a special structure called the Health Map (HM) that holds information about functional resources and occurring faults, and maps the relationships between them. In addition, we propose algorithms for aggregation and classification of data received from on-chip sensors. The proposed Health Map contains detailed information on a particular system level (e.g., module, SoC, board) that can be compiled into a summary of hardware health status, which in turn enables distributed hierarchical health management by using this information at a higher level of the system hierarchy, thus increasing the system's availability and effective lifetime.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 09:44:28 GMT" } ]
2023-08-31T00:00:00
[ [ "Shibin", "Konstantin", "" ], [ "Jenihhin", "Maksim", "" ], [ "Jutman", "Artur", "" ], [ "Devadze", "Sergei", "" ], [ "Tsertov", "Anton", "" ] ]
new_dataset
0.96082
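One way to picture the proposed Health Map is as a per-module ledger of fault events with a simple aggregation rule; the structure and threshold below are illustrative assumptions, not the paper's specification:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ModuleHealth:
    fault_counts: dict = field(default_factory=lambda: defaultdict(int))

    def report(self, fault_type: str) -> None:
        self.fault_counts[fault_type] += 1

    def status(self, permanent_threshold: int = 3) -> str:
        # Assumed rule: repeated faults of one type suggest a permanent defect.
        if any(n >= permanent_threshold for n in self.fault_counts.values()):
            return "degraded"
        return "healthy" if not self.fault_counts else "suspect"

health_map = defaultdict(ModuleHealth)
health_map["soc0/core2"].report("ecc_correctable")
print(health_map["soc0/core2"].status())  # -> suspect
```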
2308.15939
Hanqiu Deng
Hanqiu Deng, Zhaoxiang Zhang, Jinan Bao, Xingyu Li
AnoVL: Adapting Vision-Language Models for Unified Zero-shot Anomaly Localization
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Contrastive Language-Image Pre-training (CLIP) models have shown promising performance on zero-shot visual recognition tasks by learning visual representations under natural language supervision. Recent studies attempt the use of CLIP to tackle zero-shot anomaly detection by matching images with normal and abnormal state prompts. However, since CLIP focuses on building correspondence between paired text prompts and global image-level representations, the lack of patch-level vision-to-text alignment limits its capability for precise visual anomaly localization. In this work, we introduce a training-free adaptation (TFA) framework of CLIP for zero-shot anomaly localization. In the visual encoder, we introduce a training-free value-wise attention mechanism to extract intrinsic local tokens of CLIP for patch-level local description. From the perspective of text supervision, we particularly design a unified domain-aware contrastive state prompting template. On top of the proposed TFA, we further introduce a test-time adaptation (TTA) mechanism to refine anomaly localization results, where a layer of trainable parameters in the adapter is optimized using TFA's pseudo-labels and synthetic noise-corrupted tokens. With both TFA and TTA adaptation, we significantly exploit the potential of CLIP for zero-shot anomaly localization and demonstrate the effectiveness of our proposed methods on various datasets.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 10:35:36 GMT" } ]
2023-08-31T00:00:00
[ [ "Deng", "Hanqiu", "" ], [ "Zhang", "Zhaoxiang", "" ], [ "Bao", "Jinan", "" ], [ "Li", "Xingyu", "" ] ]
new_dataset
0.959517
2308.15952
Anton Alekseev
Anton Alekseev, Sergey I. Nikolenko, Gulnara Kabaeva
Benchmarking Multilabel Topic Classification in the Kyrgyz Language
Accepted to AIST 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Kyrgyz is a very underrepresented language in terms of modern natural language processing resources. In this work, we present a new public benchmark for topic classification in Kyrgyz, introducing a dataset based on collected and annotated data from the news site 24.KG and presenting several baseline models for news classification in the multilabel setting. We train and evaluate both classical statistical and neural models, reporting the scores, discussing the results, and proposing directions for future work.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 11:02:26 GMT" } ]
2023-08-31T00:00:00
[ [ "Alekseev", "Anton", "" ], [ "Nikolenko", "Sergey I.", "" ], [ "Kabaeva", "Gulnara", "" ] ]
new_dataset
0.99963
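A baseline of the classical-statistical kind the abstract above mentions can be sketched with scikit-learn; the two toy training texts are made up, and the 24.KG corpus itself must be obtained separately:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = ["токтом кабыл алынды", "оюн 2:1 эсебинде аяктады"]  # toy examples
labels = [["politics"], ["sport"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # multilabel indicator matrix

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression()))
model.fit(texts, Y)
pred = model.predict(["жаңы мыйзам долбоору талкууланды"])
print(mlb.inverse_transform(pred))
```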
2308.15964
B\'erenger Bramas
Paul Cardosi, B\'erenger Bramas
Specx: a C++ task-based runtime system for heterogeneous distributed architectures
Research report. https://gitlab.inria.fr/bramas/specx
null
null
null
cs.DC cs.SE
http://creativecommons.org/licenses/by/4.0/
Parallelization is needed everywhere, from laptops and mobile phones to supercomputers. Among parallel programming models, task-based programming has demonstrated a powerful potential and is widely used in high-performance scientific computing. Not only does it allow for efficient parallelization across distributed heterogeneous computing nodes, but it also allows for elegant source code structuring by describing hardware-independent algorithms. In this paper, we present Specx, a task-based runtime system written in modern C++. Specx supports distributed heterogeneous computing by simultaneously exploiting CPUs and GPUs (CUDA/HIP) and incorporating communication into the task graph. We describe the specificities of Specx and demonstrate its potential by running parallel applications.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 11:41:30 GMT" } ]
2023-08-31T00:00:00
[ [ "Cardosi", "Paul", "" ], [ "Bramas", "Bérenger", "" ] ]
new_dataset
0.990765
2308.15985
Jianwu Fang
Jianwu Fang, Jiahuan Qiao, Jianru Xue, and Zhengguo Li
Vision-Based Traffic Accident Detection and Anticipation: A Survey
accepted in IEEE Transactions on Circuits and Systems for Video Technology; 16 pages, 155 references
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic accident detection and anticipation is an obstinate road safety problem to which painstaking efforts have been devoted. With the rapid growth of video data, Vision-based Traffic Accident Detection and Anticipation (Vision-TAD and Vision-TAA) have become the last-mile problem for safe driving and surveillance safety. However, the long-tailed, unbalanced, highly dynamic, complex, and uncertain properties of traffic accidents make Vision-TAD and Vision-TAA Out-of-Distribution (OOD) problems. Current AI development should focus on these OOD but important problems. What has been done for Vision-TAD and Vision-TAA? What directions should we focus on in the future? A comprehensive survey is important to answer these questions. We present the first survey on Vision-TAD in the deep learning era and the first-ever survey for Vision-TAA. The pros and cons of each research prototype are discussed in detail during the investigation. In addition, we provide a critical review of 31 publicly available benchmarks and related evaluation metrics. Through this survey, we want to spawn new insights and open possible trends for Vision-TAD and Vision-TAA tasks.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 12:13:41 GMT" } ]
2023-08-31T00:00:00
[ [ "Fang", "Jianwu", "" ], [ "Qiao", "iahuan", "" ], [ "Xue", "Jianru", "" ], [ "Li", "Zhengguo", "" ] ]
new_dataset
0.998613
2308.15991
Yinda Xu
Yinda Xu, Lidong Yu
DRL-Based Trajectory Tracking for Motion-Related Modules in Autonomous Driving
Technical report
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous driving systems are always built on motion-related modules such as the planner and the controller. An accurate and robust trajectory tracking method is indispensable for these motion-related modules as a primitive routine. Current methods often make strong assumptions about the model such as the context and the dynamics, which are not robust enough to deal with the changing scenarios in a real-world system. In this paper, we propose a Deep Reinforcement Learning (DRL)-based trajectory tracking method for the motion-related modules in autonomous driving systems. The representation learning ability of DL and the exploration nature of RL bring strong robustness and improve accuracy. Meanwhile, it enhances versatility by running the trajectory tracking in a model-free and data-driven manner. Through extensive experiments, we demonstrate both the efficiency and effectiveness of our method compared to current methods.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 12:24:30 GMT" } ]
2023-08-31T00:00:00
[ [ "Xu", "Yinda", "" ], [ "Yu", "Lidong", "" ] ]
new_dataset
0.996981
2308.16052
Thomas H. Weisswange
Thomas H. Weisswange, Joel B. Schwartz, Aaron J. Horowitz, Jens Schm\"udderich
Telepresence Lantern -- Designing an Immersive Video-Mediated Communication Device for Older Adults
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present the Telepresence Lantern concept, developed to provide opportunities for older adults to stay in contact with remote family and friends. It provides a new approach to video-mediated communication, designed to facilitate natural and ambient interactions with simplified call setup. Video communication is an established way to enhance social connectedness, but traditional approaches create a high friction to frequent connection due to, for example, technological barriers. Through interactive sessions with older adult users, we created design and function prototypes to suit their needs and preferences. The main features of our design are a curved, wide field-of-view screen and corresponding camera and sound setup, and the affordance to easily move the device from room-to-room. An interactive user session with a fully functional prototype validated the potential of this concept for improving communication among older adults and their families.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 14:19:09 GMT" } ]
2023-08-31T00:00:00
[ [ "Weisswange", "Thomas H.", "" ], [ "Schwartz", "Joel B.", "" ], [ "Horowitz", "Aaron J.", "" ], [ "Schmüdderich", "Jens", "" ] ]
new_dataset
0.998962
2308.16053
Yu Zhang
Yu Zhang, Ruike Jiang, Liwenhan Xie, Yuheng Zhao, Can Liu, Tianhong Ding, Siming Chen, Xiaoru Yuan
OldVisOnline: Curating a Dataset of Historical Visualizations
Accepted to IEEE VIS 2023
null
null
null
cs.HC cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing adoption of digitization, more and more historical visualizations created hundreds of years ago are accessible in digital libraries online. It provides a unique opportunity for visualization and history research. Meanwhile, there is no large-scale digital collection dedicated to historical visualizations. The visualizations are scattered in various collections, which hinders retrieval. In this study, we curate the first large-scale dataset dedicated to historical visualizations. Our dataset comprises 13K historical visualization images with corresponding processed metadata from seven digital libraries. In curating the dataset, we propose a workflow to scrape and process heterogeneous metadata. We develop a semi-automatic labeling approach to distinguish visualizations from other artifacts. Our dataset can be accessed with OldVisOnline, a system we have built to browse and label historical visualizations. We discuss our vision of usage scenarios and research opportunities with our dataset, such as textual criticism for historical visualizations. Drawing upon our experience, we summarize recommendations for future efforts to improve our dataset.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 14:19:31 GMT" } ]
2023-08-31T00:00:00
[ [ "Zhang", "Yu", "" ], [ "Jiang", "Ruike", "" ], [ "Xie", "Liwenhan", "" ], [ "Zhao", "Yuheng", "" ], [ "Liu", "Can", "" ], [ "Ding", "Tianhong", "" ], [ "Chen", "Siming", "" ], [ "Yuan", "Xiaoru", "" ] ]
new_dataset
0.999796
2308.16055
Yun-Cheng Wang
Yun-Cheng Wang, Xiou Ge, Bin Wang, C.-C. Jay Kuo
AsyncET: Asynchronous Learning for Knowledge Graph Entity Typing with Auxiliary Relations
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Knowledge graph entity typing (KGET) is a task to predict the missing entity types in knowledge graphs (KG). Previously, KG embedding (KGE) methods tried to solve the KGET task by introducing an auxiliary relation, 'hasType', to model the relationship between entities and their types. However, a single auxiliary relation has limited expressiveness for diverse entity-type patterns. We improve the expressiveness of KGE methods by introducing multiple auxiliary relations in this work. Similar entity types are grouped to reduce the number of auxiliary relations and improve their capability to model entity-type patterns with different granularities. With the presence of multiple auxiliary relations, we propose a method adopting an Asynchronous learning scheme for Entity Typing, named AsyncET, which updates the entity and type embeddings alternately to keep the learned entity embeddings up-to-date and informative for entity type prediction. Experiments are conducted on two commonly used KGET datasets to show that the performance of KGE methods on the KGET task can be substantially improved by the proposed multiple auxiliary relations and asynchronous embedding learning. Furthermore, our method has a significant advantage over state-of-the-art methods in model size and time complexity.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 14:24:16 GMT" } ]
2023-08-31T00:00:00
[ [ "Wang", "Yun-Cheng", "" ], [ "Ge", "Xiou", "" ], [ "Wang", "Bin", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
new_dataset
0.968414
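The asynchronous scheme the abstract above outlines, alternating updates of entity and type embeddings so that each phase sees the other side's freshest values, can be pictured on a toy objective; the objective and assignments below are placeholders, not the paper's auxiliary-relation training loss:

```python
import numpy as np

rng = np.random.default_rng(0)
entities = rng.normal(size=(100, 32))
types = rng.normal(size=(20, 32))
type_of = rng.integers(0, 20, size=100)  # toy entity -> type assignment
lr = 0.1

for step in range(50):
    # Phase 1: update entity embeddings with type embeddings held fixed.
    entities -= lr * (entities - types[type_of])
    # Phase 2: update type embeddings against the freshly updated entities.
    for t in range(types.shape[0]):
        members = entities[type_of == t]
        if len(members):
            types[t] -= lr * (types[t] - members.mean(axis=0))
```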
2308.16060
Raphael Schumann
Michael Staniek and Raphael Schumann and Maike Z\"ufle and Stefan Riezler
Text-to-OverpassQL: A Natural Language Interface for Complex Geodata Querying of OpenStreetMap
null
null
null
null
cs.CL cs.AI cs.CY cs.DB cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Text-to-OverpassQL, a task designed to facilitate a natural language interface for querying geodata from OpenStreetMap (OSM). The Overpass Query Language (OverpassQL) allows users to formulate complex database queries and is widely adopted in the OSM ecosystem. Generating Overpass queries from natural language input serves multiple use-cases. It enables novice users to utilize OverpassQL without prior knowledge, assists experienced users with crafting advanced queries, and enables tool-augmented large language models to access information stored in the OSM database. In order to assess the performance of current sequence generation models on this task, we propose OverpassNL, a dataset of 8,352 queries with corresponding natural language inputs. We further introduce task-specific evaluation metrics and ground the evaluation of the Text-to-OverpassQL task by executing the queries against the OSM database. We establish strong baselines by finetuning sequence-to-sequence models and adapting large language models with in-context examples. The detailed evaluation reveals strengths and weaknesses of the considered learning strategies, laying the foundations for further research into the Text-to-OverpassQL task.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 14:33:25 GMT" } ]
2023-08-31T00:00:00
[ [ "Staniek", "Michael", "" ], [ "Schumann", "Raphael", "" ], [ "Züfle", "Maike", "" ], [ "Riezler", "Stefan", "" ] ]
new_dataset
0.987643
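To make the task concrete, here is an illustrative input-output pair of the kind OverpassNL contains (our own example, not drawn from the dataset), executed against the public Overpass endpoint:

```python
import requests

natural_language = "Find drinking water points within 500 m of Alexanderplatz, Berlin."
overpass_query = """
[out:json];
node["amenity"="drinking_water"](around:500,52.5219,13.4132);
out;
"""

resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": overpass_query}, timeout=60)
for element in resp.json().get("elements", []):
    print(element["id"], element["lat"], element["lon"])
```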
2308.16082
Sen Fang
Sen Fang, Chunyu Sui, Xuedong Zhang, Yapeng Tian
SignDiff: Learning Diffusion Models for American Sign Language Production
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of Sign Language Production (SLP) lacked a large-scale, pre-trained model based on deep learning for continuous American Sign Language (ASL) production in the past decade. This limitation hampers communication for all individuals with disabilities relying on ASL. To address this issue, we undertook the secondary development and utilization of How2Sign, one of the largest publicly available ASL datasets. Despite its significance, prior researchers in the field of sign language have not effectively employed this corpus due to the intricacies involved in American Sign Language Production (ASLP). To conduct large-scale ASLP, we propose SignDiff, a dual-condition diffusion pre-training model built on the latest work in related fields, which can generate human sign language speakers from a skeleton pose. SignDiff has a novel Frame Reinforcement Network called FR-Net, similar to dense human pose estimation work, which enhances the correspondence between text lexical symbols and sign language dense pose frames and reduces the occurrence of multiple fingers in the diffusion model. In addition, our ASLP method proposes two new improved modules and a new loss function to improve the accuracy and quality of sign language skeletal postures and enhance the model's ability to train on large-scale data. We propose the first baseline for ASL production and report BLEU-4 scores of 17.19 and 12.85 on the How2Sign dev/test sets. We also evaluated our model on the previous mainstream dataset PHOENIX14T, where the main experiments achieved SOTA results. In addition, our image quality far exceeds all previous results by 10 percentage points on the SSIM indicator. Finally, we conducted ablation studies and qualitative evaluations for discussion.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 15:14:56 GMT" } ]
2023-08-31T00:00:00
[ [ "Fang", "Sen", "" ], [ "Sui", "Chunyu", "" ], [ "Zhang", "Xuedong", "" ], [ "Tian", "Yapeng", "" ] ]
new_dataset
0.95483
2308.16182
Henghui Ding
Shuting He, Henghui Ding, Chang Liu, Xudong Jiang
GREC: Generalized Referring Expression Comprehension
GREC Technical Report, Project Page: https://henghuiding.github.io/GRES
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of Classic Referring Expression Comprehension (REC) is to produce a bounding box corresponding to the object mentioned in a given textual description. Commonly, existing datasets and techniques in classic REC are tailored for expressions that pertain to a single target, meaning a sole expression is linked to one specific object. Expressions that refer to multiple targets or involve no specific target have not been taken into account. This constraint hinders the practical applicability of REC. This study introduces a new benchmark termed as Generalized Referring Expression Comprehension (GREC). This benchmark extends the classic REC by permitting expressions to describe any number of target objects. To achieve this goal, we have built the first large-scale GREC dataset named gRefCOCO. This dataset encompasses a range of expressions: those referring to multiple targets, expressions with no specific target, and the single-target expressions. The design of GREC and gRefCOCO ensures smooth compatibility with classic REC. The proposed gRefCOCO dataset, a GREC method implementation code, and GREC evaluation code are available at https://github.com/henghuiding/gRefCOCO.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 17:58:50 GMT" } ]
2023-08-31T00:00:00
[ [ "He", "Shuting", "" ], [ "Ding", "Henghui", "" ], [ "Liu", "Chang", "" ], [ "Jiang", "Xudong", "" ] ]
new_dataset
0.987949
2308.16184
Junlong Cheng
Junlong Cheng, Jin Ye, Zhongying Deng, Jianpin Chen, Tianbin Li, Haoyu Wang, Yanzhou Su, Ziyan Huang, Jilong Chen, Lei Jiang, Hui Sun, Junjun He, Shaoting Zhang, Min Zhu, Yu Qiao
SAM-Med2D
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Segment Anything Model (SAM) represents a state-of-the-art research advancement in natural image segmentation, achieving impressive results with input prompts such as points and bounding boxes. However, our evaluation and recent research indicate that directly applying the pretrained SAM to medical image segmentation does not yield satisfactory performance. This limitation primarily arises from the significant domain gap between natural images and medical images. To bridge this gap, we introduce SAM-Med2D, the most comprehensive study to date on applying SAM to medical 2D images. Specifically, we first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets, constructing a large-scale medical image segmentation dataset encompassing various modalities and objects. Then, we comprehensively fine-tune SAM on this dataset and turn it into SAM-Med2D. Unlike previous methods that only adopt bounding box or point prompts as the interactive segmentation approach, we adapt SAM to medical image segmentation through more comprehensive prompts involving bounding boxes, points, and masks. We additionally fine-tune the encoder and decoder of the original SAM to obtain a well-performing SAM-Med2D, leading to the most comprehensive fine-tuning strategy to date. Finally, we conducted a comprehensive evaluation and analysis to investigate the performance of SAM-Med2D in medical image segmentation across various modalities, anatomical structures, and organs. Concurrently, we validated the generalization capability of SAM-Med2D on 9 datasets from the MICCAI 2023 challenge. Overall, our approach demonstrated significantly superior performance and generalization capability compared to SAM.
[ { "version": "v1", "created": "Wed, 30 Aug 2023 17:59:02 GMT" } ]
2023-08-31T00:00:00
[ [ "Cheng", "Junlong", "" ], [ "Ye", "Jin", "" ], [ "Deng", "Zhongying", "" ], [ "Chen", "Jianpin", "" ], [ "Li", "Tianbin", "" ], [ "Wang", "Haoyu", "" ], [ "Su", "Yanzhou", "" ], [ "Huang", "Ziyan", "" ], [ "Chen", "Jilong", "" ], [ "Jiang", "Lei", "" ], [ "Sun", "Hui", "" ], [ "He", "Junjun", "" ], [ "Zhang", "Shaoting", "" ], [ "Zhu", "Min", "" ], [ "Qiao", "Yu", "" ] ]
new_dataset
0.997066
2110.01005
Tamjid Al Rahat
Tamjid Al Rahat, Yu Feng, Yuan Tian
Cerberus: Query-driven Scalable Vulnerability Detection in OAuth Service Provider Implementations
Appeared in ACM Conference on Computer and Communications Security (CCS 2022). Please cite the conference version
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
10.1145/3548606.3559381
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
OAuth protocols have been widely adopted to simplify user authentication and service authorization for third-party applications. However, little effort has been devoted to automatically checking the security of the libraries that service providers widely use. In this paper, we formalize the OAuth specifications and security best practices, and design Cerberus, an automated static analyzer, to find logical flaws and identify vulnerabilities in the implementation of OAuth service provider libraries. To efficiently detect security violations in a large codebase of service provider implementations, Cerberus employs a query-driven algorithm for answering queries about OAuth specifications. We demonstrate the effectiveness of Cerberus by evaluating it on datasets of popular OAuth libraries with millions of downloads. Among these high-profile libraries, Cerberus has identified 47 vulnerabilities from ten classes of logical flaws, 24 of which were previously unknown. We were acknowledged by the developers of eight libraries and received three accepted CVEs.
[ { "version": "v1", "created": "Sun, 3 Oct 2021 13:43:38 GMT" }, { "version": "v2", "created": "Mon, 16 May 2022 01:52:13 GMT" }, { "version": "v3", "created": "Thu, 27 Oct 2022 03:49:02 GMT" }, { "version": "v4", "created": "Tue, 7 Mar 2023 03:48:54 GMT" }, { "version": "v5", "created": "Tue, 29 Aug 2023 09:08:27 GMT" } ]
2023-08-30T00:00:00
[ [ "Rahat", "Tamjid Al", "" ], [ "Feng", "Yu", "" ], [ "Tian", "Yuan", "" ] ]
new_dataset
0.966506
2203.09065
Meida Chen
Meida Chen, Qingyong Hu, Zifan Yu, Hugues Thomas, Andrew Feng, Yu Hou, Kyle McCullough, Fengbo Ren, Lucio Soibelman
STPLS3D: A Large-Scale Synthetic and Real Aerial Photogrammetry 3D Point Cloud Dataset
null
null
null
https://bmvc2022.mpi-inf.mpg.de/0429.pdf
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Although various 3D datasets with different functions and scales have been proposed recently, it remains challenging for individuals to complete the whole pipeline of large-scale data collection, sanitization, and annotation. Moreover, the created datasets usually suffer from extremely imbalanced class distribution or partial low-quality data samples. Motivated by this, we explore the procedurally synthetic 3D data generation paradigm to equip individuals with the full capability of creating large-scale annotated photogrammetry point clouds. Specifically, we introduce a synthetic aerial photogrammetry point clouds generation pipeline that takes full advantage of open geospatial data sources and off-the-shelf commercial packages. Unlike generating synthetic data in virtual games, where the simulated data usually have limited gaming environments created by artists, the proposed pipeline simulates the reconstruction process of the real environment by following the same UAV flight pattern on different synthetic terrain shapes and building densities, which ensure similar quality, noise pattern, and diversity with real data. In addition, the precise semantic and instance annotations can be generated fully automatically, avoiding the expensive and time-consuming manual annotation. Based on the proposed pipeline, we present a richly-annotated synthetic 3D aerial photogrammetry point cloud dataset, termed STPLS3D, with more than 16 $km^2$ of landscapes and up to 18 fine-grained semantic categories. For verification purposes, we also provide a parallel dataset collected from four areas in the real environment. Extensive experiments conducted on our datasets demonstrate the effectiveness and quality of the proposed synthetic dataset.
[ { "version": "v1", "created": "Thu, 17 Mar 2022 03:50:40 GMT" }, { "version": "v2", "created": "Thu, 13 Oct 2022 17:56:28 GMT" }, { "version": "v3", "created": "Fri, 14 Oct 2022 01:35:37 GMT" } ]
2023-08-30T00:00:00
[ [ "Chen", "Meida", "" ], [ "Hu", "Qingyong", "" ], [ "Yu", "Zifan", "" ], [ "Thomas", "Hugues", "" ], [ "Feng", "Andrew", "" ], [ "Hou", "Yu", "" ], [ "McCullough", "Kyle", "" ], [ "Ren", "Fengbo", "" ], [ "Soibelman", "Lucio", "" ] ]
new_dataset
0.98502
2210.00429
Chee-Kheng Chng Ck
Chee-Kheng Chng, Alvaro Parra Bustos, Benjamin McCarthy, Tat-Jun Chin
ROSIA: Rotation-Search-Based Star Identification Algorithm
21 pages, 16 figures, Accepted to IEEE Transactions on Aerospace and Electronic Systems
null
10.1109/TAES.2023.3279353
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a rotation-search-based approach for addressing the star identification (Star-ID) problem. The proposed algorithm, ROSIA, is a heuristics-free algorithm that seeks the optimal rotation that maximally aligns the input and catalog stars in their respective coordinates. ROSIA searches the rotation space systematically with the Branch-and-Bound (BnB) method. Crucially affecting the runtime feasibility of ROSIA is the upper bound function that prioritizes the search space. In this paper, we make a theoretical contribution by proposing a tight (provable) upper bound function that enables a 400x speed-up compared to an existing formulation. Coupling the bounding function with an efficient evaluation scheme that leverages stereographic projection and the R-tree data structure, ROSIA achieves feasible operational speed on embedded processors with state-of-the-art performances under different sources of noise. The source code of ROSIA is available at https://github.com/ckchng/ROSIA.
[ { "version": "v1", "created": "Sun, 2 Oct 2022 05:34:19 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 02:32:22 GMT" } ]
2023-08-30T00:00:00
[ [ "Chng", "Chee-Kheng", "" ], [ "Bustos", "Alvaro Parra", "" ], [ "McCarthy", "Benjamin", "" ], [ "Chin", "Tat-Jun", "" ] ]
new_dataset
0.999244
2211.14308
Guillaume Le Moing
Guillaume Le Moing and Jean Ponce and Cordelia Schmid
WALDO: Future Video Synthesis using Object Layer Decomposition and Parametric Flow Prediction
Accepted to ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents WALDO (WArping Layer-Decomposed Objects), a novel approach to the prediction of future video frames from past ones. Individual images are decomposed into multiple layers combining object masks and a small set of control points. The layer structure is shared across all frames in each video to build dense inter-frame connections. Complex scene motions are modeled by combining parametric geometric transformations associated with individual layers, and video synthesis is broken down into discovering the layers associated with past frames, predicting the corresponding transformations for upcoming ones and warping the associated object regions accordingly, and filling in the remaining image parts. Extensive experiments on multiple benchmarks including urban videos (Cityscapes and KITTI) and videos featuring nonrigid motions (UCF-Sports and H3.6M), show that our method consistently outperforms the state of the art by a significant margin in every case. Code, pretrained models, and video samples synthesized by our approach can be found in the project webpage https://16lemoing.github.io/waldo.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 18:59:46 GMT" }, { "version": "v2", "created": "Tue, 21 Mar 2023 15:22:30 GMT" }, { "version": "v3", "created": "Tue, 29 Aug 2023 07:58:49 GMT" } ]
2023-08-30T00:00:00
[ [ "Moing", "Guillaume Le", "" ], [ "Ponce", "Jean", "" ], [ "Schmid", "Cordelia", "" ] ]
new_dataset
0.998466
2212.01241
Cheng Xu
Cheng Xu and Xiaofeng Hou and Jiacheng Liu and Chao Li and Tianhao Huang and Xiaozhi Zhu and Mo Niu and Lingyu Sun and Peng Tang and Tongqiao Xu and Kwang-Ting Cheng and Minyi Guo
MMBench: Benchmarking End-to-End Multi-modal DNNs and Understanding Their Hardware-Software Implications
null
null
null
null
cs.PF
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The explosive growth of various types of big data and advances in AI technologies have catalyzed a new type of workload called multi-modal DNNs. Multi-modal DNNs are capable of interpreting and reasoning about information from multiple modalities, making them more applicable to real-world AI scenarios. In recent research, multi-modal DNNs have outperformed the best uni-modal DNNs in a wide range of distributed computing applications from traditional multimedia systems to emerging autonomous edge systems. However, despite their importance and superiority, very limited research attention has been devoted to understanding the characteristics of multi-modal DNNs and their implications on current computing software/hardware platforms. Existing benchmarks either target uni-modal DNNs or only focus on the algorithm characteristics of multi-modal DNNs. Representative benchmark suites that provide comprehensive system- and architecture-level analysis of multi-modal networks are lacking. To advance the understanding of these multi-modal DNN workloads and facilitate related research, we present MMBench, an open-source, end-to-end benchmark suite consisting of a set of real-world multi-modal DNN workloads with relevant performance metrics for evaluation. We then use MMBench to conduct an in-depth analysis of the characteristics of multi-modal DNNs. We demonstrate their unique characteristics of clear multi-stage execution, frequent synchronization, and high heterogeneity, which distinguish them from conventional uni-modal DNNs. Finally, we conduct a case study and extend our benchmark to edge devices. We hope that our work can provide insights for future software/hardware design and optimization to underpin multi-modal DNNs on both cloud and edge computing platforms.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 15:35:04 GMT" }, { "version": "v2", "created": "Fri, 9 Dec 2022 04:31:52 GMT" }, { "version": "v3", "created": "Thu, 10 Aug 2023 06:58:16 GMT" }, { "version": "v4", "created": "Tue, 29 Aug 2023 02:41:10 GMT" } ]
2023-08-30T00:00:00
[ [ "Xu", "Cheng", "" ], [ "Hou", "Xiaofeng", "" ], [ "Liu", "Jiacheng", "" ], [ "Li", "Chao", "" ], [ "Huang", "Tianhao", "" ], [ "Zhu", "Xiaozhi", "" ], [ "Niu", "Mo", "" ], [ "Sun", "Lingyu", "" ], [ "Tang", "Peng", "" ], [ "Xu", "Tongqiao", "" ], [ "Cheng", "Kwang-Ting", "" ], [ "Guo", "Minyi", "" ] ]
new_dataset
0.998774
2301.00135
Xu Gu
Xu Gu, Yuchong Sun, Feiyue Ni, Shizhe Chen, Xihua Wang, Ruihua Song, Boyuan Li, Xiang Cao
TeViS: Translating Text Synopses to Video Storyboards
Accepted to ACM Multimedia 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A video storyboard is a roadmap for video creation which consists of shot-by-shot images to visualize key plots in a text synopsis. Creating video storyboards, however, remains challenging: it not only requires cross-modal association between high-level texts and images but also demands long-term reasoning to make transitions smooth across shots. In this paper, we propose a new task called Text synopsis to Video Storyboard (TeViS) which aims to retrieve an ordered sequence of images as the video storyboard to visualize the text synopsis. We construct a MovieNet-TeViS dataset based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes manually selected from corresponding movies by considering both relevance and cinematic coherence. To benchmark the task, we present strong CLIP-based baselines and a novel VQ-Trans. VQ-Trans first encodes text synopsis and images into a joint embedding space and uses vector quantization (VQ) to improve the visual representation. Then, it auto-regressively generates a sequence of visual features for retrieval and ordering. Experimental results demonstrate that VQ-Trans significantly outperforms prior methods and the CLIP-based baselines. Nevertheless, there is still a large gap compared to human performance, suggesting room for promising future work. The code and data are available at: \url{https://ruc-aimind.github.io/projects/TeViS/}
[ { "version": "v1", "created": "Sat, 31 Dec 2022 06:32:36 GMT" }, { "version": "v2", "created": "Mon, 13 Feb 2023 02:09:21 GMT" }, { "version": "v3", "created": "Mon, 14 Aug 2023 13:41:49 GMT" }, { "version": "v4", "created": "Tue, 29 Aug 2023 13:10:56 GMT" } ]
2023-08-30T00:00:00
[ [ "Gu", "Xu", "" ], [ "Sun", "Yuchong", "" ], [ "Ni", "Feiyue", "" ], [ "Chen", "Shizhe", "" ], [ "Wang", "Xihua", "" ], [ "Song", "Ruihua", "" ], [ "Li", "Boyuan", "" ], [ "Cao", "Xiang", "" ] ]
new_dataset
0.999846
2301.12457
Beichen Huang
Beichen Huang, Ran Cheng, Zhuozhao Li, Yaochu Jin, Kay Chen Tan
EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation
null
null
null
null
cs.NE
http://creativecommons.org/licenses/by/4.0/
Evolutionary Computation (EC), drawing inspiration from natural evolutionary processes, has solidified its place as an integral facet of Artificial Intelligence. Its unique attributes, such as adaptability and the capability to navigate vast problem spaces, have rendered it indispensable, especially in domains demanding optimization like engineering design. In today's data-driven landscape, the need for scalability in EC is more pronounced than ever, especially with the rise in complex systems and large-scale data. However, many existing EC libraries, designed for modest scales, fall short in catering to the heightened demands of modern problems. The advent of some pioneering GPU-accelerated EC libraries is a step forward, but they too grapple with limitations, particularly in terms of flexibility, computational efficiency, and architectural robustness. To address these challenges, this paper introduces EvoX: a comprehensive, scalable framework tailored for the automated, distributed, and heterogeneous execution of EC algorithms. Central to EvoX is a functional programming model that streamlines the EC algorithm development process, bolstered by a hierarchical state management strategy for efficient distributed execution. Alongside this, leveraging the capabilities of EvoX, we present a rich library of EC algorithms designed to handle a spectrum of problem-solving scenarios. Experimental results demonstrate both the superior system performance and model performance of EvoX. The code of EvoX is available at https://github.com/EMI-Group/EvoX.
[ { "version": "v1", "created": "Sun, 29 Jan 2023 15:00:16 GMT" }, { "version": "v2", "created": "Wed, 1 Feb 2023 08:31:13 GMT" }, { "version": "v3", "created": "Tue, 14 Feb 2023 15:23:57 GMT" }, { "version": "v4", "created": "Thu, 16 Feb 2023 08:43:08 GMT" }, { "version": "v5", "created": "Mon, 20 Mar 2023 07:20:22 GMT" }, { "version": "v6", "created": "Sat, 26 Aug 2023 14:27:55 GMT" }, { "version": "v7", "created": "Tue, 29 Aug 2023 05:49:35 GMT" } ]
2023-08-30T00:00:00
[ [ "Huang", "Beichen", "" ], [ "Cheng", "Ran", "" ], [ "Li", "Zhuozhao", "" ], [ "Jin", "Yaochu", "" ], [ "Tan", "Kay Chen", "" ] ]
new_dataset
0.998548
2302.10469
Xue Xinghua
Xinghua Xue, Cheng Liu, Haitong Huang, Bo Liu, Ying Wang, Bing Yang, Tao Luo, Lei Zhang, Huawei Li and Xiaowei Li
ApproxABFT: Approximate Algorithm-Based Fault Tolerance for Vision Transformers
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision Transformers (ViTs), with their outstanding performance, have become a popular backbone of deep learning models for mainstream vision tasks including classification, object detection, and segmentation. Beyond performance, reliability is also a critical metric for the adoption of ViTs in safety-critical applications such as autonomous driving and robotics. With the observation that the major computing blocks in ViTs, such as multi-head attention and feed-forward layers, are usually performed with general matrix multiplication (GEMM), we adopt a classical algorithm-based fault tolerance (ABFT) strategy originally developed for GEMM to protect ViTs against soft errors in the underlying computing engines. Unlike classical ABFT, which invokes the expensive error recovery procedure whenever computing errors are detected, we leverage the inherent fault tolerance of ViTs and propose an approximate ABFT, namely ApproxABFT, that invokes the error recovery procedure only when the computing errors are significant enough; this skips many useless error recovery procedures and simplifies the overall GEMM error recovery. Meanwhile, it also relaxes the error threshold in the error recovery procedure and ignores minor computing errors, which reduces the error recovery complexity and improves the error recovery quality. In addition, we apply a fine-grained blocking strategy to ApproxABFT and split GEMMs with distinct sizes into smaller sub-blocks, which smooths the error thresholds across ViTs and further improves the error recovery quality. According to our experiments, ApproxABFT reduces the computing overhead by 25.92\% to 81.62\% and improves the model accuracy by 2.63\% to 72.56\% compared to the baseline ABFT, while the blocking optimization further reduces the computing overhead by 6.56\% to 73.5\% with comparable accuracy.
[ { "version": "v1", "created": "Tue, 21 Feb 2023 06:21:28 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 09:42:40 GMT" } ]
2023-08-30T00:00:00
[ [ "Xue", "Xinghua", "" ], [ "Liu", "Cheng", "" ], [ "Huang", "Haitong", "" ], [ "Liu", "Bo", "" ], [ "Wang", "Ying", "" ], [ "Yang", "Bing", "" ], [ "Luo", "Tao", "" ], [ "Zhang", "Lei", "" ], [ "Li", "Huawei", "" ], [ "Li", "Xiaowei", "" ] ]
new_dataset
0.999022
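The ABFT scheme referenced in the ApproxABFT abstract builds checksum-augmented operands so that errors in a GEMM can be detected from residuals. A minimal sketch follows, assuming a simple magnitude threshold `tau` as the "significance" test and a naive recompute as the recovery step (both placeholders for the paper's actual policies).

```python
# Sketch of checksum-based ABFT for GEMM, with the ApproxABFT idea of
# only flagging errors whose magnitude exceeds a threshold. The threshold
# value and the recovery step are illustrative placeholders.
import numpy as np

def abft_gemm(A, B, tau=1e-3):
    # Append a column-checksum row to A and a row-checksum column to B.
    Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    Cc = Ac @ Br                    # (m+1) x (n+1) checksum-augmented product
    C = Cc[:-1, :-1]
    row_err = np.abs(Cc[:-1, -1] - C.sum(axis=1))   # row checksum residuals
    col_err = np.abs(Cc[-1, :-1] - C.sum(axis=0))   # column checksum residuals
    if row_err.max() > tau or col_err.max() > tau:
        # A full implementation would locate and correct the faulty entry
        # from the intersecting row/column residuals; here we just recompute.
        C = A @ B
    return C

A, B = np.random.rand(8, 16), np.random.rand(16, 4)
C = abft_gemm(A, B)
assert np.allclose(C, A @ B)
```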
2303.14672
Ming Qian
Ming Qian, Jincheng Xiong, Gui-Song Xia, Nan Xue
Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs
ICCV 2023, project page: https://sat2density.github.io/, code: https://github.com/qianmingduowan/Sat2Density
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper aims to develop an accurate 3D geometry representation of satellite images using satellite-ground image pairs. Our focus is on the challenging problem of 3D-aware ground-view synthesis from a satellite image. We draw inspiration from the density field representation used in volumetric neural rendering and propose a new approach, called Sat2Density. Our method utilizes the properties of ground-view panoramas for the sky and non-sky regions to learn faithful density fields of 3D scenes from a geometric perspective. Unlike other methods that require extra depth information during training, our Sat2Density can automatically learn accurate and faithful 3D geometry via density representation without depth supervision. This advancement significantly improves the ground-view panorama synthesis task. Additionally, our study provides a new geometric perspective to understand the relationship between satellite and ground-view images in 3D space.
[ { "version": "v1", "created": "Sun, 26 Mar 2023 10:15:33 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 09:33:59 GMT" } ]
2023-08-30T00:00:00
[ [ "Qian", "Ming", "" ], [ "Xiong", "Jincheng", "" ], [ "Xia", "Gui-Song", "" ], [ "Xue", "Nan", "" ] ]
new_dataset
0.973734
2303.15860
Teng-Hui Huang
Teng-Hui Huang, Thilini Dahanayaka, Kanchana Thilakarathna, Philip H.W. Leong and Hesham El Gamal
The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting
null
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless fingerprinting refers to a device identification method leveraging hardware imperfections and wireless channel variations as signatures. Beyond physical layer characteristics, recent studies demonstrated that user behaviors could be identified through network traffic, e.g., packet length, without decryption of the payload. Inspired by these results, we propose a multi-layer fingerprinting framework that jointly considers the multi-layer signatures for improved identification performance. In contrast to previous works, by leveraging the recent multi-view machine learning paradigm, i.e., data with multiple forms, our method can cluster the device information shared among the multi-layer features without supervision. Our information-theoretic approach can be extended to supervised and semi-supervised settings with straightforward derivations. In solving the formulated problem, we obtain a tight surrogate bound using variational inference for efficient optimization. In extracting the shared device information, we develop an algorithm based on the Wyner common information method, enjoying reduced computational complexity compared to existing approaches. The algorithm can be applied to data distributions belonging to the exponential family class. Empirically, we evaluate the algorithm on a synthetic dataset with real-world video traffic and simulated physical layer characteristics. Our empirical results show that the proposed method outperforms the state-of-the-art baselines in both supervised and unsupervised settings.
[ { "version": "v1", "created": "Tue, 28 Mar 2023 10:05:06 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 03:13:32 GMT" } ]
2023-08-30T00:00:00
[ [ "Huang", "Teng-Hui", "" ], [ "Dahanayaka", "Thilini", "" ], [ "Thilakarathna", "Kanchana", "" ], [ "Leong", "Philip H. W.", "" ], [ "Gamal", "Hesham El", "" ] ]
new_dataset
0.982055
2305.10666
Zelin Ying
Zelin Ying, Chen Li, Yu Dong, Qiuqiang Kong, Qiao Tian, Yuanyuan Huo, Yuxuan Wang
A Unified Front-End Framework for English Text-to-Speech Synthesis
5 pages, 3 figures
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The front-end is a critical component of English text-to-speech (TTS) systems, responsible for extracting linguistic features that are essential for a text-to-speech model to synthesize speech, such as prosody and phonemes. The English TTS front-end typically consists of a text normalization (TN) module, a prosody word prosody phrase (PWPP) module, and a grapheme-to-phoneme (G2P) module. However, current research on the English TTS front-end focuses solely on individual modules, neglecting the interdependence between them and resulting in sub-optimal performance for each module. Therefore, this paper proposes a unified front-end framework that captures the dependencies among the English TTS front-end modules. Extensive experiments have demonstrated that the proposed method achieves state-of-the-art (SOTA) performance in all modules.
[ { "version": "v1", "created": "Thu, 18 May 2023 02:57:54 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 07:16:52 GMT" } ]
2023-08-30T00:00:00
[ [ "Ying", "Zelin", "" ], [ "Li", "Chen", "" ], [ "Dong", "Yu", "" ], [ "Kong", "Qiuqiang", "" ], [ "Tian", "Qiao", "" ], [ "Huo", "Yuanyuan", "" ], [ "Wang", "Yuxuan", "" ] ]
new_dataset
0.998769
2305.14594
Salem Lahlou
Salem Lahlou, Joseph D. Viviano, Victor Schmidt, Yoshua Bengio
torchgfn: A PyTorch GFlowNet library
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The growing popularity of generative flow networks (GFlowNets or GFNs) among researchers with diverse backgrounds and areas of expertise necessitates a library which facilitates the testing of new features, such as training losses, that can be easily compared to standard benchmark implementations, or on a set of common environments. torchgfn is a PyTorch library that aims to address this need. It provides users with a simple API for environments and useful abstractions for samplers and losses. Multiple examples are provided, replicating and unifying published results. The code is available at https://github.com/saleml/torchgfn.
[ { "version": "v1", "created": "Wed, 24 May 2023 00:20:59 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 14:51:08 GMT" } ]
2023-08-30T00:00:00
[ [ "Lahlou", "Salem", "" ], [ "Viviano", "Joseph D.", "" ], [ "Schmidt", "Victor", "" ], [ "Bengio", "Yoshua", "" ] ]
new_dataset
0.998064
2306.06826
Jiaxin Pei
Jiaxin Pei and David Jurgens
When Do Annotator Demographics Matter? Measuring the Influence of Annotator Demographics with the POPQUORN Dataset
null
null
null
null
cs.CL cs.AI cs.CY cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
Annotators are not fungible. Their demographics, life experiences, and backgrounds all contribute to how they label data. However, NLP has only recently considered how annotator identity might influence their decisions. Here, we present POPQUORN (the POtato-Prolific dataset for QUestion-Answering, Offensiveness, text Rewriting, and politeness rating with demographic Nuance). POPQUORN contains 45,000 annotations from 1,484 annotators, drawn from a sample representative of the US population with respect to sex, age, and race. Through a series of analyses, we show that annotators' backgrounds play a significant role in their judgments. Further, our work shows that backgrounds not previously considered in NLP (e.g., education) are meaningful and should be considered. Our study suggests that understanding the background of annotators and collecting labels from a demographically balanced pool of crowd workers is important to reduce the bias of datasets. The dataset, annotator background, and annotation interface are available at https://github.com/Jiaxin-Pei/potato-prolific-dataset.
[ { "version": "v1", "created": "Mon, 12 Jun 2023 02:26:00 GMT" }, { "version": "v2", "created": "Mon, 28 Aug 2023 21:14:35 GMT" } ]
2023-08-30T00:00:00
[ [ "Pei", "Jiaxin", "" ], [ "Jurgens", "David", "" ] ]
new_dataset
0.984344
2306.09539
Mahan Fathi
Mahan Fathi and Jonathan Pilault and Pierre-Luc Bacon and Christopher Pal and Orhan Firat and Ross Goroshin
Block-State Transformer
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
State space models (SSMs) have shown impressive results on tasks that require modeling long-range dependencies, and they efficiently scale to long sequences owing to their subquadratic runtime complexity. Originally designed for continuous signals, SSMs have shown superior performance on a plethora of tasks in vision and audio; however, SSMs still lag behind Transformers in language modeling tasks. In this work, we propose a hybrid layer named Block-State Transformer (BST) that internally combines an SSM sublayer for long-range contextualization and a Block Transformer sublayer for short-term representation of sequences. We study three different, and completely parallelizable, variants that integrate SSMs and block-wise attention. We show that our model outperforms similar Transformer-based architectures on language modeling perplexity and generalizes to longer sequences. In addition, the Block-State Transformer demonstrates a more than tenfold increase in speed at the layer level compared to the Block-Recurrent Transformer when model parallelization is employed.
[ { "version": "v1", "created": "Thu, 15 Jun 2023 22:48:08 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 01:08:30 GMT" } ]
2023-08-30T00:00:00
[ [ "Fathi", "Mahan", "" ], [ "Pilault", "Jonathan", "" ], [ "Bacon", "Pierre-Luc", "" ], [ "Pal", "Christopher", "" ], [ "Firat", "Orhan", "" ], [ "Goroshin", "Ross", "" ] ]
new_dataset
0.997555
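The Block-State Transformer abstract describes an SSM sublayer feeding a block-wise attention sublayer. The toy layer below sketches that hybrid structure; the explicit sequential scan, the dimensions, and the way context is injected into attention are illustrative assumptions, not the paper's (fully parallelizable) design.

```python
# Toy sketch of a hybrid layer in the spirit of the abstract: a linear
# state-space recurrence for long-range context, feeding a block-wise
# attention sublayer.
import torch
import torch.nn as nn

class BlockStateLayer(nn.Module):
    def __init__(self, d_model=64, d_state=16, block=32, heads=4):
        super().__init__()
        self.A = nn.Parameter(torch.eye(d_state) * 0.9)   # state transition
        self.B = nn.Parameter(torch.randn(d_state, d_model) * 0.02)
        self.C = nn.Parameter(torch.randn(d_model, d_state) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.block = block

    def forward(self, x):                      # x: (batch, T, d_model)
        Bsz, T, D = x.shape
        h = x.new_zeros(Bsz, self.A.shape[0])
        ctx = []
        for t in range(T):                     # sequential SSM scan
            h = h @ self.A.T + x[:, t] @ self.B.T
            ctx.append(h @ self.C.T)
        ctx = torch.stack(ctx, dim=1)          # long-range context states
        # Block-wise attention: queries are tokens, keys/values are the
        # SSM context restricted to the same block.
        out = []
        for s in range(0, T, self.block):
            q = x[:, s:s + self.block]
            kv = ctx[:, s:s + self.block]
            out.append(self.attn(q, kv, kv)[0])
        return torch.cat(out, dim=1)

layer = BlockStateLayer()
y = layer(torch.randn(2, 128, 64))             # -> (2, 128, 64)
```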
2307.00290
Can Cui
Can Cui, Ruining Deng, Quan Liu, Tianyuan Yao, Shunxing Bao, Lucas W. Remedios, Yucheng Tang, Yuankai Huo
All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Segment Anything Model (SAM) is a recently proposed prompt-based model for generic zero-shot segmentation. With its zero-shot segmentation capacity, SAM achieves impressive flexibility and precision on various segmentation tasks. However, the current pipeline requires manual prompts during the inference stage, which is still resource-intensive for biomedical image segmentation. In this paper, instead of using prompts during the inference stage, we introduce a pipeline, called all-in-SAM, that utilizes SAM through the entire AI development workflow (from annotation generation to model finetuning) without requiring manual prompts during the inference stage. Specifically, SAM is first employed to generate pixel-level annotations from weak prompts (e.g., points, bounding boxes). Then, the pixel-level annotations are used to finetune the SAM segmentation model rather than training it from scratch. Our experimental results reveal two key findings: 1) the proposed pipeline surpasses the state-of-the-art (SOTA) methods in a nuclei segmentation task on the public MoNuSeg dataset, and 2) the utilization of weak and few annotations for SAM finetuning achieves competitive performance compared to using strong pixel-wise annotated data.
[ { "version": "v1", "created": "Sat, 1 Jul 2023 10:12:46 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 03:31:58 GMT" } ]
2023-08-30T00:00:00
[ [ "Cui", "Can", "" ], [ "Deng", "Ruining", "" ], [ "Liu", "Quan", "" ], [ "Yao", "Tianyuan", "" ], [ "Bao", "Shunxing", "" ], [ "Remedios", "Lucas W.", "" ], [ "Tang", "Yucheng", "" ], [ "Huo", "Yuankai", "" ] ]
new_dataset
0.981917
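The annotation-generation step described in the all-in-SAM abstract — pixel-level masks from weak prompts — can be sketched with the public segment-anything package. The checkpoint path, the blank image, and the prompt below are placeholders, and the exact API may differ across package versions.

```python
# Sketch of the annotation-generation step: use SAM with a weak point
# prompt to produce pixel-level masks that later finetune the model.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB image
predictor.set_image(image)

# Weak prompt: a single foreground point on a nucleus.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[int(scores.argmax())]          # pixel-level pseudo-label
```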
2307.03854
B M Tazbiul Hassan Anik
B M Tazbiul Hassan Anik, Zubayer Islam, Mohamed Abdel-Aty
inTformer: A Time-Embedded Attention-Based Transformer for Crash Likelihood Prediction at Intersections Using Connected Vehicle Data
29 pages, 10 figures, 8 tables
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The real-time crash likelihood prediction model is an essential component of a proactive traffic safety management system. Over the years, numerous studies have attempted to construct crash likelihood prediction models in order to enhance traffic safety, but mostly on freeways. In the majority of the existing studies, researchers have primarily employed deep learning-based frameworks to identify crash potential. Lately, the Transformer has emerged as a potential deep neural network that fundamentally operates through attention-based mechanisms. The Transformer has several functional benefits over extant deep learning models such as LSTMs and CNNs. Firstly, it can readily handle long-term dependencies in a data sequence. Secondly, it can process all elements of a data sequence in parallel during training. Finally, it does not suffer from the vanishing gradient problem. Recognizing the immense potential of Transformers, this paper proposes inTersection-Transformer (inTformer), a time-embedded attention-based Transformer model that can effectively predict intersection crash likelihood in real-time. The proposed model was evaluated using connected vehicle data extracted from the Signal Analytics Platform. Acknowledging the complex traffic operation mechanisms at intersections, this study developed zone-specific models by dividing the intersection region into two distinct zones: within-intersection and approach. The best inTformer models in the within-intersection and approach zones achieved sensitivities of 73% and 70%, respectively. The zone-level models were also compared to earlier studies on crash likelihood prediction at intersections and to several established deep learning models trained on the same connected vehicle dataset.
[ { "version": "v1", "created": "Fri, 7 Jul 2023 22:00:31 GMT" }, { "version": "v2", "created": "Thu, 13 Jul 2023 05:46:11 GMT" }, { "version": "v3", "created": "Mon, 28 Aug 2023 12:50:34 GMT" }, { "version": "v4", "created": "Tue, 29 Aug 2023 15:51:05 GMT" } ]
2023-08-30T00:00:00
[ [ "Anik", "B M Tazbiul Hassan", "" ], [ "Islam", "Zubayer", "" ], [ "Abdel-Aty", "Mohamed", "" ] ]
new_dataset
0.98255
2308.12651
Ayano Nishii
Yuya Higashikawa, Ayano Nishii, Junichi Teruyama, Yuki Tokuni
Sink Location Problems in Dynamic Flow Grid Networks
16 pages, 6 figures, full version of a paper accepted at COCOON 2023
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
A dynamic flow network consists of a directed graph, where nodes called sources represent locations of evacuees, and nodes called sinks represent locations of evacuation facilities. Each source and each sink are given a supply, representing the number of evacuees, and a demand, representing the maximum number of acceptable evacuees, respectively. Each edge is given a capacity and a transit time. Here, the capacity of an edge bounds the rate at which evacuees can enter the edge per unit time, and the transit time represents the time evacuees take to travel across the edge. The evacuation completion time is the minimum time by which every evacuee can arrive at one of the evacuation facilities. Given a dynamic flow network without sinks, once sinks are located on some nodes or edges, the evacuation completion time for this sink location is determined. We then consider the problem of locating sinks to minimize the evacuation completion time, called the sink location problem. These problems have polynomial-time algorithms only for limited network classes such as paths, cycles, and trees; no polynomial-time algorithms are known for more complex network classes. In this paper, we prove that the 1-sink location problem can be solved in polynomial time when the input network is a grid with uniform edge capacity and transit time.
[ { "version": "v1", "created": "Thu, 24 Aug 2023 08:47:15 GMT" }, { "version": "v2", "created": "Tue, 29 Aug 2023 16:59:10 GMT" } ]
2023-08-30T00:00:00
[ [ "Higashikawa", "Yuya", "" ], [ "Nishii", "Ayano", "" ], [ "Teruyama", "Junichi", "" ], [ "Tokuni", "Yuki", "" ] ]
new_dataset
0.966799
2308.14047
Francesco Pirotti
Francesco Pirotti, Alberto Guarnieri, Sebastiano Chiodini, Carlo Bettanini
Automatic coarse co-registration of point clouds from diverse scan geometries: a test of detectors and descriptors
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Point clouds are nowadays collected from a plethora of sensors, some having higher accuracies and higher costs, some having lower accuracies but also lower costs. Not only is there a large choice of sensors, but these can also be transported by different platforms, which provide different scan geometries. In this work we test four different keypoint detectors and three feature descriptors. We benchmark their calculation time and assess their accuracy in the coarse automatic co-registration of two clouds collected with different sensors, platforms, and scan geometries. One cloud, which we define as having the higher accuracy and thus use as reference, was surveyed via a UAV flight with a Riegl MiniVUX-3; the other was collected from a bicycle with a Livox Horizon over a walking path with uneven ground. The novelty in this work consists in comparing several strategies for fast alignment of point clouds from very different surveying geometries, as the drone has a bird's eye view and the bicycle a ground-based view. An added challenge is the lower cost of the bicycle sensor ensemble which, together with the rough terrain, reasonably results in lower accuracy of the survey. The main idea is to use range images to capture a simplified version of the geometry of the surveyed area and then find the best features to match keypoints. Results show that NARF features detected more keypoints and resulted in a faster co-registration procedure in this scenario, whereas the accuracy of the co-registration is similar across all combinations of keypoint detectors and features.
[ { "version": "v1", "created": "Sun, 27 Aug 2023 08:55:22 GMT" } ]
2023-08-30T00:00:00
[ [ "Pirotti", "Francesco", "" ], [ "Guarnieri", "Alberto", "" ], [ "Chiodini", "Sebastiano", "" ], [ "Bettanini", "Carlo", "" ] ]
new_dataset
0.993241
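Once keypoints have been detected and matched, coarse co-registration as in the abstract above reduces to estimating a rigid transform from correspondences. A generic sketch using the Kabsch/SVD method follows; NARF detection itself requires a point-cloud library and is not shown.

```python
# Sketch of the alignment step that follows keypoint matching: estimate a
# rigid transform (R, t) from matched 3D keypoints via the Kabsch/SVD method.
import numpy as np

def rigid_from_correspondences(P, Q):
    """P, Q: (N, 3) matched keypoints; returns R (3x3), t (3,) with Q ~ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(0)
P = rng.random((50, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90-degree yaw
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_from_correspondences(P, Q)
assert np.allclose(P @ R.T + t, Q)
```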
2308.14762
Waseem Akram
Waseem Akram, Muhayyuddin Ahmed, Lyes Saad Saoud, Lakmal Seneviratne, and Irfan Hussain
Autonomous Underwater Robotic System for Aquaculture Applications
arXiv admin note: text overlap with arXiv:2308.13826
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Aquaculture is a thriving food-producing sector, accounting for over half of global fish consumption. However, aquafarms face significant challenges such as biofouling, vegetation, and holes within their net pens, which have a profound effect on the efficiency and sustainability of fish production. Currently, divers and/or remotely operated vehicles are deployed for inspecting and maintaining aquafarms; this approach is expensive and requires highly skilled human operators. This work aims to develop a robotic-based automatic net defect detection system for aquaculture net pens, oriented to on-ROV processing and real-time detection of different aqua-net defects such as biofouling, vegetation, net holes, and plastic. The proposed system integrates deep learning-based methods for aqua-net defect detection with a feedback control law for vehicle movement around the aqua-net, so as to obtain a clear sequence of net images and inspect the status of the net. This work contributes to the areas of aquaculture inspection, marine robotics, and deep learning, aiming to reduce cost, improve quality, and ease operation.
[ { "version": "v1", "created": "Sat, 26 Aug 2023 10:45:39 GMT" } ]
2023-08-30T00:00:00
[ [ "Akram", "Waseem", "" ], [ "Ahmed", "Muhayyuddin", "" ], [ "Saoud", "Lyes Saad", "" ], [ "Seneviratne", "Lakmal", "" ], [ "Hussain", "Irfan", "" ] ]
new_dataset
0.996685
2308.14816
Zhipeng Cai
Zhipeng Cai and Matthias Mueller
CLNeRF: Continual Learning Meets NeRF
Accepted to ICCV 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Novel view synthesis aims to render unseen views given a set of calibrated images. In practical applications, the coverage, appearance or geometry of the scene may change over time, with new images continuously being captured. Efficiently incorporating such continuous change is an open challenge. Standard NeRF benchmarks only involve scene coverage expansion. To study other practical scene changes, we propose a new dataset, World Across Time (WAT), consisting of scenes that change in appearance and geometry over time. We also propose a simple yet effective method, CLNeRF, which introduces continual learning (CL) to Neural Radiance Fields (NeRFs). CLNeRF combines generative replay and the Instant Neural Graphics Primitives (NGP) architecture to effectively prevent catastrophic forgetting and efficiently update the model when new data arrives. We also add trainable appearance and geometry embeddings to NGP, allowing a single compact model to handle complex scene changes. Without the need to store historical images, CLNeRF trained sequentially over multiple scans of a changing scene performs on par with the upper-bound model trained on all scans at once. Compared to other CL baselines, CLNeRF performs much better across standard benchmarks and WAT. The source code and the WAT dataset are available at https://github.com/IntelLabs/CLNeRF. A video presentation is available at: https://youtu.be/nLRt6OoDGq0?si=8yD6k-8MMBJInQPs
[ { "version": "v1", "created": "Mon, 28 Aug 2023 18:09:13 GMT" } ]
2023-08-30T00:00:00
[ [ "Cai", "Zhipeng", "" ], [ "Mueller", "Matthias", "" ] ]
new_dataset
0.999794
2308.14833
Derek Gloudemans
Derek Gloudemans, Yanbing Wang, Gracie Gumm, William Barbour, Daniel B. Work
The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context. Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length. 877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene. Lastly, existing algorithms are combined to benchmark a number of 3D multi-camera tracking pipelines on the dataset, with results indicating that the dataset is challenging due to the difficulty of matching objects traveling at high speeds across cameras and heavy object occlusion, potentially for hundreds of frames, during congested traffic. This work aims to enable the development of accurate and automatic vehicle trajectory extraction algorithms, which will play a vital role in understanding impacts of autonomous vehicle technologies on the safety and efficiency of traffic.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 18:43:33 GMT" } ]
2023-08-30T00:00:00
[ [ "Gloudemans", "Derek", "" ], [ "Wang", "Yanbing", "" ], [ "Gumm", "Gracie", "" ], [ "Barbour", "William", "" ], [ "Work", "Daniel B.", "" ] ]
new_dataset
0.999868
2308.14835
Robert Bridges
Robert A. Bridges, Brian Weber, Justin M. Beaver, Jared M. Smith, Miki E. Verma, Savannah Norem, Kevin Spakes, Cory Watson, Jeff A. Nichols, Brian Jewell, Michael. D. Iannacone, Chelsey Dunivan Stahl, Kelly M.T. Huffer, T. Sean Oesch
AI ATAC 1: An Evaluation of Prominent Commercial Malware Detectors
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents an evaluation of six prominent commercial endpoint malware detectors, a network malware detector, and a file-conviction algorithm from a cyber technology vendor. The evaluation was administered as the first of the Artificial Intelligence Applications to Autonomous Cybersecurity (AI ATAC) prize challenges, funded by and completed in service of the US Navy. The experiment employed 100K files (50/50% benign/malicious) with a stratified distribution of file types, including ~1K zero-day program executables (increasing experiment size by two orders of magnitude over previous work). We present an evaluation process of delivering a file to a fresh virtual machine donning the detection technology, waiting 90s to allow static detection, then executing the file and waiting another period for dynamic detection; this allows greater fidelity in the observational data than previous experiments, in particular for resource and time-to-detection statistics. To execute all 800K trials (100K files $\times$ 8 tools), a software framework was designed to choreograph the experiment into a completely automated, time-synced, and reproducible workflow with substantial parallelization. A cost-benefit model was configured to integrate the tools' recall, precision, time to detection, and resource requirements into a single comparable quantity by simulating costs of use. This provides a ranking methodology for cyber competitions and a lens through which to reason about the varied statistical viewpoints of the results. These statistical and cost-model results provide insights on the state of commercial malware detection.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 18:46:12 GMT" } ]
2023-08-30T00:00:00
[ [ "Bridges", "Robert A.", "" ], [ "Weber", "Brian", "" ], [ "Beaver", "Justin M.", "" ], [ "Smith", "Jared M.", "" ], [ "Verma", "Miki E.", "" ], [ "Norem", "Savannah", "" ], [ "Spakes", "Kevin", "" ], [ "Watson", "Cory", "" ], [ "Nichols", "Jeff A.", "" ], [ "Jewell", "Brian", "" ], [ "Iannacone", "Michael. D.", "" ], [ "Stahl", "Chelsey Dunivan", "" ], [ "Huffer", "Kelly M. T.", "" ], [ "Oesch", "T. Sean", "" ] ]
new_dataset
0.992579
2308.14852
Hatef Otroshi Shahreza
Hatef Otroshi Shahreza, Anjith George, S\'ebastien Marcel
SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data
Accepted in the IEEE International Joint Conference on Biometrics (IJCB 2023)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
State-of-the-art face recognition networks are often computationally expensive and cannot be used for mobile applications. Training lightweight face recognition models also requires large identity-labeled datasets. Meanwhile, there are privacy and ethical concerns with collecting and using large face recognition datasets. While generating synthetic datasets for training face recognition models is an alternative option, it is challenging to generate synthetic data with sufficient intra-class variation. In addition, there is still a considerable gap between the performance of models trained on real and synthetic data. In this paper, we propose a new framework (named SynthDistill) to train lightweight face recognition models by distilling the knowledge of a pretrained teacher face recognition model using synthetic data. We use a pretrained face generator network to generate synthetic face images and use the synthesized images to learn a lightweight student network. We use synthetic face images without identity labels, mitigating the difficulty of generating intra-class variation in synthetic datasets. Instead, we propose a novel dynamic sampling strategy from the intermediate latent space of the face generator network to include new variations of the challenging images while further exploring new face images in the training batch. The results on five different face recognition datasets demonstrate the superiority of our lightweight model compared to models trained on previous synthetic datasets, achieving a verification accuracy of 99.52% on the LFW dataset with a lightweight network. The results also show that our proposed framework significantly reduces the gap between training with real and synthetic data. The source code for replicating the experiments is publicly released.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 19:15:27 GMT" } ]
2023-08-30T00:00:00
[ [ "Shahreza", "Hatef Otroshi", "" ], [ "George", "Anjith", "" ], [ "Marcel", "Sébastien", "" ] ]
new_dataset
0.996148
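The distillation setup in the SynthDistill abstract — a student matching a teacher's embeddings on unlabeled synthetic faces — can be sketched as below. The toy models, the random tensors standing in for generator samples, and the cosine-embedding loss are assumptions for illustration; the paper's latent sampling strategy is omitted.

```python
# Minimal sketch of distilling a teacher's face embeddings into a student
# on unlabeled synthetic images.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 512)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 512))
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(3):                      # stand-in for synthetic batches
    x = torch.rand(8, 3, 112, 112)         # would come from a face generator
    with torch.no_grad():
        target = nn.functional.normalize(teacher(x), dim=1)
    pred = nn.functional.normalize(student(x), dim=1)
    loss = (1 - (pred * target).sum(dim=1)).mean()   # cosine-embedding loss
    opt.zero_grad(); loss.backward(); opt.step()
```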
2308.14894
Th\'eo Deschamps-Berger
Th\'eo Deschamps-Berger, Lori Lamel and Laurence Devillers
Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
null
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Emotion recognition in conversations is essential for ensuring advanced human-machine interactions. However, creating robust and accurate emotion recognition systems in real life is challenging, mainly due to the scarcity of emotion datasets collected in the wild and the inability to take into account the dialogue context. The CEMO dataset, composed of conversations between agents and patients during emergency calls to a French call center, fills this gap. The nature of these interactions highlights the role of the emotional flow of the conversation in predicting patient emotions, as context can often make a difference in understanding actual feelings. This paper presents a multi-scale conversational context learning approach for speech emotion recognition, which takes advantage of this hypothesis. We investigated this approach on both speech transcriptions and acoustic segments. Experimentally, our method uses information preceding or following the targeted segment. In the text domain, we tested context windows over a wide range of tokens (from 10 to 100) and at the speech-turn level, considering inputs from both the same and opposing speakers. According to our tests, the context derived from previous tokens has a more significant influence on accurate prediction than the following tokens. Furthermore, taking the last speech turn of the same speaker in the conversation seems useful. In the acoustic domain, we conducted an in-depth analysis of the impact of the surrounding emotions on the prediction. While multi-scale conversational context learning using Transformers can enhance performance in the textual modality for emergency call recordings, incorporating acoustic context is more challenging.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 20:31:45 GMT" } ]
2023-08-30T00:00:00
[ [ "Deschamps-Berger", "Théo", "" ], [ "Lamel", "Lori", "" ], [ "Devillers", "Laurence", "" ] ]
new_dataset
0.999052
2308.14898
EPTCS
Enrico Pontelli (New Mexico State University, USA), Stefania Costantini (University of L'Aquila, Italy), Carmine Dodaro (University of Calabria, Italy), Sarah Gaggl (TU Dresden, Germany), Roberta Calegari (University of Bologna, Italy), Artur D'Avila Garcez (City University of London, UK), Francesco Fabiano (University of Udine, Italy), Alessandra Mileo (DCU, Ireland), Alessandra Russo (Imperial College London, UK), Francesca Toni (Imperial College London, UK)
Proceedings 39th International Conference on Logic Programming
null
EPTCS 385, 2023
10.4204/EPTCS.385
null
cs.AI cs.LO cs.PL cs.SC
http://creativecommons.org/licenses/by/4.0/
This volume contains the Technical Communications presented at the 39th International Conference on Logic Programming (ICLP 2023), held at Imperial College London, UK from July 9 to July 15, 2023. Technical Communications included here concern the Main Track, the Doctoral Consortium, the Application and Systems/Demo track, the Recently Published Research Track, the Birds-of-a-Feather track, the Thematic Tracks on Logic Programming and Machine Learning, and Logic Programming and Explainability, Ethics, and Trustworthiness.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 20:46:59 GMT" } ]
2023-08-30T00:00:00
[ [ "Pontelli", "Enrico", "", "New Mexico State University, USA" ], [ "Costantini", "Stefania", "", "University of L'Aquila, Italy" ], [ "Dodaro", "Carmine", "", "University of\n Calabria, Italy" ], [ "Gaggl", "Sarah", "", "TU Dresden, Germany" ], [ "Calegari", "Roberta", "", "University of Bologna, Italy" ], [ "Garcez", "Artur D'Avila", "", "City University of\n London, UK" ], [ "Fabiano", "Francesco", "", "University of Udine, Italy" ], [ "Mileo", "Alessandra", "", "DCU, Ireland" ], [ "Russo", "Alessandra", "", "Imperial College London, UK" ], [ "Toni", "Francesca", "", "Imperial College London, UK" ] ]
new_dataset
0.990204
2308.14899
Nathan Drenkow
Nathan Drenkow, Mathias Unberath
RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in Object-centric Learning
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Object-centric representation learning offers the potential to overcome limitations of image-level representations by explicitly parsing image scenes into their constituent components. While image-level representations typically lack robustness to natural image corruptions, the robustness of object-centric methods remains largely untested. To address this gap, we present the RobustCLEVR benchmark dataset and evaluation framework. Our framework takes a novel approach to evaluating robustness by enabling the specification of causal dependencies in the image generation process grounded in expert knowledge and capable of producing a wide range of image corruptions unattainable in existing robustness evaluations. Using our framework, we define several causal models of the image corruption process which explicitly encode assumptions about the causal relationships and distributions of each corruption type. We generate dataset variants for each causal model on which we evaluate state-of-the-art object-centric methods. Overall, we find that object-centric methods are not inherently robust to image corruptions. Our causal evaluation approach exposes model sensitivities not observed using conventional evaluation processes, yielding greater insight into robustness differences across algorithms. Lastly, while conventional robustness evaluations view corruptions as out-of-distribution, we use our causal framework to show that even training on in-distribution image corruptions does not guarantee increased model robustness. This work provides a step towards a more concrete and substantiated understanding of model performance and deterioration under complex corruption processes of the real world.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 20:52:18 GMT" } ]
2023-08-30T00:00:00
[ [ "Drenkow", "Nathan", "" ], [ "Unberath", "Mathias", "" ] ]
new_dataset
0.999238
2308.14936
Dongxiao Zhu
Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, Indrin Chetty and Dongxiao Zhu
Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation
9 pages, 4 figures, 4 tables
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The Segment Anything Model (SAM) has rapidly been adopted for segmenting a wide range of natural images. However, recent studies have indicated that SAM exhibits subpar performance on 3D medical image segmentation tasks. In addition to the domain gaps between natural and medical images, disparities in the spatial arrangement between 2D and 3D images, the substantial computational burden imposed by powerful GPU servers, and the time-consuming manual prompt generation impede the extension of SAM to a broader spectrum of medical image segmentation applications. To address these challenges, in this work, we introduce a novel method, AutoSAM Adapter, designed specifically for 3D multi-organ CT-based segmentation. We employ parameter-efficient adaptation techniques in developing an automatic prompt learning paradigm to facilitate the transformation of the SAM model's capabilities to 3D medical image segmentation, eliminating the need for manually generated prompts. Furthermore, we effectively transfer the acquired knowledge of the AutoSAM Adapter to other lightweight models specifically tailored for 3D medical image analysis, achieving state-of-the-art (SOTA) performance on medical image segmentation tasks. Through extensive experimental evaluation, we demonstrate the AutoSAM Adapter as a critical foundation for effectively leveraging the emerging ability of foundation models in 2D natural image segmentation for 3D medical image segmentation.
[ { "version": "v1", "created": "Mon, 28 Aug 2023 23:23:53 GMT" } ]
2023-08-30T00:00:00
[ [ "Li", "Chengyin", "" ], [ "Khanduri", "Prashant", "" ], [ "Qiang", "Yao", "" ], [ "Sultan", "Rafi Ibn", "" ], [ "Chetty", "Indrin", "" ], [ "Zhu", "Dongxiao", "" ] ]
new_dataset
0.99768
2308.14951
Homayoon Beigi
Mustafa Eyceoz, Justin Lee, Siddharth Pittie, Homayoon Beigi
Robust Open-Set Spoken Language Identification and the CU MultiLang Dataset
6 pages, 1 table, 6 figures
Recognition Technologies, Inc. Technical Report (2023), RTI-20230328-01
10.13140/RG.2.2.22716.21122
RTI-20230828-01
cs.CL cs.AI cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most state-of-the-art spoken language identification models are closed-set; in other words, they can only output a language label from the set of classes they were trained on. Open-set spoken language identification systems, however, gain the ability to detect when an input exhibits none of the original languages. In this paper, we implement a novel approach to open-set spoken language identification that uses MFCC and pitch features, a TDNN model to extract meaningful feature embeddings, confidence thresholding on softmax outputs, and LDA and pLDA for learning to classify new unknown languages. We present a spoken language identification system that achieves 91.76% accuracy on trained languages and has the capability to adapt to unknown languages on the fly. To that end, we also built the CU MultiLang Dataset, a large and diverse multilingual speech corpus which was used to train and evaluate our system.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 00:44:27 GMT" } ]
2023-08-30T00:00:00
[ [ "Eyceoz", "Mustafa", "" ], [ "Lee", "Justin", "" ], [ "Pittie", "Siddharth", "" ], [ "Beigi", "Homayoon", "" ] ]
new_dataset
0.999256
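The confidence-thresholding step mentioned in the abstract above admits a compact sketch: accept the argmax language only if its softmax probability clears a threshold, otherwise report an unknown language. The threshold value is a placeholder, and the LDA/pLDA components are not shown.

```python
# Sketch of open-set decision via softmax confidence thresholding.
import numpy as np

def classify_open_set(logits, labels, tau=0.7):
    z = logits - logits.max()              # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    k = int(probs.argmax())
    return labels[k] if probs[k] >= tau else "unknown"

labels = ["en", "fr", "zh"]
print(classify_open_set(np.array([4.0, 0.5, 0.2]), labels))  # -> "en"
print(classify_open_set(np.array([1.0, 0.9, 0.8]), labels))  # -> "unknown"
```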
2308.14961
Beth Malmskog
Beth Malmskog and Na'ama Nevo
Lower Rate Bounds for Hermitian-Lifted Codes for Odd Prime Characteristic
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Locally recoverable codes are error correcting codes with the additional property that every symbol of any codeword can be recovered from a small set of other symbols. This property is particularly desirable in cloud storage applications. A locally recoverable code is said to have availability $t$ if each position has $t$ disjoint recovery sets. Hermitian-lifted codes are locally recoverable codes with high availability first described by Lopez, Malmskog, Matthews, Pi\~nero-Gonzales, and Wootters. The codes are based on the well-known Hermitian curve and incorporate the novel technique of lifting to increase the rate of the code. Lopez et al. lower bounded the rate of the codes defined over fields with characteristic 2. This paper generalizes their work to show that the rate of Hermitian-lifted codes is bounded below by a positive constant depending on $p$ when $q=p^l$ for any odd prime $p$.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 01:28:01 GMT" } ]
2023-08-30T00:00:00
[ [ "Malmskog", "Beth", "" ], [ "Nevo", "Na'ama", "" ] ]
new_dataset
0.986842
2308.14972
Yaonan Zhu
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, and Yasuhisa Hasegawa
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
IEEE MHS 2023
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel approach to enhance autonomous robotic manipulation using a Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantages of LLMs with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and plan tasks based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from the LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 01:54:49 GMT" } ]
2023-08-30T00:00:00
[ [ "Liu", "Haokun", "" ], [ "Zhu", "Yaonan", "" ], [ "Kato", "Kenji", "" ], [ "Kondo", "Izumi", "" ], [ "Aoyama", "Tadayoshi", "" ], [ "Hasegawa", "Yasuhisa", "" ] ]
new_dataset
0.994773
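The abstract above converts high-level LLM output into sequences of executable motion functions. A toy dispatch loop of that shape is sketched below; the JSON plan format and the primitive names (`move_to`, `grasp`) are hypothetical illustrations, not the paper's actual interface.

```python
# Toy sketch of turning an LLM's high-level plan into executable motion calls.
import json

PRIMITIVES = {
    "move_to": lambda x, y, z: print(f"moving to ({x}, {y}, {z})"),
    "grasp":   lambda obj:     print(f"grasping {obj}"),
}

llm_output = '[{"action": "move_to", "args": [0.3, 0.1, 0.2]}, {"action": "grasp", "args": ["cup"]}]'

for step in json.loads(llm_output):
    fn = PRIMITIVES.get(step["action"])
    if fn is None:                 # guard against illogical LLM actions
        print(f"skipping unknown action: {step['action']}")
        continue
    fn(*step["args"])
```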
2308.14974
Manar Alalfi
Jian Chen, Manar H. Alalfi, Thomas R. Dean, Ramesh S
SimSched: A Tool for Simulating AUTOSAR Implementations in Simulink
21 pages
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
AUTOSAR (AUTomotive Open System ARchitecture) is an open industry standard for the automotive sector. It defines the three-layered automotive software architecture. One of these layers is the application layer, where functional behaviors are encapsulated in Software Components (SW-Cs). Inside SW-Cs, a set of runnable entities represents the internal behavior and is realized as a set of tasks. To address AUTOSAR's lack of support for modeling behaviors of runnables, languages such as Simulink are employed. Simulink simulations assume Simulink block behaviors are completed in zero execution time, while real execution requires a finite execution time. This timing mismatch can result in failures to detect unexpected runtime behaviors during the simulation phase. This paper extends the Simulink environment to model the timing properties of tasks. We present a Simulink block that can schedule tasks with non-zero simulation times. It enables a more realistic analysis during model development.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 02:02:14 GMT" } ]
2023-08-30T00:00:00
[ [ "Chen", "Jian", "" ], [ "Alalfi", "Manar H.", "" ], [ "Dean", "Thomas R.", "" ], [ "S", "Ramesh", "" ] ]
new_dataset
0.963932
2308.14994
Manuel Luis Delos Santos
Manuel Luis C. Delos Santos (1), Jerum B. Dasalla (2), Jomar C. Feliciano (3), Dustin Red B. Cabatay (4), ((1)(3)(4) Asian Institute of Computer Studies, Philippines, (2) Philippine State College of Aeronautics)
ICARUS: An Android-Based Unmanned Aerial Vehicle (UAV) Search and Rescue Eye in the Sky
15 pages, 14 figures, Special Issue: IRCCETE 2023
International Journal of Computing Sciences Research (IJCSR), Volume 7, pp. 2272-2286, July 14, 2023
10.25147/ijcsr.2017.001.1.159
ISSN print: 2546-0552; ISSN online: 2546-115X
cs.CY cs.CV
http://creativecommons.org/licenses/by/4.0/
The purpose of this paper is to develop an unmanned aerial vehicle (UAV) based on a quadcopter with video surveillance, map coordinates, a deployable parachute carrying a medicine kit or a food pack as payload, and a collision warning system, remotely controlled and integrated with an Android application to assist in search and rescue (SAR) operations. We used applied research to develop the functional prototype, and quantitative and descriptive statistics to summarize the data by describing the relationships between variables in the sample. The quadcopter was evaluated for acceptability using a survey instrument with predefined variables; respondents were selected within Caloocan City and Quezon City, Philippines. Thirty respondents answered questions on demographic profiles and known issues and concerns; the results are summarized in Tables 1 and 2. In terms of demographic profiles, the number of SAR operators within the specified areas is distributed equally; most are male, single, and aged 31 and above. Regarding issues and concerns, the most common type of search and rescue was ground search and rescue, and human error is the primary cause of most injuries in operating units. Respondents agreed that the prototype was useful and that, in terms of acceptability, drone technology will improve search and rescue operations. This innovative use of Android and drone technology is a step towards the improvement of SAR operations in the Philippines. The LiPo battery must be replaced with one of higher capacity, and the drone operator should undergo a training course and secure a permit from the Civil Aviation Authority of the Philippines (CAAP).
[ { "version": "v1", "created": "Tue, 29 Aug 2023 02:49:16 GMT" } ]
2023-08-30T00:00:00
[ [ "Santos", "Manuel Luis C. Delos", "" ], [ "Dasalla", "Jerum B.", "" ], [ "Feliciano", "Jomar C.", "" ], [ "Cabatay", "Dustin Red B.", "" ] ]
new_dataset
0.999339
2308.15040
Yung-Chin Chen
Yung-Chin Chen, Shimpei Ando, Daichi Fujiki, Shinya Takamaeda-Yamazaki, Kentaro Yoshioka
OSA-HCIM: On-The-Fly Saliency-Aware Hybrid SRAM CIM with Dynamic Precision Configuration
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing-in-Memory (CIM) has shown great potential for enhancing efficiency and performance for deep neural networks (DNNs). However, the lack of flexibility in CIM leads to an unnecessary expenditure of computational resources on less critical operations, and a diminished Signal-to-Noise Ratio (SNR) when handling more complex tasks, significantly hindering the overall performance. Hence, we focus on the integration of CIM with Saliency-Aware Computing -- a paradigm that dynamically tailors computing precision based on the importance of each input. We propose On-the-fly Saliency-Aware Hybrid CIM (OSA-HCIM) offering three primary contributions: (1) On-the-fly Saliency-Aware (OSA) precision configuration scheme, which dynamically sets the precision of each MAC operation based on its saliency, (2) Hybrid CIM Array (HCIMA), which enables simultaneous operation of digital-domain CIM (DCIM) and analog-domain CIM (ACIM) via split-port 6T SRAM, and (3) an integrated framework combining OSA and HCIMA to fulfill diverse accuracy and power demands. Implemented on a 65nm CMOS process, OSA-HCIM demonstrates an exceptional balance between accuracy and resource utilization. Notably, it is the first CIM design to incorporate a dynamic digital-to-analog boundary, providing unprecedented flexibility for saliency-aware computing. OSA-HCIM achieves a 1.95x enhancement in energy efficiency, while maintaining minimal accuracy loss compared to DCIM when tested on CIFAR100 dataset.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 05:49:11 GMT" } ]
2023-08-30T00:00:00
[ [ "Chen", "Yung-Chin", "" ], [ "Ando", "Shimpei", "" ], [ "Fujiki", "Daichi", "" ], [ "Takamaeda-Yamazaki", "Shinya", "" ], [ "Yoshioka", "Kentaro", "" ] ]
new_dataset
0.995876
2308.15050
Taotao Jing
Taotao Jing, Lichen Wang, Naji Khosravan, Zhiqiang Wan, Zachary Bessinger, Zhengming Ding, Sing Bing Kang
iBARLE: imBalance-Aware Room Layout Estimation
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Room layout estimation predicts layouts from a single panorama. It requires datasets with large-scale and diverse room shapes to train the models. However, there are significant imbalances in real-world datasets, including in the dimensions of layout complexity, camera locations, and variation in scene appearance. These issues considerably influence model training performance. In this work, we propose the imBalance-Aware Room Layout Estimation (iBARLE) framework to address these issues. iBARLE consists of (1) an Appearance Variation Generation (AVG) module, which promotes visual appearance domain generalization, (2) a Complex Structure Mix-up (CSMix) module, which enhances generalizability w.r.t. room structure, and (3) a gradient-based layout objective function, which allows more effective accounting for occlusions in complex layouts. All modules are jointly trained and help each other to achieve the best performance. Experiments and ablation studies based on the ZInD dataset~\cite{cruz2021zillow} illustrate that iBARLE achieves state-of-the-art performance compared with other layout estimation baselines.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 06:20:36 GMT" } ]
2023-08-30T00:00:00
[ [ "Jing", "Taotao", "" ], [ "Wang", "Lichen", "" ], [ "Khosravan", "Naji", "" ], [ "Wan", "Zhiqiang", "" ], [ "Bessinger", "Zachary", "" ], [ "Ding", "Zhengming", "" ], [ "Kang", "Sing Bing", "" ] ]
new_dataset
0.991214
2308.15061
Yukun Su
Yukun Su, Yi Yang
AIoT-Based Drum Transcription Robot using Convolutional Neural Networks
null
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of information technology, robot technology has made great progress in various fields. These new technologies enable robots to be used in industry, agriculture, education, and other areas. In this paper, we propose a drum robot that can automatically complete music transcription in real time, based on AIoT and fog computing technology. Specifically, this drum robot system consists of a cloud node for data storage, edge nodes for real-time computing, and data-oriented execution application nodes. In order to analyze drumming music and realize drum transcription, we further propose a lightweight convolutional neural network model to classify drums, which can be more effectively deployed on terminal devices for fast edge calculations. The experimental results show that the proposed system achieves competitive performance and enables a variety of smart applications and services.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 06:50:04 GMT" } ]
2023-08-30T00:00:00
[ [ "Su", "Yukun", "" ], [ "Yang", "Yi", "" ] ]
new_dataset
0.998876
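As a rough illustration of what a lightweight drum-classification CNN suitable for edge deployment might look like, here is a PyTorch sketch operating on log-mel spectrogram patches. The layer sizes, the five drum classes, and the 64x64 input resolution are placeholder assumptions, not the architecture from the paper above.

```python
import torch
import torch.nn as nn

class LightDrumNet(nn.Module):
    """A small CNN over spectrogram patches; a few thousand parameters,
    cheap enough for terminal devices. Sizes are illustrative guesses."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling keeps it tiny
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 1, mels, frames)
        return self.classifier(self.features(x).flatten(1))

logits = LightDrumNet()(torch.randn(2, 1, 64, 64))
print(logits.shape)                           # torch.Size([2, 5])
```

Global average pooling instead of large fully connected layers is the usual trick that keeps such models light enough for fog/edge nodes.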
2308.15069
Haksoo Lim
Haksoo Lim, Sewon Park, Minjung Kim, Jaehoon Lee, Seonkyu Lim, Noseong Park
MadSGM: Multivariate Anomaly Detection with Score-based Generative Models
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Time-series anomaly detection is one of the most fundamental tasks in time-series analysis. Unlike time-series forecasting and classification, time-series anomaly detection typically requires unsupervised (or self-supervised) training, since collecting and labeling anomalous observations is difficult. In addition, most existing methods resort to limited forms of anomaly measurement, and it is therefore not clear whether they are optimal in all circumstances. To this end, we present a multivariate time-series anomaly detector based on score-based generative models, called MadSGM, which considers the broadest set of anomaly measurement factors to date: i) reconstruction-based, ii) density-based, and iii) gradient-based anomaly measurements. We also design a conditional score network and its denoising score matching loss for time-series anomaly detection. Experiments on five real-world benchmark datasets illustrate that MadSGM achieves the most robust and accurate predictions.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 07:04:50 GMT" } ]
2023-08-30T00:00:00
[ [ "Lim", "Haksoo", "" ], [ "Park", "Sewon", "" ], [ "Kim", "Minjung", "" ], [ "Lee", "Jaehoon", "" ], [ "Lim", "Seonkyu", "" ], [ "Park", "Noseong", "" ] ]
new_dataset
0.974699
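A toy sketch of how the three anomaly measurement families listed in the MadSGM abstract can be fused into one score. The Gaussian stand-ins and the weighted sum below are illustrative assumptions only; in MadSGM all three quantities would come from a trained conditional score network rather than closed-form lambdas.

```python
import numpy as np

def combined_anomaly_score(x, reconstruct, log_density, score_fn,
                           weights=(1.0, 1.0, 1.0)):
    """Toy fusion of the three measurement families the abstract lists;
    the callables stand in for a trained score-based generative model."""
    r = np.linalg.norm(x - reconstruct(x))    # i) reconstruction-based
    d = -log_density(x)                       # ii) density-based (low density = anomalous)
    g = np.linalg.norm(score_fn(x))           # iii) gradient (score)-based
    w1, w2, w3 = weights
    return w1 * r + w2 * d + w3 * g

# Stand-ins: a standard Gaussian "model" centered at the origin.
mu = np.zeros(3)
score = combined_anomaly_score(
    np.array([3.0, 0.1, -2.5]),
    reconstruct=lambda x: mu,                      # "denoised" estimate
    log_density=lambda x: -0.5 * np.sum(x**2),     # unnormalized log N(0, I)
    score_fn=lambda x: -x,                         # grad log N(0, I)
)
print(score)   # larger = more anomalous under all three views
```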
2308.15075
Angel Martin
Felipe Mogoll\'on, Zaloa Fern\'andez, Josu P\'erez and \'Angel Mart\'in
Benchmarking 5G MEC and Cloud infrastructures for planning IoT messaging of CCAM data
6 pages, 5 figures, 6 tables, IEEE International Conference on Intelligent Transportation Systems
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Vehicles embed many sensors supporting driving and safety. Combined with connectivity, they bring new possibilities for Connected, Cooperative and Automated Mobility (CCAM) services that exploit local and global data for a wide understanding beyond the myopic view of local sensors. Internet of Things (IoT) messaging solutions are ideal for vehicular data as they ship core features like the separation of geographic areas, the fusion of different producers on data/sensor types, and concurrent subscription support. Multi-access Edge Computing (MEC) and Cloud infrastructures are key to hosting a virtualized and distributed IoT platform. Currently, there are no benchmarks for assessing the appropriate size of an IoT platform for multiple vehicular data types such as text, image, binary point cloud, and video-formatted samples. This paper formulates and executes tests to benchmark the performance of a MEC and Cloud platform according to actor concurrency, data volumes, and business-level parameters.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 07:19:38 GMT" } ]
2023-08-30T00:00:00
[ [ "Mogollón", "Felipe", "" ], [ "Fernández", "Zaloa", "" ], [ "Pérez", "Josu", "" ], [ "Martín", "Ángel", "" ] ]
new_dataset
0.996504
2308.15104
Johanna Ansohn McDougall
Johanna Ansohn McDougall, Alessandro Brighente, Willi Gro{\ss}mann, Ben Ansohn McDougall, Joshua Stock, Hannes Federrath
LoVe is in the Air -- Location Verification of ADS-B Signals using Distributed Public Sensors
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Automatic Dependent Surveillance-Broadcast (ADS-B) message scheme was designed without any authentication or encryption of messages in place. It is therefore easily possible to attack it, e.g., by injecting spoofed messages or modifying the transmitted Global Navigation Satellite System (GNSS) coordinates. In order to verify the integrity of the received information, various methods have been suggested, such as multilateration, the use of Kalman filters, group certification, and many others. However, solutions based on modifications of the standard may be difficult and too slow to be implemented due to legal and regulatory issues. A far less explored avenue is location verification using public sensor data. In this paper, we propose LoVe, a lightweight message verification approach that uses a geospatial indexing scheme to evaluate the trustworthiness of publicly deployed sensors and the ADS-B messages they receive. With LoVe, new messages can be evaluated with respect to the plausibility of their reported coordinates in a location privacy-preserving manner, while using a data-driven and lightweight approach. By testing our approach on two open datasets, we show that LoVe achieves very low false positive rates (between 0 and 0.00106) and very low false negative rates (between 0.00065 and 0.00334) while providing a real-time compatible approach that scales well even with a large sensor set. Compared to currently existing approaches, LoVe neither requires a large number of sensors, nor for messages to be recorded by as many sensors as possible simultaneously in order to verify location claims. Furthermore, it can be directly applied to currently deployed systems, thus being backward compatible.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 08:13:08 GMT" } ]
2023-08-30T00:00:00
[ [ "McDougall", "Johanna Ansohn", "" ], [ "Brighente", "Alessandro", "" ], [ "Großmann", "Willi", "" ], [ "McDougall", "Ben Ansohn", "" ], [ "Stock", "Joshua", "" ], [ "Federrath", "Hannes", "" ] ]
new_dataset
0.998046
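A minimal sketch of the kind of plausibility test a sensor-based location verifier can run: a claimed ADS-B position is only credible if it lies within a realistic reception radius of the sensor that actually reported it. The haversine distance and the 400 km radius below are assumptions for illustration; LoVe's actual geospatial indexing and sensor-trust scoring are more elaborate than this single check.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def plausible(claimed, sensor, max_range_km=400.0):
    """Claimed (lat, lon) is plausible only if within a typical ADS-B
    reception radius of the receiving sensor. 400 km is an assumed bound,
    not a value from the paper."""
    return haversine_km(*claimed, *sensor) <= max_range_km

print(plausible((48.35, 11.79), (48.10, 11.27)))   # nearby claim: True
print(plausible((40.64, -73.78), (48.10, 11.27)))  # spoofed-looking claim: False
```

Hashing sensors into geographic grid cells, as the abstract's indexing scheme suggests, turns this per-message check into a cheap cell lookup instead of a scan over all sensors.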
2308.15136
Hiroyuki Ootomo
Hiroyuki Ootomo, Akira Naruse, Corey Nolet, Ray Wang, Tamas Feher, Yong Wang
CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search for GPUs
null
null
null
null
cs.DS cs.CV cs.DB cs.DC cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Approximate Nearest Neighbor Search (ANNS) plays a critical role in various disciplines spanning data mining and artificial intelligence, from information retrieval and computer vision to natural language processing and recommender systems. Data volumes have soared in recent years and the computational cost of an exhaustive exact nearest neighbor search is often prohibitive, necessitating the adoption of approximate techniques. The balanced performance and recall of graph-based approaches have more recently garnered significant attention in ANNS algorithms; however, only a few studies have explored harnessing the power of GPUs and multi-core processors despite the widespread use of massively parallel and general-purpose computing. To bridge this gap, we introduce a novel parallel computing hardware-based proximity graph and search algorithm. By leveraging the high-performance capabilities of modern hardware, our approach achieves remarkable efficiency gains. In particular, our method surpasses existing CPU and GPU-based methods in constructing the proximity graph, demonstrating higher throughput in both large- and small-batch searches while maintaining comparable accuracy. In graph construction time, our method, CAGRA, is 2.2~27x faster than HNSW, which is one of the CPU SOTA implementations. In large-batch query throughput in the 90% to 95% recall range, our method is 33~77x faster than HNSW, and is 3.8~8.8x faster than the SOTA implementations for GPU. For a single query, our method is 3.4~53x faster than HNSW at 95% recall.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 09:10:53 GMT" } ]
2023-08-30T00:00:00
[ [ "Ootomo", "Hiroyuki", "" ], [ "Naruse", "Akira", "" ], [ "Nolet", "Corey", "" ], [ "Wang", "Ray", "" ], [ "Feher", "Tamas", "" ], [ "Wang", "Yong", "" ] ]
new_dataset
0.959113
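For readers unfamiliar with graph-based ANNS, the following sketch shows the greedy best-first traversal at the core of this family of methods (HNSW, CAGRA, and relatives). It is a generic, single-threaded illustration over assumed data structures; none of CAGRA's GPU parallelism or its graph-construction algorithm is modelled here.

```python
import heapq
import numpy as np

def greedy_graph_search(graph, vectors, query, start, k=5, budget=200):
    """Best-first traversal of a proximity graph: expand the closest
    unexplored node, keep a top-k result heap, stop when the frontier
    can no longer improve the results (or the visit budget runs out)."""
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {start}
    frontier = [(dist(start), start)]          # min-heap by distance
    best = [(-dist(start), start)]             # max-heap (negated) of top-k
    while frontier and budget > 0:
        d, node = heapq.heappop(frontier)
        if len(best) >= k and d > -best[0][0]:
            break                              # nearest frontier node is worse than worst kept
        for nb in graph[node]:
            if nb in visited:
                continue
            visited.add(nb)
            budget -= 1
            dn = dist(nb)
            heapq.heappush(frontier, (dn, nb))
            heapq.heappush(best, (-dn, nb))
            if len(best) > k:
                heapq.heappop(best)            # evict current worst candidate
    return sorted((-d, i) for d, i in best)

rng = np.random.default_rng(1)
vecs = rng.normal(size=(100, 8))
graph = {i: list(rng.choice(100, size=8, replace=False)) for i in range(100)}  # toy graph
print(greedy_graph_search(graph, vecs, vecs[42], start=0)[:3])
```

The quality/speed trade-off lives in the graph's degree and the visit budget; CAGRA's contribution is making both the construction and many such traversals run efficiently in parallel on a GPU.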
2308.15139
Goshgar Ismayilov
Goshgar Ismayilov, Can Ozturan
PTTS: Zero-Knowledge Proof-based Private Token Transfer System on Ethereum Blockchain and its Network Flow Based Balance Range Privacy Attack Analysis
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blockchains are decentralized and immutable databases that are shared among the nodes of the network. Although blockchains have attracted a great deal of attention in recent years by disrupting the traditional financial systems, transaction privacy is still a challenging issue that needs to be addressed and analysed. We propose a Private Token Transfer System (PTTS) for the Ethereum public blockchain in the first part of this paper. For the proposed framework, a zero-knowledge-based protocol has been designed using Zokrates and integrated into our private token smart contract. With the help of the web user interface designed, the end users can interact with the smart contract without any third-party setup. In the second part of the paper, we provide security and privacy analysis including the replay attack and the balance range privacy attack which has been modelled as a network flow problem. It is shown that in case some balance ranges are deliberately leaked out to particular organizations or adversarial entities, it is possible to extract meaningful information about the user balances by employing minimum cost flow network algorithms that have polynomial complexity. The experimental study reports the Ethereum gas consumption and proof generation times for the proposed framework. It also reports network solution times and goodness rates for a subset of addresses under the balance range privacy attack with respect to the number of addresses, the number of transactions, and the ratio of leaked transfer transaction amounts.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 09:13:31 GMT" } ]
2023-08-30T00:00:00
[ [ "Ismayilov", "Goshgar", "" ], [ "Ozturan", "Can", "" ] ]
new_dataset
0.99801
2308.15142
Shuxiao Ma
Shuxiao Ma and Linyuan Wang and Bin Yan
A Multimodal Visual Encoding Model Aided by Introducing Verbal Semantic Information
null
null
null
null
cs.CV cs.AI q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Biological research has revealed that the verbal semantic information in the brain cortex, as an additional source, participates in nonverbal semantic tasks, such as visual encoding. However, previous visual encoding models did not incorporate verbal semantic information, contradicting this biological finding. In response to this issue, this paper proposes a multimodal visual information encoding network model based on stimulus images and associated textual information. Our visual information encoding network model takes stimulus images as input and leverages textual information generated by a text-image generation model as verbal semantic information. This approach injects new information into the visual encoding model. Subsequently, a Transformer network aligns image and text feature information, creating a multimodal feature space. A convolutional network then maps from this multimodal feature space to voxel space, constructing the multimodal visual information encoding network model. Experimental results demonstrate that the proposed multimodal visual information encoding network model outperforms previous models at the same training cost. In voxel prediction of the left hemisphere of subject 1's brain, the performance improves by approximately 15.87%, while in the right hemisphere, the performance improves by about 4.6%. The multimodal visual encoding network model exhibits superior encoding performance. Additionally, ablation experiments indicate that our proposed model better simulates the brain's visual information processing.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 09:21:48 GMT" } ]
2023-08-30T00:00:00
[ [ "Ma", "Shuxiao", "" ], [ "Wang", "Linyuan", "" ], [ "Yan", "Bin", "" ] ]
new_dataset
0.975579
2308.15154
Margherita Gambini
Margherita Gambini, Serena Tardelli, Maurizio Tesconi
The Anatomy of Conspirators: Unveiling Traits using a Comprehensive Twitter Dataset
null
null
null
null
cs.SI cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
The discourse around conspiracy theories is currently thriving amidst the rampant misinformation prevalent in online environments. Research in this field has been focused on detecting conspiracy theories on social media, often relying on limited datasets. In this study, we present a novel methodology for constructing a Twitter dataset that encompasses accounts engaged in conspiracy-related activities throughout the year 2022. Our approach centers on data collection that is independent of specific conspiracy theories and information operations. Additionally, our dataset includes a control group comprising randomly selected users who can be fairly compared to the individuals involved in conspiracy activities. This comprehensive collection effort yielded a total of 15K accounts and 37M tweets extracted from their timelines. We conduct a comparative analysis of the two groups across three dimensions: topics, profiles, and behavioral characteristics. The results indicate that conspiracy and control users exhibit similarity in terms of their profile metadata characteristics. However, they diverge significantly in terms of behavior and activity, particularly regarding the discussed topics, the terminology used, and their stance on trending subjects. Interestingly, there is no significant disparity in the presence of bot users between the two groups, suggesting that conspiracy and automation are orthogonal concepts. Finally, we develop a classifier to identify conspiracy users using 93 features, some of which are commonly employed in literature for troll identification. The results demonstrate a high accuracy level (with an average F1 score of 0.98), enabling us to uncover the most discriminative features associated with conspiracy-related accounts.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 09:35:23 GMT" } ]
2023-08-30T00:00:00
[ [ "Gambini", "Margherita", "" ], [ "Tardelli", "Serena", "" ], [ "Tesconi", "Maurizio", "" ] ]
new_dataset
0.999286
2308.15161
Daniela P\"ohn
Lukas Hafner and Florian Wutz and Daniela P\"ohn and Wolfgang Hommel
TASEP: A Collaborative Social Engineering Tabletop Role-Playing Game to Prevent Successful Social Engineering Attacks
null
null
10.1145/3600160.3605005
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data breaches resulting from targeted attacks against organizations, e.g., by advanced persistent threat groups, often involve social engineering (SE) as the initial attack vector before malicious software is used, e.g., for persistence, lateral movement, and data exfiltration. While technical security controls, such as the automated detection of phishing emails, can contribute to mitigating SE risks, raising awareness for SE attacks through education and motivation of personnel is an important building block to increasing an organization's resilience. To facilitate hands-on SE awareness training as one component of broader SE awareness campaigns, we created a SE tabletop game called Tabletop As Social Engineering Prevention (TASEP) in two editions for (a) small and medium enterprises and (b) large corporations, respectively. Its game design is inspired by Dungeons & Dragons role-playing games and incorporates LEGO models of the in-game target organizations. Participants switch roles by playing a group of SE penetration testers and conducting a security audit guided by the game master. We evaluated the game with different student groups; the training proved highly immersive and flexible, offering an entertaining way of learning about SE and raising awareness.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 09:44:35 GMT" } ]
2023-08-30T00:00:00
[ [ "Hafner", "Lukas", "" ], [ "Wutz", "Florian", "" ], [ "Pöhn", "Daniela", "" ], [ "Hommel", "Wolfgang", "" ] ]
new_dataset
0.998964
2308.15224
Tae Soo Kim
Tae Soo Kim, Matt Latzke, Jonathan Bragg, Amy X. Zhang, Joseph Chee Chang
Papeos: Augmenting Research Papers with Talk Videos
Accepted to UIST 2023
null
10.1145/3586183.3606770
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Research consumption has been traditionally limited to the reading of academic papers: a static, dense, and formally written format. Alternatively, pre-recorded conference presentation videos, which are more dynamic, concise, and colloquial, have recently become more widely available but potentially under-utilized. In this work, we explore the design space and benefits for combining academic papers and talk videos to leverage their complementary nature to provide a rich and fluid research consumption experience. Based on formative and co-design studies, we present Papeos, a novel reading and authoring interface that allows authors to augment their papers by segmenting and localizing talk videos alongside relevant paper passages with automatically generated suggestions. With Papeos, readers can visually skim a paper through clip thumbnails, and fluidly switch between consuming dense text in the paper or visual summaries in the video. In a comparative lab study (n=16), Papeos reduced mental load, scaffolded navigation, and facilitated more comprehensive reading of papers.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 11:25:30 GMT" } ]
2023-08-30T00:00:00
[ [ "Kim", "Tae Soo", "" ], [ "Latzke", "Matt", "" ], [ "Bragg", "Jonathan", "" ], [ "Zhang", "Amy X.", "" ], [ "Chang", "Joseph Chee", "" ] ]
new_dataset
0.999398
2308.15349
Vakhtang Putkaradze Dr.
Christopher Eldred, Fran\c{c}ois Gay-Balmaz, Sofiia Huraka, Vakhtang Putkaradze
Lie-Poisson Neural Networks (LPNets): Data-Based Computing of Hamiltonian Systems with Symmetries
57 pages, 13 figures
null
null
null
cs.LG math-ph math.MP
http://creativecommons.org/licenses/by/4.0/
An accurate data-based prediction of the long-term evolution of Hamiltonian systems requires a network that preserves the appropriate structure under each time step. Every Hamiltonian system contains two essential ingredients: the Poisson bracket and the Hamiltonian. Hamiltonian systems with symmetries, whose paradigm examples are the Lie-Poisson systems, have been shown to describe a broad category of physical phenomena, from satellite motion to underwater vehicles, fluids, geophysical applications, complex fluids, and plasma physics. The Poisson bracket in these systems comes from the symmetries, while the Hamiltonian comes from the underlying physics. We view the symmetry of the system as primary, hence the Lie-Poisson bracket is known exactly, whereas the Hamiltonian is regarded as coming from physics and is considered not known, or known approximately. Using this approach, we develop a network based on transformations that exactly preserve the Poisson bracket and the special functions of the Lie-Poisson systems (Casimirs) to machine precision. We present two flavors of such systems: one, where the parameters of transformations are computed from data using a dense neural network (LPNets), and another, where the composition of transformations is used as building blocks (G-LPNets). We also show how to adapt these methods to a larger class of Poisson brackets. We apply the resulting methods to several examples, such as rigid body (satellite) motion, underwater vehicles, a particle in a magnetic field, and others. The methods developed in this paper are important for the construction of accurate data-based methods for simulating the long-term dynamics of physical systems.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 14:45:23 GMT" } ]
2023-08-30T00:00:00
[ [ "Eldred", "Christopher", "" ], [ "Gay-Balmaz", "François", "" ], [ "Huraka", "Sofiia", "" ], [ "Putkaradze", "Vakhtang", "" ] ]
new_dataset
0.954916
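To illustrate the structural idea behind the LPNets abstract above (transformations that preserve the Poisson bracket and Casimirs exactly), here is a sketch of one explicit step for the free rigid body: the angular momentum is advanced by a rotation, so the Casimir |Pi| is conserved to machine precision. This is a plain hand-coded integrator under assumed inertia values; in LPNets the parameters of such transformations would be predicted from data by a network rather than computed from a known Hamiltonian.

```python
import numpy as np

def rodrigues(omega, h):
    """Rotation matrix exp(-h * hat(omega)) via Rodrigues' formula."""
    th = h * np.linalg.norm(omega)
    if th < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) - np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def rigid_body_step(Pi, inertia_diag, h=0.01):
    """One explicit step Pi -> exp(-h*hat(Omega)) Pi with Omega = I^{-1} Pi
    (Euler's equations Pi' = Pi x Omega with Omega frozen over the step).
    Being a rotation, the map preserves the Casimir |Pi| exactly."""
    Omega = Pi / inertia_diag
    return rodrigues(Omega, h) @ Pi

Pi = np.array([1.0, 0.2, -0.3])
I_diag = np.array([1.0, 2.0, 3.0])   # assumed principal moments of inertia
for _ in range(1000):
    Pi = rigid_body_step(Pi, I_diag)
print(np.linalg.norm(Pi))            # stays at the initial |Pi| up to round-off
```

Because every step is a rotation, the trajectory never drifts off the momentum sphere, which is exactly the kind of structural guarantee the paper builds into its learned transformations.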
2308.15402
Mohammad Akhlaqur Rahman
Shahriar Elahi Dhruvo, Mohammad Akhlaqur Rahman, Manash Kumar Mandal, Md. Istiak Hossain Shihab, A. A. Noman Ansary, Kaneez Fatema Shithi, Sanjida Khanom, Rabeya Akter, Safaeid Hossain Arib, M.N. Ansary, Sazia Mehnaz, Rezwana Sultana, Sejuti Rahman, Sayma Sultana Chowdhury, Sabbir Ahmed Chowdhury, Farig Sadeque, Asif Sushmit
Bornil: An open-source sign language data crowdsourcing platform for AI enabled dialect-agnostic communication
6 pages, 7 figures
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The absence of annotated sign language datasets has hindered the development of sign language recognition and translation technologies. In this paper, we introduce Bornil; a crowdsource-friendly, multilingual sign language data collection, annotation, and validation platform. Bornil allows users to record sign language gestures and lets annotators perform sentence and gloss-level annotation. It also allows validators to make sure of the quality of both the recorded videos and the annotations through manual validation to develop high-quality datasets for deep learning-based Automatic Sign Language Recognition. To demonstrate the system's efficacy; we collected the largest sign language dataset for Bangladeshi Sign Language dialect, perform deep learning based Sign Language Recognition modeling, and report the benchmark performance. The Bornil platform, BornilDB v1.0 Dataset, and the codebases are available on https://bornil.bengali.ai
[ { "version": "v1", "created": "Tue, 29 Aug 2023 16:00:06 GMT" } ]
2023-08-30T00:00:00
[ [ "Dhruvo", "Shahriar Elahi", "" ], [ "Rahman", "Mohammad Akhlaqur", "" ], [ "Mandal", "Manash Kumar", "" ], [ "Shihab", "Md. Istiak Hossain", "" ], [ "Ansary", "A. A. Noman", "" ], [ "Shithi", "Kaneez Fatema", "" ], [ "Khanom", "Sanjida", "" ], [ "Akter", "Rabeya", "" ], [ "Arib", "Safaeid Hossain", "" ], [ "Ansary", "M. N.", "" ], [ "Mehnaz", "Sazia", "" ], [ "Sultana", "Rezwana", "" ], [ "Rahman", "Sejuti", "" ], [ "Chowdhury", "Sayma Sultana", "" ], [ "Chowdhury", "Sabbir Ahmed", "" ], [ "Sadeque", "Farig", "" ], [ "Sushmit", "Asif", "" ] ]
new_dataset
0.999291
2308.15403
Peter Manohar
Omar Alrabiah, Venkatesan Guruswami, Pravesh K. Kothari, Peter Manohar
A Near-Cubic Lower Bound for 3-Query Locally Decodable Codes from Semirandom CSP Refutation
null
null
null
null
cs.CC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A code $C \colon \{0,1\}^k \to \{0,1\}^n$ is a $q$-locally decodable code ($q$-LDC) if one can recover any chosen bit $b_i$ of the message $b \in \{0,1\}^k$ with good confidence by randomly querying the encoding $x := C(b)$ on at most $q$ coordinates. Existing constructions of $2$-LDCs achieve $n = \exp(O(k))$, and lower bounds show that this is in fact tight. However, when $q = 3$, far less is known: the best constructions achieve $n = \exp(k^{o(1)})$, while the best known results only show a quadratic lower bound $n \geq \tilde{\Omega}(k^2)$ on the blocklength. In this paper, we prove a near-cubic lower bound of $n \geq \tilde{\Omega}(k^3)$ on the blocklength of $3$-query LDCs. This improves on the best known prior works by a polynomial factor in $k$. Our proof relies on a new connection between LDCs and refuting constraint satisfaction problems with limited randomness. Our quantitative improvement builds on the new techniques for refuting semirandom instances of CSPs developed in [GKM22, HKM23] and, in particular, relies on bounding the spectral norm of appropriate Kikuchi matrices.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 16:00:57 GMT" } ]
2023-08-30T00:00:00
[ [ "Alrabiah", "Omar", "" ], [ "Guruswami", "Venkatesan", "" ], [ "Kothari", "Pravesh K.", "" ], [ "Manohar", "Peter", "" ] ]
new_dataset
0.95595
2308.15429
Andrew McNutt
Elsie Lee-Robbins, Andrew McNutt
Only YOU Can Make IEEE VIS Environmentally Sustainable
Accepted to alt.vis2023
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
The IEEE VIS Conference (or VIS) hosts more than 1000 people annually. It brings together visualization researchers and practitioners from across the world to share new research and knowledge. Behind the scenes, a team of volunteers puts together the entire conference and makes sure it runs smoothly. Organizing involves logistics of the conference, ensuring that the attendees have an enjoyable time, allocating rooms to multiple concurrent tracks, and keeping the conference within budget. In recent years, the COVID-19 pandemic has abruptly disrupted plans, forcing organizers to switch to virtual, hybrid, and satellite formats. These alternatives offer many benefits: fewer costs (e.g., travel, venue, institutional), greater accessibility (who can physically travel, who can get visas, who can get child care), and a lower carbon footprint (as people do not need to fly to attend). As many conferences begin to revert to the pre-pandemic status quo of primarily in-person conferences, we suggest that it is an opportune moment to reflect on the benefits and drawbacks of lower-carbon conference formats. To learn more about the logistics of conference organizing, we talked to 6 senior executive-level VIS organizers. We review some of the many considerations that go into planning, particularly with regard to how they influence decisions about alternative formats. We aim to start a discussion about the sustainability of VIS -- including sustainability for finance, volunteers, and, central to this work, the environment -- for the next three years and the next three hundred years.
[ { "version": "v1", "created": "Tue, 29 Aug 2023 16:43:43 GMT" } ]
2023-08-30T00:00:00
[ [ "Lee-Robbins", "Elsie", "" ], [ "McNutt", "Andrew", "" ] ]
new_dataset
0.989381
2111.11011
Tianlun Zheng
Tianlun Zheng, Zhineng Chen, Shancheng Fang, Hongtao Xie, Yu-Gang Jiang
CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition
Paper accepted for publication at IJCV 2023
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Transformer-based encoder-decoder framework is becoming popular in scene text recognition, largely because it naturally integrates recognition clues from both visual and semantic domains. However, recent studies show that the two kinds of clues are not always well registered and, therefore, features and characters might be misaligned in difficult text (e.g., with a rare shape). As a result, constraints such as character position are introduced to alleviate this problem. Despite certain success, visual and semantic clues are still modeled separately and merely loosely associated. In this paper, we propose a novel module called Multi-Domain Character Distance Perception (MDCDP) to establish a visually and semantically related position embedding. MDCDP uses the position embedding to query both visual and semantic features following the cross-attention mechanism. The two kinds of clues are fused into the position branch, generating a content-aware embedding that well perceives character spacing and orientation variants, character semantic affinities, and clues tying the two kinds of information. They are summarized as the multi-domain character distance. We develop CDistNet, which stacks multiple MDCDPs to guide gradually precise distance modeling. Thus, feature-character alignment is well built even when various recognition difficulties are present. We verify CDistNet on ten challenging public datasets and two series of augmented datasets created by ourselves. The experiments demonstrate that CDistNet performs highly competitively. It not only ranks top-tier in standard benchmarks, but also outperforms recent popular methods by obvious margins on real and augmented datasets presenting severe text deformation, poor linguistic support, and rare character layouts. Code is available at https://github.com/simplify23/CDistNet.
[ { "version": "v1", "created": "Mon, 22 Nov 2021 06:27:29 GMT" }, { "version": "v2", "created": "Thu, 25 Nov 2021 02:46:11 GMT" }, { "version": "v3", "created": "Wed, 22 Jun 2022 00:21:12 GMT" }, { "version": "v4", "created": "Fri, 11 Aug 2023 03:17:54 GMT" }, { "version": "v5", "created": "Sun, 27 Aug 2023 02:55:53 GMT" } ]
2023-08-29T00:00:00
[ [ "Zheng", "Tianlun", "" ], [ "Chen", "Zhineng", "" ], [ "Fang", "Shancheng", "" ], [ "Xie", "Hongtao", "" ], [ "Jiang", "Yu-Gang", "" ] ]
new_dataset
0.997374
2201.06096
Jeremy Kepner
Jeremy Kepner, Kenjiro Cho, KC Claffy, Vijay Gadepally, Sarah McGuire, Lauren Milechin, William Arcand, David Bestor, William Bergeron, Chansup Byun, Matthew Hubbell, Michael Houle, Michael Jones, Andrew Prout, Albert Reuther, Antonio Rosa, Siddharth Samsi, Charles Yee, Peter Michaleas
New Phenomena in Large-Scale Internet Traffic
53 pages, 27 figures, 8 tables, 121 references. Portions of this work originally appeared as arXiv:1904.04396v1 which has been split for publication in the book "Massive Graph Analytics" (edited by David Bader)
null
10.1201/9781003033707
null
cs.NI cs.CY cs.DC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Internet is transforming our society, necessitating a quantitative understanding of Internet traffic. Our team collects and curates the largest publicly available Internet traffic data sets. An analysis of 50 billion packets using 10,000 processors in the MIT SuperCloud reveals a new phenomenon: the importance of otherwise unseen leaf nodes and isolated links in Internet traffic. Our analysis further shows that a two-parameter modified Zipf-Mandelbrot distribution accurately describes a wide variety of source/destination statistics on moving sample windows ranging from 100,000 to 100,000,000 packets over collections that span years and continents. The measured model parameters distinguish different network streams, and the model leaf parameter strongly correlates with the fraction of the traffic in different underlying network topologies.
[ { "version": "v1", "created": "Sun, 16 Jan 2022 17:30:10 GMT" } ]
2023-08-29T00:00:00
[ [ "Kepner", "Jeremy", "" ], [ "Cho", "Kenjiro", "" ], [ "Claffy", "KC", "" ], [ "Gadepally", "Vijay", "" ], [ "McGuire", "Sarah", "" ], [ "Milechin", "Lauren", "" ], [ "Arcand", "William", "" ], [ "Bestor", "David", "" ], [ "Bergeron", "William", "" ], [ "Byun", "Chansup", "" ], [ "Hubbell", "Matthew", "" ], [ "Houle", "Michael", "" ], [ "Jones", "Michael", "" ], [ "Prout", "Andrew", "" ], [ "Reuther", "Albert", "" ], [ "Rosa", "Antonio", "" ], [ "Samsi", "Siddharth", "" ], [ "Yee", "Charles", "" ], [ "Michaleas", "Peter", "" ] ]
new_dataset
0.990778
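The Zipf-Mandelbrot distribution mentioned in the abstract above has, in its standard two-parameter form, rank probabilities proportional to 1/(k + delta)^alpha. The sketch below generates such a distribution and recovers its parameters by a crude grid fit; the modified parameterization and fitting procedure actually used in the chapter may well differ, so treat this as an illustration of the functional form only.

```python
import numpy as np

def zipf_mandelbrot(ranks, alpha, delta):
    """Standard two-parameter Zipf-Mandelbrot form p(k) ~ 1/(k + delta)^alpha."""
    w = (ranks + delta) ** (-alpha)
    return w / w.sum()

def fit_by_grid(ranks, probs, alphas, deltas):
    """Least-squares grid fit in log space, enough to show how alpha controls
    the slope and delta flattens the head of the rank distribution."""
    return min(((a, d) for a in alphas for d in deltas),
               key=lambda p: np.sum(
                   (np.log(zipf_mandelbrot(ranks, *p)) - np.log(probs)) ** 2))

ranks = np.arange(1, 1001, dtype=float)
truth = zipf_mandelbrot(ranks, alpha=1.3, delta=4.0)
print(fit_by_grid(ranks, truth,
                  np.linspace(1.0, 2.0, 11), np.linspace(0, 8, 9)))  # ~(1.3, 4.0)
```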
2202.13799
Donghwee Yoon
Junseok Oh, Donghwee Yoon and Injung Kim
One-shot Ultra-high-Resolution Generative Adversarial Network That Synthesizes 16K Images On A Single GPU
36 pages, 26 figures
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a one-shot ultra-high-resolution generative adversarial network (OUR-GAN) framework that generates non-repetitive 16K (16,384 x 8,640) images from a single training image and is trainable on a single consumer GPU. OUR-GAN generates an initial image that is visually plausible and varied in shape at low resolution, and then gradually increases the resolution by adding detail through super-resolution. Since OUR-GAN learns from a real ultra-high-resolution (UHR) image, it can synthesize large shapes with fine details and long-range coherence, which is difficult to achieve with conventional generative models that rely on the patch distribution learned from relatively small images. OUR-GAN can synthesize high-quality 16K images with 12.5 GB of GPU memory and 4K images with only 4.29 GB as it synthesizes a UHR image part by part through seamless subregion-wise super-resolution. Additionally, OUR-GAN improves visual coherence while maintaining diversity by applying vertical positional convolution. In experiments on the ST4K and RAISE datasets, OUR-GAN exhibited improved fidelity, visual coherency, and diversity compared with the baseline one-shot synthesis models. To the best of our knowledge, OUR-GAN is the first one-shot image synthesizer that generates non-repetitive UHR images on a single consumer GPU. The synthesized image samples are presented at https://our-gan.github.io.
[ { "version": "v1", "created": "Mon, 28 Feb 2022 13:48:41 GMT" }, { "version": "v2", "created": "Thu, 21 Apr 2022 08:04:10 GMT" }, { "version": "v3", "created": "Mon, 28 Aug 2023 04:52:53 GMT" } ]
2023-08-29T00:00:00
[ [ "Oh", "Junseok", "" ], [ "Yoon", "Donghwee", "" ], [ "Kim", "Injung", "" ] ]
new_dataset
0.96298
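A generic sketch of part-by-part synthesis like that described in the OUR-GAN abstract: super-resolve overlapping tiles and blend them under tapered windows so peak memory stays proportional to one tile and seams cancel. The tile size, overlap, and the nearest-neighbour stand-in for the SR model are all assumptions; OUR-GAN's actual seamless subregion-wise scheme and its vertical positional convolution are more involved.

```python
import numpy as np

def ramp_window(h, w, ramp):
    """2D blending window tapering linearly over `ramp` pixels at each edge."""
    def ramp1d(n):
        r = np.ones(n)
        m = min(ramp, n // 2)
        if m > 0:
            t = np.linspace(0, 1, m + 2)[1:-1]
            r[:m], r[n - m:] = t, t[::-1]
        return r
    return np.outer(ramp1d(h), ramp1d(w))

def tiled_upscale(img, sr_fn, scale, tile=64, overlap=16):
    """Run sr_fn on overlapping tiles and average the results under tapered
    windows; normalizing by the accumulated weights removes visible seams."""
    H, W = img.shape
    out = np.zeros((H * scale, W * scale))
    acc = np.zeros_like(out)
    step = tile - overlap
    for y in range(0, max(H - overlap, 1), step):
        for x in range(0, max(W - overlap, 1), step):
            up = sr_fn(img[y:y + tile, x:x + tile])
            win = ramp_window(*up.shape, ramp=overlap * scale // 2)
            ys, xs = y * scale, x * scale
            out[ys:ys + up.shape[0], xs:xs + up.shape[1]] += up * win
            acc[ys:ys + up.shape[0], xs:xs + up.shape[1]] += win
    return out / np.maximum(acc, 1e-8)

nearest2x = lambda p: p.repeat(2, axis=0).repeat(2, axis=1)  # stand-in "SR model"
print(tiled_upscale(np.random.rand(200, 300), nearest2x, scale=2).shape)  # (400, 600)
```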
2205.10292
Tommaso Bianchi
Tommaso Bianchi, Surudhi Asokraj, Alessandro Brighente, Mauro Conti, Radha Poovendran
QEVSEC: Quick Electric Vehicle SEcure Charging via Dynamic Wireless Power Transfer
6 pages, conference
2023 IEEE 97th Vehicular Technology Conference (VTC2023-Spring), Florence, Italy, 2023, pp. 1-6
10.1109/VTC2023-Spring57618.2023.10199651
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dynamic Wireless Power Transfer (DWPT) can be used for on-demand recharging of Electric Vehicles (EV) while driving. However, DWPT raises numerous security and privacy concerns. Recently, researchers demonstrated that DWPT systems are vulnerable to adversarial attacks. In an EV charging scenario, an attacker can prevent the authorized customer from charging, obtain a free charge by billing a victim user, and track a target vehicle. State-of-the-art authentication schemes relying on centralized solutions are either vulnerable to various attacks or have high computational complexity, making them unsuitable for a dynamic scenario. In this paper, we propose Quick Electric Vehicle SEcure Charging (QEVSEC), a novel, secure, and efficient authentication protocol for the dynamic charging of EVs. Our idea for QEVSEC originates from multiple vulnerabilities we found in the state-of-the-art protocol, which allows tracking of user activity and is susceptible to replay attacks. Based on these observations, the proposed protocol solves these issues and achieves lower computational complexity by using only primitive cryptographic operations in a very short message exchange. QEVSEC provides scalability and a reduced cost in each iteration, thus lowering the impact on the power needed from the grid.
[ { "version": "v1", "created": "Fri, 20 May 2022 16:42:32 GMT" }, { "version": "v2", "created": "Thu, 27 Apr 2023 10:20:25 GMT" }, { "version": "v3", "created": "Mon, 28 Aug 2023 08:18:28 GMT" } ]
2023-08-29T00:00:00
[ [ "Bianchi", "Tommaso", "" ], [ "Asokraj", "Surudhi", "" ], [ "Brighente", "Alessandro", "" ], [ "Conti", "Mauro", "" ], [ "Poovendran", "Radha", "" ] ]
new_dataset
0.999432
2206.04678
Xi Chen
Xi Chen, Yun Xiong, Siqi Wang, Haofen Wang, Tao Sheng, Yao Zhang, Yu Ye
ReCo: A Dataset for Residential Community Layout Planning
9 pages, 8 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Layout planning is centrally important in the field of architecture and urban design. Among the various basic units carrying urban functions, the residential community plays a vital part in supporting human life. Therefore, the layout planning of residential communities has always been of concern, and has attracted particular attention since the advent of deep learning, which facilitates automated layout generation and spatial pattern recognition. However, the research community generally suffers from a lack of residential community layout benchmarks and high-quality datasets, which hampers the future exploration of data-driven methods for residential community layout planning. The lack of datasets is largely due to the difficulties of large-scale real-world residential data acquisition and long-term expert screening. In order to address these issues and provide a benchmark dataset for various intelligent spatial design and analysis applications in the development of smart cities, we introduce the Residential Community Layout Planning (ReCo) Dataset, which is the first and largest open-source vector dataset related to real-world communities to date. ReCo Dataset is presented in multiple data formats with 37,646 residential community layout plans, covering 598,728 residential buildings with height information. ReCo can be conveniently adapted for residential community layout related urban design tasks, e.g., generative layout design, morphological pattern recognition, and spatial evaluation. To validate the utility of ReCo in automated residential community layout planning, two Generative Adversarial Network (GAN) based generative models are further applied to the dataset. We expect ReCo Dataset to inspire more creative and practical work in intelligent design and beyond. The ReCo Dataset is published at: https://www.kaggle.com/fdudsde/reco-dataset.
[ { "version": "v1", "created": "Wed, 8 Jun 2022 17:19:55 GMT" }, { "version": "v2", "created": "Mon, 15 Aug 2022 07:20:56 GMT" }, { "version": "v3", "created": "Sun, 27 Aug 2023 14:35:43 GMT" } ]
2023-08-29T00:00:00
[ [ "Chen", "Xi", "" ], [ "Xiong", "Yun", "" ], [ "Wang", "Siqi", "" ], [ "Wang", "Haofen", "" ], [ "Sheng", "Tao", "" ], [ "Zhang", "Yao", "" ], [ "Ye", "Yu", "" ] ]
new_dataset
0.999852
2206.08955
Sergey A. Slavnov
Sergey Slavnov
Making first order linear logic a generating grammar
Revised and extended version with detailed proofs. arXiv admin note: substantial text overlap with arXiv:2112.15253
null
null
null
cs.CL cs.LO math.LO
http://creativecommons.org/licenses/by/4.0/
It is known that different categorial grammars have surface representation in a fragment of first order multiplicative linear logic (MLL1). We show that the fragment of interest is equivalent to the recently introduced extended tensor type calculus (ETTC). ETTC is a calculus of specific typed terms, which represent tuples of strings, more precisely bipartite graphs decorated with strings. Types are derived from linear logic formulas, and rules correspond to concrete operations on these string-labeled graphs, so that they can be conveniently visualized. This provides the above-mentioned fragment of MLL1 that is relevant for language modeling not only with some alternative syntax and intuitive geometric representation, but also with an intrinsic deductive system, which has been absent. In this work, we consider a non-trivial notationally enriched variation of the previously introduced ETTC, which allows more concise and transparent computations. We present both a cut-free sequent calculus and a natural deduction formalism.
[ { "version": "v1", "created": "Fri, 17 Jun 2022 18:11:34 GMT" }, { "version": "v2", "created": "Fri, 7 Apr 2023 13:58:26 GMT" }, { "version": "v3", "created": "Wed, 23 Aug 2023 05:34:42 GMT" }, { "version": "v4", "created": "Mon, 28 Aug 2023 11:19:57 GMT" } ]
2023-08-29T00:00:00
[ [ "Slavnov", "Sergey", "" ] ]
new_dataset
0.987393
2208.09702
Giovanni Viglietta
Csaba D. T\'oth, Jorge Urrutia, and Giovanni Viglietta
Minimizing Visible Edges in Polyhedra
19 pages, 9 figures
null
null
null
cs.CG cs.DM
http://creativecommons.org/licenses/by/4.0/
We prove that, given a polyhedron $\mathcal P$ in $\mathbb{R}^3$, every point in $\mathbb R^3$ that does not see any vertex of $\mathcal P$ must see eight or more edges of $\mathcal P$, and this bound is tight. More generally, this remains true if $\mathcal P$ is any finite arrangement of internally disjoint polygons in $\mathbb{R}^3$. We also prove that every point in $\mathbb{R}^3$ can see six or more edges of $\mathcal{P}$ (possibly only the endpoints of some of these edges) and every point in the interior of $\mathcal{P}$ can see a positive portion of at least six edges of $\mathcal{P}$. These bounds are also tight.
[ { "version": "v1", "created": "Sat, 20 Aug 2022 14:59:58 GMT" }, { "version": "v2", "created": "Sun, 4 Jun 2023 01:54:28 GMT" }, { "version": "v3", "created": "Mon, 28 Aug 2023 12:54:41 GMT" } ]
2023-08-29T00:00:00
[ [ "Tóth", "Csaba D.", "" ], [ "Urrutia", "Jorge", "" ], [ "Viglietta", "Giovanni", "" ] ]
new_dataset
0.984652
2210.08423
Ishan Rajendrakumar Dave
Tushar Sangam, Ishan Rajendrakumar Dave, Waqas Sultani, Mubarak Shah
TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos
ICRA 2023
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones. However, existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices. In this work, we propose a simple yet effective framework, \textit{TransVisDrone}, that provides an end-to-end solution with higher computational efficiency. We utilize the CSPDarkNet-53 network to learn object-related spatial features and the VideoSwin model to improve drone detection in challenging scenarios by learning spatio-temporal dependencies of drone motion. Our method achieves state-of-the-art performance on three challenging real-world datasets (Average Precision@0.5IoU): NPS 0.95, FLDrones 0.75, and AOT 0.80, and a higher throughput than previous methods. We also demonstrate its deployment capability on edge devices and its usefulness in detecting drone collisions (encounters). Project: \url{https://tusharsangam.github.io/TransVisDrone-project-page/}.
[ { "version": "v1", "created": "Sun, 16 Oct 2022 03:05:13 GMT" }, { "version": "v2", "created": "Sat, 26 Aug 2023 00:54:05 GMT" } ]
2023-08-29T00:00:00
[ [ "Sangam", "Tushar", "" ], [ "Dave", "Ishan Rajendrakumar", "" ], [ "Sultani", "Waqas", "" ], [ "Shah", "Mubarak", "" ] ]
new_dataset
0.998885
2210.17262
Wei Day
Wei Day, Hao-Sheng Chen, Min-Te Sun
QNet: A Quantum-native Sequence Encoder Architecture
QCE23: 2023 IEEE International Conference on Quantum Computing & Engineering
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This work proposes QNet, a novel sequence encoder model that performs inference entirely on a quantum computer using a minimal number of qubits. Let $n$ and $d$ represent the length of the sequence and the embedding size, respectively. The dot-product attention mechanism requires a time complexity of $O(n^2 \cdot d)$, while QNet has merely $O(n+d)$ quantum circuit depth. In addition, we introduce ResQNet, a quantum-classical hybrid model composed of several QNet blocks linked by residual connections, as an isomorph of the Transformer encoder. We evaluated our work on various natural language processing tasks, including text classification, rating score prediction, and named entity recognition. Our models exhibit compelling performance over classical state-of-the-art models with a thousand times fewer parameters. In summary, this work investigates the advantage of machine learning on near-term quantum computers for sequential data by experimenting with natural language processing tasks.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 12:36:37 GMT" }, { "version": "v2", "created": "Mon, 28 Aug 2023 01:17:32 GMT" } ]
2023-08-29T00:00:00
[ [ "Day", "Wei", "" ], [ "Chen", "Hao-Sheng", "" ], [ "Sun", "Min-Te", "" ] ]
new_dataset
0.999387
2211.00945
Xinkuang Wang
Xinkuang Wang, Wenjing Li, Zhongcheng Wu
CarDD: A New Dataset for Vision-based Car Damage Detection
13 pages, 10 figures, full-length paper for Transactions on Intelligent Transportation Systems (2023)
in IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 7, pp. 7202-7214, July 2023
10.1109/TITS.2023.3258480
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic car damage detection has attracted significant attention in the car insurance business. However, due to the lack of high-quality and publicly available datasets, we can hardly learn a feasible model for car damage detection. To this end, we contribute Car Damage Detection (CarDD), the first public large-scale dataset designed for vision-based car damage detection and segmentation. Our CarDD contains 4,000 high-resolution car damage images with over 9,000 well-annotated instances of six damage categories. We detail the image collection, selection, and annotation processes, and present a statistical dataset analysis. Furthermore, we conduct extensive experiments on CarDD with state-of-the-art deep methods for different tasks and provide comprehensive analyses to highlight the specialty of car damage detection. CarDD dataset and the source code are available at https://cardd-ustc.github.io.
[ { "version": "v1", "created": "Wed, 2 Nov 2022 08:09:03 GMT" }, { "version": "v2", "created": "Mon, 28 Aug 2023 11:36:06 GMT" } ]
2023-08-29T00:00:00
[ [ "Wang", "Xinkuang", "" ], [ "Li", "Wenjing", "" ], [ "Wu", "Zhongcheng", "" ] ]
new_dataset
0.999536
2211.01146
Masakazu Yoshimura
Masakazu Yoshimura, Junji Otsuka, Atsushi Irie, Takeshi Ohashi
DynamicISP: Dynamically Controlled Image Signal Processor for Image Recognition
Accepted to ICCV2023. Several updates from v2 including additional experiments and modification of typos in Auto Gain equation
null
null
null
cs.CV cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Image Signal Processors (ISPs) play important roles in image recognition tasks as well as in the perceptual quality of captured images. In most cases, experts make a lot of effort to manually tune many parameters of ISPs, but the parameters are sub-optimal. In the literature, two types of techniques have been actively studied: a machine learning-based parameter tuning technique and a DNN-based ISP technique. The former is lightweight but lacks expressive power. The latter has expressive power, but the computational cost is too heavy on edge devices. To solve these problems, we propose "DynamicISP," which consists of multiple classical ISP functions and dynamically controls the parameters of each frame according to the recognition result of the previous frame. We show our method successfully controls the parameters of multiple ISP functions and achieves state-of-the-art accuracy with low computational cost in single and multi-category object detection tasks.
[ { "version": "v1", "created": "Wed, 2 Nov 2022 14:22:50 GMT" }, { "version": "v2", "created": "Mon, 27 Mar 2023 07:02:09 GMT" }, { "version": "v3", "created": "Mon, 28 Aug 2023 02:59:24 GMT" } ]
2023-08-29T00:00:00
[ [ "Yoshimura", "Masakazu", "" ], [ "Otsuka", "Junji", "" ], [ "Irie", "Atsushi", "" ], [ "Ohashi", "Takeshi", "" ] ]
new_dataset
0.998324
2211.07383
Mathias Ibsen
M. Ibsen, C. Rathgeb, F. Brechtel, R. Klepp, K. P\"oppelmann, A. George, S. Marcel, C. Busch
Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection
null
null
10.1109/ACCESS.2023.3282780
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face recognition systems are widely deployed for biometric authentication. Despite this, it is well-known that, without any safeguards, face recognition systems are highly vulnerable to presentation attacks. In response to this security issue, several promising methods for detecting presentation attacks have been proposed which show high performance on existing benchmarks. However, an ongoing challenge is the generalization of presentation attack detection methods to unseen and new attack types. To this end, we propose a new T-shirt Face Presentation Attack (TFPA) database of 1,608 T-shirt attacks using 100 unique presentation attack instruments. In an extensive evaluation, we show that this type of attack can compromise the security of face recognition systems and that some state-of-the-art attack detection mechanisms trained on popular benchmarks fail to robustly generalize to the new attacks. Further, we propose three new methods for detecting T-shirt attack images: one relying on the statistical differences between depth maps of bona fide images and T-shirt attacks, an anomaly detection approach trained on features extracted only from bona fide RGB images, and a fusion approach which achieves competitive detection performance.
[ { "version": "v1", "created": "Mon, 14 Nov 2022 14:11:23 GMT" } ]
2023-08-29T00:00:00
[ [ "Ibsen", "M.", "" ], [ "Rathgeb", "C.", "" ], [ "Brechtel", "F.", "" ], [ "Klepp", "R.", "" ], [ "Pöppelmann", "K.", "" ], [ "George", "A.", "" ], [ "Marcel", "S.", "" ], [ "Busch", "C.", "" ] ]
new_dataset
0.995737
2211.11682
Xiangyang Zhu
Xiangyang Zhu, Renrui Zhang, Bowei He, Ziyu Guo, Ziyao Zeng, Zipeng Qin, Shanghang Zhang, Peng Gao
PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
Code is available at https://github.com/yangyangyang127/PointCLIP_V2
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale pre-trained models have shown promising open-world performance for both vision and language tasks. However, their transferred capacity on 3D point clouds is still limited and constrained to the classification task. In this paper, we first combine CLIP and GPT into a unified 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection. To better align 3D data with the pre-trained language knowledge, PointCLIP V2 contains two key designs. For the visual end, we prompt CLIP via a shape projection module to generate more realistic depth maps, narrowing the domain gap between projected point clouds and natural images. For the textual end, we prompt the GPT model to generate 3D-specific text as the input of CLIP's textual encoder. Without any training in 3D domains, our approach significantly surpasses PointCLIP by +42.90%, +40.44%, and +28.75% accuracy on three datasets for zero-shot 3D classification. On top of that, V2 can be extended to few-shot 3D classification, zero-shot 3D part segmentation, and 3D object detection in a simple manner, demonstrating our generalization ability for unified 3D open-world learning.
[ { "version": "v1", "created": "Mon, 21 Nov 2022 17:52:43 GMT" }, { "version": "v2", "created": "Sat, 26 Aug 2023 16:14:09 GMT" } ]
2023-08-29T00:00:00
[ [ "Zhu", "Xiangyang", "" ], [ "Zhang", "Renrui", "" ], [ "He", "Bowei", "" ], [ "Guo", "Ziyu", "" ], [ "Zeng", "Ziyao", "" ], [ "Qin", "Zipeng", "" ], [ "Zhang", "Shanghang", "" ], [ "Gao", "Peng", "" ] ]
new_dataset
0.999779
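A bare-bones sketch of the zero-shot pipeline the PointCLIP V2 abstract describes: project a point cloud to a depth image, embed it, and score it against embeddings of 3D-specific text prompts. The orthographic z-buffer projection and the abstract encoder interfaces below are simplifying assumptions; the paper's shape projection module produces more realistic depth maps, and any CLIP checkpoint would supply the actual image and text encoders.

```python
import numpy as np

def project_depth(points, res=224):
    """Orthographic z-buffer projection of a point cloud onto an image plane
    (a crude stand-in for the paper's shape projection module)."""
    pts = (points - points.min(0)) / (np.ptp(points, axis=0).max() + 1e-8)
    depth = np.zeros((res, res))
    u = np.clip((pts[:, 0] * (res - 1)).astype(int), 0, res - 1)
    v = np.clip((pts[:, 1] * (res - 1)).astype(int), 0, res - 1)
    np.maximum.at(depth, (v, u), 1.0 - pts[:, 2])  # nearer points render brighter
    return depth

def zero_shot_scores(depth_emb, text_embs):
    """Cosine similarity of one depth-map embedding against embeddings of
    GPT-generated prompts such as 'a silhouette depth map of a chair'."""
    a = depth_emb / np.linalg.norm(depth_emb)
    B = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return B @ a                                   # highest score = predicted class

rng = np.random.default_rng(0)
print(project_depth(rng.normal(size=(2048, 3))).shape)                      # (224, 224)
print(zero_shot_scores(rng.normal(size=512), rng.normal(size=(10, 512))).argmax())
```

The key point is that classification happens purely by embedding similarity, which is why no 3D training is needed once the projected depth maps look enough like CLIP's training distribution.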
2212.02053
Zhang Yunhua
Yunhua Zhang and Hazel Doughty and Cees G. M. Snoek
Day2Dark: Pseudo-Supervised Activity Recognition beyond Silent Daylight
Under review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper strives to recognize activities in the dark, as well as in the day. We first establish that state-of-the-art activity recognizers are effective during the day, but not trustworthy in the dark. The main causes are the limited availability of labeled dark videos to learn from, as well as the distribution shift towards the lower color contrast at test-time. To compensate for the lack of labeled dark videos, we introduce a pseudo-supervised learning scheme, which utilizes easy to obtain unlabeled and task-irrelevant dark videos to improve an activity recognizer in low light. As the lower color contrast results in visual information loss, we further propose to incorporate the complementary activity information within audio, which is invariant to illumination. Since the usefulness of audio and visual features differs depending on the amount of illumination, we introduce our `darkness-adaptive' audio-visual recognizer. Experiments on EPIC-Kitchens, Kinetics-Sound, and Charades demonstrate our proposals are superior to image enhancement, domain adaptation and alternative audio-visual fusion methods, and can even improve robustness to local darkness caused by occlusions. Project page: https://xiaobai1217.github.io/Day2Dark/
[ { "version": "v1", "created": "Mon, 5 Dec 2022 06:14:23 GMT" }, { "version": "v2", "created": "Fri, 23 Jun 2023 10:37:59 GMT" }, { "version": "v3", "created": "Sun, 27 Aug 2023 19:41:53 GMT" } ]
2023-08-29T00:00:00
[ [ "Zhang", "Yunhua", "" ], [ "Doughty", "Hazel", "" ], [ "Snoek", "Cees G. M.", "" ] ]
new_dataset
0.997956
2212.04636
Jiaman Li
Jiaman Li, C. Karen Liu, Jiajun Wu
Ego-Body Pose Estimation via Ego-Head Pose Estimation
CVPR 2023 (Award Candidate)
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating 3D human motion from an egocentric video sequence plays a critical role in human behavior understanding and has various applications in VR/AR. However, naively learning a mapping between egocentric videos and human motions is challenging, because the user's body is often unobserved by the front-facing camera placed on the head of the user. In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments. To eliminate the need for paired egocentric video and human motions, we propose a new method, Ego-Body Pose Estimation via Ego-Head Pose Estimation (EgoEgo), which decomposes the problem into two stages, connected by the head motion as an intermediate representation. EgoEgo first integrates SLAM and a learning approach to estimate accurate head motion. Subsequently, leveraging the estimated head pose as input, EgoEgo utilizes conditional diffusion to generate multiple plausible full-body motions. This disentanglement of head and body pose eliminates the need for training datasets with paired egocentric videos and 3D human motion, enabling us to leverage large-scale egocentric video datasets and motion capture datasets separately. Moreover, for systematic benchmarking, we develop a synthetic dataset, AMASS-Replica-Ego-Syn (ARES), with paired egocentric videos and human motion. On both ARES and real data, our EgoEgo model performs significantly better than the current state-of-the-art methods.
[ { "version": "v1", "created": "Fri, 9 Dec 2022 02:25:20 GMT" }, { "version": "v2", "created": "Sun, 2 Apr 2023 18:13:15 GMT" }, { "version": "v3", "created": "Mon, 28 Aug 2023 02:51:25 GMT" } ]
2023-08-29T00:00:00
[ [ "Li", "Jiaman", "" ], [ "Liu", "C. Karen", "" ], [ "Wu", "Jiajun", "" ] ]
new_dataset
0.995987
2301.01917
Ziwei Sun
Ziwei Sun, Zexi Hua, Hengcao Li, Haiyan Zhong
Flying Bird Object Detection Algorithm in Surveillance Video Based on Motion Information
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Flying Bird Object Detection algorithm Based on Motion Information (FBOD-BMI) is proposed to solve the problem that the features of the object are not obvious in a single frame and the size of the object is small (low Signal-to-Noise Ratio (SNR)) in surveillance video. Firstly, a ConvLSTM-PAN model structure is designed to capture suspicious flying bird objects, in which the Convolutional Long Short-Term Memory (ConvLSTM) network aggregates the spatio-temporal features of the flying bird object over adjacent frames before input to the model, and the Path Aggregation Network (PAN) locates the suspicious flying bird objects. Then, an object tracking algorithm is used to track suspicious flying bird objects and calculate their Motion Range (MR). At the same time, the size of the MR of a suspicious flying bird object is adjusted adaptively according to its speed of movement (specifically, if the bird moves slowly, its MR is expanded according to its speed to retain the environmental information needed to detect the flying bird object). Adaptive Spatio-temporal Cubes (ASt-Cubes) of the flying bird objects are generated to ensure that the SNR of the flying bird objects is improved while the necessary environmental information is retained adaptively. Finally, a LightWeight U-Shape Net (LW-USN) based on ASt-Cubes is designed to detect flying bird objects, which rejects false detections among the suspicious flying bird objects and returns the positions of the real flying bird objects. Surveillance video including flying birds, collected at an unattended traction substation, serves as the experimental dataset to verify the performance of the algorithm. The experimental results show that the proposed flying bird object detection method based on motion information can effectively detect flying bird objects in surveillance video.
[ { "version": "v1", "created": "Thu, 5 Jan 2023 05:32:22 GMT" }, { "version": "v2", "created": "Tue, 31 Jan 2023 01:17:32 GMT" }, { "version": "v3", "created": "Sat, 26 Aug 2023 13:49:36 GMT" } ]
2023-08-29T00:00:00
[ [ "Sun", "Ziwei", "" ], [ "Hua", "Zexi", "" ], [ "Li", "Hengcao", "" ], [ "Zhong", "Haiyan", "" ] ]
new_dataset
0.996592
2303.09551
Yi Wei
Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu
SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving
Accepted to ICCV 2023. Code is available at https://github.com/weiyithu/SurroundOcc
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D scene understanding plays a vital role in vision-based autonomous driving. While most existing methods focus on 3D object detection, they have difficulty describing real-world objects of arbitrary shapes and infinite classes. Towards a more comprehensive perception of a 3D scene, in this paper, we propose SurroundOcc, a method to predict 3D occupancy from multi-camera images. We first extract multi-scale features for each image and adopt spatial 2D-3D attention to lift them to the 3D volume space. Then we apply 3D convolutions to progressively upsample the volume features and impose supervision on multiple levels. To obtain dense occupancy prediction, we design a pipeline to generate dense occupancy ground truth without expensive occupancy annotations. Specifically, we fuse multi-frame LiDAR scans of dynamic objects and static scenes separately. Then we adopt Poisson Reconstruction to fill the holes and voxelize the mesh to get dense occupancy labels. Extensive experiments on nuScenes and SemanticKITTI datasets demonstrate the superiority of our method. Code and dataset are available at https://github.com/weiyithu/SurroundOcc
[ { "version": "v1", "created": "Thu, 16 Mar 2023 17:59:08 GMT" }, { "version": "v2", "created": "Sun, 27 Aug 2023 15:33:19 GMT" } ]
2023-08-29T00:00:00
[ [ "Wei", "Yi", "" ], [ "Zhao", "Linqing", "" ], [ "Zheng", "Wenzhao", "" ], [ "Zhu", "Zheng", "" ], [ "Zhou", "Jie", "" ], [ "Lu", "Jiwen", "" ] ]
new_dataset
0.971582
2304.04760
Shenshen Du
Jun Yu, Shenshen Du, Guochen Xie, Renjie Lu, Pengwei Li, Zhongpeng Cai, Keda Lu
SAR2EO: A High-resolution Image Translation Framework with Denoising Enhancement
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthetic Aperture Radar (SAR) to electro-optical (EO) image translation is a fundamental task in remote sensing that can enrich a dataset by fusing information from different sources. Many methods have recently been proposed for this task, but they still struggle to convert low-resolution images into high-resolution ones. We therefore propose SAR2EO, a framework aimed at addressing this challenge. First, to generate high-quality EO images, we adopt the coarse-to-fine generator, multi-scale discriminators, and improved adversarial loss of the pix2pixHD model to increase synthesis quality. Second, we introduce a denoising module that suppresses the noise in SAR images while preserving their structural information. To validate the effectiveness of the proposed framework, we conduct experiments on the dataset of the Multi-modal Aerial View Imagery Challenge (MAVIC), which consists of large-scale SAR and EO image pairs. The experimental results demonstrate the superiority of our framework, which won first place in the MAVIC held at CVPR PBVS 2023.
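To make the role of the denoising module concrete, here is an illustrative pre-processing sketch. The module in the framework above is presumably learned; a median filter stands in only as a simple operation with the stated property of suppressing SAR speckle while preserving edges and structure.

import numpy as np
from scipy.ndimage import median_filter

def denoise_sar(sar: np.ndarray, size: int = 3) -> np.ndarray:
    """Suppress speckle-like noise while keeping structural information."""
    return median_filter(sar, size=size)

sar = np.random.rand(256, 256).astype(np.float32)   # placeholder SAR tile
clean = denoise_sar(sar)                            # fed to the coarse-to-fine generator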
[ { "version": "v1", "created": "Sat, 8 Apr 2023 03:39:51 GMT" }, { "version": "v2", "created": "Fri, 25 Aug 2023 17:28:26 GMT" } ]
2023-08-29T00:00:00
[ [ "Yu", "Jun", "" ], [ "Du", "Shenshen", "" ], [ "Xie", "Guochen", "" ], [ "Lu", "Renjie", "" ], [ "Li", "Pengwei", "" ], [ "Cai", "Zhongpeng", "" ], [ "Lu", "Keda", "" ] ]
new_dataset
0.997959
2304.06634
Rui Ribeiro
Rui Ribeiro, Joao P. Carvalho, Lu\'isa Coheur
PGTask: Introducing the Task of Profile Generation from Dialogues
Accepted at SIGDIAL 2023, 4 pages, 2 figures
null
null
null
cs.CL cs.AI cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent approaches have attempted to personalize dialogue systems by incorporating profile information into models. However, this knowledge is scarce and difficult to obtain, which makes the extraction/generation of profile information from dialogues a fundamental asset. To overcome this limitation, we introduce the Profile Generation Task (PGTask). We contribute a new dataset for this problem, comprising profile sentences aligned with related utterances extracted from a corpus of dialogues. Furthermore, using state-of-the-art methods, we provide a benchmark for profile generation on this novel dataset. Our experiments reveal the challenges of profile generation, and we hope that this work opens a new research direction.
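As a hedged illustration of what a profile-generation baseline might look like, the snippet below conditions a generic off-the-shelf seq2seq model on a dialogue utterance and decodes a profile sentence. The model choice (t5-small) and the prompt format are assumptions made for illustration, not the benchmark setup of the paper.

from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Generate a profile sentence from a single utterance (illustrative prompt).
utterance = "I spend every weekend hiking with my two dogs."
inputs = tok("generate profile: " + utterance, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))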
[ { "version": "v1", "created": "Thu, 13 Apr 2023 16:02:19 GMT" }, { "version": "v2", "created": "Sat, 26 Aug 2023 05:55:48 GMT" } ]
2023-08-29T00:00:00
[ [ "Ribeiro", "Rui", "" ], [ "Carvalho", "Joao P.", "" ], [ "Coheur", "Luísa", "" ] ]
new_dataset
0.999713
2304.09807
Shaoyu Chen
Shaoyu Chen, Yunchi Zhang, Bencheng Liao, Jiafeng Xie, Tianheng Cheng, Wei Sui, Qian Zhang, Chang Huang, Wenyu Liu, Xinggang Wang
VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale Driving Scene
https://github.com/hustvl/VMA
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
High-definition (HD) maps serve as essential infrastructure for autonomous driving. In this work, we build a systematic vectorized map annotation framework (termed VMA) for efficiently generating HD maps of large-scale driving scenes. We design a divide-and-conquer annotation scheme to solve the spatial-extensibility problem of HD map generation, and we abstract map elements with a variety of geometric patterns into a unified point-sequence representation that can be extended to most map elements in the driving scene. VMA is highly efficient and extensible, requires negligible human effort, and is flexible in terms of spatial scale and element type. We quantitatively and qualitatively validate the annotation performance on real-world urban and highway scenes, as well as on the NYC Planimetric Database. On average, VMA takes 160 minutes to annotate a scene spanning hundreds of meters and reduces human cost by 52.3%, showing great application value. Code: https://github.com/hustvl/VMA.
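The sketch below illustrates the two ideas named above under simplifying assumptions: the divide step splits a large scene into fixed-size tiles that can be annotated independently, and every map element, whatever its geometry, is resampled to a unified fixed-length point sequence. Tile size, sequence length, and the nearest-index resampling are illustrative choices, not VMA's implementation.

from typing import List, Tuple

Point = Tuple[float, float]

def split_scene(x_max: float, y_max: float,
                tile: float = 100.0) -> List[Tuple[float, float, float, float]]:
    """Divide a scene bounding box into tile-sized sub-regions (divide step)."""
    tiles = []
    x = 0.0
    while x < x_max:
        y = 0.0
        while y < y_max:
            tiles.append((x, y, min(x + tile, x_max), min(y + tile, y_max)))
            y += tile
        x += tile
    return tiles

def as_point_sequence(polyline: List[Point], n: int = 20) -> List[Point]:
    """Unified representation: resample any element to a fixed-length point sequence."""
    idx = [round(i * (len(polyline) - 1) / (n - 1)) for i in range(n)]
    return [polyline[i] for i in idx]

tiles = split_scene(450.0, 300.0)   # each tile is annotated independently, then merged
lane = as_point_sequence([(0, 0), (10, 1), (20, 3), (30, 6)], n=8)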
[ { "version": "v1", "created": "Wed, 19 Apr 2023 16:47:20 GMT" }, { "version": "v2", "created": "Sun, 27 Aug 2023 13:58:18 GMT" } ]
2023-08-29T00:00:00
[ [ "Chen", "Shaoyu", "" ], [ "Zhang", "Yunchi", "" ], [ "Liao", "Bencheng", "" ], [ "Xie", "Jiafeng", "" ], [ "Cheng", "Tianheng", "" ], [ "Sui", "Wei", "" ], [ "Zhang", "Qian", "" ], [ "Huang", "Chang", "" ], [ "Liu", "Wenyu", "" ], [ "Wang", "Xinggang", "" ] ]
new_dataset
0.95356
2305.08562
Tim Fischer
Tim Fischer, Michael Rogenmoser, Matheus Cavalcante, Frank K. G\"urkaynak, Luca Benini
FlooNoC: A Multi-Tbps Wide NoC for Heterogeneous AXI4 Traffic
null
null
10.1109/MDAT.2023.3306720
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meeting the staggering bandwidth requirements of today's applications challenges traditional narrow, serialized NoCs, which run into hard bounds on the maximum operating frequency. This paper proposes FlooNoC, an open-source, low-latency, fully AXI4-compatible NoC with wide physical channels for latency-tolerant, high-bandwidth, non-blocking transactions and decoupled latency-critical short messages. We demonstrate the feasibility of wide channels by integrating a 5x5 router and links within a 9-core compute cluster in 12 nm FinFET technology. Our NoC achieves a bandwidth of 629 Gbps per link while running at only 1.23 GHz (at 0.19 pJ/B/hop), with just 10% area overhead post-layout.
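A back-of-the-envelope check, using only the figures quoted above, shows why wide channels sidestep the frequency bound: 629 Gbps at 1.23 GHz implies a physical channel of roughly 512 bits, i.e., about 64 bytes moved per cycle instead of relying on a faster clock. The 3-hop path in the energy line is an arbitrary example, not a figure from the paper.

bandwidth_bps = 629e9   # per-link bandwidth reported above
freq_hz = 1.23e9        # operating frequency reported above
width_bits = bandwidth_bps / freq_hz
print(f"channel width ~ {width_bits:.0f} bits ({width_bits / 8:.0f} bytes)")
# Energy for one byte traversing an assumed 3-hop path at 0.19 pJ/B/hop:
print(f"3-hop energy per byte: {0.19 * 3:.2f} pJ")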
[ { "version": "v1", "created": "Mon, 15 May 2023 11:42:47 GMT" }, { "version": "v2", "created": "Sun, 6 Aug 2023 18:31:33 GMT" } ]
2023-08-29T00:00:00
[ [ "Fischer", "Tim", "" ], [ "Rogenmoser", "Michael", "" ], [ "Cavalcante", "Matheus", "" ], [ "Gürkaynak", "Frank K.", "" ], [ "Benini", "Luca", "" ] ]
new_dataset
0.950657
2305.13608
Wenxiao Cai
Wenxiao Cai, Ke Jin, Jinyan Hou, Cong Guo, Letian Wu, Wankou Yang
VDD: Varied Drone Dataset for Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic segmentation of drone images is critical to many aerial vision tasks, as it provides essential semantic details that can compensate for the lack of depth information from monocular cameras. However, maintaining the high accuracy of semantic segmentation models for drones requires diverse, large-scale, high-resolution datasets, which are rare in the field of aerial image processing. Existing datasets are typically small and focus primarily on urban scenes, neglecting rural and industrial areas, so models trained on them are not sufficiently equipped to handle the variety of inputs seen in drone imagery. With the VDD-Varied Drone Dataset, we offer a large-scale, densely labeled dataset comprising 400 high-resolution images that feature carefully chosen scenes, camera angles, and varied light and weather conditions. Furthermore, we have adapted existing drone datasets to conform to our annotation standards and integrated them with VDD to create a dataset 1.5 times the size of the fine annotations of Cityscapes. We have developed a novel DeepLabT model, which combines CNN and Transformer backbones, to provide a reliable baseline for semantic segmentation of drone imagery. Our experiments indicate that DeepLabT performs admirably on VDD and other drone datasets. We expect our dataset to generate considerable interest in drone image segmentation and to serve as a foundation for other drone vision tasks. VDD is freely available on our website at https://vddvdd.com .
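To give a concrete shape to the CNN-plus-Transformer combination mentioned above, here is a minimal hybrid segmentation sketch: a small CNN extracts local features, a Transformer encoder adds global context over the flattened feature tokens, and a 1x1 head produces per-pixel logits. All layer sizes and the class count are assumptions; this is a generic sketch of the technique, not the DeepLabT architecture itself.

import torch
import torch.nn as nn

class HybridSeg(nn.Module):
    def __init__(self, n_classes=7, d=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # local feature extractor
            nn.Conv2d(3, d, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(d, d, 3, stride=2, padding=1), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)  # global context
        self.head = nn.Conv2d(d, n_classes, 1)    # per-pixel classification

    def forward(self, x):
        f = self.cnn(x)                           # (B, d, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)     # (B, HW, d) tokens
        f = self.transformer(tokens).transpose(1, 2).reshape(b, c, h, w)
        logits = self.head(f)
        return nn.functional.interpolate(logits, scale_factor=4, mode="bilinear")

out = HybridSeg()(torch.randn(1, 3, 128, 128))    # (1, 7, 128, 128) class logits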
[ { "version": "v1", "created": "Tue, 23 May 2023 02:16:14 GMT" }, { "version": "v2", "created": "Sun, 27 Aug 2023 14:11:34 GMT" } ]
2023-08-29T00:00:00
[ [ "Cai", "Wenxiao", "" ], [ "Jin", "Ke", "" ], [ "Hou", "Jinyan", "" ], [ "Guo", "Cong", "" ], [ "Wu", "Letian", "" ], [ "Yang", "Wankou", "" ] ]
new_dataset
0.999786
2306.13192
Fabian Weigend
Fabian C Weigend, Shubham Sonawani, Michael Drolet, Heni Ben Amor
Anytime, Anywhere: Human Arm Pose from Smartwatch Data for Ubiquitous Robot Control and Teleoperation
8 pages, 10 figures, 1 table, conference: IROS
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This work devises an optimized machine learning approach for human arm pose estimation from a single smartwatch. Our approach results in a distribution of possible wrist and elbow positions, which allows for a measure of uncertainty and the detection of multiple possible arm posture solutions, i.e., multimodal pose distributions. Combining estimated arm postures with speech recognition, we turn the smartwatch into a ubiquitous, low-cost and versatile robot control interface. We demonstrate in two use-cases that this intuitive control interface enables users to swiftly intervene in robot behavior, to temporarily adjust their goal, or to train completely new control policies by imitation. Extensive experiments show that the approach results in a 40% reduction in prediction error over the current state-of-the-art and achieves a mean error of 2.56cm for wrist and elbow positions.
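One common way to obtain a distribution over wrist and elbow positions, rather than a single point estimate, is a mixture density head that predicts K Gaussian modes; the sketch below illustrates that generic technique and how it captures the multimodal posture solutions mentioned above. It is not the authors' architecture, and the feature dimension, number of modes, and 6-D output (wrist xyz plus elbow xyz) are assumptions.

import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, in_dim=32, k=4, out_dim=6):   # 6 = wrist xyz + elbow xyz
        super().__init__()
        self.k, self.out_dim = k, out_dim
        self.pi = nn.Linear(in_dim, k)               # mixture weights
        self.mu = nn.Linear(in_dim, k * out_dim)     # mode means
        self.log_sigma = nn.Linear(in_dim, k)        # per-mode spread (uncertainty)

    def forward(self, feat):
        pi = torch.softmax(self.pi(feat), dim=-1)
        mu = self.mu(feat).view(-1, self.k, self.out_dim)
        sigma = self.log_sigma(feat).exp()
        return pi, mu, sigma                         # multimodal pose distribution

pi, mu, sigma = MDNHead()(torch.randn(1, 32))        # 4 candidate arm postures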
[ { "version": "v1", "created": "Thu, 22 Jun 2023 20:29:00 GMT" }, { "version": "v2", "created": "Mon, 28 Aug 2023 16:22:24 GMT" } ]
2023-08-29T00:00:00
[ [ "Weigend", "Fabian C", "" ], [ "Sonawani", "Shubham", "" ], [ "Drolet", "Michael", "" ], [ "Amor", "Heni Ben", "" ] ]
new_dataset
0.985366
2307.05016
Myung-Hwan Jeon
Jeongyun Kim, Myung-Hwan Jeon, Sangwoo Jung, Wooseong Yang, Minwoo Jung, Jaeho Shin, Ayoung Kim
TRansPose: Large-Scale Multispectral Dataset for Transparent Object
Under review
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transparent objects are frequently encountered in our daily lives, yet recognizing them poses challenges for conventional vision sensors due to their unique material properties, which are not well captured by RGB or depth cameras. Overcoming this limitation, thermal infrared cameras have emerged as a solution, offering improved visibility and shape information for transparent objects. In this paper, we present TRansPose, the first large-scale multispectral dataset that combines stereo RGB-D, thermal infrared (TIR) images, and object poses to promote transparent object research. The dataset includes 99 transparent objects, encompassing 43 household items, 27 recyclable trash items, and 29 pieces of chemical laboratory equipment, plus 12 non-transparent objects. It comprises a vast collection of 333,819 images and 4,000,056 annotations, providing instance-level segmentation masks, ground-truth poses, and completed depth information. The data was acquired using a FLIR A65 thermal infrared (TIR) camera, two Intel RealSense L515 RGB-D cameras, and a Franka Emika Panda robot manipulator. Spanning 87 sequences, TRansPose covers various challenging real-life scenarios, including objects filled with water, diverse lighting conditions, heavy clutter, non-transparent or translucent containers, objects in plastic bags, and multi-stacked objects. The TRansPose dataset can be accessed at the following link: https://sites.google.com/view/transpose-dataset
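A quick sanity check of the dataset statistics quoted above, as simple arithmetic:

images = 333_819
annotations = 4_000_056
print(f"annotations per image: {annotations / images:.2f}")   # ~11.98
objects = 43 + 27 + 29   # household + recyclable trash + lab equipment
print(f"transparent objects: {objects}")                      # 99, matching the text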
[ { "version": "v1", "created": "Tue, 11 Jul 2023 05:32:21 GMT" }, { "version": "v2", "created": "Mon, 28 Aug 2023 04:05:15 GMT" } ]
2023-08-29T00:00:00
[ [ "Kim", "Jeongyun", "" ], [ "Jeon", "Myung-Hwan", "" ], [ "Jung", "Sangwoo", "" ], [ "Yang", "Wooseong", "" ], [ "Jung", "Minwoo", "" ], [ "Shin", "Jaeho", "" ], [ "Kim", "Ayoung", "" ] ]
new_dataset
0.999797