Dataset columns:
  id              string (9–10 chars)
  submitter       string (2–52 chars)
  authors         string (4–6.51k chars)
  title           string (4–246 chars)
  comments        string (1–523 chars)
  journal-ref     string (4–345 chars)
  doi             string (11–120 chars)
  report-no       string (2–243 chars)
  categories      string (5–98 chars)
  license         string (9 distinct values)
  abstract        string (33–3.33k chars)
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string (1 distinct value)
  probability     float64 (0.95–1)
2307.09488
Matteo Risso
Daniele Jahier Pagliari, Matteo Risso, Beatrice Alessandra Motetti, Alessio Burrello
PLiNIO: A User-Friendly Library of Gradient-based Methods for Complexity-aware DNN Optimization
Accepted at the 2023 Forum on Specification & Design Languages (FDL)
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Accurate yet efficient Deep Neural Networks (DNNs) are in high demand, especially for applications that require their execution on constrained edge devices. Finding such DNNs in a reasonable time for new applications requires automated optimization pipelines since the huge space of hyper-parameter combinations is impossible to explore extensively by hand. In this work, we propose PLiNIO, an open-source library implementing a comprehensive set of state-of-the-art DNN design automation techniques, all based on lightweight gradient-based optimization, under a unified and user-friendly interface. With experiments on several edge-relevant tasks, we show that combining the various optimizations available in PLiNIO leads to rich sets of solutions that Pareto-dominate the considered baselines in terms of accuracy vs. model size. Notably, PLiNIO achieves up to 94.34% memory reduction for a <1% accuracy drop compared to a baseline architecture.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 07:11:14 GMT" } ]
2023-07-20T00:00:00
[ [ "Pagliari", "Daniele Jahier", "" ], [ "Risso", "Matteo", "" ], [ "Motetti", "Beatrice Alessandra", "" ], [ "Burrello", "Alessio", "" ] ]
new_dataset
0.996531
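The PLiNIO record above centers on gradient-based, complexity-aware DNN optimization. As a rough sketch of the underlying technique only (not PLiNIO's actual API), the following PyTorch snippet adds a differentiable model-size proxy, built from trainable channel gates, to the task loss; the `GatedConv` module, the penalty weight `lam`, and the toy data are all hypothetical.

```python
import torch
import torch.nn as nn

# Toy model: trainable gates scale each channel; near-zero gates mark
# channels that can be pruned, so the gate L1-norm weighted by per-channel
# parameter counts is a differentiable proxy for model size.
class GatedConv(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.gate = nn.Parameter(torch.ones(cout))

    def forward(self, x):
        return self.conv(x) * self.gate.view(1, -1, 1, 1)

    def size_proxy(self):
        return self.gate.abs().sum() * self.conv.weight[0].numel()

model = GatedConv(3, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-6  # strength of the complexity penalty (hypothetical value)

x = torch.randn(8, 3, 32, 32)
target = torch.randn(8, 16, 32, 32)
for _ in range(10):
    opt.zero_grad()
    task_loss = nn.functional.mse_loss(model(x), target)
    loss = task_loss + lam * model.size_proxy()  # accuracy vs. size trade-off
    loss.backward()
    opt.step()
```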
2307.09533
Aditya Potukuchi
Charlie Carlson, Ewan Davies, Alexandra Kolla, and Aditya Potukuchi
Approximately counting independent sets in dense bipartite graphs via subspace enumeration
15 pages
null
null
null
cs.DS math.CO
http://creativecommons.org/licenses/by/4.0/
We give a randomized algorithm that approximates the number of independent sets in a dense, regular bipartite graph -- in the language of approximate counting, we give an FPRAS for #BIS on the class of dense, regular bipartite graphs. Efficient counting algorithms typically apply to ``high-temperature'' problems on bounded-degree graphs, and our contribution is a notable exception as it applies to dense graphs in a low-temperature setting. Our methods give a counting-focused complement to the long line of work in combinatorial optimization showing that CSPs such as Max-Cut and Unique Games are easy on dense graphs via spectral arguments. The proof exploits the fact that dense, regular graphs exhibit a kind of small-set expansion (i.e. bounded threshold rank), which via subspace enumeration lets us enumerate small cuts efficiently.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 18:23:24 GMT" } ]
2023-07-20T00:00:00
[ [ "Carlson", "Charlie", "" ], [ "Davies", "Ewan", "" ], [ "Kolla", "Alexandra", "" ], [ "Potukuchi", "Aditya", "" ] ]
new_dataset
0.957902
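For reference against the abstract above, the quantity an FPRAS for #BIS approximates is the number of independent sets of a bipartite graph. A brute-force exact counter makes the definition concrete; it is exponential-time and only usable on tiny graphs, unlike the paper's algorithm.

```python
from itertools import combinations

def count_independent_sets(n, edges):
    """Exactly count independent sets in a graph on vertices 0..n-1.

    Exponential-time reference for the quantity (#IS / #BIS) that the
    paper's FPRAS approximates on dense, regular bipartite graphs.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            if all(adj[v].isdisjoint(s) for v in subset):
                count += 1
    return count

# Complete bipartite graph K_{2,2}: sides {0, 1} and {2, 3}.
print(count_independent_sets(4, [(0, 2), (0, 3), (1, 2), (1, 3)]))  # 7
```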
2307.09549
Richard Derbyshire
Richard Derbyshire, Benjamin Green, Charl van der Walt, David Hutchison
Dead Man's PLC: Towards Viable Cyber Extortion for Operational Technology
13 pages, 19 figures
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
For decades, operational technology (OT) has enjoyed the luxury of being suitably inaccessible so as to experience directly targeted cyber attacks from only the most advanced and well-resourced adversaries. However, security via obscurity cannot last forever, and indeed a shift is happening whereby less advanced adversaries are showing an appetite for targeting OT. With this shift in adversary demographics, there will likely also be a shift in attack goals, from clandestine process degradation and espionage to overt cyber extortion (Cy-X). The consensus from OT cyber security practitioners suggests that, even if encryption-based Cy-X techniques were launched against OT assets, typical recovery practices designed for engineering processes would provide adequate resilience. In response, this paper introduces Dead Man's PLC (DM-PLC), a pragmatic step towards viable OT Cy-X that acknowledges and weaponises the resilience processes typically encountered. Using only existing functionality, DM-PLC considers an entire environment as the entity under ransom, whereby all assets constantly poll one another to ensure the attack remains untampered, treating any deviations as a detonation trigger akin to a Dead Man's switch. A proof of concept of DM-PLC is implemented and evaluated on an academically peer-reviewed and industry-validated OT testbed to demonstrate its malicious efficacy.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 18:48:47 GMT" } ]
2023-07-20T00:00:00
[ [ "Derbyshire", "Richard", "" ], [ "Green", "Benjamin", "" ], [ "van der Walt", "Charl", "" ], [ "Hutchison", "David", "" ] ]
new_dataset
0.987443
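The DM-PLC abstract above describes a mutual-polling dead man's switch. The following sketch shows that control flow only, in hedged form: the peer list, shared secrets, and `poll` transport are hypothetical stand-ins, not actual PLC code or the paper's implementation.

```python
import hashlib
import time

# Conceptual sketch of the Dead Man's switch logic: every asset repeatedly
# challenges its peers, and any missing or wrong response is treated as
# tampering (a detonation trigger). All names here are stand-ins.
PEERS = {"plc-a": b"secret-a", "plc-b": b"secret-b"}

def expected(peer, nonce):
    return hashlib.sha256(PEERS[peer] + nonce).hexdigest()

def poll(peer, nonce):
    # Placeholder transport: a real deployment would query the peer device.
    return hashlib.sha256(PEERS[peer] + nonce).hexdigest()

def watchdog_round():
    nonce = str(time.time()).encode()
    for peer in PEERS:
        if poll(peer, nonce) != expected(peer, nonce):
            return "detonate"  # e.g. overwrite logic, lock engineering access
    return "armed"

print(watchdog_round())  # "armed" while all peers answer correctly
```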
2307.09553
Joseph W. Cutler
Joseph W. Cutler, Christopher Watson, Phillip Hilliard, Harrison Goldstein, Caleb Stanford, Benjamin C. Pierce
Stream Types
Submitted to POPL'24
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
We propose a rich foundational theory of typed data streams and stream transformers, motivated by two high-level goals: (1) the type of a stream should be able to express complex sequential patterns of events over time, and (2) it should describe the parallel structure of the stream to enable deterministic stream processing on parallel and distributed systems. To this end, we introduce stream types, with operators capturing sequential composition, parallel composition, and iteration, plus a core calculus of transformers over typed streams which naturally supports a number of common streaming idioms, including punctuation, windowing, and parallel partitioning, as first-class constructions. The calculus exploits a Curry-Howard-like correspondence with an ordered variant of the logic of Bunched Implications to program with streams compositionally and uses Brzozowski-style derivatives to enable an incremental, event-based operational semantics. To validate our design, we provide a reference interpreter and machine-checked proofs of the main results.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 19:05:22 GMT" } ]
2023-07-20T00:00:00
[ [ "Cutler", "Joseph W.", "" ], [ "Watson", "Christopher", "" ], [ "Hilliard", "Phillip", "" ], [ "Goldstein", "Harrison", "" ], [ "Stanford", "Caleb", "" ], [ "Pierce", "Benjamin C.", "" ] ]
new_dataset
0.959341
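To make the stream-type grammar in the abstract above concrete, here is a small, purely illustrative encoding of its three operators (sequential composition, parallel composition, and iteration) as Python dataclasses; the constructor names are mine, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Union

# Base event types combined by sequencing, parallelism, and iteration.
@dataclass(frozen=True)
class Base:
    name: str          # a single event type, e.g. Int

@dataclass(frozen=True)
class Seq:
    left: "Ty"
    right: "Ty"        # left's events arrive strictly before right's

@dataclass(frozen=True)
class Par:
    left: "Ty"
    right: "Ty"        # interleaved; order across sides unconstrained

@dataclass(frozen=True)
class Star:
    body: "Ty"         # zero or more repetitions of body, in sequence

Ty = Union[Base, Seq, Par, Star]

# "A stream of (Int then Bool) blocks, in parallel with a log stream":
t = Par(Star(Seq(Base("Int"), Base("Bool"))), Star(Base("LogMsg")))
print(t)
```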
2307.09621
Ka Chun Shum
Ka Chun Shum, Hong-Wing Pang, Binh-Son Hua, Duc Thanh Nguyen, Sai-Kit Yeung
Conditional 360-degree Image Synthesis for Immersive Indoor Scene Decoration
ICCV2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the problem of conditional scene decoration for 360-degree images. Our method takes a 360-degree background photograph of an indoor scene and generates decorated images of the same scene in the panorama view. To do this, we develop a 360-aware object layout generator that learns latent object vectors in the 360-degree view to enable a variety of furniture arrangements for an input 360-degree background image. We use this object layout to condition a generative adversarial network to synthesize images of an input scene. To further reinforce the generation capability of our model, we develop a simple yet effective scene emptier that removes the generated furniture and produces an emptied scene for our model to learn a cyclic constraint. We train the model on the Structure3D dataset and show that our model can generate diverse decorations with controllable object layout. Our method achieves state-of-the-art performance on the Structure3D dataset and generalizes well to the Zillow indoor scene dataset. Our user study confirms the immersive experiences provided by the realistic image quality and furniture layout in our generation results. Our implementation will be made available.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 20:28:31 GMT" } ]
2023-07-20T00:00:00
[ [ "Shum", "Ka Chun", "" ], [ "Pang", "Hong-Wing", "" ], [ "Hua", "Binh-Son", "" ], [ "Nguyen", "Duc Thanh", "" ], [ "Yeung", "Sai-Kit", "" ] ]
new_dataset
0.982002
2307.09630
Behrouz Minaei-Bidgoli
Huda AlShuhayeb, Behrouz Minaei-Bidgoli, Mohammad E. Shenassa, Sayyed-Ali Hossayni
Noor-Ghateh: A Benchmark Dataset for Evaluating Arabic Word Segmenters in Hadith Domain
15 pages, 2 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Arabic language has many complex and rich morphological subtleties, which are very useful when analyzing traditional Arabic texts, especially in historical and religious contexts, and which help in understanding the meaning of the texts. Word segmentation means separating a word into different parts, such as root and affix. In morphological datasets, the variety of labels and the number of data samples help in evaluating morphological methods. In this paper, we present a benchmark dataset for evaluating methods of segmenting Arabic words, comprising about 223,690 words from the book Sharia al-Islam that have been labeled by experts. In terms of the volume and variety of words, this dataset is superior to other existing datasets, and, as far as we know, no comparable dataset exists for Arabic texts in the Hadith domain. To evaluate the dataset, we applied different methods such as Farasa, Camel, Madamira, and ALP to it, and we report the annotation quality through four evaluation methods.
[ { "version": "v1", "created": "Thu, 22 Jun 2023 16:50:40 GMT" } ]
2023-07-20T00:00:00
[ [ "AlShuhayeb", "Huda", "" ], [ "Minaei-Bidgoli", "Behrouz", "" ], [ "Shenassa", "Mohammad E.", "" ], [ "Hossayni", "Sayyed-Ali", "" ] ]
new_dataset
0.999836
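Abstractly, evaluating a word segmenter against the gold labels in such a benchmark reduces to comparing predicted and gold morph boundaries. A minimal boundary precision/recall/F1 sketch follows; the Arabic analyses in the example are hypothetical, and the paper's four evaluation methods may differ.

```python
def boundaries(segments):
    """Positions of internal morph boundaries in a segmented word."""
    cuts, pos = set(), 0
    for seg in segments[:-1]:
        pos += len(seg)
        cuts.add(pos)
    return cuts

def segmentation_prf(gold, pred):
    """Micro-averaged boundary precision/recall/F1 over (gold, pred) pairs."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        gb, pb = boundaries(g), boundaries(p)
        tp += len(gb & pb)
        fp += len(pb - gb)
        fn += len(gb - pb)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Hypothetical gold vs. predicted analyses (prefix+stem+suffix splits).
gold = [["wa", "kitab", "u"], ["al", "kutub"]]
pred = [["wa", "kitabu"], ["al", "kutub"]]
print(segmentation_prf(gold, pred))  # (1.0, 0.666..., 0.8)
```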
2307.09652
Jeremy McMahan
Jeremy McMahan, Young Wu, Yudong Chen, Xiaojin Zhu, Qiaomin Xie
VISER: A Tractable Solution Concept for Games with Information Asymmetry
17 pages, 6 figures
null
null
null
cs.GT cs.AI cs.CR cs.MA cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Many real-world games suffer from information asymmetry: one player is only aware of their own payoffs while the other player has the full game information. Examples include the critical domain of security games and adversarial multi-agent reinforcement learning. Information asymmetry renders traditional solution concepts such as Strong Stackelberg Equilibrium (SSE) and Robust-Optimization Equilibrium (ROE) inoperative. We propose a novel solution concept called VISER (Victim Is Secure, Exploiter best-Responds). VISER enables an external observer to predict the outcome of such games. In particular, for security applications, VISER allows the victim to better defend itself while characterizing the most damaging attacks available to the attacker. We show that each player's VISER strategy can be computed independently in polynomial time using linear programming (LP). We also extend VISER to its Markov-perfect counterpart for Markov games, which can be solved efficiently using a series of LPs.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 21:51:47 GMT" } ]
2023-07-20T00:00:00
[ [ "McMahan", "Jeremy", "" ], [ "Wu", "Young", "" ], [ "Chen", "Yudong", "" ], [ "Zhu", "Xiaojin", "" ], [ "Xie", "Qiaomin", "" ] ]
new_dataset
0.978935
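The abstract above states that each player's VISER strategy is computable with a linear program. The victim's "secure" side corresponds to a classic maximin LP over its own payoff matrix, sketched below with scipy on a toy 2x3 game; the payoff values are made up, and the paper's full VISER formulation is richer.

```python
import numpy as np
from scipy.optimize import linprog

# Maximin ("victim is secure") strategy: the victim knows only its own
# payoff matrix A[i, j] and maximizes its worst-case expected payoff v.
# Variables: mixed strategy x and v; linprog minimizes, so minimize -v.
A = np.array([[3.0, 0.0, 1.0],
              [1.0, 2.0, 2.0]])
m, n = A.shape

c = np.zeros(m + 1)
c[-1] = -1.0                               # maximize v
A_ub = np.hstack([-A.T, np.ones((n, 1))])  # v - (A^T x)_j <= 0 for all j
b_ub = np.zeros(n)
A_eq = np.ones((1, m + 1))
A_eq[0, -1] = 0.0                          # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("secure strategy:", x, "guaranteed payoff:", v)  # x=(0.25, 0.75), v=1.5
```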
2307.09653
Xiaotian Duan
Xiaotian Duan
HAT-CL: A Hard-Attention-to-the-Task PyTorch Library for Continual Learning
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Catastrophic forgetting, the phenomenon in which a neural network loses previously obtained knowledge during the learning of new tasks, poses a significant challenge in continual learning. The Hard-Attention-to-the-Task (HAT) mechanism has shown potential in mitigating this problem, but its practical implementation has been complicated by issues of usability and compatibility, and a lack of support for existing network reuse. In this paper, we introduce HAT-CL, a user-friendly, PyTorch-compatible redesign of the HAT mechanism. HAT-CL not only automates gradient manipulation but also streamlines the transformation of PyTorch modules into HAT modules. It achieves this by providing a comprehensive suite of modules that can be seamlessly integrated into existing architectures. Additionally, HAT-CL offers ready-to-use HAT networks that are smoothly integrated with the TIMM library. Beyond the redesign and reimplementation of HAT, we also introduce novel mask manipulation techniques for HAT, which have consistently shown improvements across various experiments. Our work paves the way for a broader application of the HAT mechanism, opening up new possibilities in continual learning across diverse models and applications.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 21:53:40 GMT" } ]
2023-07-20T00:00:00
[ [ "Duan", "Xiaotian", "" ] ]
new_dataset
0.996694
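As background for the record above: the HAT mechanism gates a layer's units with a near-binary, task-conditioned mask. A minimal PyTorch rendering of that idea follows; it is a sketch of the mechanism itself, not HAT-CL's API, and the sizes and scale `s` are illustrative.

```python
import torch
import torch.nn as nn

# Each task learns an embedding per unit; a steep sigmoid turns it into a
# near-binary mask gating the layer's outputs ("hard" attention to the task).
class HATLinear(nn.Module):
    def __init__(self, n_in, n_out, n_tasks):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.task_emb = nn.Embedding(n_tasks, n_out)

    def forward(self, x, task_id, s=400.0):
        # Large s saturates the mask toward {0, 1}.
        mask = torch.sigmoid(s * self.task_emb(task_id))
        return self.fc(x) * mask

layer = HATLinear(8, 16, n_tasks=3)
x = torch.randn(4, 8)
out = layer(x, torch.tensor(0))  # units gated by task 0's mask
print(out.shape)                 # torch.Size([4, 16])
```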
2307.09670
Eleanor Row
Eleanor Row, Jingjing Tang and George Fazekas
JAZZVAR: A Dataset of Variations found within Solo Piano Performances of Jazz Standards for Music Overpainting
Pre-print accepted for publication at CMMR2023, 12 pages, 4 figures
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Jazz pianists often uniquely interpret jazz standards. Passages from these interpretations can be viewed as sections of variation. We manually extracted such variations from solo jazz piano performances. The JAZZVAR dataset is a collection of 502 pairs of Variation and Original MIDI segments. Each Variation in the dataset is accompanied by a corresponding Original segment containing the melody and chords from the original jazz standard. Our approach differs from many existing jazz datasets in the music information retrieval (MIR) community, which often focus on improvisation sections within jazz performances. In this paper, we outline the curation process for obtaining and sorting the repertoire, the pipeline for creating the Original and Variation pairs, and our analysis of the dataset. We also introduce a new generative music task, Music Overpainting, and present a baseline Transformer model trained on the JAZZVAR dataset for this task. Other potential applications of our dataset include expressive performance analysis and performer identification.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 22:48:54 GMT" } ]
2023-07-20T00:00:00
[ [ "Row", "Eleanor", "" ], [ "Tang", "Jingjing", "" ], [ "Fazekas", "George", "" ] ]
new_dataset
0.999827
2307.09679
Santiago Figueira
Santiago Figueira, Gabriel Goren Roig
A Modal Logic with n-ary Relations Over Paths: Comonadic Semantics and Expressivity
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Game comonads give a categorical semantics for comparison games in Finite Model Theory, thus providing an abstract characterisation of logical equivalence for a wide range of logics, each one captured through a specific choice of comonad. However, data-aware logics such as CoreDataXPath present sophisticated notions of bisimulation which defy a straightforward comonadic encoding. In this work we begin the comonadic treatment of data-aware logics by introducing a generalisation of Modal Logic that allows relation symbols of arbitrary arity as atoms of the syntax, which we call Path Predicate Modal Logic or PPML. We motivate this logic as arising from a shift in perspective on an already studied restricted version of CoreDataXPath, called DataGL, and prove that PPML recovers DataGL for a specific choice of signature. We argue that this shift in perspective allows the capturing and designing of new data-aware logics. On the other hand, PPML enjoys an intrinsic motivation in that it extends Modal Logic to predicate over more general models. Having defined the simulation and bisimulation games for PPML and having proven a Hennessy-Milner-type theorem, we define the PPML comonad and prove that it captures these games, following analogous results in the literature. Our treatment is novel in that we explicitly prove that our comonad satisfies the axioms of arboreal categories and arboreal covers. Using the comonadic machinery, we immediately obtain a tree-model property for PPML. Finally, we define a translation functor from relational structures into Kripke structures and use its properties to prove a series of polynomial-time reductions from PPML problems to their Basic Modal Logic counterparts. Our results explain precisely in what sense PPML lets us view general relational structures through the modal lens.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 23:17:28 GMT" } ]
2023-07-20T00:00:00
[ [ "Figueira", "Santiago", "" ], [ "Roig", "Gabriel Goren", "" ] ]
new_dataset
0.971602
2307.09688
Wei Jin
Wei Jin, Haitao Mao, Zheng Li, Haoming Jiang, Chen Luo, Hongzhi Wen, Haoyu Han, Hanqing Lu, Zhengyang Wang, Ruirui Li, Zhen Li, Monica Xiao Cheng, Rahul Goutam, Haiyang Zhang, Karthik Subbian, Suhang Wang, Yizhou Sun, Jiliang Tang, Bing Yin, Xianfeng Tang
Amazon-M2: A Multilingual Multi-locale Shopping Session Dataset for Recommendation and Text Generation
Dataset for KDD Cup 2023, https://kddcup23.github.io/
null
null
null
cs.IR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling customer shopping intentions is a crucial task for e-commerce, as it directly impacts user experience and engagement. Thus, accurately understanding customer preferences is essential for providing personalized recommendations. Session-based recommendation, which utilizes customer session data to predict their next interaction, has become increasingly popular. However, existing session datasets have limitations in terms of item attributes, user diversity, and dataset scale. As a result, they cannot comprehensively capture the spectrum of user behaviors and preferences. To bridge this gap, we present the Amazon Multilingual Multi-locale Shopping Session Dataset, namely Amazon-M2. It is the first multilingual dataset consisting of millions of user sessions from six different locales, where the major languages of products are English, German, Japanese, French, Italian, and Spanish. Remarkably, the dataset can help us enhance personalization and understanding of user preferences, which can benefit various existing tasks as well as enable new tasks. To test the potential of the dataset, we introduce three tasks in this work: (1) next-product recommendation, (2) next-product recommendation with domain shifts, and (3) next-product title generation. With the above tasks, we benchmark a range of algorithms on our proposed dataset, drawing new insights for further research and practice. In addition, based on the proposed dataset and tasks, we hosted a competition in the KDD CUP 2023 and have attracted thousands of users and submissions. The winning solutions and the associated workshop can be accessed at our website https://kddcup23.github.io/.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 00:08:49 GMT" } ]
2023-07-20T00:00:00
[ [ "Jin", "Wei", "" ], [ "Mao", "Haitao", "" ], [ "Li", "Zheng", "" ], [ "Jiang", "Haoming", "" ], [ "Luo", "Chen", "" ], [ "Wen", "Hongzhi", "" ], [ "Han", "Haoyu", "" ], [ "Lu", "Hanqing", "" ], [ "Wang", "Zhengyang", "" ], [ "Li", "Ruirui", "" ], [ "Li", "Zhen", "" ], [ "Cheng", "Monica Xiao", "" ], [ "Goutam", "Rahul", "" ], [ "Zhang", "Haiyang", "" ], [ "Subbian", "Karthik", "" ], [ "Wang", "Suhang", "" ], [ "Sun", "Yizhou", "" ], [ "Tang", "Jiliang", "" ], [ "Yin", "Bing", "" ], [ "Tang", "Xianfeng", "" ] ]
new_dataset
0.999787
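As a trivial baseline for the next-product recommendation task introduced above, one can recommend the most frequent successor of the current item. The sketch below assumes a toy session format, not Amazon-M2's actual schema.

```python
from collections import Counter, defaultdict

# Naive next-product baseline over toy sessions: given the current item,
# recommend the successor seen most often in training sessions.
train_sessions = [
    ["p1", "p2", "p3"],
    ["p1", "p2", "p4"],
    ["p5", "p2", "p3"],
]

successors = defaultdict(Counter)
for session in train_sessions:
    for cur, nxt in zip(session, session[1:]):
        successors[cur][nxt] += 1

def recommend(prefix, k=2):
    last = prefix[-1]
    return [item for item, _ in successors[last].most_common(k)]

print(recommend(["p9", "p2"]))  # ['p3', 'p4'] -- most common after p2
```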
2307.09693
Liu He
Liu He, Daniel Aliaga
GlobalMapper: Arbitrary-Shaped Urban Layout Generation
Accepted by ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling and designing urban building layouts is of significant interest in computer vision, computer graphics, and urban applications. A building layout consists of a set of buildings in city blocks defined by a network of roads. We observe that building layouts are discrete structures, consisting of multiple rows of buildings of various shapes, and are amenable to skeletonization for mapping arbitrary city block shapes to a canonical form. Hence, we propose a fully automatic approach to building layout generation using graph attention networks. Our method generates realistic urban layouts given arbitrary road networks, and enables conditional generation based on learned priors. Our results, including a user study, demonstrate superior performance compared to prior layout generation networks, and support arbitrary city blocks and varying building shapes, as demonstrated by generating layouts for 28 large cities.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 00:36:05 GMT" } ]
2023-07-20T00:00:00
[ [ "He", "Liu", "" ], [ "Aliaga", "Daniel", "" ] ]
new_dataset
0.975316
2307.09699
Zhihua Jin
Zhihua Jin, Gaoping Huang, Zixin Chen, Shiyi Liu, Yang Chao, Zhenchuan Yang, Quan Li, Huamin Qu
ActorLens: Visual Analytics for High-level Actor Identification in MOBA Games
15 pages, 9 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiplayer Online Battle Arenas (MOBAs) have garnered a substantial player base worldwide. Nevertheless, the presence of noxious players, commonly referred to as "actors", can significantly compromise game fairness by exhibiting negative behaviors that diminish their team's competitive edge. Furthermore, high-level actors tend to engage in more egregious conduct to evade detection, thereby causing harm to the game community and necessitating their identification. To tackle this urgent concern, a partnership was formed with a team of game specialists from a prominent company to facilitate the identification and labeling of high-level actors in MOBA games. We first characterize the problem and abstract data and events from the game scene to formulate design requirements. Subsequently, ActorLens, a visual analytics system, was developed to exclude low-level actors, detect potential high-level actors, and assist users in labeling players. ActorLens furnishes an overview of players' status, summarizes behavioral patterns across three player cohorts (namely, focused players, historical matches of focused players, and matches of other players who played the same hero), and synthesizes key match events. By incorporating multiple views of information, users can proficiently recognize and label high-level actors in MOBA games. We conducted case studies and user studies to demonstrate the efficacy of the system.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 01:04:01 GMT" } ]
2023-07-20T00:00:00
[ [ "Jin", "Zhihua", "" ], [ "Huang", "Gaoping", "" ], [ "Chen", "Zixin", "" ], [ "Liu", "Shiyi", "" ], [ "Chao", "Yang", "" ], [ "Yang", "Zhenchuan", "" ], [ "Li", "Quan", "" ], [ "Qu", "Huamin", "" ] ]
new_dataset
0.996887
2307.09727
Zi Li
Zi Li and Lin Tian and Tony C. W. Mok and Xiaoyu Bai and Puyang Wang and Jia Ge and Jingren Zhou and Le Lu and Xianghua Ye and Ke Yan and Dakai Jin
SAMConvex: Fast Discrete Optimization for CT Registration using Self-supervised Anatomical Embedding and Correlation Pyramid
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the displacement vector field via a cost volume computed in the feature space has shown great success in image registration, but it suffers from excessive computational burdens. Moreover, existing feature descriptors only extract local features, incapable of representing the global semantic information, which is especially important for solving large transformations. To address the discussed issues, we propose SAMConvex, a fast coarse-to-fine discrete optimization method for CT registration that includes a decoupled convex optimization procedure to obtain deformation fields based on a self-supervised anatomical embedding (SAM) feature extractor that captures both local and global information. To be specific, SAMConvex extracts per-voxel features and builds 6D correlation volumes based on SAM features, and iteratively updates a flow field by performing lookups on the correlation volumes with a coarse-to-fine scheme. SAMConvex outperforms the state-of-the-art learning-based methods and optimization-based methods over two inter-patient registration datasets (Abdomen CT and HeadNeck CT) and one intra-patient registration dataset (Lung CT). Moreover, as an optimization-based method, SAMConvex only takes $\sim2$s ($\sim5$s with instance optimization) for one pair of images.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 02:28:41 GMT" } ]
2023-07-20T00:00:00
[ [ "Li", "Zi", "" ], [ "Tian", "Lin", "" ], [ "Mok", "Tony C. W.", "" ], [ "Bai", "Xiaoyu", "" ], [ "Wang", "Puyang", "" ], [ "Ge", "Jia", "" ], [ "Zhou", "Jingren", "" ], [ "Lu", "Le", "" ], [ "Ye", "Xianghua", "" ], [ "Yan", "Ke", "" ], [ "Jin", "Dakai", "" ] ]
new_dataset
0.993098
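The SAMConvex abstract above builds 6D correlation volumes and performs lookups on them. A simplified 2D analogue clarifies the construction: for each displacement in a search window, score how well the shifted moving features match the fixed features. This is a sketch under simplified assumptions, not the paper's 3D implementation.

```python
import numpy as np

# 2D version of the correlation-volume idea: the full method does this over
# 3D displacements (a 6D volume) on SAM features.
def correlation_volume(feat_fix, feat_mov, radius=2):
    C, H, W = feat_fix.shape
    disp = range(-radius, radius + 1)
    vol = np.empty((len(disp), len(disp), H, W), dtype=feat_fix.dtype)
    for i, dy in enumerate(disp):
        for j, dx in enumerate(disp):
            shifted = np.roll(feat_mov, shift=(dy, dx), axis=(1, 2))
            vol[i, j] = (feat_fix * shifted).sum(axis=0)  # per-pixel dot product
    return vol

fix = np.random.rand(8, 32, 32).astype(np.float32)
mov = np.roll(fix, shift=(1, 0), axis=(1, 2))   # moving = fixed shifted by one row
vol = correlation_volume(fix, mov, radius=2)
i, j = np.unravel_index(vol.sum(axis=(2, 3)).argmax(), vol.shape[:2])
print("recovered displacement:", (i - 2, j - 2))  # (-1, 0) undoes the shift
```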
2307.09729
Yixuan Gao
Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao Liu, Yu Lai, Shanshan Chen, Wenqi Wang, Haoning Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
NTIRE 2023 Quality Assessment of Video Enhancement Challenge
null
null
null
null
cs.CV cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge, held in conjunction with the New Trends in Image Restoration and Enhancement Workshop (NTIRE) at CVPR 2023. This challenge addresses a major problem in the field of video processing, namely, video quality assessment (VQA) for enhanced videos. The challenge uses the VQA Dataset for Perceptual Video Enhancement (VDPVE), which has a total of 1211 enhanced videos, including 600 videos with color, brightness, and contrast enhancements, 310 videos with deblurring, and 301 deshaked videos. The challenge has a total of 167 registered participants. 61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions. A total of 176 submissions were submitted by 37 participating teams during the final testing phase. Finally, 19 participating teams submitted their models and fact sheets, and detailed the methods they used. Some methods have achieved better results than baseline methods, and the winning methods have demonstrated superior prediction performance.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 02:33:42 GMT" } ]
2023-07-20T00:00:00
[ [ "Liu", "Xiaohong", "" ], [ "Min", "Xiongkuo", "" ], [ "Sun", "Wei", "" ], [ "Zhang", "Yulun", "" ], [ "Zhang", "Kai", "" ], [ "Timofte", "Radu", "" ], [ "Zhai", "Guangtao", "" ], [ "Gao", "Yixuan", "" ], [ "Cao", "Yuqin", "" ], [ "Kou", "Tengchuan", "" ], [ "Dong", "Yunlong", "" ], [ "Jia", "Ziheng", "" ], [ "Li", "Yilin", "" ], [ "Wu", "Wei", "" ], [ "Hu", "Shuming", "" ], [ "Deng", "Sibin", "" ], [ "Xiao", "Pengxiang", "" ], [ "Chen", "Ying", "" ], [ "Li", "Kai", "" ], [ "Zhao", "Kai", "" ], [ "Yuan", "Kun", "" ], [ "Sun", "Ming", "" ], [ "Cong", "Heng", "" ], [ "Wang", "Hao", "" ], [ "Fu", "Lingzhi", "" ], [ "Zhang", "Yusheng", "" ], [ "Zhang", "Rongyu", "" ], [ "Shi", "Hang", "" ], [ "Xu", "Qihang", "" ], [ "Xiao", "Longan", "" ], [ "Ma", "Zhiliang", "" ], [ "Agarla", "Mirko", "" ], [ "Celona", "Luigi", "" ], [ "Rota", "Claudio", "" ], [ "Schettini", "Raimondo", "" ], [ "Huang", "Zhiwei", "" ], [ "Li", "Yanan", "" ], [ "Wang", "Xiaotao", "" ], [ "Lei", "Lei", "" ], [ "Liu", "Hongye", "" ], [ "Hong", "Wei", "" ], [ "Chuang", "Ironhead", "" ], [ "Lin", "Allen", "" ], [ "Guan", "Drake", "" ], [ "Chen", "Iris", "" ], [ "Lou", "Kae", "" ], [ "Huang", "Willy", "" ], [ "Tasi", "Yachun", "" ], [ "Kao", "Yvonne", "" ], [ "Fan", "Haotian", "" ], [ "Kong", "Fangyuan", "" ], [ "Zhou", "Shiqi", "" ], [ "Liu", "Hao", "" ], [ "Lai", "Yu", "" ], [ "Chen", "Shanshan", "" ], [ "Wang", "Wenqi", "" ], [ "Wu", "Haoning", "" ], [ "Chen", "Chaofeng", "" ], [ "Zhu", "Chunzheng", "" ], [ "Guo", "Zekun", "" ], [ "Zhao", "Shiling", "" ], [ "Yin", "Haibing", "" ], [ "Wang", "Hongkui", "" ], [ "Meftah", "Hanene Brachemi", "" ], [ "Fezza", "Sid Ahmed", "" ], [ "Hamidouche", "Wassim", "" ], [ "Déforges", "Olivier", "" ], [ "Shi", "Tengfei", "" ], [ "Mansouri", "Azadeh", "" ], [ "Motamednia", "Hossein", "" ], [ "Bakhtiari", "Amir Hossein", "" ], [ "Aznaveh", "Ahmad Mahmoudi", "" ] ]
new_dataset
0.99967
2307.09730
Monica Li
Monica S. Li and Hannah S. Stuart
AcousTac: Tactile sensing with acoustic resonance for electronics-free soft skin
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sound is a rich information medium that transmits through air; people communicate through speech and can even discern material through tapping and listening. To capture frequencies in the human hearing range, commercial microphones typically have a sampling rate of over 40kHz. These accessible acoustic technologies are not yet widely adopted for the explicit purpose of giving robots a sense of touch. Some researchers have used sound to sense tactile information, both monitoring the ambient soundscape and using embedded speakers and microphones to measure sounds within structures. However, these options commonly do not provide a direct measure of steady-state force, or require electronics integrated somewhere near the contact location. In this work, we present AcousTac, an acoustic tactile sensor for electronics-free, force-sensitive soft skin. Compliant silicone caps and plastic tubes compose the resonant chambers that emit pneumatic-driven sound measurable with a conventional off-board microphone. The resulting frequency changes depend on the external loads on the compliant end caps. We can tune each AcousTac taxel to specific force and frequency ranges, based on geometric parameters, including tube length and end-cap geometry, and thus uniquely sense each taxel simultaneously in an array. We demonstrate AcousTac's functionality on two robotic systems: a 4-taxel array and a 3-taxel astrictive gripper. AcousTac is a promising concept for force sensing on soft robotic surfaces, especially in situations where electronics near the contact are not suitable. Equipping robots with tactile sensing and soft skin provides them with a sense of touch and the ability to safely interact with their surroundings.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 02:47:58 GMT" } ]
2023-07-20T00:00:00
[ [ "Li", "Monica S.", "" ], [ "Stuart", "Hannah S.", "" ] ]
new_dataset
0.999462
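The physics implied by the record above can be made concrete with the idealized closed-open tube formula f = c / (4L): shortening the resonant chamber under load raises the emitted pitch. The numbers below are a textbook approximation, not the paper's calibration.

```python
# Idealized acoustics behind the sensor: a tube closed at one end resonates
# near f = c / (4 * L_effective), so compressing the compliant end cap
# (shortening the chamber) raises the pitch.
C_AIR = 343.0  # speed of sound in air, m/s

def resonant_freq(length_m):
    return C_AIR / (4.0 * length_m)

for length_mm in (20.0, 18.0, 16.0):  # cap compression shortens the tube
    f = resonant_freq(length_mm / 1000.0)
    print(f"L = {length_mm:4.1f} mm -> f ~ {f:7.0f} Hz")
# A 40+ kHz microphone sampling rate easily covers these kHz-range tones.
```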
2307.09776
Shaun Azzopardi
Shaun Azzopardi, Nir Piterman, Gerardo Schneider, Luca di Stefano
LTL Synthesis on Infinite-State Arenas defined by Programs
null
null
null
null
cs.LO cs.FL cs.PL cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper deals with the problem of automatically and correctly controlling infinite-state reactive programs to achieve LTL goals. Applications include adapting a program to new requirements, or to repair bugs discovered in the original specification or program code. Existing approaches are able to solve this problem for safety and some reachability properties, but require an a priori template of the solution for more general properties. Fully automated approaches for full LTL exist, reducing the problem into successive finite LTL reactive synthesis problems in an abstraction-refinement loop. However, they do not terminate when the number of steps to be completed depends on unbounded variables. Our main insight is that safety abstractions of the program are not enough -- fairness properties are also essential to be able to decide many interesting problems, something missed by existing automated approaches. We thus go beyond the state-of-the-art to allow for automated reactive program control for full LTL, with automated discovery of the knowledge, including fairness, of the program needed to determine realisability. We further implement the approach in a tool, with an associated DSL for reactive programs, and illustrate the approach through several case studies.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 06:33:51 GMT" } ]
2023-07-20T00:00:00
[ [ "Azzopardi", "Shaun", "" ], [ "Piterman", "Nir", "" ], [ "Schneider", "Gerardo", "" ], [ "di Stefano", "Luca", "" ] ]
new_dataset
0.994364
2307.09777
Shuo Huang
Shuo Huang, Chengpeng Hu, Julian Togelius, Jialin Liu
Generating Redstone Style Cities in Minecraft
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Procedurally generating cities in Minecraft provides players more diverse scenarios and could help understand and improve the design of cities in other digital worlds and the real world. This paper presents a city generator that was submitted as an entry to the 2023 edition of the Minecraft Settlement Generation Competition. The generation procedure is composed of six main steps, namely vegetation clearing, terrain reshaping, building layout generation, route planning, streetlight placement, and wall construction. Three algorithms, including a heuristic-based algorithm, an evolving layout algorithm, and a random one, are applied to generate the building layout, thus determining where to place different redstone style buildings, and tested by generating cities on random maps in limited time. Experimental results show that the heuristic-based algorithm is capable of finding an acceptable building layout faster for flat maps, while the evolving layout algorithm performs better at evolving layouts for rugged maps. A user study is conducted to compare our generator with outstanding entries of the competition's 2022 edition using the competition's evaluation criteria, and shows that our generator performs well in the adaptation and functionality criteria.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 06:36:01 GMT" } ]
2023-07-20T00:00:00
[ [ "Huang", "Shuo", "" ], [ "Hu", "Chengpeng", "" ], [ "Togelius", "Julian", "" ], [ "Liu", "Jialin", "" ] ]
new_dataset
0.999006
2307.09834
Keeley Erhardt
Keeley Erhardt and Saurabh Khanna
Who Provides the Largest Megaphone? The Role of Google News in Promoting Russian State-Affiliated News Sources
null
The 9th International Conference on Computational Social Science (IC2S2). 2023
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
The Internet has not only digitized but also democratized information access across the globe. This gradual but path-breaking move to online information propagation has resulted in search engines playing an increasingly prominent role in shaping access to human knowledge. When an Internet user enters a query, the search engine sorts through the hundreds of billions of possible webpages to determine what to show. Google dominates the search engine market, with Google Search surpassing 80% market share globally every year of the last decade. Only in Russia and China do Google competitors claim more market share, with approximately 60% of Internet users in Russia preferring Yandex (compared to 40% in favor of Google) and more than 80% of China's Internet users accessing Baidu as of 2022. Notwithstanding this long-standing regional variation in Internet search providers, there is limited research showing how these providers compare in terms of propagating state-sponsored information. Our study fills this research gap by focusing on Russian cyberspace and examining how Google and Yandex's search algorithms rank content from Russian state-controlled media (hereon, RSM) outlets. This question is timely and of practical interest given widespread reports indicating that RSM outlets have actively engaged in promoting Kremlin propaganda in the lead-up to, and in the aftermath of, the Russian invasion of Ukraine in February 2022.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 08:44:11 GMT" } ]
2023-07-20T00:00:00
[ [ "Erhardt", "Keeley", "" ], [ "Khanna", "Saurabh", "" ] ]
new_dataset
0.964704
2307.09860
Ke Li
Ke Li, Susanne Schmidt, Tim Rolff, Reinhard Bacher, Wim Leemans, Frank Steinicke
Magic NeRF Lens: Interactive Fusion of Neural Radiance Fields for Virtual Facility Inspection
This work has been submitted to the IEEE TVCG for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.GR cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Large industrial facilities such as particle accelerators and nuclear power plants are critical infrastructures for scientific research and industrial processes. These facilities are complex systems that not only require regular maintenance and upgrades but are often inaccessible to humans due to various safety hazards. Therefore, a virtual reality (VR) system that can quickly replicate real-world remote environments to provide users with a high level of spatial and situational awareness is crucial for facility maintenance planning. However, the exact 3D shapes of these facilities are often too complex to be accurately modeled with geometric primitives through the traditional rasterization pipeline. In this work, we develop Magic NeRF Lens, an interactive framework to support facility inspection in immersive VR using neural radiance fields (NeRF) and volumetric rendering. We introduce a novel data fusion approach that combines the complementary strengths of volumetric rendering and geometric rasterization, allowing a NeRF model to be merged with other conventional 3D data, such as a computer-aided design model. We develop two novel 3D magic lens effects to optimize NeRF rendering by exploiting the properties of human vision and context-aware visualization. We demonstrate the high usability of our framework and methods through a technical benchmark, a visual search user study, and expert reviews. In addition, the source code of our VR NeRF framework is made publicly available for future research and development.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 09:43:47 GMT" } ]
2023-07-20T00:00:00
[ [ "Li", "Ke", "" ], [ "Schmidt", "Susanne", "" ], [ "Rolff", "Tim", "" ], [ "Bacher", "Reinhard", "" ], [ "Leemans", "Wim", "" ], [ "Steinicke", "Frank", "" ] ]
new_dataset
0.99566
2307.09885
Dawen Zhang
Dawen Zhang, Thong Hoang, Shidong Pan, Yongquan Hu, Zhenchang Xing, Mark Staples, Xiwei Xu, Qinghua Lu, Aaron Quigley
Test-takers have a say: understanding the implications of the use of AI in language tests
null
null
null
null
cs.CY cs.AI cs.CL cs.HC
http://creativecommons.org/licenses/by/4.0/
Language tests measure a person's ability to use a language in terms of listening, speaking, reading, or writing. Such tests play an integral role in academic, professional, and immigration domains, with entities such as educational institutions, professional accreditation bodies, and governments using them to assess candidate language proficiency. Recent advances in Artificial Intelligence (AI) and the discipline of Natural Language Processing have prompted language test providers to explore AI's potential applicability within language testing, leading to transformative activity patterns surrounding language instruction and learning. However, with concerns over AI's trustworthiness, it is imperative to understand the implications of integrating AI into language testing. This knowledge will enable stakeholders to make well-informed decisions, thus safeguarding community well-being and testing integrity. To understand the concerns and effects of AI usage in language tests, we conducted interviews and surveys with English test-takers. To the best of our knowledge, this is the first empirical study aimed at identifying the implications of AI adoption in language tests from a test-taker perspective. Our study reveals test-taker perceptions and behavioral patterns. Specifically, we identify that AI integration may enhance perceptions of fairness, consistency, and availability. Conversely, it might incite mistrust regarding reliability and interactivity aspects, subsequently influencing the behaviors and well-being of test-takers. These insights provide a better understanding of potential societal implications and assist stakeholders in making informed decisions concerning AI usage in language testing.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 10:28:59 GMT" } ]
2023-07-20T00:00:00
[ [ "Zhang", "Dawen", "" ], [ "Hoang", "Thong", "" ], [ "Pan", "Shidong", "" ], [ "Hu", "Yongquan", "" ], [ "Xing", "Zhenchang", "" ], [ "Staples", "Mark", "" ], [ "Xu", "Xiwei", "" ], [ "Lu", "Qinghua", "" ], [ "Quigley", "Aaron", "" ] ]
new_dataset
0.982314
2307.09905
Martin Balla
Martin Balla, George E.M. Long, Dominik Jeurissen, James Goodman, Raluca D. Gaina, Diego Perez-Liebana
PyTAG: Challenges and Opportunities for Reinforcement Learning in Tabletop Games
Accepted for Publication in: IEEE Conference on Games (2023)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In recent years, Game AI research has made important breakthroughs using Reinforcement Learning (RL). Despite this, RL for modern tabletop games has gained little to no attention, even when they offer a range of unique challenges compared to video games. To bridge this gap, we introduce PyTAG, a Python API for interacting with the Tabletop Games framework (TAG). TAG contains a growing set of more than 20 modern tabletop games, with a common API for AI agents. We present techniques for training RL agents in these games and introduce baseline results after training Proximal Policy Optimisation algorithms on a subset of games. Finally, we discuss the unique challenges complex modern tabletop games provide, now open to RL research through PyTAG.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 11:08:59 GMT" } ]
2023-07-20T00:00:00
[ [ "Balla", "Martin", "" ], [ "Long", "George E. M.", "" ], [ "Jeurissen", "Dominik", "" ], [ "Goodman", "James", "" ], [ "Gaina", "Raluca D.", "" ], [ "Perez-Liebana", "Diego", "" ] ]
new_dataset
0.999183
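To illustrate the kind of interaction loop a tabletop-game RL API enables, here is a gym-style sketch with a random agent; `ToyTabletopEnv` is a stand-in, and PyTAG's real class and method names may differ.

```python
import random

# Shape of an RL loop over a turn-based tabletop environment, in the
# gym-like style the abstract implies. The environment is a toy stand-in.
class ToyTabletopEnv:
    def reset(self):
        self.score, self.turn = 0, 0
        return (self.score, self.turn)           # observation

    def legal_actions(self):
        return [0, 1, 2]                         # abstract action ids

    def step(self, action):
        self.score += action
        self.turn += 1
        done = self.turn >= 10
        return (self.score, self.turn), float(action), done

env = ToyTabletopEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    action = random.choice(env.legal_actions())  # random baseline agent
    obs, reward, done = env.step(action)
    total += reward
print("episode return:", total)
```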
2307.09972
Zuozhuo Dai
Zuozhuo Dai, Fangtao Shao, Qingkun Su, Zilong Dong, Siyu Zhu
Fine-grained Text-Video Retrieval with Frozen Image Encoders
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
State-of-the-art text-video retrieval (TVR) methods typically utilize CLIP and cosine similarity for efficient retrieval. Meanwhile, cross attention methods, which employ a transformer decoder to compute attention between each text query and all frames in a video, offer a more comprehensive interaction between text and videos. However, these methods lack important fine-grained spatial information as they directly compute attention between text and video-level tokens. To address this issue, we propose CrossTVR, a two-stage text-video retrieval architecture. In the first stage, we leverage existing TVR methods with cosine similarity network for efficient text/video candidate selection. In the second stage, we propose a novel decoupled video text cross attention module to capture fine-grained multimodal information in spatial and temporal dimensions. Additionally, we employ the frozen CLIP model strategy in fine-grained retrieval, enabling scalability to larger pre-trained vision models like ViT-G, resulting in improved retrieval performance. Experiments on text video retrieval datasets demonstrate the effectiveness and scalability of our proposed CrossTVR compared to state-of-the-art approaches.
[ { "version": "v1", "created": "Fri, 14 Jul 2023 02:57:00 GMT" } ]
2023-07-20T00:00:00
[ [ "Dai", "Zuozhuo", "" ], [ "Shao", "Fangtao", "" ], [ "Su", "Qingkun", "" ], [ "Dong", "Zilong", "" ], [ "Zhu", "Siyu", "" ] ]
new_dataset
0.980426
2307.10008
Yunfei Liu
Yunfei Liu, Lijian Lin, Fei Yu, Changyin Zhou, Yu Li
MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions
Accepted by ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Audio-driven portrait animation aims to synthesize portrait videos that are conditioned by given audio. Animating high-fidelity and multimodal video portraits has a variety of applications. Previous methods have attempted to capture different motion modes and generate high-fidelity portrait videos by training different models or sampling signals from given videos. However, lacking correlation learning between lip-sync and other movements (e.g., head pose/eye blinking) usually leads to unnatural results. In this paper, we propose a unified system for multi-person, diverse, and high-fidelity talking portrait generation. Our method contains three stages, i.e., 1) a Mapping-Once network with Dual Attentions (MODA) generates talking representations from given audio; in MODA, we design a dual-attention module to encode accurate mouth movements and diverse modalities; 2) a facial composer network generates dense and detailed face landmarks; and 3) a temporal-guided renderer synthesizes stable videos. Extensive evaluations demonstrate that the proposed system produces more natural and realistic video portraits compared to previous methods.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 14:45:11 GMT" } ]
2023-07-20T00:00:00
[ [ "Liu", "Yunfei", "" ], [ "Lin", "Lijian", "" ], [ "Yu", "Fei", "" ], [ "Zhou", "Changyin", "" ], [ "Li", "Yu", "" ] ]
new_dataset
0.997548
2307.10018
Felipe B. Martins
Aline Lima de Oliveira, Cauê Addae da Silva Gomes, Cecília Virginia Santos da Silva, Charles Matheus de Sousa Alves, Danilo Andrade Martins de Souza, Driele Pires Ferreira Araújo Xavier, Edgleyson Pereira da Silva, Felipe Bezerra Martins, Lucas Henrique Cavalcanti Santos, Lucas Dias Maciel, Matheus Paixão Gumercindo dos Santos, Matheus Lafayette Vasconcelos, Matheus Vinícius Teotonio do Nascimento Andrade, João Guilherme Oliveira Carvalho de Melo, João Pedro Souza Pereira de Moura, José Ronald da Silva, José Victor Silva Cruz, Pedro Henrique Santana de Morais, Pedro Paulo Salman de Oliveira, Riei Joaquim Matos Rodrigues, Roberto Costa Fernandes, Ryan Vinicius Santos Morais, Tamara Mayara Ramos Teobaldo, Washington Igor dos Santos Silva, Edna Natividade Silva Barros
RobôCIn Small Size League Extended Team Description Paper for RoboCup 2023
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
RobôCIn has participated in the RoboCup Small Size League since 2019, won its first world title in 2022 (Division B), and is currently a three-time Latin American champion. This paper presents our improvements to defend the Small Size League (SSL) Division B title in RoboCup 2023 in Bordeaux, France. This paper aims to share some of the academic research that our team developed over the past year. Our team has successfully published 2 articles related to SSL at two high-impact conferences: the 25th RoboCup International Symposium and the 19th IEEE Latin American Robotics Symposium (LARS 2022). Over the last year, we have been continuously migrating from our past codebase to Unification. We will describe the new architecture implemented and some points of software and AI refactoring. In addition, we discuss the process of integrating machined components into the mechanical system, our development for participating in the vision blackout challenge last year, and what we are preparing for this year.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 14:58:30 GMT" } ]
2023-07-20T00:00:00
[ [ "de Oliveira", "Aline Lima", "" ], [ "Gomes", "Cauê Addae da Silva", "" ], [ "da Silva", "Cecília Virginia Santos", "" ], [ "Alves", "Charles Matheus de Sousa", "" ], [ "de Souza", "Danilo Andrade Martins", "" ], [ "Xavier", "Driele Pires Ferreira Araújo", "" ], [ "da Silva", "Edgleyson Pereira", "" ], [ "Martins", "Felipe Bezerra", "" ], [ "Santos", "Lucas Henrique Cavalcanti", "" ], [ "Maciel", "Lucas Dias", "" ], [ "Santos", "Matheus Paixão Gumercindo dos", "" ], [ "Vasconcelos", "Matheus Lafayette", "" ], [ "Andrade", "Matheus Vinícius Teotonio do Nascimento", "" ], [ "de Melo", "João Guilherme Oliveira Carvalho", "" ], [ "de Moura", "João Pedro Souza Pereira", "" ], [ "da Silva", "José Ronald", "" ], [ "Cruz", "José Victor Silva", "" ], [ "de Morais", "Pedro Henrique Santana", "" ], [ "de Oliveira", "Pedro Paulo Salman", "" ], [ "Rodrigues", "Riei Joaquim Matos", "" ], [ "Fernandes", "Roberto Costa", "" ], [ "Morais", "Ryan Vinicius Santos", "" ], [ "Teobaldo", "Tamara Mayara Ramos", "" ], [ "Silva", "Washington Igor dos Santos", "" ], [ "Barros", "Edna Natividade Silva", "" ] ]
new_dataset
0.980907
2307.10022
Konstantinos Pitas
Konstantinos Pitas
Europepolls: A Dataset of Country-Level Opinion Polling Data for the European Union and the UK
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I propose an open dataset of country-level historical opinion polling data for the European Union and the UK. The dataset aims to fill a gap in available opinion polling data for the European Union. Some existing datasets are restricted to the past five years, limiting research opportunities. At the same time, some larger proprietary datasets exist but are available only in a visual preprocessed time series format. Finally, while other large datasets for individual countries might exist, these could be inaccessible due to language barriers. The data was gathered from Wikipedia, and preprocessed using the pandas library. Both the raw and the preprocessed data are in the .csv format. I hope that given the recent advances in LLMs and deep learning in general, this large dataset will enable researchers to uncover complex interactions between multimodal data (news articles, economic indicators, social media) and voting behavior. The raw data, the preprocessed data, and the preprocessing scripts are available on GitHub.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 15:05:55 GMT" } ]
2023-07-20T00:00:00
[ [ "Pitas", "Konstantinos", "" ] ]
new_dataset
0.999783
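Since the record above says the data ship as .csv files preprocessed with pandas, a loading sketch is natural; note that the file name and column names below are hypothetical, not the dataset's documented schema.

```python
import pandas as pd

# Hypothetical loading sketch for one country's polls; the file and column
# names are assumptions, not the dataset's actual layout.
df = pd.read_csv("europepolls_germany.csv", parse_dates=["fieldwork_end"])

# e.g. monthly mean vote share per party across individual polls
monthly = (
    df.set_index("fieldwork_end")
      .groupby(pd.Grouper(freq="MS"))[["party_a", "party_b"]]
      .mean()
)
print(monthly.tail())
```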
2307.10034
Carlo Sartiani
Lyes Attouche, Mohamed-Amine Baazizi, Dario Colazzo, Giorgio Ghelli, Carlo Sartiani, Stefanie Scherzinger
Validation of Modern JSON Schema: Formalization and Complexity
null
null
null
null
cs.DB cs.PL
http://creativecommons.org/licenses/by/4.0/
JSON Schema is the de-facto standard schema language for JSON data. The language went through many minor revisions, but the most recent versions of the language added two novel features, dynamic references and annotation-dependent validation, that change the evaluation model. Modern JSON Schema is the name used to indicate all versions from Draft 2019-09, which are characterized by these new features, while Classical JSON Schema is used to indicate the previous versions. These new "modern" features make the schema language quite difficult to understand, and have generated many discussions about the correct interpretation of their official specifications; for this reason we undertook the task of their formalization. During this process, we also analyzed the complexity of data validation in Modern JSON Schema, with the idea of confirming the PTIME complexity of Classical JSON Schema validation, and we were surprised to discover a completely different truth: data validation, that is expected to be an extremely efficient process, acquires, with Modern JSON Schema features, a PSPACE complexity. In this paper, we give the first formal description of Modern JSON Schema, which we consider a central contribution of the work that we present here. We then prove that its data validation problem is PSPACE-complete. We prove that the origin of the problem lies in dynamic references, and not in annotation-dependent validation. We study the schema and data complexities, showing that the problem is PSPACE-complete with respect to the schema size even with a fixed instance, but is in PTIME when the schema is fixed and only the instance size is allowed to vary. Finally, we run experiments that show that there are families of schemas where the difference in asymptotic complexity between dynamic and static references is extremely visible, even with small schemas.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 15:18:27 GMT" } ]
2023-07-20T00:00:00
[ [ "Attouche", "Lyes", "" ], [ "Baazizi", "Mohamed-Amine", "" ], [ "Colazzo", "Dario", "" ], [ "Ghelli", "Giorgio", "" ], [ "Sartiani", "Carlo", "" ], [ "Scherzinger", "Stefanie", "" ] ]
new_dataset
0.9915
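A minimal example helps pin down the dynamic references discussed above. Assuming the `jsonschema` package (version 4 or later, which implements Draft 2020-12), the recursive schema below uses `$dynamicRef`/`$dynamicAnchor`; within a single document the reference simply lands on the local anchor, and the costly behavior the paper analyzes arises when other schema resources re-declare the anchor.

```python
from jsonschema import Draft202012Validator

# A recursive tree schema using the dynamic-reference feature that the
# paper identifies as the source of PSPACE-hardness.
tree_schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "$dynamicAnchor": "node",
    "type": "object",
    "properties": {
        "value": {"type": "integer"},
        "children": {"type": "array", "items": {"$dynamicRef": "#node"}},
    },
    "required": ["value"],
}

Draft202012Validator(tree_schema).validate(
    {"value": 1, "children": [{"value": 2, "children": []}]}
)  # passes; a non-integer "value" at any depth would fail
```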
2307.10039
Bryan Tjandra
Bryan Tjandra, Made S. N. Negara, Nyoo S. C. Handoko
Detection of Surface and Underwater Trash in Video Objects with the Robust and Efficient Post-Processing and Tubelet-Level Bounding Box Linking Methods
14 pages, in Indonesian language, 14 figures
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Indonesia, as a maritime country, has a significant portion of its territory covered by water. Ineffective waste management has resulted in a considerable amount of trash in Indonesian waters, leading to various issues. The development of an automated trash-collecting robot can be a solution to address this problem. The robot requires a system capable of detecting objects in motion, such as in videos. However, using naive object detection methods in videos has limitations, particularly when image focus is reduced and the target object is obstructed by other objects. This paper explains the methods that can be applied to perform video object detection for an automated trash-collecting robot. The study utilizes the YOLOv5 model and the Robust & Efficient Post Processing (REPP) method, along with tubelet-level bounding box linking, on the FloW and Roboflow datasets. The combination of these methods enhances the performance of naive object detection from YOLOv5 by considering the detection results in adjacent frames. The results show that the post-processing stage and tubelet-level bounding box linking can improve the quality of detection, achieving approximately 3% better performance compared to YOLOv5 alone. The use of these methods has the potential to detect surface and underwater trash and can be applied to a real-time image-based trash-collecting robot. Implementing this system is expected to mitigate the damage caused by trash in the past and improve Indonesia's waste management system in the future.
[ { "version": "v1", "created": "Fri, 14 Jul 2023 04:04:15 GMT" } ]
2023-07-20T00:00:00
[ [ "Tjandra", "Bryan", "" ], [ "Negara", "Made S. N.", "" ], [ "Handoko", "Nyoo S. C.", "" ] ]
new_dataset
0.999131
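The tubelet-level bounding box linking mentioned above can be sketched as greedy IoU matching of detections across adjacent frames. The following is the idea only, with a made-up threshold; REPP's actual matching and rescoring are more involved.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_tubelets(frames, thr=0.5):
    """Greedily link per-frame detections into tubelets by IoU."""
    tubelets = [[box] for box in frames[0]]
    for dets in frames[1:]:
        unused = list(dets)
        for tube in tubelets:
            if not unused:
                break
            best = max(unused, key=lambda b: iou(tube[-1], b))
            if iou(tube[-1], best) >= thr:
                tube.append(best)
                unused.remove(best)
        tubelets.extend([b] for b in unused)  # unmatched boxes start new tubelets
    return tubelets

frames = [[(0, 0, 10, 10)], [(1, 0, 11, 10)], [(2, 0, 12, 10)]]
print(len(link_tubelets(frames)))  # 1 -- one object tracked across frames
```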
2307.10054
Soheil Abbasloo
Soheil Abbasloo
Internet Congestion Control Benchmarking
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
How do we assess a new Internet congestion control (CC) design? How do we compare it with other existing schemes? Under what scenarios and using what network parameters? These are just a handful of simple questions coming up every time a new CC design is going to be evaluated. Interestingly, the number of specific answers to these questions can be as large as the number of CC designers. In this work, we aim to highlight that network congestion control, as a hot and active research topic, requires a crystal-clear set of CC benchmarks to form a common ground for quantitatively comparing and unambiguously assessing the strengths and weaknesses of a design with respect to the existing ones. As a first step toward that goal, we introduce general benchmarks that can capture the performance differences of the existing Internet CC schemes. Using these benchmarks, we rank the Internet CC algorithms and illustrate that there is still plenty of room for more innovations and improvements in this topic.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 15:26:56 GMT" } ]
2023-07-20T00:00:00
[ [ "Abbasloo", "Soheil", "" ] ]
new_dataset
0.997808
2307.10080
Nir Weinberger
Nir Weinberger, Ilan Shomorony
Fundamental Limits of Reference-Based Sequence Reordering
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of reconstructing a sequence of independent and identically distributed symbols from a set of equal-size, consecutive fragments, as well as a dependent reference sequence, is considered. First, in the regime in which the fragments are relatively long, and typically no fragment appears more than once, the scaling of the failure probability of the maximum likelihood reconstruction algorithm is exactly determined for perfect reconstruction and bounded for partial reconstruction. Second, the regime in which the fragments are relatively short and repeating fragments abound is characterized. A trade-off is stated between the fraction of fragments that cannot be adequately reconstructed vs. the distortion level allowed for the reconstruction of each fragment, while still allowing a vanishing failure probability.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 15:53:54 GMT" } ]
2023-07-20T00:00:00
[ [ "Weinberger", "Nir", "" ], [ "Shomorony", "Ilan", "" ] ]
new_dataset
0.968373
2307.10088
Christopher Rawles
Christopher Rawles, Alice Li, Daniel Rodriguez, Oriana Riva, Timothy Lillicrap
Android in the Wild: A Large-Scale Dataset for Android Device Control
null
null
null
null
cs.LG cs.CL cs.HC
http://creativecommons.org/licenses/by/4.0/
There is a growing interest in device-control systems that can interpret human natural language instructions and execute them on a digital device by directly controlling its user interface. We present a dataset for device-control research, Android in the Wild (AITW), which is orders of magnitude larger than current datasets. The dataset contains human demonstrations of device interactions, including the screens and actions, and corresponding natural language instructions. It consists of 715k episodes spanning 30k unique instructions, four versions of Android (v10-13), and eight device types (Pixel 2 XL to Pixel 6) with varying screen resolutions. It contains multi-step tasks that require semantic understanding of language and visual context. This dataset poses a new challenge: actions available through the user interface must be inferred from their visual appearance. And, instead of simple UI element-based actions, the action space consists of precise gestures (e.g., horizontal scrolls to operate carousel widgets). We organize our dataset to encourage robustness analysis of device-control systems, i.e., how well a system performs in the presence of new task descriptions, new applications, or new platform versions. We develop two agents and report performance across the dataset. The dataset is available at https://github.com/google-research/google-research/tree/master/android_in_the_wild.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 15:57:24 GMT" } ]
2023-07-20T00:00:00
[ [ "Rawles", "Christopher", "" ], [ "Li", "Alice", "" ], [ "Rodriguez", "Daniel", "" ], [ "Riva", "Oriana", "" ], [ "Lillicrap", "Timothy", "" ] ]
new_dataset
0.999889
2307.10097
Junhao Dong
Junhao Dong, Zhu Meng, Delong Liu, Zhicheng Zhao and Fei Su
Boundary-Refined Prototype Generation: A General End-to-End Paradigm for Semi-Supervised Semantic Segmentation
53 pages, 7 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Prototype-based classification is a classical method in machine learning, and recently it has achieved remarkable success in semi-supervised semantic segmentation. However, the current approach isolates the prototype initialization process from the main training framework, which appears to be unnecessary. Furthermore, while the direct use of the K-Means algorithm for prototype generation considers rich intra-class variance, it may not be the optimal solution for the classification task. To tackle these problems, we propose a novel boundary-refined prototype generation (BRPG) method, which is incorporated into the whole training framework. Specifically, our approach samples and clusters high- and low-confidence features separately based on a confidence threshold, aiming to generate prototypes closer to the class boundaries. Moreover, an adaptive prototype optimization strategy is introduced to perform prototype augmentation for categories with scattered feature distributions. Extensive experiments on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the superiority and scalability of the proposed method, outperforming the current state-of-the-art approaches. The code is available at xxxxxxxxxxxxxx.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 16:12:37 GMT" } ]
2023-07-20T00:00:00
[ [ "Dong", "Junhao", "" ], [ "Meng", "Zhu", "" ], [ "Liu", "Delong", "" ], [ "Zhao", "Zhicheng", "" ], [ "Su", "Fei", "" ] ]
new_dataset
0.996539
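The confidence-split prototype generation at the core of BRPG can be sketched as follows, assuming per-pixel features and class-confidence scores are already extracted by the segmentation network; the threshold, cluster counts, and function names below are illustrative, not the paper's settings.

```python
# Minimal sketch: cluster high- and low-confidence features separately so
# that some prototypes land near the class boundary (low-confidence region).
import numpy as np
from sklearn.cluster import KMeans

def generate_prototypes(features, confidences, conf_thr=0.9,
                        k_high=4, k_low=4, seed=0):
    """features: (N, D) array for one class; confidences: (N,) scores."""
    high = features[confidences >= conf_thr]
    low = features[confidences < conf_thr]
    protos = []
    for feats, k in ((high, k_high), (low, k_low)):
        if len(feats) >= k:
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(feats)
            protos.append(km.cluster_centers_)
    return (np.concatenate(protos, axis=0) if protos
            else np.empty((0, features.shape[1])))
```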
2307.10166
Jiajie Fan
Jiajie Fan, Laure Vuaille, Hao Wang, Thomas B\"ack
Adversarial Latent Autoencoder with Self-Attention for Structural Image Synthesis
18 pages, 8 figures
null
null
null
cs.CV cs.CE eess.IV
http://creativecommons.org/licenses/by/4.0/
Generative Engineering Design approaches driven by Deep Generative Models (DGM) have been proposed to facilitate industrial engineering processes. In such processes, designs often come in the form of images, such as blueprints, engineering drawings, and CAD models depending on the level of detail. DGMs have been successfully employed for synthesis of natural images, e.g., displaying animals, human faces and landscapes. However, industrial design images are fundamentally different from natural scenes in that they contain rich structural patterns and long-range dependencies, which are challenging for convolution-based DGMs to generate. Moreover, the DGM-driven generation process is typically triggered by random noisy inputs, which yields unpredictable samples and thus cannot support efficient industrial design exploration. We tackle these challenges by proposing a novel model Self-Attention Adversarial Latent Autoencoder (SA-ALAE), which allows generating feasible design images of complex engineering parts. With SA-ALAE, users can not only explore novel variants of an existing design, but also control the generation process by operating in latent space. The potential of SA-ALAE is shown by generating engineering blueprints in a real automotive design task.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 17:50:03 GMT" } ]
2023-07-20T00:00:00
[ [ "Fan", "Jiajie", "" ], [ "Vuaille", "Laure", "" ], [ "Wang", "Hao", "" ], [ "Bäck", "Thomas", "" ] ]
new_dataset
0.99982
2307.10171
Sean Bin Yang
Sean Bin Yang, Jilin Hu, Chenjuan Guo, Bin Yang and Christian S. Jensen
LightPath: Lightweight and Scalable Path Representation Learning
This paper has been accepted by ACM SIGKDD-23
null
null
null
cs.LG cs.AI cs.DB
http://creativecommons.org/licenses/by/4.0/
Movement paths are used widely in intelligent transportation and smart city applications. To serve such applications, path representation learning aims to provide compact representations of paths that enable efficient and accurate operations when used for different downstream tasks such as path ranking and travel cost estimation. In many cases, it is attractive that the path representation learning is lightweight and scalable; in resource-limited environments and under green computing limitations, it is essential. Yet, existing path representation learning studies focus on accuracy and pay at most secondary attention to resource consumption and scalability. We propose a lightweight and scalable path representation learning framework, termed LightPath, that aims to reduce resource consumption and achieve scalability without affecting accuracy, thus enabling broader applicability. More specifically, we first propose a sparse auto-encoder that ensures that the framework achieves good scalability with respect to path length. Next, we propose a relational reasoning framework to enable faster training of more robust sparse path encoders. We also propose global-local knowledge distillation to further reduce the size and improve the performance of sparse path encoders. Finally, we report extensive experiments on two real-world datasets to offer insight into the efficiency, scalability, and effectiveness of the proposed framework.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 17:57:27 GMT" } ]
2023-07-20T00:00:00
[ [ "Yang", "Sean Bin", "" ], [ "Hu", "Jilin", "" ], [ "Guo", "Chenjuan", "" ], [ "Yang", "Bin", "" ], [ "Jensen", "Christian S.", "" ] ]
new_dataset
0.994726
1302.3820
Neal Patwari
Neal Patwari, Lara Brewer, Quinn Tate, Ossi Kaltiokallio, and Maurizio Bocca
Breathfinding: A Wireless Network that Monitors and Locates Breathing in a Home
null
null
10.1109/JSTSP.2013.2287473
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/3.0/
This paper explores using RSS measurements on many links in a wireless network to estimate the breathing rate of a person, and the location where the breathing is occurring, in a home, while the person is sitting, laying down, standing, or sleeping. The main challenge in breathing rate estimation is that "motion interference", i.e., movements other than a person's breathing, generally cause larger changes in RSS than inhalation and exhalation. We develop a method to estimate breathing rate despite motion interference, and demonstrate its performance during multiple short (3-7 minute) tests and during a longer 66 minute test. Further, for the same experiments, we show the location of the breathing person can be estimated, to within about 2 m average error in a 56 square meter apartment. Being able to locate a breathing person who is not otherwise moving, without calibration, is important for applications in search and rescue, health care, and security.
[ { "version": "v1", "created": "Fri, 15 Feb 2013 17:46:50 GMT" } ]
2023-07-19T00:00:00
[ [ "Patwari", "Neal", "" ], [ "Brewer", "Lara", "" ], [ "Tate", "Quinn", "" ], [ "Kaltiokallio", "Ossi", "" ], [ "Bocca", "Maurizio", "" ] ]
new_dataset
0.966386
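The core signal-processing idea in the record above, estimating a breathing rate from RSS on many links, can be sketched with a simple spectral estimate. The sketch below averages the power spectrum over links and picks the peak in a typical breathing band; it deliberately omits the paper's handling of motion interference, and all parameters are illustrative.

```python
# Minimal sketch: breathing rate as the dominant frequency of mean-removed
# RSS, averaged over all links, within a plausible breathing band.
import numpy as np

def breathing_rate_bpm(rss, fs, f_lo=0.1, f_hi=0.4):
    """rss: (T, L) RSS samples for L links sampled at fs Hz.
    Returns the estimated breathing rate in breaths per minute."""
    x = rss - rss.mean(axis=0)                    # remove per-link DC offset
    spec = np.abs(np.fft.rfft(x, axis=0)) ** 2    # per-link power spectrum
    freqs = np.fft.rfftfreq(x.shape[0], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)      # 6-24 breaths per minute
    avg = spec[band].mean(axis=1)                 # average power across links
    return 60.0 * freqs[band][np.argmax(avg)]
```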
1309.0750
Mohammad Noshad
Mohammad Noshad and Maite Brandt-Pearce
Application of Expurgated PPM to Indoor Visible Light Communications - Part I: Single-User Systems
Journal of Lightwave Technology
null
10.1109/JLT.2013.2293341
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visible light communications (VLC) in indoor environments suffer from the limited bandwidth of LEDs as well as from the inter-symbol interference (ISI) imposed by multipath. In this work, transmission schemes to improve the performance of indoor optical wireless communication (OWC) systems are introduced. Expurgated pulse-position modulation (EPPM) is proposed for this application since it can provide a wide range of peak to average power ratios (PAPR) needed for dimming of the indoor illumination. A correlation decoder used at the receiver is shown to be optimal for indoor VLC systems, which are shot noise and background-light limited. Interleaving applied on EPPM in order to decrease the ISI effect in dispersive VLC channels can significantly decrease the error probability. The proposed interleaving technique makes EPPM a better modulation option compared to PPM for VLC systems or any other dispersive OWC system. An overlapped EPPM pulse technique is proposed to increase the transmission rate when bandwidth-limited white LEDs are used as sources.
[ { "version": "v1", "created": "Tue, 3 Sep 2013 17:20:27 GMT" } ]
2023-07-19T00:00:00
[ [ "Noshad", "Mohammad", "" ], [ "Brandt-Pearce", "Maite", "" ] ]
new_dataset
0.965798
1408.2192
Feng Jiang
Feng Jiang and Jie Chen and A. Lee Swindlehurst and Jose A. Lopez-Salcedo
Massive MIMO for Wireless Sensing with a Coherent Multiple Access Channel
32 pages, 6 figures, accepted by IEEE Transactions on Signal Processing, Feb. 2015
null
10.1109/TSP.2015.2417508
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the detection and estimation of a zero-mean Gaussian signal in a wireless sensor network with a coherent multiple access channel, when the fusion center (FC) is configured with a large number of antennas and the wireless channels between the sensor nodes and FC experience Rayleigh fading. For the detection problem, we study the Neyman-Pearson (NP) Detector and Energy Detector (ED), and find optimal values for the sensor transmission gains. For the NP detector which requires channel state information (CSI), we show that detection performance remains asymptotically constant with the number of FC antennas if the sensor transmit power decreases proportionally with the increase in the number of antennas. Performance bounds show that the benefit of multiple antennas at the FC disappears as the transmit power grows. The results of the NP detector are also generalized to the linear minimum mean squared error estimator. For the ED which does not require CSI, we derive optimal gains that maximize the deflection coefficient of the detector, and we show that a constant deflection can be asymptotically achieved if the sensor transmit power scales as the inverse square root of the number of FC antennas. Unlike the NP detector, for high sensor power the multi-antenna ED is observed to empirically have significantly better performance than the single-antenna implementation. A number of simulation results are included to validate the analysis.
[ { "version": "v1", "created": "Sun, 10 Aug 2014 06:58:50 GMT" }, { "version": "v2", "created": "Mon, 23 Mar 2015 00:07:59 GMT" } ]
2023-07-19T00:00:00
[ [ "Jiang", "Feng", "" ], [ "Chen", "Jie", "" ], [ "Swindlehurst", "A. Lee", "" ], [ "Lopez-Salcedo", "Jose A.", "" ] ]
new_dataset
0.998009
1505.07192
Hongyang Li
Hongyang Li, Huchuan Lu, Zhe Lin, Xiaohui Shen, Brian Price
Inner and Inter Label Propagation: Salient Object Detection in the Wild
The full version of the TIP 2015 publication
null
10.1109/TIP.2015.2440174
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper, we propose a novel label propagation based method for saliency detection. A key observation is that saliency in an image can be estimated by propagating the labels extracted from the most certain background and object regions. For most natural images, some boundary superpixels serve as the background labels and the saliency of other superpixels is determined by ranking their similarities to the boundary labels based on an inner propagation scheme. For images of complex scenes, we further deploy a 3-cue-center-biased objectness measure to pick out and propagate foreground labels. A co-transduction algorithm is devised to fuse both boundary and objectness labels based on an inter propagation scheme. The compactness criterion decides whether the incorporation of objectness labels is necessary, thus greatly enhancing computational efficiency. Results on five benchmark datasets with pixel-wise accurate annotations show that the proposed method achieves superior performance compared with the latest state-of-the-art methods in terms of different evaluation metrics.
[ { "version": "v1", "created": "Wed, 27 May 2015 05:24:03 GMT" } ]
2023-07-19T00:00:00
[ [ "Li", "Hongyang", "" ], [ "Lu", "Huchuan", "" ], [ "Lin", "Zhe", "" ], [ "Shen", "Xiaohui", "" ], [ "Price", "Brian", "" ] ]
new_dataset
0.993453
1509.01226
Sajjad AbdollahRamezani
Sajjad AbdollahRamezani, Kamalodin Arik, Amin Khavasi, Zahra Kavehvash
Analog Computing Using Graphene-based Metalines
null
null
10.1364/OL.40.005239
null
cs.ET cond-mat.mes-hall physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the new concept of "metalines" for manipulating the amplitude and phase profile of an incident wave locally and independently. Thanks to the highly confined graphene plasmons, a transmit-array of graphene-based metalines is used to realize analog computing on an ultra-compact, integrable and planar platform. By employing the general concepts of spatial Fourier transformation, a well-designed structure of such meta-transmit-array combined with graded index lenses can perform two mathematical operations, i.e., differentiation and integration, with high efficiency. The presented configuration is about 60 times shorter than the recent structure proposed by Silva et al. (Science, 2014, 343, 160-163); moreover, our simulated output responses are in better agreement with the desired analytic results. These findings may lead to remarkable achievements in light-based plasmonic signal processors at nanoscale instead of their bulky conventional dielectric lens-based counterparts.
[ { "version": "v1", "created": "Wed, 2 Sep 2015 19:54:03 GMT" } ]
2023-07-19T00:00:00
[ [ "AbdollahRamezani", "Sajjad", "" ], [ "Arik", "Kamalodin", "" ], [ "Khavasi", "Amin", "" ], [ "Kavehvash", "Zahra", "" ] ]
new_dataset
0.995961
1910.00964
Venet Osmani
Seyedmostafa Sheikhalishahi, Vevake Balaraman, Venet Osmani
Benchmarking machine learning models on multi-centre eICU critical care dataset
Source code to replicate the results https://github.com/mostafaalishahi/eICU_Benchmark
null
10.1371/journal.pone.0235424
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Progress of machine learning in critical care has been difficult to track, in part due to the absence of public benchmarks. Other fields of research (such as computer vision and natural language processing) have established various competitions and public benchmarks. The recent availability of large clinical datasets has made it possible to establish public benchmarks. Taking advantage of this opportunity, we propose a public benchmark suite to address four areas of critical care, namely mortality prediction, estimation of length of stay, patient phenotyping and risk of decompensation. We define each task and compare the performance of both clinical models as well as baseline and deep learning models using the eICU critical care dataset of around 73,000 patients. This is the first public benchmark on a multi-centre critical care dataset, comparing the performance of the clinical gold standard with our predictive model. We also investigate the impact of numerical variables as well as handling of categorical variables on each of the defined tasks. The source code, detailing our methods and experiments, is publicly available such that anyone can replicate our results and build upon our work.
[ { "version": "v1", "created": "Wed, 2 Oct 2019 14:04:24 GMT" }, { "version": "v2", "created": "Wed, 6 May 2020 17:44:40 GMT" }, { "version": "v3", "created": "Thu, 5 Aug 2021 08:36:29 GMT" } ]
2023-07-19T00:00:00
[ [ "Sheikhalishahi", "Seyedmostafa", "" ], [ "Balaraman", "Vevake", "" ], [ "Osmani", "Venet", "" ] ]
new_dataset
0.997697
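A baseline in the spirit of this benchmark's mortality-prediction task can be sketched with a simple linear classifier evaluated by AUROC. The file name and columns below are placeholders, not the benchmark's actual data layout or API; see the linked repository for the real pipeline.

```python
# Minimal sketch of a mortality-prediction baseline: logistic regression on
# tabular ICU features with an AUROC evaluation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("eicu_mortality_features.csv")   # hypothetical extract
X = df.drop(columns=["hospital_mortality"]).fillna(0.0)
y = df["hospital_mortality"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```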
2002.04077
Joshua Lawrence Benjamin
Joshua L. Benjamin, Thomas Gerard, Domani\c{c} Lavery, Polina Bayvel and Georgios Zervas
PULSE: Optical circuit switched Data Center architecture operating at nanosecond timescales
16 pages, 12 figures
null
10.1109/JLT.2020.2997664
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce PULSE, a sub-microsecond optical circuit-switched data centre network architecture controlled by distributed hardware schedulers. PULSE is a flat architecture that uses parallel passive coupler-based broadcast and select networks. We employ a novel transceiver architecture, for dynamic wavelength-timeslot selection, to achieve a reconfiguration time down to O(100ps), establishing timeslots of O(10ns). A novel scheduling algorithm that has a clock period of 2.3ns performs multiple iterations to maximize throughput, wavelength usage and reduce latency, enhancing the overall performance. In order to scale, the single-hop PULSE architecture uses disjoint sub-networks, with multiple transceivers for each node in 64-node racks. At the reconfiguration circuit duration (epoch = 120 ns), the scheduling algorithm is shown to achieve up to 93% throughput and 100% wavelength usage of 64 wavelengths, incurring an average latency that ranges from 0.7-1.2 microseconds with best-case 0.4 microsecond median and 5 microsecond tail latency, limited by the timeslot (20 ns) and epoch size (120 ns). We show how the 4096-node PULSE architecture allows up to 260k optical channels to be re-used across sub-networks achieving a capacity of 25.6 Pbps with an energy consumption of 85 pJ/bit.
[ { "version": "v1", "created": "Mon, 10 Feb 2020 20:34:20 GMT" }, { "version": "v2", "created": "Mon, 25 May 2020 09:32:08 GMT" } ]
2023-07-19T00:00:00
[ [ "Benjamin", "Joshua L.", "" ], [ "Gerard", "Thomas", "" ], [ "Lavery", "Domaniç", "" ], [ "Bayvel", "Polina", "" ], [ "Zervas", "Georgios", "" ] ]
new_dataset
0.999546
2011.04408
Hanjiang Hu
Hanjiang Hu, Baoquan Yang, Zhijian Qiao, Shiqi Liu, Jiacheng Zhu, Zuxin Liu, Wenhao Ding, Ding Zhao, Hesheng Wang
SeasonDepth: Cross-Season Monocular Depth Prediction Dataset and Benchmark under Multiple Environments
Accepted by IROS 2023, 23 pages, 13 figures, 10 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Different environments pose a great challenge to the outdoor robust visual perception for long-term autonomous driving, and the generalization of learning-based algorithms on different environments is still an open problem. Although monocular depth prediction has been well studied recently, few works focus on the robustness of learning-based depth prediction across different environments, e.g. changing illumination and seasons, owing to the lack of such a multi-environment real-world dataset and benchmark. To this end, the first cross-season monocular depth prediction dataset and benchmark, SeasonDepth, is introduced to benchmark the depth estimation performance under different environments. We investigate several state-of-the-art representative open-source supervised and self-supervised depth prediction methods using newly-formulated metrics. Through extensive experimental evaluation on the proposed dataset and cross-dataset evaluation with current autonomous driving datasets, the performance and robustness against the influence of multiple environments are analyzed qualitatively and quantitatively. We show that long-term monocular depth prediction is still challenging and believe our work can boost further research on the long-term robustness and generalization for outdoor visual perception. The dataset is available on https://seasondepth.github.io, and the benchmark toolkit is available on https://github.com/SeasonDepth/SeasonDepth.
[ { "version": "v1", "created": "Mon, 9 Nov 2020 13:24:45 GMT" }, { "version": "v2", "created": "Tue, 8 Jun 2021 14:35:07 GMT" }, { "version": "v3", "created": "Wed, 14 Jul 2021 09:31:15 GMT" }, { "version": "v4", "created": "Sat, 28 Aug 2021 17:07:45 GMT" }, { "version": "v5", "created": "Fri, 17 Dec 2021 02:38:04 GMT" }, { "version": "v6", "created": "Mon, 21 Nov 2022 05:43:10 GMT" }, { "version": "v7", "created": "Mon, 17 Jul 2023 23:30:43 GMT" } ]
2023-07-19T00:00:00
[ [ "Hu", "Hanjiang", "" ], [ "Yang", "Baoquan", "" ], [ "Qiao", "Zhijian", "" ], [ "Liu", "Shiqi", "" ], [ "Zhu", "Jiacheng", "" ], [ "Liu", "Zuxin", "" ], [ "Ding", "Wenhao", "" ], [ "Zhao", "Ding", "" ], [ "Wang", "Hesheng", "" ] ]
new_dataset
0.999637
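Two standard monocular-depth metrics that evaluations like this one build on are sketched below; the benchmark's newly-formulated metrics aggregate statistics of such per-environment quantities, which is omitted here, so treat this only as background.

```python
# Minimal sketch of absolute relative error and the delta < 1.25 accuracy,
# two standard monocular depth evaluation metrics.
import numpy as np

def abs_rel(pred, gt):
    mask = gt > 0
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

def delta1(pred, gt, thr=1.25):
    mask = gt > 0
    ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return np.mean(ratio < thr)
```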
2112.03587
Yang Liu
Yang Liu, Keze Wang, Lingbo Liu, Haoyuan Lan, Liang Lin
TCGL: Temporal Contrastive Graph for Self-supervised Video Representation Learning
This work has been published in IEEE Transactions on Image Processing. The code is publicly available at https://github.com/YangLiu9208/TCGL. arXiv admin note: substantial text overlap with arXiv:2101.00820
null
10.1109/TIP.2022.3147032
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video self-supervised learning is a challenging task, which requires significant expressive power from the model to leverage rich spatial-temporal knowledge and generate effective supervisory signals from large amounts of unlabeled videos. However, existing methods fail to increase the temporal diversity of unlabeled videos and ignore elaborately modeling multi-scale temporal dependencies in an explicit way. To overcome these limitations, we take advantage of the multi-scale temporal dependencies within videos and propose a novel video self-supervised learning framework named Temporal Contrastive Graph Learning (TCGL), which jointly models the inter-snippet and intra-snippet temporal dependencies for temporal representation learning with a hybrid graph contrastive learning strategy. Specifically, a Spatial-Temporal Knowledge Discovering (STKD) module is first introduced to extract motion-enhanced spatial-temporal representations from videos based on the frequency domain analysis of discrete cosine transform. To explicitly model multi-scale temporal dependencies of unlabeled videos, our TCGL integrates the prior knowledge about the frame and snippet orders into graph structures, i.e., the intra-/inter- snippet Temporal Contrastive Graphs (TCG). Then, specific contrastive learning modules are designed to maximize the agreement between nodes in different graph views. To generate supervisory signals for unlabeled videos, we introduce an Adaptive Snippet Order Prediction (ASOP) module which leverages the relational knowledge among video snippets to learn the global context representation and recalibrate the channel-wise features adaptively. Experimental results demonstrate the superiority of our TCGL over the state-of-the-art methods on large-scale action recognition and video retrieval benchmarks. The code is publicly available at https://github.com/YangLiu9208/TCGL.
[ { "version": "v1", "created": "Tue, 7 Dec 2021 09:27:56 GMT" }, { "version": "v2", "created": "Wed, 5 Jan 2022 03:44:26 GMT" }, { "version": "v3", "created": "Mon, 7 Mar 2022 07:23:39 GMT" } ]
2023-07-19T00:00:00
[ [ "Liu", "Yang", "" ], [ "Wang", "Keze", "" ], [ "Liu", "Lingbo", "" ], [ "Lan", "Haoyuan", "" ], [ "Lin", "Liang", "" ] ]
new_dataset
0.998875
2202.12364
Rashmi Boragolla
Rashmi Boragolla and Pradeepa Yahampath
Orthonormal Matrix Codebook Design for Adaptive Transform Coding
Accepted as a poster on the DCC 2022
null
10.1109/TIP.2023.3289064
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
A novel algorithm for designing optimized orthonormal transform-matrix codebooks for adaptive transform coding of a non-stationary vector process is proposed. This algorithm relies on a block-wise stationary model of a non-stationary process and finds a codebook of transform-matrices by minimizing the end-to-end mean square error of transform coding averaged over the distribution of stationary blocks of vectors. The algorithm, which belongs to the class of block-coordinate descent algorithms, solves an intermediate minimization problem involving matrix-orthonormality constraints in a computationally efficient manner by mapping the problem from the Euclidean space to the Stiefel manifold. As such, the algorithm can be broadly applied to any adaptive transform coding problem. Preliminary results obtained with inter-prediction residuals in an H.265 video codec are presented to demonstrate the advantage of optimized adaptive transform codes over non-adaptive codes based on the standard DCT.
[ { "version": "v1", "created": "Thu, 24 Feb 2022 21:04:36 GMT" } ]
2023-07-19T00:00:00
[ [ "Boragolla", "Rashmi", "" ], [ "Yahampath", "Pradeepa", "" ] ]
new_dataset
0.977353
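The orthonormality constraint central to the record above can be illustrated with the standard SVD-based projection: the closest orthogonal matrix to A in Frobenius norm is the orthogonal factor of its polar decomposition. This is a generic manifold-projection step, not the paper's specific block-coordinate descent update.

```python
# Minimal sketch: project an unconstrained matrix back onto the set of
# orthonormal matrices via the SVD (A = U S Vt  ->  Q = U Vt).
import numpy as np

def project_orthonormal(A):
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

A = np.random.randn(8, 8)
Q = project_orthonormal(A)
print(np.allclose(Q @ Q.T, np.eye(8)))   # True: Q is orthonormal
```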
2207.12308
Haoxin Ma
Haoxin Ma, Jiangyan Yi, Chenglong Wang, Xinrui Yan, Jianhua Tao, Tao Wang, Shiming Wang, Ruibo Fu
CFAD: A Chinese Dataset for Fake Audio Detection
FAD renamed as CFAD
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fake audio detection is a growing concern and some relevant datasets have been designed for research. However, there is no standard public Chinese dataset under complex conditions. In this paper, we aim to fill in the gap and design a Chinese fake audio detection dataset (CFAD) for studying more generalized detection methods. Twelve mainstream speech-generation techniques are used to generate fake audio. To simulate the real-life scenarios, three noise datasets are selected for noise adding at five different signal-to-noise ratios, and six codecs are considered for audio transcoding (format conversion). The CFAD dataset can be used not only for fake audio detection but also for detecting the algorithms of fake utterances for audio forensics. Baseline results are presented with analysis. The results show that fake audio detection methods with generalization capability remain challenging. The CFAD dataset is publicly available at: https://zenodo.org/record/8122764.
[ { "version": "v1", "created": "Tue, 12 Jul 2022 13:27:21 GMT" }, { "version": "v2", "created": "Mon, 27 Feb 2023 07:13:54 GMT" }, { "version": "v3", "created": "Tue, 18 Jul 2023 04:21:40 GMT" } ]
2023-07-19T00:00:00
[ [ "Ma", "Haoxin", "" ], [ "Yi", "Jiangyan", "" ], [ "Wang", "Chenglong", "" ], [ "Yan", "Xinrui", "" ], [ "Tao", "Jianhua", "" ], [ "Wang", "Tao", "" ], [ "Wang", "Shiming", "" ], [ "Fu", "Ruibo", "" ] ]
new_dataset
0.999775
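The noise-adding step used to build the five SNR conditions in CFAD can be sketched as scaling a noise clip to hit a target signal-to-noise ratio. This is the generic recipe with illustrative names, not the dataset's exact tooling.

```python
# Minimal sketch: mix noise into speech at a target SNR (in dB).
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """speech, noise: 1-D float arrays of equal length."""
    p_s = np.mean(speech ** 2)
    p_n = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```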
2209.14971
Puhong Duan
Puhong Duan and Xudong Kang and Pedram Ghamisi
Hyperspectral Remote Sensing Benchmark Database for Oil Spill Detection with an Isolation Forest-Guided Unsupervised Detector
null
null
10.1109/TGRS.2023.3268944
null
cs.CV cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Oil spill detection has attracted increasing attention in recent years since marine oil spill accidents severely affect environments, natural resources, and the lives of coastal inhabitants. Hyperspectral remote sensing images provide rich spectral information which is beneficial for the monitoring of oil spills in complex ocean scenarios. However, most of the existing approaches are based on supervised and semi-supervised frameworks to detect oil spills from hyperspectral images (HSIs), which require a huge amount of effort to annotate a certain number of high-quality training sets. In this study, we make the first attempt to develop an unsupervised oil spill detection method based on isolation forest for HSIs. First, considering that the noise level varies among different bands, a noise variance estimation method is exploited to evaluate the noise level of different bands, and the bands corrupted by severe noise are removed. Second, kernel principal component analysis (KPCA) is employed to reduce the high dimensionality of the HSIs. Then, the probability of each pixel belonging to one of the classes of seawater and oil spills is estimated with the isolation forest, and a set of pseudo-labeled training samples is automatically produced using the clustering algorithm on the detected probability. Finally, an initial detection map can be obtained by performing the support vector machine (SVM) on the dimension-reduced data, and then, the initial detection result is further optimized with the extended random walker (ERW) model so as to improve the detection accuracy of oil spills. Experiments on airborne hyperspectral oil spill data (HOSD) that we created demonstrate that the proposed method obtains superior detection performance with respect to other state-of-the-art detection approaches.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 02:26:42 GMT" } ]
2023-07-19T00:00:00
[ [ "Duan", "Puhong", "" ], [ "Kang", "Xudong", "" ], [ "Ghamisi", "Pedram", "" ] ]
new_dataset
0.999244
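The unsupervised core of the pipeline above, dimensionality reduction followed by isolation-forest scoring, can be sketched directly with scikit-learn. Band removal, pseudo-labeling, the SVM, and ERW refinement are omitted, and all hyperparameters are illustrative.

```python
# Minimal sketch: reduce HSI pixel spectra with kernel PCA, then score each
# pixel with an isolation forest (lower score = more anomalous).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import IsolationForest

def anomaly_scores(hsi, n_components=10):
    """hsi: (H, W, B) hyperspectral cube. Returns an (H, W) score map."""
    H, W, B = hsi.shape
    X = hsi.reshape(-1, B)
    Z = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(X)
    iso = IsolationForest(n_estimators=100, random_state=0).fit(Z)
    return iso.score_samples(Z).reshape(H, W)
```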
2212.02405
Oleg Zaikin
Oleg Zaikin
Inverting Cryptographic Hash Functions via Cube-and-Conquer
40 pages, 11 figures. A revised submission to JAIR
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MD4 and MD5 are seminal cryptographic hash functions proposed in the early 1990s. MD4 consists of 48 steps and produces a 128-bit hash given a message of arbitrary finite size. MD5 is a more secure 64-step extension of MD4. Both MD4 and MD5 are vulnerable to practical collision attacks, yet it is still not realistic to invert them, i.e. to find a message given a hash. In 2007, the 39-step version of MD4 was inverted via reducing to SAT and applying a CDCL solver along with the so-called Dobbertin's constraints. As for MD5, in 2012 its 28-step version was inverted via a CDCL solver for one specified hash without adding any additional constraints. In this study, Cube-and-Conquer (a combination of CDCL and lookahead) is applied to invert step-reduced versions of MD4 and MD5. For this purpose, two algorithms are proposed. The first one generates inversion problems for MD4 by gradually modifying the Dobbertin's constraints. The second algorithm tries the cubing phase of Cube-and-Conquer with different cutoff thresholds to find the one with minimal runtime estimation of the conquer phase. This algorithm operates in two modes: (i) estimating the hardness of a given propositional Boolean formula; (ii) incomplete SAT-solving of a given satisfiable propositional Boolean formula. While the first algorithm is focused on inverting step-reduced MD4, the second one is not area-specific and so is applicable to a variety of classes of hard SAT instances. In this study, 40-, 41-, 42-, and 43-step MD4 are inverted for the first time via the first algorithm and the estimating mode of the second algorithm. 28-step MD5 is inverted for four hashes via the incomplete SAT-solving mode of the second algorithm. For three hashes out of them this is done for the first time.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 16:44:47 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 08:53:42 GMT" } ]
2023-07-19T00:00:00
[ [ "Zaikin", "Oleg", "" ] ]
new_dataset
0.984619
2302.06060
Xuxiang Sun
Xuxiang Sun, Gong Cheng, Lei Pei, Hongda Li, and Junwei Han
Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images
null
null
10.1109/TGRS.2023.3273287
null
cs.CV cs.AI cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advanced Patch Attacks (PAs) on object detection in natural images have pointed out the great safety vulnerability in methods based on deep neural networks. However, little attention has been paid to this topic in Optical Remote Sensing Images (O-RSIs). To this end, we focus on this research, i.e., PAs on object detection in O-RSIs, and propose a more Threatening PA, dubbed TPA, that does not sacrifice visual quality. Specifically, to address the problem of inconsistency between local and global landscapes in existing patch selection schemes, we propose leveraging the First-Order Difference (FOD) of the objective function before and after masking to select the sub-patches to be attacked. Further, considering the problem of gradient inundation when applying existing coordinate-based loss to PAs directly, we design an IoU-based objective function specific for PAs, dubbed Bounding box Drifting Loss (BDL), which pushes the detected bounding boxes far from the initial ones until there are no intersections between them. Finally, on two widely used benchmarks, i.e., DIOR and DOTA, comprehensive evaluations of our TPA with four typical detectors (Faster R-CNN, FCOS, RetinaNet, and YOLO-v4) witness its remarkable effectiveness. To the best of our knowledge, this is the first attempt to study the PAs on object detection in O-RSIs, and we hope this work can get our readers interested in studying this topic.
[ { "version": "v1", "created": "Mon, 13 Feb 2023 02:35:49 GMT" } ]
2023-07-19T00:00:00
[ [ "Sun", "Xuxiang", "" ], [ "Cheng", "Gong", "" ], [ "Pei", "Lei", "" ], [ "Li", "Hongda", "" ], [ "Han", "Junwei", "" ] ]
new_dataset
0.999089
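The shape of an IoU-based "drifting" objective like BDL can be sketched as follows: the attack minimizes the IoU between boxes detected on the adversarial image and the clean-image boxes, pushing them apart until they no longer intersect. This illustrates the loss form only, not the paper's exact formulation or its FOD patch selection.

```python
# Minimal sketch of an IoU loss whose minimization drives predicted boxes
# away from their initial positions.
import torch

def bdl_loss(pred_boxes, init_boxes, eps=1e-9):
    """pred_boxes, init_boxes: (N, 4) tensors in (x1, y1, x2, y2) format,
    matched one-to-one. Minimizing this drives the IoU toward zero."""
    x1 = torch.maximum(pred_boxes[:, 0], init_boxes[:, 0])
    y1 = torch.maximum(pred_boxes[:, 1], init_boxes[:, 1])
    x2 = torch.minimum(pred_boxes[:, 2], init_boxes[:, 2])
    y2 = torch.minimum(pred_boxes[:, 3], init_boxes[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = ((pred_boxes[:, 2] - pred_boxes[:, 0])
              * (pred_boxes[:, 3] - pred_boxes[:, 1]))
    area_i = ((init_boxes[:, 2] - init_boxes[:, 0])
              * (init_boxes[:, 3] - init_boxes[:, 1]))
    return (inter / (area_p + area_i - inter + eps)).mean()
```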
2304.12125
Iacopo Catalano
Iacopo Catalano, Ha Sier, Xianjia Yu, Tomi Westerlund, Jorge Pena Queralta
UAV Tracking with Solid-State Lidars: Dynamic Multi-Frequency Scan Integration
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing use of drones across various industries, the navigation and tracking of these unmanned aerial vehicles (UAVs) in challenging environments (such as GNSS-denied environments) have become critical issues. In this paper, we propose a novel method for a ground-based UAV tracking system using a solid-state LiDAR, which dynamically adjusts the LiDAR frame integration time based on the distance to the UAV and its speed. Our method fuses two simultaneous scan integration frequencies for high accuracy and persistent tracking, enabling reliable estimates of the UAV state even in challenging scenarios. The use of the Inverse Covariance Intersection method and Kalman filters allow for better tracking accuracy and can handle challenging tracking scenarios. We have performed a number of experiments for evaluating the performance of the proposed tracking system and identifying its limitations. Our experimental results demonstrate that the proposed method achieves comparable tracking performance to the established baseline method, while also providing more reliable and accurate tracking when only one of the frequencies is available or unreliable.
[ { "version": "v1", "created": "Mon, 24 Apr 2023 14:30:20 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 12:05:55 GMT" } ]
2023-07-19T00:00:00
[ [ "Catalano", "Iacopo", "" ], [ "Sier", "Ha", "" ], [ "Yu", "Xianjia", "" ], [ "Westerlund", "Tomi", "" ], [ "Queralta", "Jorge Pena", "" ] ]
new_dataset
0.990889
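Fusing the two scan-integration estimates with unknown cross-correlation is the role the paper assigns to Inverse Covariance Intersection. For illustration, the sketch below implements the classic Covariance Intersection rule, a close and simpler relative, with a coarse grid search for the mixing weight.

```python
# Minimal sketch of Covariance Intersection: fuse two (mean, covariance)
# estimates without knowing their correlation.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=50):
    """x1, x2: state mean vectors; P1, P2: covariances. Returns (x, P)."""
    best = None
    for w in np.linspace(1e-3, 1 - 1e-3, n_grid):
        P = np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2))
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1
                     + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best
```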
2305.02763
Vageesh Saxena
Vageesh Saxena, Nils Rethmeier, Gijs Van Dijck, Gerasimos Spanakis
VendorLink: An NLP approach for Identifying & Linking Vendor Migrants & Potential Aliases on Darknet Markets
null
null
null
null
cs.CY cs.CL cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
The anonymity on the Darknet allows vendors to stay undetected by using multiple vendor aliases or frequently migrating between markets. Consequently, illegal markets and their connections are challenging to uncover on the Darknet. To identify relationships between illegal markets and their vendors, we propose VendorLink, an NLP-based approach that examines writing patterns to verify, identify, and link unique vendor accounts across text advertisements (ads) on seven public Darknet markets. In contrast to existing literature, VendorLink utilizes the strength of supervised pre-training to perform closed-set vendor verification, open-set vendor identification, and low-resource market adaption tasks. Through VendorLink, we uncover (i) 15 migrants and 71 potential aliases in the Alphabay-Dreams-Silk dataset, (ii) 17 migrants and 3 potential aliases in the Valhalla-Berlusconi dataset, and (iii) 75 migrants and 10 potential aliases in the Traderoute-Agora dataset. Altogether, our approach can help Law Enforcement Agencies (LEA) make more informed decisions by verifying and identifying migrating vendors and their potential aliases on existing and Low-Resource (LR) emerging Darknet markets.
[ { "version": "v1", "created": "Thu, 4 May 2023 12:04:33 GMT" } ]
2023-07-19T00:00:00
[ [ "Saxena", "Vageesh", "" ], [ "Rethmeier", "Nils", "" ], [ "Van Dijck", "Gijs", "" ], [ "Spanakis", "Gerasimos", "" ] ]
new_dataset
0.98967
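The stylometric idea underneath vendor verification can be sketched as character n-gram TF-IDF features plus a linear classifier over known vendor aliases. VendorLink's supervised pre-training and open-set identification are considerably richer; the tiny dataset below is a placeholder.

```python
# Minimal sketch: attribute an ad to a vendor from character n-gram style.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ads = ["high quality, fast and stealthy shipping",
       "best price on the market, ships worldwide",
       "fast n stealthy shipping, top quality",
       "ships worldwide at the best market price"]
vendors = ["vendor_a", "vendor_b", "vendor_a", "vendor_b"]  # placeholder

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000))
model.fit(ads, vendors)
print(model.predict(["stealthy shipping, top quality, fast"]))
```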
2306.01987
Sidong Feng
Sidong Feng, Chunyang Chen
Prompting Is All You Need: Automated Android Bug Replay with Large Language Models
Accepted to 46th International Conference on Software Engineering (ICSE 2024)
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Bug reports are vital for software maintenance that allow users to inform developers of the problems encountered while using the software. As such, researchers have committed considerable resources toward automating bug replay to expedite the process of software maintenance. Nonetheless, the success of current automated approaches is largely dictated by the characteristics and quality of bug reports, as they are constrained by the limitations of manually-crafted patterns and pre-defined vocabulary lists. Inspired by the success of Large Language Models (LLMs) in natural language understanding, we propose AdbGPT, a new lightweight approach to automatically reproduce the bugs from bug reports through prompt engineering, without any training and hard-coding effort. AdbGPT leverages few-shot learning and chain-of-thought reasoning to elicit human knowledge and logical reasoning from LLMs to accomplish the bug replay in a manner similar to a developer. Our evaluations demonstrate the effectiveness and efficiency of our AdbGPT to reproduce 81.3% of bug reports in 253.6 seconds, outperforming the state-of-the-art baselines and ablation studies. We also conduct a small-scale user study to confirm the usefulness of AdbGPT in enhancing developers' bug replay capabilities.
[ { "version": "v1", "created": "Sat, 3 Jun 2023 03:03:52 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 06:20:51 GMT" } ]
2023-07-19T00:00:00
[ [ "Feng", "Sidong", "" ], [ "Chen", "Chunyang", "" ] ]
new_dataset
0.98545
2306.07229
Giuseppe Silano
Daniel Hert and Tomas Baca and Pavel Petracek and Vit Kratky and Robert Penicka and Vojtech Spurny and Matej Petrlik and Matous Vrba and David Zaitlik and Pavel Stoudek and Viktor Walter and Petr Stepan and Jiri Horyna and Vaclav Pritzl and Martin Sramek and Afzal Ahmad and Giuseppe Silano and Daniel Bonilla Licea and Petr Stibinger and Tiago Nascimento and Martin Saska
MRS Drone: A Modular Platform for Real-World Deployment of Aerial Multi-Robot Systems
49 pages, 39 figures, accepted for publication to the Journal of Intelligent & Robotic Systems
Journal of Intelligent & Robotic Systems, 2023, vol. 108, issue 64
10.1007/s10846-023-01879-2
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a modular autonomous Unmanned Aerial Vehicle (UAV) platform called the Multi-robot Systems (MRS) Drone that can be used in a large range of indoor and outdoor applications. The MRS Drone features unique modularity with respect to changes in actuators, frames, and sensory configuration. As the name suggests, the platform is specially tailored for deployment within an MRS group. The MRS Drone contributes to the state-of-the-art of UAV platforms by allowing smooth real-world deployment of multiple aerial robots, as well as by outperforming other platforms with its modularity. For real-world multi-robot deployment in various applications, the platform is easy to both assemble and modify. Moreover, it is accompanied by a realistic simulator to enable safe pre-flight testing and a smooth transition to complex real-world experiments. In this manuscript, we present mechanical and electrical designs, software architecture, and technical specifications to build a fully autonomous multi UAV system. Finally, we demonstrate the full capabilities and the unique modularity of the MRS Drone in various real-world applications that required a diverse range of platform configurations.
[ { "version": "v1", "created": "Mon, 12 Jun 2023 16:41:59 GMT" } ]
2023-07-19T00:00:00
[ [ "Hert", "Daniel", "" ], [ "Baca", "Tomas", "" ], [ "Petracek", "Pavel", "" ], [ "Kratky", "Vit", "" ], [ "Penicka", "Robert", "" ], [ "Spurny", "Vojtech", "" ], [ "Petrlik", "Matej", "" ], [ "Vrba", "Matous", "" ], [ "Zaitlik", "David", "" ], [ "Stoudek", "Pavel", "" ], [ "Walter", "Viktor", "" ], [ "Stepan", "Petr", "" ], [ "Horyna", "Jiri", "" ], [ "Pritzl", "Vaclav", "" ], [ "Sramek", "Martin", "" ], [ "Ahmad", "Afzal", "" ], [ "Silano", "Giuseppe", "" ], [ "Licea", "Daniel Bonilla", "" ], [ "Stibinger", "Petr", "" ], [ "Nascimento", "Tiago", "" ], [ "Saska", "Martin", "" ] ]
new_dataset
0.99928
2306.15788
F\'abio Vin\'icius Moreira Perez
Maria Carolina Penteado, F\'abio Perez
Evaluating GPT-3.5 and GPT-4 on Grammatical Error Correction for Brazilian Portuguese
Download the full source to access the dataset. Accepted to LatinX in AI (LXAI) Research at ICML 2023
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We investigate the effectiveness of GPT-3.5 and GPT-4, two large language models, as Grammatical Error Correction (GEC) tools for Brazilian Portuguese and compare their performance against Microsoft Word and Google Docs. We introduce a GEC dataset for Brazilian Portuguese with four categories: Grammar, Spelling, Internet, and Fast typing. Our results show that while GPT-4 has higher recall than other methods, LLMs tend to have lower precision, leading to overcorrection. This study demonstrates the potential of LLMs as practical GEC tools for Brazilian Portuguese and encourages further exploration of LLMs for non-English languages and other educational settings.
[ { "version": "v1", "created": "Tue, 27 Jun 2023 20:37:54 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 13:31:56 GMT" } ]
2023-07-19T00:00:00
[ [ "Penteado", "Maria Carolina", "" ], [ "Perez", "Fábio", "" ] ]
new_dataset
0.992695
2307.04964
Songyang Gao
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang
Secrets of RLHF in Large Language Models Part I: PPO
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Their primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include \textbf{reward models} to measure human preferences, \textbf{Proximal Policy Optimization} (PPO) to optimize policy model outputs, and \textbf{process supervision} to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to motivate the development of technical alignment and safe landing of LLMs. The stable training of RLHF remains a puzzle. In the first report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints as the key factor for the effective implementation of the PPO algorithm. Therefore, we explore the PPO-max, an advanced version of PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. The absence of open-source implementations has posed significant challenges to the investigation of LLMs alignment. Therefore, we are eager to release technical reports, reward models and PPO codes, aiming to make modest contributions to the advancement of LLMs.
[ { "version": "v1", "created": "Tue, 11 Jul 2023 01:55:24 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 08:44:47 GMT" } ]
2023-07-19T00:00:00
[ [ "Zheng", "Rui", "" ], [ "Dou", "Shihan", "" ], [ "Gao", "Songyang", "" ], [ "Hua", "Yuan", "" ], [ "Shen", "Wei", "" ], [ "Wang", "Binghai", "" ], [ "Liu", "Yan", "" ], [ "Jin", "Senjie", "" ], [ "Liu", "Qin", "" ], [ "Zhou", "Yuhao", "" ], [ "Xiong", "Limao", "" ], [ "Chen", "Lu", "" ], [ "Xi", "Zhiheng", "" ], [ "Xu", "Nuo", "" ], [ "Lai", "Wenbin", "" ], [ "Zhu", "Minghao", "" ], [ "Chang", "Cheng", "" ], [ "Yin", "Zhangyue", "" ], [ "Weng", "Rongxiang", "" ], [ "Cheng", "Wensen", "" ], [ "Huang", "Haoran", "" ], [ "Sun", "Tianxiang", "" ], [ "Yan", "Hang", "" ], [ "Gui", "Tao", "" ], [ "Zhang", "Qi", "" ], [ "Qiu", "Xipeng", "" ], [ "Huang", "Xuanjing", "" ] ]
new_dataset
0.956846
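The clipped PPO policy objective that the report dissects has a compact textbook form, sketched below for a batch of token-level log-probabilities and advantages; this is the standard objective, not the paper's PPO-max variant.

```python
# Minimal sketch of the clipped PPO surrogate loss.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """All inputs: 1-D tensors over sampled actions/tokens."""
    ratio = torch.exp(logp_new - logp_old)      # importance ratio pi/pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```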
2307.05935
Zeqing Zhang
Zeqing Zhang, Ruixing Jia, Youcan Yan, Ruihua Han, Shijie Lin, Qian Jiang, Liangjun Zhang, Jia Pan
GRAINS: Proximity Sensing of Objects in Granular Materials
35 pages, 5 figures,2 tables. Videos available at https://sites.google.com/view/grains2/home
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Proximity sensing detects an object's presence without contact. However, research has rarely explored proximity sensing in granular materials (GM) due to the lack of visual access into GM and GM's complex properties. In this paper, we propose a granular-material-embedded autonomous proximity sensing system (GRAINS) based on three granular phenomena (fluidization, jamming, and failure wedge zone). GRAINS can automatically sense buried objects beneath GM in a real-time manner (at least ~20 hertz) and perceive them 0.5 ~ 7 centimeters ahead in different granules without the use of vision or touch. We introduce a new spiral trajectory for the probe raking in GM, combining linear and circular motions, inspired by a common granular fluidization technique. Based on the observation that the measured force rises when granular jamming occurs in the failure wedge zone in front of the probe during raking, we employ Gaussian process regression to constantly learn and predict the force patterns and detect the force anomaly resulting from granular jamming, which realizes the proximity sensing of buried objects. Finally, we apply GRAINS to a Bayesian-optimization-algorithm-guided exploration strategy to successfully localize underground objects and outline their distribution using proximity sensing without contact or digging. This work offers a simple yet reliable method with potential for safe operation in building habitation infrastructure on an alien planet without human intervention.
[ { "version": "v1", "created": "Wed, 12 Jul 2023 06:00:14 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 04:51:23 GMT" } ]
2023-07-19T00:00:00
[ [ "Zhang", "Zeqing", "" ], [ "Jia", "Ruixing", "" ], [ "Yan", "Youcan", "" ], [ "Han", "Ruihua", "" ], [ "Lin", "Shijie", "" ], [ "Jiang", "Qian", "" ], [ "Zhang", "Liangjun", "" ], [ "Pan", "Jia", "" ] ]
new_dataset
0.995524
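The force-anomaly detection at the heart of GRAINS can be sketched with a Gaussian process over recent force readings: a new sample is flagged when it exceeds the predicted mean by several predicted standard deviations. Window handling, kernel choice, and the threshold below are illustrative, not the paper's values.

```python
# Minimal sketch: GP-based anomaly flag on a force time series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def is_anomalous(t_hist, f_hist, t_new, f_new, k_sigma=3.0):
    """t_hist, f_hist: past timestamps and force readings (1-D arrays)."""
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True)
    gpr.fit(t_hist.reshape(-1, 1), f_hist)
    mu, sigma = gpr.predict(np.array([[t_new]]), return_std=True)
    return f_new > mu[0] + k_sigma * sigma[0]
```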
2307.06932
Vasileios Mavroeidis Dr.
Konstantinos Fysarakis, Alexios Lekidis, Vasileios Mavroeidis, Konstantinos Lampropoulos, George Lyberopoulos, Ignasi Garcia-Mil\`a Vidal, Jos\'e Carles Ter\'es i Casals, Eva Rodriguez Luna, Alejandro Antonio Moreno Sancho, Antonios Mavrelos, Marinos Tsantekidis, Sebastian Pape, Argyro Chatzopoulou, Christina Nanou, George Drivas, Vangelis Photiou, George Spanoudakis, Odysseas Koufopavlou
PHOENI2X -- A European Cyber Resilience Framework With Artificial-Intelligence-Assisted Orchestration, Automation and Response Capabilities for Business Continuity and Recovery, Incident Response, and Information Exchange
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As digital technologies become more pervasive in society and the economy, cybersecurity incidents become more frequent and impactful. According to the NIS and NIS2 Directives, EU Member States and their Operators of Essential Services must establish a minimum baseline set of cybersecurity capabilities and engage in cross-border coordination and cooperation. However, this is only a small step towards European cyber resilience. In this landscape, preparedness, shared situational awareness, and coordinated incident response are essential for effective cyber crisis management and resilience. Motivated by the above, this paper presents PHOENI2X, an EU-funded project aiming to design, develop, and deliver a Cyber Resilience Framework providing Artificial-Intelligence-assisted orchestration, automation and response capabilities for business continuity and recovery, incident response, and information exchange, tailored to the needs of Operators of Essential Services and the EU Member State authorities entrusted with cybersecurity.
[ { "version": "v1", "created": "Thu, 13 Jul 2023 17:53:25 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 10:05:40 GMT" } ]
2023-07-19T00:00:00
[ [ "Fysarakis", "Konstantinos", "" ], [ "Lekidis", "Alexios", "" ], [ "Mavroeidis", "Vasileios", "" ], [ "Lampropoulos", "Konstantinos", "" ], [ "Lyberopoulos", "George", "" ], [ "Vidal", "Ignasi Garcia-Milà", "" ], [ "Casals", "José Carles Terés i", "" ], [ "Luna", "Eva Rodriguez", "" ], [ "Sancho", "Alejandro Antonio Moreno", "" ], [ "Mavrelos", "Antonios", "" ], [ "Tsantekidis", "Marinos", "" ], [ "Pape", "Sebastian", "" ], [ "Chatzopoulou", "Argyro", "" ], [ "Nanou", "Christina", "" ], [ "Drivas", "George", "" ], [ "Photiou", "Vangelis", "" ], [ "Spanoudakis", "George", "" ], [ "Koufopavlou", "Odysseas", "" ] ]
new_dataset
0.998785
2307.08763
Kumar Ashutosh
Kumar Ashutosh, Santhosh Kumar Ramakrishnan, Triantafyllos Afouras, Kristen Grauman
Video-Mined Task Graphs for Keystep Recognition in Instructional Videos
Technical Report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Procedural activity understanding requires perceiving human actions in terms of a broader task, where multiple keysteps are performed in sequence across a long video to reach a final goal state -- such as the steps of a recipe or a DIY fix-it task. Prior work largely treats keystep recognition in isolation of this broader structure, or else rigidly confines keysteps to align with a predefined sequential script. We propose discovering a task graph automatically from how-to videos to represent probabilistically how people tend to execute keysteps, and then leverage this graph to regularize keystep recognition in novel videos. On multiple datasets of real-world instructional videos, we show the impact: more reliable zero-shot keystep localization and improved video representation learning, exceeding the state of the art.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 18:19:36 GMT" } ]
2023-07-19T00:00:00
[ [ "Ashutosh", "Kumar", "" ], [ "Ramakrishnan", "Santhosh Kumar", "" ], [ "Afouras", "Triantafyllos", "" ], [ "Grauman", "Kristen", "" ] ]
new_dataset
0.997927
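Mining a probabilistic task graph from keystep sequences and using it to regularize per-frame predictions can be sketched with a first-order transition matrix; the paper's graph and inference are more elaborate, so the blending rule and smoothing below are only illustrative.

```python
# Minimal sketch: estimate keystep transitions from training sequences and
# blend them into per-frame keystep scores.
import numpy as np

def transition_matrix(sequences, n_steps, alpha=1.0):
    """sequences: lists of keystep ids. Returns a row-normalized (K, K)
    matrix with additive (Laplace) smoothing alpha."""
    T = np.full((n_steps, n_steps), alpha)
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def rescore(frame_probs, T, lam=0.5):
    """frame_probs: (N, K) per-frame keystep probabilities."""
    out = frame_probs.copy()
    for i in range(1, len(out)):
        prev = np.argmax(out[i - 1])     # previous frame's best keystep
        out[i] = (1 - lam) * frame_probs[i] + lam * T[prev]
    return out
```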
2307.08781
Eric Orenstein
Eric Orenstein, Kevin Barnard, Lonny Lundsten, Genevi\`eve Patterson, Benjamin Woodward, and Kakani Katija
The FathomNet2023 Competition Dataset
Competition was presented as part of the 10th Fine Grained Visual Categorization workshop at the 2023 Computer Vision and Pattern Recognition conference. 4 pages, 4 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Ocean scientists have been collecting visual data to study marine organisms for decades. These images and videos are extremely valuable both for basic science and environmental monitoring tasks. There are tools for automatically processing these data, but none that are capable of handling the extreme variability in sample populations, image quality, and habitat characteristics that are common in visual sampling of the ocean. Such distribution shifts can occur over very short physical distances and in narrow time windows. Creating models that are able to recognize when an image or video sequence contains a new organism, an unusual collection of animals, or is otherwise out-of-sample is critical to fully leverage visual data in the ocean. The FathomNet2023 competition dataset presents a realistic scenario where the set of animals in the target data differs from the training data. The challenge is both to identify the organisms in a target image and assess whether it is out-of-sample.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 18:50:53 GMT" } ]
2023-07-19T00:00:00
[ [ "Orenstein", "Eric", "" ], [ "Barnard", "Kevin", "" ], [ "Lundsten", "Lonny", "" ], [ "Patterson", "Geneviève", "" ], [ "Woodward", "Benjamin", "" ], [ "Katija", "Kakani", "" ] ]
new_dataset
0.999857
2307.08850
Senthil Yogamani
Sambit Mohapatra, Senthil Yogamani, Varun Ravi Kumar, Stefan Milz, Heinrich Gotzig and Patrick M\"ader
LiDAR-BEVMTN: Real-Time LiDAR Bird's-Eye View Multi-Task Perception Network for Autonomous Driving
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LiDAR is crucial for robust 3D scene perception in autonomous driving. LiDAR perception has the largest body of literature after camera perception. However, multi-task learning across tasks like detection, segmentation, and motion estimation using LiDAR remains relatively unexplored, especially on automotive-grade embedded platforms. We present a real-time multi-task convolutional neural network for LiDAR-based object detection, semantics, and motion segmentation. The unified architecture comprises a shared encoder and task-specific decoders, enabling joint representation learning. We propose a novel Semantic Weighting and Guidance (SWAG) module to transfer semantic features for improved object detection selectively. Our heterogeneous training scheme combines diverse datasets and exploits complementary cues between tasks. The work provides the first embedded implementation unifying these key perception tasks from LiDAR point clouds achieving 3ms latency on the embedded NVIDIA Xavier platform. We achieve state-of-the-art results for two tasks, semantic and motion segmentation, and close to state-of-the-art performance for 3D object detection. By maximizing hardware efficiency and leveraging multi-task synergies, our method delivers an accurate and efficient solution tailored for real-world automated driving deployment. Qualitative results can be seen at https://youtu.be/H-hWRzv2lIY.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 21:22:17 GMT" } ]
2023-07-19T00:00:00
[ [ "Mohapatra", "Sambit", "" ], [ "Yogamani", "Senthil", "" ], [ "Kumar", "Varun Ravi", "" ], [ "Milz", "Stefan", "" ], [ "Gotzig", "Heinrich", "" ], [ "Mäder", "Patrick", "" ] ]
new_dataset
0.9996
2307.08946
Ning Gao
Ning Gao, Qiying Huang, Cen Li, Shi Jin, Michail Matthaiou
EsaNet: Environment Semantics Enabled Physical Layer Authentication
null
null
null
null
cs.CR eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless networks are vulnerable to physical layer spoofing attacks due to the broadcast nature of the wireless medium; thus, integrating communications and security (ICAS) is urgently needed for 6G endogenous security. In this letter, we propose an environment semantics enabled physical layer authentication network based on deep learning, namely EsaNet, to authenticate spoofing at the level of the underlying wireless protocol. Specifically, the frequency-independent wireless channel fingerprint (FiFP) is extracted from the channel state information (CSI) of a massive multiple-input multiple-output (MIMO) system based on environment semantics knowledge. Then, we transform the received signal into a two-dimensional red green blue (RGB) image and apply You Only Look Once (YOLO), a single-stage object detection network, to quickly capture the FiFP. Next, a lightweight classification network is designed to distinguish legitimate from illegitimate users. Finally, the experimental results show that the proposed EsaNet can effectively detect physical layer spoofing attacks and is robust in time-varying wireless environments.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 03:28:26 GMT" } ]
2023-07-19T00:00:00
[ [ "Gao", "Ning", "" ], [ "Huang", "Qiying", "" ], [ "Li", "Cen", "" ], [ "Jin", "Shi", "" ], [ "Matthaiou", "Michail", "" ] ]
new_dataset
0.998728
2307.08985
Seungho Baek
Seungho Baek, Hyerin Im, Jiseung Ryu, Juhyeong Park, Takyeon Lee
PromptCrafter: Crafting Text-to-Image Prompt through Mixed-Initiative Dialogue with LLM
5 pages, AI & HCI Workshop at the 40th International Conference on Machine Learning (ICML) 2023
null
null
null
cs.HC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-image generation models are able to generate images across a diverse range of subjects and styles based on a single prompt. Recent works have proposed a variety of interaction methods that help users understand the capabilities of models and utilize them. However, how to support users in efficiently exploring a model's capability and creating effective prompts remains an open research question. In this paper, we present PromptCrafter, a novel mixed-initiative system that allows step-by-step crafting of text-to-image prompts. Through the iterative process, users can efficiently explore the model's capability and clarify their intent. PromptCrafter also supports users in refining prompts by answering clarifying questions generated by a Large Language Model. Lastly, users can revert to a desired step by reviewing the work history. In this workshop paper, we discuss the design process of PromptCrafter and our plans for follow-up studies.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 05:51:00 GMT" } ]
2023-07-19T00:00:00
[ [ "Baek", "Seungho", "" ], [ "Im", "Hyerin", "" ], [ "Ryu", "Jiseung", "" ], [ "Park", "Juhyeong", "" ], [ "Lee", "Takyeon", "" ] ]
new_dataset
0.970902
2307.09000
Tengfei Xue
Tengfei Xue, Yuqian Chen, Chaoyi Zhang, Alexandra J. Golby, Nikos Makris, Yogesh Rathi, Weidong Cai, Fan Zhang, Lauren J. O'Donnell
TractCloud: Registration-free tractography parcellation with a novel local-global streamline point cloud representation
MICCAI 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion MRI tractography parcellation classifies streamlines into anatomical fiber tracts to enable quantification and visualization for clinical and scientific applications. Current tractography parcellation methods rely heavily on registration, but registration inaccuracies can affect parcellation and the computational cost of registration is high for large-scale datasets. Recently, deep-learning-based methods have been proposed for tractography parcellation using various types of representations for streamlines. However, these methods only focus on the information from a single streamline, ignoring geometric relationships between the streamlines in the brain. We propose TractCloud, a registration-free framework that performs whole-brain tractography parcellation directly in individual subject space. We propose a novel, learnable, local-global streamline representation that leverages information from neighboring and whole-brain streamlines to describe the local anatomy and global pose of the brain. We train our framework on a large-scale labeled tractography dataset, which we augment by applying synthetic transforms including rotation, scaling, and translations. We test our framework on five independently acquired datasets across populations and health conditions. TractCloud significantly outperforms several state-of-the-art methods on all testing datasets. TractCloud achieves efficient and consistent whole-brain white matter parcellation across the lifespan (from neonates to elderly subjects, including brain tumor patients) without the need for registration. The robustness and high inference speed of TractCloud make it suitable for large-scale tractography data analysis. Our project page is available at https://tractcloud.github.io/.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 06:35:12 GMT" } ]
2023-07-19T00:00:00
[ [ "Xue", "Tengfei", "" ], [ "Chen", "Yuqian", "" ], [ "Zhang", "Chaoyi", "" ], [ "Golby", "Alexandra J.", "" ], [ "Makris", "Nikos", "" ], [ "Rathi", "Yogesh", "" ], [ "Cai", "Weidong", "" ], [ "Zhang", "Fan", "" ], [ "O'Donnell", "Lauren J.", "" ] ]
new_dataset
0.999691
2307.09002
Susu Cui
Susu Cui, Cong Dong, Meng Shen, Yuling Liu, Bo Jiang, Zhigang Lu
CBSeq: A Channel-level Behavior Sequence For Encrypted Malware Traffic Detection
Submitted to IEEE TIFS
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning and neural networks have become increasingly popular solutions for encrypted malware traffic detection. They mine and learn complex traffic patterns, enabling detection by fitting boundaries between malware traffic and benign traffic. Compared with signature-based methods, they have higher scalability and flexibility. However, affected by frequent malware variants and updates, current methods suffer from a high false positive rate and do not work well for unknown malware traffic detection. Achieving effective malware traffic detection thus remains a critical task. In this paper, we introduce CBSeq to address the above problems. CBSeq is a method that constructs a stable traffic representation, the behavior sequence, to characterize attacking intent and achieve malware traffic detection. As a novel design, we treat channels with similar behavior as the detection object and extract side-channel content to construct the behavior sequence. Unlike benign activities, the behavior sequences of malware and its variants' traffic exhibit strong internal correlations. Moreover, we design MSFormer, a powerful Transformer-based multi-sequence fusion classifier. It captures the internal similarity of behavior sequences, thereby distinguishing malware traffic from benign traffic. Our evaluations demonstrate that CBSeq performs effectively in detecting various known malware traffic and exhibits superior performance in unknown malware traffic detection, outperforming state-of-the-art methods.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 06:38:20 GMT" } ]
2023-07-19T00:00:00
[ [ "Cui", "Susu", "" ], [ "Dong", "Cong", "" ], [ "Shen", "Meng", "" ], [ "Liu", "Yuling", "" ], [ "Jiang", "Bo", "" ], [ "Lu", "Zhigang", "" ] ]
new_dataset
0.997206
2307.09044
Qipeng Li
Qipeng Li, Yuan Zhuang, Yiwen Chen, Jianzhu Huai, Miao Li, Tianbing Ma, Yufei Tang, Xinlian Liang
3D-SeqMOS: A Novel Sequential 3D Moving Object Segmentation in Autonomous Driving
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For SLAM systems in robotics and autonomous driving, the accuracy of front-end odometry and back-end loop-closure detection determines the performance of the whole intelligent system. However, LiDAR-SLAM can be disturbed by moving objects in the current scene, resulting in drift errors and even loop-closure failure. Thus, the ability to detect and segment moving objects is essential for high-precision positioning and building a consistent map. In this paper, we address the problem of moving object segmentation from 3D LiDAR scans to improve the odometry and loop-closure accuracy of SLAM. We propose a novel 3D Sequential Moving-Object-Segmentation (3D-SeqMOS) method that can accurately segment the scene into moving and static objects, such as moving and static cars. Different from existing projected-image methods, we process the raw 3D point cloud and build a 3D convolutional neural network for the MOS task. In addition, to make full use of the spatio-temporal information of the point cloud, we propose a point cloud residual mechanism using the spatial features of the current scan and the temporal features of previous residual scans. Besides, we build a complete SLAM framework to verify the effectiveness and accuracy of 3D-SeqMOS. Experiments on the SemanticKITTI dataset show that our proposed 3D-SeqMOS method can effectively detect moving objects and improve the accuracy of LiDAR odometry and loop-closure detection. The test results show our 3D-SeqMOS outperforms the state-of-the-art method by 12.4%. We extend the proposed method to the SemanticKITTI Moving Object Segmentation competition and achieve 2nd place on the leaderboard, showing its effectiveness.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 07:55:17 GMT" } ]
2023-07-19T00:00:00
[ [ "Li", "Qipeng", "" ], [ "Zhuang", "Yuan", "" ], [ "Chen", "Yiwen", "" ], [ "Huai", "Jianzhu", "" ], [ "Li", "Miao", "" ], [ "Ma", "Tianbing", "" ], [ "Tang", "Yufei", "" ], [ "Liang", "Xinlian", "" ] ]
new_dataset
0.998788
2307.09070
GyuMin Shim
Gyumin Shim, Jaeseong Lee, Junha Hyung, Jaegul Choo
PixelHuman: Animatable Neural Radiance Fields from Few Images
8 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we propose PixelHuman, a novel human rendering model that generates animatable human scenes from a few images of a person with unseen identity, views, and poses. Previous works have demonstrated reasonable performance in novel view and pose synthesis, but they rely on a large number of images to train and are trained per scene from videos, which requires a significant amount of time to produce animatable scenes from unseen human images. Our method differs from existing methods in that it can generalize to any input image for animatable human synthesis. Given a random pose sequence, our method synthesizes each target scene using a neural radiance field that is conditioned on a canonical representation and pose-aware pixel-aligned features, both of which can be obtained through deformation fields learned in a data-driven manner. Our experiments show that our method achieves state-of-the-art performance in multiview and novel pose synthesis from few-shot images.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 08:41:17 GMT" } ]
2023-07-19T00:00:00
[ [ "Shim", "Gyumin", "" ], [ "Lee", "Jaeseong", "" ], [ "Hyung", "Junha", "" ], [ "Choo", "Jaegul", "" ] ]
new_dataset
0.991995
2307.09090
Georgios Karachalias
Marco Perone and Georgios Karachalias
Cr\`eme de la Crem: Composable Representable Executable Machines (Architectural Pearl)
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
In this paper we describe how to build software architectures as a composition of state machines, using ideas and principles from the field of Domain-Driven Design. By definition, our approach is modular, allowing one to compose independent subcomponents to create bigger systems, and representable, allowing the implementation of a system to be kept in sync with its graphical representation. In addition to the design itself, we introduce the Crem library, which provides a concrete state machine implementation that is both compositional and representable. Crem uses Haskell's advanced type-level features to allow users to specify allowed and forbidden state transitions, and to encode complex state machine -- and therefore domain-specific -- properties. Moreover, since Crem's state machines are representable, Crem can automatically generate graphical representations of systems from their domain implementations.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 09:17:13 GMT" } ]
2023-07-19T00:00:00
[ [ "Perone", "Marco", "" ], [ "Karachalias", "Georgios", "" ] ]
new_dataset
0.999196
2307.09112
Stefan Lionar
Stefan Lionar, Xiangyu Xu, Min Lin, Gim Hee Lee
NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF
Project page: https://numcc.github.io/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Remarkable progress has been made in 3D reconstruction from single-view RGB-D inputs. MCC is the current state-of-the-art method in this field, which achieves unprecedented success by combining vision Transformers with large-scale training. However, we identified two key limitations of MCC: 1) The Transformer decoder is inefficient in handling a large number of query points; 2) The 3D representation struggles to recover high-fidelity details. In this paper, we propose a new approach called NU-MCC that addresses these limitations. NU-MCC includes two key innovations: a Neighborhood decoder and a Repulsive Unsigned Distance Function (Repulsive UDF). First, our Neighborhood decoder introduces center points as an efficient proxy of input visual features, allowing each query point to only attend to a small neighborhood. This design not only results in much faster inference speed but also enables the exploitation of finer-scale visual features for improved recovery of 3D textures. Second, our Repulsive UDF is a novel alternative to the occupancy field used in MCC, significantly improving the quality of 3D object reconstruction. Compared to standard UDFs that suffer from holes in results, our proposed Repulsive UDF can achieve more complete surface reconstruction. Experimental results demonstrate that NU-MCC is able to learn a strong 3D representation, significantly advancing the state of the art in single-view 3D reconstruction. Particularly, it outperforms MCC by 9.7% in terms of the F1-score on the CO3D-v2 dataset with more than 5x faster running speed.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 10:02:09 GMT" } ]
2023-07-19T00:00:00
[ [ "Lionar", "Stefan", "" ], [ "Xu", "Xiangyu", "" ], [ "Lin", "Min", "" ], [ "Lee", "Gim Hee", "" ] ]
new_dataset
0.955881
2307.09132
Vladimir Vlassov
Gibson Chikafa, Sina Sheikholeslami, Salman Niazi, Jim Dowling, Vladimir Vlassov
Cloud-native RStudio on Kubernetes for Hopsworks
8 pages, 4 figures
null
null
null
cs.DC cs.AI cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
In order to fully benefit from cloud computing, services are designed following the "multi-tenant" architectural model, which is aimed at maximizing resource sharing among users. However, multi-tenancy introduces challenges of security, performance isolation, scaling, and customization. RStudio server is an open-source Integrated Development Environment (IDE) accessible over a web browser for the R programming language. We present the design and implementation of a multi-user distributed system on Hopsworks, a data-intensive AI platform, following the multi-tenant model that provides RStudio as Software as a Service (SaaS). We use the most popular cloud-native technologies, Docker and Kubernetes, to solve the problems of performance isolation, security, and scaling that are present in a multi-tenant environment. We further enable secure data sharing in RStudio server instances to provide data privacy and allow collaboration among RStudio users. We integrate our system with Apache Spark, which can scale and handle Big Data processing workloads. Also, we provide a UI where users can supply custom configurations and have full control of their own RStudio server instances. Our system was tested on a Google Cloud Platform cluster with four worker nodes, each with 30GB of RAM. The tests on this cluster showed that 44 RStudio servers, each with 2GB of RAM, can be run concurrently. Our system can scale out to potentially support hundreds of concurrently running RStudio servers by adding more resources (CPUs and RAM) to the cluster or system.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 10:28:55 GMT" } ]
2023-07-19T00:00:00
[ [ "Chikafa", "Gibson", "" ], [ "Sheikholeslami", "Sina", "" ], [ "Niazi", "Salman", "" ], [ "Dowling", "Jim", "" ], [ "Vlassov", "Vladimir", "" ] ]
new_dataset
0.990835
2307.09168
Joohyung Lee
Martin Gebser, Joohyung Lee, Yuliya Lierler
Elementary Sets for Logic Programs
6 pages. AAAI 2006, 244-249. arXiv admin note: substantial text overlap with arXiv:1012.5847
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By introducing the concepts of a loop and a loop formula, Lin and Zhao showed that the answer sets of a nondisjunctive logic program are exactly the models of its Clark's completion that satisfy the loop formulas of all loops. Recently, Gebser and Schaub showed that the Lin-Zhao theorem remains correct even if we restrict loop formulas to a special class of loops called ``elementary loops.'' In this paper, we simplify and generalize the notion of an elementary loop, and clarify its role. We propose the notion of an elementary set, which is almost equivalent to the notion of an elementary loop for nondisjunctive programs, but is simpler, and, unlike elementary loops, can be extended to disjunctive programs without producing unintuitive results. We show that the maximal unfounded elementary sets for the ``relevant'' part of a program are exactly the minimal sets among the nonempty unfounded sets. We also present a graph-theoretic characterization of elementary sets for nondisjunctive programs, which is simpler than the one proposed in (Gebser & Schaub 2005). Unlike the case of nondisjunctive programs, we show that the problem of deciding an elementary set is coNP-complete for disjunctive programs.
[ { "version": "v1", "created": "Sat, 15 Jul 2023 08:00:46 GMT" } ]
2023-07-19T00:00:00
[ [ "Gebser", "Martin", "" ], [ "Lee", "Joohyung", "" ], [ "Lierler", "Yuliya", "" ] ]
new_dataset
0.967959
2307.09243
Daniel De Almeida Braga
Daniel De Almeida Braga, Natalia Kulatova, Mohamed Sabt, Pierre-Alain Fouque, Karthikeyan Bhargavan
From Dragondoom to Dragonstar: Side-channel Attacks and Formally Verified Implementation of WPA3 Dragonfly Handshake
Accepted at 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P)
null
10.1109/EuroSP57164.2023.00048
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
It is universally acknowledged that Wi-Fi communications are important to secure. Thus, the Wi-Fi Alliance published WPA3 in 2018 with a distinctive security feature: it leverages a Password-Authenticated Key Exchange (PAKE) protocol to protect users' passwords from offline dictionary attacks. Unfortunately, soon after its release, several attacks were reported against its implementations, in response to which the protocol was updated in a best-effort manner. In this paper, we show that the proposed mitigations are not enough, especially for a protocol that is complex to implement even for savvy developers. Indeed, we present **Dragondoom**, a collection of side-channel vulnerabilities of varying strength allowing attackers to recover users' passwords in widely deployed Wi-Fi daemons, such as hostap in its default settings. Our findings target both password conversion methods, namely the default probabilistic hunting-and-pecking and its newly standardized deterministic alternative based on SSWU. We successfully exploit our leakage in practice through microarchitectural mechanisms, and overcome the limited spatial resolution of Flush+Reload. Our attacks outperform previous works in terms of required measurements. Then, driven by the need to end the spiral of patch-and-hack in Dragonfly implementations, we propose **Dragonstar**, an implementation of Dragonfly leveraging a formally verified implementation of the underlying mathematical operations, thereby removing all the related leakage vectors. Our implementation relies on HACL*, a formally verified crypto library guaranteeing secret-independence. We design Dragonstar so that its integration within hostap requires minimal modifications to the existing project. Our experiments show that the performance of HACL*-based hostap is comparable to that of OpenSSL-based hostap, implying that Dragonstar is both efficient and proven to be leakage-free.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 13:22:21 GMT" } ]
2023-07-19T00:00:00
[ [ "Braga", "Daniel De Almeida", "" ], [ "Kulatova", "Natalia", "" ], [ "Sabt", "Mohamed", "" ], [ "Fouque", "Pierre-Alain", "" ], [ "Bhargavan", "Karthikeyan", "" ] ]
new_dataset
0.999556
2307.09298
Rodrigo San-Jos\'e
Philippe Gimenez, Diego Ruano, Rodrigo San-Jos\'e
Subfield subcodes of projective Reed-Muller codes
null
null
null
null
cs.IT math.AC math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explicit bases for the subfield subcodes of projective Reed-Muller codes over the projective plane and their duals are obtained. In particular, we provide a formula for the dimension of these codes. For the general case over the projective space, we are able to generalize the necessary tools to deal with this case as well: we obtain a universal Gr\"obner basis for the vanishing ideal of the set of standard representatives of the projective space and we are able to reduce any monomial with respect to this Gr\"obner basis. With respect to the parameters of these codes, by considering subfield subcodes of projective Reed-Muller codes we are able to obtain long linear codes with good parameters over a small finite field.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 14:38:26 GMT" } ]
2023-07-19T00:00:00
[ [ "Gimenez", "Philippe", "" ], [ "Ruano", "Diego", "" ], [ "San-José", "Rodrigo", "" ] ]
new_dataset
0.998425
2307.09316
Jiahui Liu
Jiahui Liu, Chirui Chang, Jianhui Liu, Xiaoyang Wu, Lan Ma, Xiaojuan Qi
MarS3D: A Plug-and-Play Motion-Aware Model for Semantic Segmentation on Multi-Scan 3D Point Clouds
null
The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D semantic segmentation on multi-scan large-scale point clouds plays an important role in autonomous systems. Unlike the single-scan-based semantic segmentation task, this task requires distinguishing the motion states of points in addition to their semantic categories. However, methods designed for single-scan-based segmentation tasks perform poorly on the multi-scan task due to the lack of an effective way to integrate temporal information. We propose MarS3D, a plug-and-play motion-aware module for semantic segmentation on multi-scan 3D point clouds. This module can be flexibly combined with single-scan models to allow them to have multi-scan perception abilities. The model encompasses two key designs: the Cross-Frame Feature Embedding module for enriching representation learning and the Motion-Aware Feature Learning module for enhancing motion awareness. Extensive experiments show that MarS3D can improve the performance of the baseline model by a large margin. The code is available at https://github.com/CVMI-Lab/MarS3D.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 14:59:19 GMT" } ]
2023-07-19T00:00:00
[ [ "Liu", "Jiahui", "" ], [ "Chang", "Chirui", "" ], [ "Liu", "Jianhui", "" ], [ "Wu", "Xiaoyang", "" ], [ "Ma", "Lan", "" ], [ "Qi", "Xiaojuan", "" ] ]
new_dataset
0.997725
2307.09320
Ettore Randazzo
Ettore Randazzo and Alexander Mordvintsev
Biomaker CA: a Biome Maker project using Cellular Automata
20 pages, 23 figures. For code base, see https://tinyurl.com/2x8yu34s
null
null
null
cs.AI cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
We introduce Biomaker CA: a Biome Maker project using Cellular Automata (CA). In Biomaker CA, morphogenesis is a first class citizen and small seeds need to grow into plant-like organisms to survive in a nutrient starved environment and eventually reproduce with variation so that a biome survives for long timelines. We simulate complex biomes by means of CA rules in 2D grids and parallelize all of its computation on GPUs through the Python JAX framework. We show how this project allows for several different kinds of environments and laws of 'physics', alongside different model architectures and mutation strategies. We further analyze some configurations to show how plant agents can grow, survive, reproduce, and evolve, forming stable and unstable biomes. We then demonstrate how one can meta-evolve models to survive in a harsh environment either through end-to-end meta-evolution or by a more surgical and efficient approach, called Petri dish meta-evolution. Finally, we show how to perform interactive evolution, where the user decides how to evolve a plant model interactively and then deploys it in a larger environment. We open source Biomaker CA at: https://tinyurl.com/2x8yu34s .
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:03:40 GMT" } ]
2023-07-19T00:00:00
[ [ "Randazzo", "Ettore", "" ], [ "Mordvintsev", "Alexander", "" ] ]
new_dataset
0.99955
2307.09322
Sara Benatmane
Sara Benatmane, Nuh Aydin, Behloul Djilali, and Prokash Barman
A New Hybrid Cryptosystem Involving DNA, Rabin, One Time Pad and Feistel
11 pages
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information security is a crucial need in the modern world. Data security is a real concern, and many customers and organizations need to protect their sensitive information from unauthorized parties and attackers. In previous years, numerous cryptographic schemes have been proposed. DNA cryptography is a new and developing field that combines the computational and biological worlds. DNA cryptography is intriguing due to its high storage capacity, secure data transport, and massive parallel computing. In this paper, a new combination is proposed that offers good security by combining DNA, the Rabin algorithm, one time pad, and a structure inspired by Feistel. This algorithm employs two keys. The first key is a DNA OTP key which is used for only one secure communication session. The second key, which combines the public and private keys, is a Rabin key. Additionally, by using a Feistel-inspired scheme and randomness provided by DNA, the ciphertext is made harder to obtain without the private key.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:06:35 GMT" } ]
2023-07-19T00:00:00
[ [ "Benatmane", "Sara", "" ], [ "Aydin", "Nuh", "" ], [ "Djilali", "Behloul", "" ], [ "Barman", "Prokash", "" ] ]
new_dataset
0.993246
2307.09349
Thomas Place
Thomas Place, Marc Zeitoun
A generic characterization of generalized unary temporal logic and two-variable first-order logic
null
null
null
null
cs.FL cs.LO
http://creativecommons.org/licenses/by/4.0/
We investigate an operator on classes of languages. For each class $C$, it outputs a new class $FO^2(I_C)$ associated with a variant of two-variable first-order logic equipped with a signature $I_C$ built from $C$. For $C = \{\emptyset, A^*\}$, we get the variant $FO^2(<)$ equipped with the linear order. For $C = \{\emptyset, \{\varepsilon\}, A^+, A^*\}$, we get the variant $FO^2(<,+1)$, which also includes the successor. If $C$ consists of all Boolean combinations of languages $A^*aA^*$ where $a$ is a letter, we get the variant $FO^2(<,Bet)$, which also includes ``between relations''. We prove a generic algebraic characterization of the classes $FO^2(I_C)$. It smoothly and elegantly generalizes the known ones for all aforementioned cases. Moreover, it implies that if $C$ has decidable separation (plus mild properties), then $FO^2(I_C)$ has a decidable membership problem. We actually work with an equivalent definition of $FO^2(I_C)$ in terms of unary temporal logic. For each class $C$, we consider a variant $TL(C)$ of unary temporal logic whose future/past modalities depend on $C$ and such that $TL(C) = FO^2(I_C)$. Finally, we also characterize $FL(C)$ and $PL(C)$, the pure-future and pure-past restrictions of $TL(C)$. These characterizations likewise imply that if $C$ is a class with decidable separation, then $FL(C)$ and $PL(C)$ have decidable membership.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:36:07 GMT" } ]
2023-07-19T00:00:00
[ [ "Place", "Thomas", "" ], [ "Zeitoun", "Marc", "" ] ]
new_dataset
0.997056
2307.09351
Guiyu Zhao
Guiyu Zhao and Zhentao Guo and Xin Wang and Hongbin Ma
SphereNet: Learning a Noise-Robust and General Descriptor for Point Cloud Registration
15 pages, under review for IEEE Transactions on Circuits and Systems for Video Technology
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud registration aims to estimate a transformation that aligns point clouds collected from different perspectives. In learning-based point cloud registration, a robust descriptor is vital for high-accuracy registration. However, most methods are susceptible to noise and have poor generalization ability on unseen datasets. Motivated by this, we introduce SphereNet to learn a noise-robust and unseen-general descriptor for point cloud registration. In our method, first, the spheroid generator builds a geometric domain based on spherical voxelization to encode initial features. Then, spherical interpolation is introduced to achieve robustness against noise. Finally, a new spherical convolutional neural network with spherical integrity padding completes the extraction of descriptors, which reduces the loss of features and fully captures the geometric features. To evaluate our method, a new benchmark 3DMatch-noise with strong noise is introduced. Extensive experiments are carried out on both indoor and outdoor datasets. Under high-intensity noise, SphereNet increases the feature matching recall by more than 25 percentage points on 3DMatch-noise. In addition, it sets a new state-of-the-art performance for the 3DMatch and 3DLoMatch benchmarks with 93.5\% and 75.6\% registration recall and also has the best generalization ability on unseen datasets.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:37:35 GMT" } ]
2023-07-19T00:00:00
[ [ "Zhao", "Guiyu", "" ], [ "Guo", "Zhentao", "" ], [ "Wang", "Xin", "" ], [ "Ma", "Hongbin", "" ] ]
new_dataset
0.993035
2307.09356
Dongming Wu
Dongming Wu, Tiancai Wang, Yuang Zhang, Xiangyu Zhang, Jianbing Shen
OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation
Accepted by ICCV2023. The code is at https://github.com/wudongming97/OnlineRefer
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Referring video object segmentation (RVOS) aims at segmenting an object in a video following human instruction. Current state-of-the-art methods fall into an offline pattern, in which each clip independently interacts with text embedding for cross-modal understanding. They usually present the offline pattern as necessary for RVOS, yet model only limited temporal association within each clip. In this work, we break with the previous offline belief and propose a simple yet effective online model using explicit query propagation, named OnlineRefer. Specifically, our approach leverages target cues that gather semantic information and position priors to improve the accuracy and ease of referring predictions for the current frame. Furthermore, we generalize our online model into a semi-online framework to be compatible with video-based backbones. To show the effectiveness of our method, we evaluate it on four benchmarks, i.e., Refer-Youtube-VOS, Refer-DAVIS17, A2D-Sentences, and JHMDB-Sentences. Without bells and whistles, our OnlineRefer with a Swin-L backbone achieves 63.5 J&F and 64.8 J&F on Refer-Youtube-VOS and Refer-DAVIS17, outperforming all other offline methods.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:43:35 GMT" } ]
2023-07-19T00:00:00
[ [ "Wu", "Dongming", "" ], [ "Wang", "Tiancai", "" ], [ "Zhang", "Yuang", "" ], [ "Zhang", "Xiangyu", "" ], [ "Shen", "Jianbing", "" ] ]
new_dataset
0.993367
2307.09364
Roger Moore
Roger K. Moore
Local Minima Drive Communications in Cooperative Interaction
6 page conference paper
null
null
null
cs.AI cs.MA cs.RO
http://creativecommons.org/licenses/by/4.0/
An important open question in human-robot interaction (HRI) is precisely when an agent should decide to communicate, particularly in a cooperative task. Perceptual Control Theory (PCT) tells us that agents are able to cooperate on a joint task simply by sharing the same 'intention', thereby distributing the effort required to complete the task among the agents. This is even true for agents that do not possess the same abilities, so long as the goal is observable, the combined actions are sufficient to complete the task, and there is no local minimum in the search space. If these conditions hold, then a cooperative task can be accomplished without any communication between the contributing agents. However, for tasks that do contain local minima, the global solution can only be reached if at least one of the agents adapts its intention at the appropriate moments, and this can only be achieved by appropriately timed communication. In other words, it is hypothesised that in cooperative tasks, the function of communication is to coordinate actions in a complex search space that contains local minima. These principles have been verified in a computer-based simulation environment in which two independent one-dimensional agents are obliged to cooperate in order to solve a two-dimensional path-finding task.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:48:37 GMT" } ]
2023-07-19T00:00:00
[ [ "Moore", "Roger K.", "" ] ]
new_dataset
0.978272
2307.09371
Manolis Ploumidis
Manolis Ploumidis, Fabien Chaix, Nikolaos Chrysos, Marios Assiminakis, Vassilis Flouris, Nikolaos Kallimanis, Nikolaos Kossifidis, Michael Nikoloudakis, Polydoros Petrakis, Nikolaos Dimou, Michael Gianioudis, George Ieronymakis, Aggelos Ioannou, George Kalokerinos, Pantelis Xirouchakis, George Ailamakis, Astrinos Damianakis, Michael Ligerakis, Ioannis Makris, Theocharis Vavouris, Manolis Katevenis, Vassilis Papaefstathiou, Manolis Marazakis, Iakovos Mavroidis
The ExaNeSt Prototype: Evaluation of Efficient HPC Communication Hardware in an ARM-based Multi-FPGA Rack
45 pages, 23 figures
null
null
TR-488
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present and evaluate the ExaNeSt Prototype, a liquid-cooled rack prototype consisting of 256 Xilinx ZU9EG MPSoCs, 4 TBytes of DRAM, 16 TBytes of SSD, and configurable 10-Gbps interconnection hardware. We developed this testbed in 2016-2019 to validate the flexibility of FPGAs for experimenting with efficient hardware support for HPC communication among tens of thousands of processors and accelerators in the quest towards Exascale systems and beyond. We present our key design choices regarding overall system architecture, PCBs and runtime software, and summarize insights resulting from measurement and analysis. Of particular note, our custom interconnect includes a low-cost low-latency network interface, offering user-level zero-copy RDMA, which we have tightly coupled with the ARMv8 processors in the MPSoCs. We have developed a system software runtime on top of these features, and have been able to run MPI. We have evaluated our testbed through MPI microbenchmarks, mini-applications, and full MPI applications. Single-hop, one-way latency is $1.3$~$\mu$s; approximately $0.47$~$\mu$s of this is attributed to the network interface and the user-space library that exposes its functionality to the runtime. Latency over longer paths increases as expected, reaching $2.55$~$\mu$s for a five-hop path. Bandwidth tests show that, for a single hop, link utilization reaches $82\%$ of the theoretical capacity. Microbenchmarks based on MPI collectives reveal that broadcast latency scales as expected when the number of participating ranks increases. We also implemented a custom Allreduce accelerator in the network interface, which reduces the latency of such collectives by up to $88\%$. We assess performance scaling through weak and strong scaling tests for HPCG, LAMMPS, and the miniFE mini-application; for all these tests, parallelization efficiency is at least $69\%$.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:51:43 GMT" } ]
2023-07-19T00:00:00
[ [ "Ploumidis", "Manolis", "" ], [ "Chaix", "Fabien", "" ], [ "Chrysos", "Nikolaos", "" ], [ "Assiminakis", "Marios", "" ], [ "Flouris", "Vassilis", "" ], [ "Kallimanis", "Nikolaos", "" ], [ "Kossifidis", "Nikolaos", "" ], [ "Nikoloudakis", "Michael", "" ], [ "Petrakis", "Polydoros", "" ], [ "Dimou", "Nikolaos", "" ], [ "Gianioudis", "Michael", "" ], [ "Ieronymakis", "George", "" ], [ "Ioannou", "Aggelos", "" ], [ "Kalokerinos", "George", "" ], [ "Xirouchakis", "Pantelis", "" ], [ "Ailamakis", "George", "" ], [ "Damianakis", "Astrinos", "" ], [ "Ligerakis", "Michael", "" ], [ "Makris", "Ioannis", "" ], [ "Vavouris", "Theocharis", "" ], [ "Katevenis", "Manolis", "" ], [ "Papaefstathiou", "Vassilis", "" ], [ "Marazakis", "Manolis", "" ], [ "Mavroidis", "Iakovos", "" ] ]
new_dataset
0.962589
2307.09473
Anish Mukherjee
Samir Datta, Asif Khan, Anish Mukherjee
Dynamic Planar Embedding is in DynFO
To appear at MFCS 2023
null
null
null
cs.DS cs.CC cs.LO
http://creativecommons.org/licenses/by/4.0/
Planar Embedding is a drawing of a graph on the plane such that the edges do not intersect each other except at the vertices. We know that testing the planarity of a graph and computing its embedding (if it exists) can be done efficiently, both sequentially [HT] and in parallel [RR94], when the entire graph is presented as input. In the dynamic setting, the input graph changes one edge at a time through insertions and deletions, and planarity testing/embedding has to be updated after every change. By storing auxiliary information we can improve the complexity of dynamic planarity testing/embedding over the obvious recomputation from scratch. In the sequential dynamic setting, there has been a series of works [EGIS, IPR, HIKLR, HR1], culminating in the breakthrough result of the polylog(n) sequential time (amortized) planarity testing algorithm of Holm and Rotenberg [HR2]. In this paper, we study planar embedding through the lens of DynFO, a parallel dynamic complexity class introduced by Patnaik et al. [PI] (also [DST95]). We show that it is possible to maintain dynamically, in DynFO, whether an edge can be inserted into a planar graph without causing non-planarity. We extend this to show how to maintain an embedding of a planar graph under both edge insertions and deletions, while rejecting edge insertions that violate planarity. Our main idea is to maintain embeddings of only the triconnected components and a special two-colouring of separating pairs that enables us to side-step cascading flips when the embedding of a biconnected planar graph changes, a major issue for sequential dynamic algorithms [HR1, HR2].
[ { "version": "v1", "created": "Mon, 17 Jul 2023 15:50:46 GMT" } ]
2023-07-19T00:00:00
[ [ "Datta", "Samir", "" ], [ "Khan", "Asif", "" ], [ "Mukherjee", "Anish", "" ] ]
new_dataset
0.999102
2307.09474
En Yu
Liang Zhao, En Yu, Zheng Ge, Jinrong Yang, Haoran Wei, Hongyu Zhou, Jianjian Sun, Yuang Peng, Runpei Dong, Chunrui Han, Xiangyu Zhang
ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning
15 pages, 8 figures
null
null
null
cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-AI interactivity is a critical aspect that reflects the usability of multimodal large language models (MLLMs). However, existing end-to-end MLLMs only allow users to interact with them through language instructions, limiting interactive accuracy and efficiency. In this study, we present precise referring instructions that utilize diverse reference representations, such as points and boxes, as referring prompts to refer to specific regions. This enables MLLMs to focus on the region of interest and achieve finer-grained interaction. Based on precise referring instructions, we propose ChatSpot, a unified end-to-end multimodal large language model that supports diverse forms of interactivity including mouse clicks, drag-and-drop, and drawing boxes, which provides a more flexible and seamless interactive experience. We also construct a multi-grained vision-language instruction-following dataset based on existing datasets and GPT-4 generated data. Furthermore, we design a series of evaluation tasks to assess the effectiveness of region recognition and interaction. Experimental results showcase ChatSpot's promising performance.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 17:56:06 GMT" } ]
2023-07-19T00:00:00
[ [ "Zhao", "Liang", "" ], [ "Yu", "En", "" ], [ "Ge", "Zheng", "" ], [ "Yang", "Jinrong", "" ], [ "Wei", "Haoran", "" ], [ "Zhou", "Hongyu", "" ], [ "Sun", "Jianjian", "" ], [ "Peng", "Yuang", "" ], [ "Dong", "Runpei", "" ], [ "Han", "Chunrui", "" ], [ "Zhang", "Xiangyu", "" ] ]
new_dataset
0.991261
1909.07750
Raghu Rajan
Raghu Rajan, Jessica Lizeth Borja Diaz, Suresh Guttikonda, Fabio Ferreira, Andr\'e Biedenkapp, Jan Ole von Hartz and Frank Hutter
MDP Playground: An Analysis and Debug Testbed for Reinforcement Learning
Same version as the one published in JAIR Vol. 77 (2023)
Journal of Artificial Intelligence Research 77 (2023) 821-890
10.1613/jair.1.14314
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present MDP Playground, a testbed for Reinforcement Learning (RL) agents with dimensions of hardness that can be controlled independently to challenge agents in different ways and obtain varying degrees of hardness in toy and complex RL environments. We consider and allow control over a wide variety of dimensions, including delayed rewards, sequence lengths, reward density, stochasticity, image representations, irrelevant features, time unit, action range and more. We define a parameterised collection of fast-to-run toy environments in OpenAI Gym by varying these dimensions and propose to use these to understand agents better. We then show how to design experiments using MDP Playground to gain insights on the toy environments. We also provide wrappers that can inject many of these dimensions into any Gym environment. We experiment with these wrappers on Atari and Mujoco to allow for understanding the effects of these dimensions on environments that are more complex than the toy environments. We also compare the effect of the dimensions on the toy and complex environments. Finally, we show how to use MDP Playground to debug agents, to study the interaction of multiple dimensions and describe further use-cases.
[ { "version": "v1", "created": "Tue, 17 Sep 2019 12:41:20 GMT" }, { "version": "v2", "created": "Tue, 3 Dec 2019 12:46:17 GMT" }, { "version": "v3", "created": "Thu, 8 Oct 2020 13:06:35 GMT" }, { "version": "v4", "created": "Fri, 25 Jun 2021 12:38:37 GMT" }, { "version": "v5", "created": "Fri, 14 Jul 2023 11:59:40 GMT" } ]
2023-07-18T00:00:00
[ [ "Rajan", "Raghu", "" ], [ "Diaz", "Jessica Lizeth Borja", "" ], [ "Guttikonda", "Suresh", "" ], [ "Ferreira", "Fabio", "" ], [ "Biedenkapp", "André", "" ], [ "von Hartz", "Jan Ole", "" ], [ "Hutter", "Frank", "" ] ]
new_dataset
0.999709
1911.10519
Priyansh Saxena
Priyansh Saxena, Ram Kishan Dewangan
Three Dimensional Route Planning for Multiple Unmanned Aerial Vehicles using Salp Swarm Algorithm
This work has been previously published in the 'Journal of Experimental & Theoretical Artificial Intelligence' and can be accessed at https://www.tandfonline.com/doi/abs/10.1080/0952813X.2022.2059107
null
10.1080/0952813X.2022.2059107
null
cs.RO cs.MA cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Route planning for multiple Unmanned Aerial Vehicles (UAVs) is a series of translational and rotational steps from a given start location to the destination goal location. The goal of the route planning problem is to determine the optimal route avoiding any collisions with the obstacles present in the environment. Route planning is an NP-hard optimization problem. In this paper, the newly proposed Salp Swarm Algorithm (SSA) is used, and its performance is compared with deterministic and other Nature-Inspired Algorithms (NIAs). The results illustrate that SSA outperforms all the other meta-heuristic algorithms in route planning for multiple UAVs in a 3D environment. The proposed approach improves the average cost and overall time by 1.25% and 6.035% respectively when compared to recently reported data. Route planning is involved in many real-life applications like robot navigation, self-driving cars, autonomous UAVs for search and rescue operations in dangerous ground-zero situations, civilian surveillance, military combat, and even commercial services like package delivery by drones.
[ { "version": "v1", "created": "Sun, 24 Nov 2019 12:36:18 GMT" }, { "version": "v2", "created": "Wed, 18 Dec 2019 11:31:55 GMT" }, { "version": "v3", "created": "Wed, 19 May 2021 15:14:31 GMT" }, { "version": "v4", "created": "Sun, 16 Jul 2023 12:35:26 GMT" } ]
2023-07-18T00:00:00
[ [ "Saxena", "Priyansh", "" ], [ "Dewangan", "Ram Kishan", "" ] ]
new_dataset
0.99379
2004.11862
Aleksandr Popov
Kevin Buchin, Chenglin Fan, Maarten L\"offler, Aleksandr Popov, Benjamin Raichel, Marcel Roeloffzen
Fr\'echet Distance for Uncertain Curves
48 pages, 11 figures. This is the full version of the paper to be published in ICALP 2020
ACM Transactions on Algorithms 19.3 (2023), article no. 29
10.1145/3597640
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study a wide range of variants for computing the (discrete and continuous) Fr\'echet distance between uncertain curves. We define an uncertain curve as a sequence of uncertainty regions, where each region is a disk, a line segment, or a set of points. A realisation of a curve is a polyline connecting one point from each region. Given an uncertain curve and a second (certain or uncertain) curve, we seek to compute the lower and upper bound Fr\'echet distance, which are the minimum and maximum Fr\'echet distance for any realisations of the curves. We prove that both the upper and lower bound problems are NP-hard for the continuous Fr\'echet distance in several uncertainty models, and that the upper bound problem remains hard for the discrete Fr\'echet distance. In contrast, the lower bound (discrete and continuous) Fr\'echet distance can be computed in polynomial time. Furthermore, we show that computing the expected discrete Fr\'echet distance is #P-hard when the uncertainty regions are modelled as point sets or line segments. The construction also extends to show #P-hardness for computing the continuous Fr\'echet distance when regions are modelled as point sets. On the positive side, we argue that in any constant dimension there is a FPTAS for the lower bound problem when $\Delta / \delta$ is polynomially bounded, where $\delta$ is the Fr\'echet distance and $\Delta$ bounds the diameter of the regions. We then argue there is a near-linear-time 3-approximation for the decision problem when the regions are convex and roughly $\delta$-separated. Finally, we also study the setting with Sakoe--Chiba time bands, where we restrict the alignment between the two curves, and give polynomial-time algorithms for upper bound and expected discrete and continuous Fr\'echet distance for uncertainty regions modelled as point sets.
[ { "version": "v1", "created": "Fri, 24 Apr 2020 17:12:42 GMT" } ]
2023-07-18T00:00:00
[ [ "Buchin", "Kevin", "" ], [ "Fan", "Chenglin", "" ], [ "Löffler", "Maarten", "" ], [ "Popov", "Aleksandr", "" ], [ "Raichel", "Benjamin", "" ], [ "Roeloffzen", "Marcel", "" ] ]
new_dataset
0.990738
2103.13389
Vadim Sushko
Vadim Sushko, Dan Zhang, Juergen Gall, Anna Khoreva
Generating Novel Scene Compositions from Single Images and Videos
The paper is under consideration at Computer Vision and Image Understanding. Code repository: https://github.com/boschresearch/one-shot-synthesis
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a large dataset for training, generative adversarial networks (GANs) can achieve remarkable performance for the image synthesis task. However, training GANs in extremely low data regimes remains a challenge, as overfitting often occurs, leading to memorization or training divergence. In this work, we introduce SIV-GAN, an unconditional generative model that can generate new scene compositions from a single training image or a single video clip. We propose a two-branch discriminator architecture, with content and layout branches designed to judge internal content and scene layout realism separately from each other. This discriminator design enables synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single image GANs, our model generates more diverse, higher quality images, while not being restricted to a single image setting. We further introduce a new challenging task of learning from a few frames of a single video. In this training setup the training images are highly similar to each other, which makes it difficult for prior GAN models to achieve a synthesis of both high quality and diversity.
[ { "version": "v1", "created": "Wed, 24 Mar 2021 17:59:07 GMT" }, { "version": "v2", "created": "Tue, 19 Oct 2021 10:55:52 GMT" }, { "version": "v3", "created": "Thu, 17 Mar 2022 16:03:00 GMT" }, { "version": "v4", "created": "Sun, 16 Jul 2023 04:42:07 GMT" } ]
2023-07-18T00:00:00
[ [ "Sushko", "Vadim", "" ], [ "Zhang", "Dan", "" ], [ "Gall", "Juergen", "" ], [ "Khoreva", "Anna", "" ] ]
new_dataset
0.997651
2201.06845
Jiale Xu
Yuting Xiao, Jiale Xu, Shenghua Gao
Taylor3DNet: Fast 3D Shape Inference With Landmark Points Based Taylor Series
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Benefiting from the continuous representation ability, deep implicit functions can represent a shape at infinite resolution. However, extracting a high-resolution iso-surface from an implicit function requires forward-propagating a network with a large number of parameters for numerous query points, thus limiting the generation speed. Inspired by the Taylor series, we propose Taylor3DNet to accelerate the inference of implicit shape representations. Taylor3DNet exploits a set of discrete landmark points and their corresponding Taylor series coefficients to represent the implicit field of a 3D shape, and the number of landmark points is independent of the resolution of the iso-surface extraction. Once the coefficients corresponding to the landmark points are predicted, the network evaluation for each query point can be simplified as a low-order Taylor series calculation with several nearest landmark points. Based on this efficient representation, our Taylor3DNet achieves a significantly faster inference speed than classical network-based implicit functions. We evaluate our approach on reconstruction tasks with various input types, and the results demonstrate that our approach can improve the inference speed by a large margin without sacrificing performance compared with state-of-the-art baselines.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 09:47:40 GMT" }, { "version": "v2", "created": "Sun, 16 Jul 2023 09:28:11 GMT" } ]
2023-07-18T00:00:00
[ [ "Xiao", "Yuting", "" ], [ "Xu", "Jiale", "" ], [ "Gao", "Shenghua", "" ] ]
new_dataset
0.998404
2203.14184
Guodong Li
Guodong Li, Min Ye, Sihuang Hu
All the codeword symbols in polar codes have the same SER under the SC decoder
We extend the results in the previous version to polar codes over finite fields. Previously, we only proved the results for binary polar codes
IEEE Transactions on Communications ( Volume: 71, Issue: 7, July 2023)
10.1109/TCOMM.2023.3265687
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider polar codes constructed from the $2\times 2$ kernel $\begin{bmatrix} 1 & 0 \\ \alpha & 1 \end{bmatrix}$ over a finite field $\mathbb{F}_{q}$, where $q=p^s$ is a power of a prime number $p$, and $\alpha$ satisfies that $\mathbb{F}_{p}(\alpha) = \mathbb{F}_{q}$. We prove that for any $\mathbb{F}_{q}$-symmetric memoryless channel, any code length, and any code dimension, all the codeword symbols in such polar codes have the same symbol error rate (SER) under the successive cancellation (SC) decoder.
[ { "version": "v1", "created": "Sun, 27 Mar 2022 02:13:44 GMT" }, { "version": "v2", "created": "Sat, 7 May 2022 03:17:29 GMT" }, { "version": "v3", "created": "Thu, 3 Nov 2022 08:37:07 GMT" } ]
2023-07-18T00:00:00
[ [ "Li", "Guodong", "" ], [ "Ye", "Min", "" ], [ "Hu", "Sihuang", "" ] ]
new_dataset
0.992574
2209.05379
Asha Rani
Asha Rani, Pankaj Yadav, Yashaswi Verma
Action-based Early Autism Diagnosis Using Contrastive Feature Learning
This preprint has not undergone peer review (when applicable) or any post-submission improvements or corrections. The Version of Record of this article is published in Multimedia Systems (2023), and is available online at https://doi.org/10.1007/s00530-023-01132-8
null
10.1007/s00530-023-01132-8
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autism, also known as Autism Spectrum Disorder (or ASD), is a neurological disorder. Its main symptoms include difficulty in (verbal and/or non-verbal) communication, and rigid/repetitive behavior. These symptoms are often indistinguishable from those of a normal (control) individual, due to which this disorder remains undiagnosed in early childhood, leading to delayed treatment. Since the learning curve is steep during the initial age, an early diagnosis of autism could allow adequate interventions to be taken at the right time, which might positively affect the growth of an autistic child. Further, the traditional methods of autism diagnosis require multiple visits to a specialized psychiatrist; however, this process can be time-consuming. In this paper, we present a learning-based approach to automate autism diagnosis using simple and small action video clips of subjects. This task is particularly challenging because the amount of annotated data available is small, and the variations among samples from the two categories (ASD and control) are generally indistinguishable. This is also evident from the poor performance of a binary classifier learned using the cross-entropy loss on top of a baseline encoder. To address this, we adopt contrastive feature learning in both self-supervised and supervised learning frameworks, and show that these can lead to a significant increase in the prediction accuracy of a binary classifier on this task. We further validate this by conducting thorough experimental analyses under different set-ups on two publicly available datasets.
[ { "version": "v1", "created": "Mon, 12 Sep 2022 16:31:34 GMT" }, { "version": "v2", "created": "Fri, 3 Mar 2023 19:05:56 GMT" }, { "version": "v3", "created": "Tue, 11 Jul 2023 05:59:18 GMT" }, { "version": "v4", "created": "Mon, 17 Jul 2023 10:53:03 GMT" } ]
2023-07-18T00:00:00
[ [ "Rani", "Asha", "" ], [ "Yadav", "Pankaj", "" ], [ "Verma", "Yashaswi", "" ] ]
new_dataset
0.962676
2210.03600
Tam\'as Kar\'acsony
Jo\~ao Carmona and Tam\'as Kar\'acsony and Jo\~ao Paulo Silva Cunha
BlanketSet -- A clinical real-world in-bed action recognition and qualitative semi-synchronised MoCap dataset
4 pages, Dataset available at: https://rdm.inesctec.pt/dataset/nis-2022-004
2023 IEEE 7th Portuguese Meeting on Bioengineering (ENBENG), Porto, Portugal, 2023, pp. 116-119
10.1109/ENBENG58165.2023.10175335
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Clinical in-bed video-based human motion analysis is a highly relevant computer vision topic for several biomedical applications. Nevertheless, the main large public datasets (e.g. ImageNet or 3DPW) used for deep learning approaches lack annotated examples for these clinical scenarios. To address this issue, we introduce BlanketSet, an RGB-IR-D action recognition dataset of sequences performed in a hospital bed. This dataset has the potential to help bridge the improvements attained in more general large datasets to these clinical scenarios. Information on how to access the dataset is available at https://rdm.inesctec.pt/dataset/nis-2022-004.
[ { "version": "v1", "created": "Fri, 7 Oct 2022 14:58:27 GMT" }, { "version": "v2", "created": "Mon, 24 Oct 2022 16:17:29 GMT" }, { "version": "v3", "created": "Sun, 19 Mar 2023 19:13:23 GMT" } ]
2023-07-18T00:00:00
[ [ "Carmona", "João", "" ], [ "Karácsony", "Tamás", "" ], [ "Cunha", "João Paulo Silva", "" ] ]
new_dataset
0.999833
2210.12035
Tamás Karácsony
João Carmona, Tamás Karácsony, João Paulo Silva Cunha
BlanketGen - A synthetic blanket occlusion augmentation pipeline for MoCap datasets
4 pages, Code and further information to generate the dataset is available at: https://gitlab.inesctec.pt/brain-lab/brain-lab-public/blanket-gen-releases
2023 IEEE 7th Portuguese Meeting on Bioengineering (ENBENG), Porto, Portugal, 2023, pp. 112-115
10.1109/ENBENG58165.2023.10175320
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Human motion analysis has seen drastic improvements recently; however, due to the lack of representative datasets, it still lags behind in clinical in-bed scenarios. To address this issue, we implemented BlanketGen, a pipeline that augments videos with synthetic blanket occlusions. With this pipeline, we generated an augmented version of the pose estimation dataset 3DPW called BlanketGen-3DPW. We then used this new dataset to fine-tune a Deep Learning model to improve its performance in these scenarios, with promising results. Code and further information are available at https://gitlab.inesctec.pt/brain-lab/brain-lab-public/blanket-gen-releases.
[ { "version": "v1", "created": "Fri, 21 Oct 2022 15:27:58 GMT" }, { "version": "v2", "created": "Sun, 19 Mar 2023 19:14:01 GMT" } ]
2023-07-18T00:00:00
[ [ "Carmona", "João", "" ], [ "Karácsony", "Tamás", "" ], [ "Cunha", "João Paulo Silva", "" ] ]
new_dataset
0.997848
2210.13815
Yulin Zhu
Yulin Zhu, Liang Tong, Gaolei Li, Xiapu Luo, Kai Zhou
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification
null
null
null
null
cs.LG cs.CR cs.SI
http://creativecommons.org/licenses/by/4.0/
Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, which generate a poisoned graph as the input to GNN models. We present FocusedCleaner, a poisoned-graph sanitizer that effectively identifies the poison injected by attackers. Specifically, FocusedCleaner provides a sanitation framework consisting of two modules: bi-level structural learning and victim node detection. In particular, the structural learning module reverses the attack process to steadily sanitize the graph, while the detection module provides "the focus" -- a narrowed and more accurate search region -- to structural learning. These two modules operate iteratively and reinforce each other to sanitize a poisoned graph step by step (a simplified sketch of this loop follows this record). As an important application, we show that the adversarial robustness of GNNs trained on the sanitized graph for the node classification task is significantly improved. Extensive experiments demonstrate that FocusedCleaner outperforms the state-of-the-art baselines on both poisoned-graph sanitation and robustness improvement.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 07:41:57 GMT" }, { "version": "v2", "created": "Mon, 17 Jul 2023 12:33:11 GMT" } ]
2023-07-18T00:00:00
[ [ "Zhu", "Yulin", "" ], [ "Tong", "Liang", "" ], [ "Li", "Gaolei", "" ], [ "Luo", "Xiapu", "" ], [ "Zhou", "Kai", "" ] ]
new_dataset
0.996396
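To make the two-module iteration above concrete, here is a self-contained Python sketch. The victim-scoring and edge-scoring heuristics below (feature dissimilarity to neighbors) are simple stand-ins for the paper's learned bi-level modules; only the overall detect-then-prune loop mirrors the abstract.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def focused_sanitize(adj, features, n_iters=5, n_focus=50, prune_per_iter=20):
    """Iterative sanitation loop: detect likely victim nodes ("the focus"),
    then prune the most suspicious edges incident to them. Scoring here is a
    placeholder heuristic, not FocusedCleaner's learned modules."""
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(n_iters):
        # Stand-in victim detector: nodes whose features disagree most with neighbors.
        victim_scores = np.zeros(n)
        for u in range(n):
            nbrs = np.flatnonzero(adj[u])
            if nbrs.size:
                victim_scores[u] = 1.0 - np.mean([cosine(features[u], features[v]) for v in nbrs])
        focus = np.argsort(victim_scores)[-n_focus:]
        # Stand-in structural learner: remove the least feature-similar focus edges.
        candidates = [(1.0 - cosine(features[u], features[v]), u, v)
                      for u in focus for v in np.flatnonzero(adj[u])]
        for _, u, v in sorted(candidates, reverse=True)[:prune_per_iter]:
            adj[u, v] = adj[v, u] = 0    # undo a suspected poisoned edge
    return adj
```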
2211.11629
Bowen Li
Bowen Li, Ziyuan Huang, Junjie Ye, Yiming Li, Sebastian Scherer, Hang Zhao, Changhong Fu
PVT++: A Simple End-to-End Latency-Aware Visual Tracking Framework
18 pages, 10 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Visual object tracking is essential to intelligent robots. Most existing approaches ignore the online latency that can cause severe performance degradation during real-world processing. Especially for unmanned aerial vehicles (UAVs), where robust tracking is more challenging and onboard computation is limited, the latency issue can be fatal. In this work, we present a simple framework for end-to-end latency-aware tracking, i.e., end-to-end predictive visual tracking (PVT++). Unlike existing solutions that naively append Kalman filters after trackers, PVT++ can be jointly optimized, so that it takes not only motion information but can also leverage the rich visual knowledge in most pre-trained tracker models for robust prediction (a toy predictor in this spirit is sketched after this record). Besides, to bridge the training-evaluation domain gap, we propose a relative motion factor, empowering PVT++ to generalize to the challenging and complex UAV tracking scenes. These careful designs have made the small-capacity, lightweight PVT++ a widely effective solution. Additionally, this work presents an extended latency-aware evaluation benchmark for assessing any-speed trackers in the online setting. Empirical results on a robotic platform from the aerial perspective show that PVT++ can achieve significant performance gains on various trackers and exhibits higher accuracy than prior solutions, largely mitigating the degradation caused by latency.
[ { "version": "v1", "created": "Mon, 21 Nov 2022 16:43:33 GMT" }, { "version": "v2", "created": "Wed, 22 Mar 2023 03:28:46 GMT" }, { "version": "v3", "created": "Mon, 17 Jul 2023 03:33:14 GMT" } ]
2023-07-18T00:00:00
[ [ "Li", "Bowen", "" ], [ "Huang", "Ziyuan", "" ], [ "Ye", "Junjie", "" ], [ "Li", "Yiming", "" ], [ "Scherer", "Sebastian", "" ], [ "Zhao", "Hang", "" ], [ "Fu", "Changhong", "" ] ]
new_dataset
0.972251
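Below is a hedged PyTorch sketch of a latency-aware predictor head in the spirit described above: it fuses a short motion history with a visual feature from the tracker backbone to predict the target's displacement over the processing latency. All module names, dimensions, and the GRU/linear design are illustrative assumptions, not PVT++'s actual architecture.

```python
import torch
import torch.nn as nn

class LatencyAwarePredictor(nn.Module):
    """Toy joint motion+visual predictor (illustrative, not PVT++ itself):
    given the last few relative box motions and a backbone feature, predict
    the relative displacement accumulated during processing latency."""
    def __init__(self, motion_dim=4, feat_dim=256, hidden=128):
        super().__init__()
        self.motion_enc = nn.GRU(motion_dim, hidden, batch_first=True)
        self.visual_enc = nn.Linear(feat_dim, hidden)
        self.head = nn.Linear(2 * hidden, motion_dim)

    def forward(self, motion_hist, visual_feat):
        # motion_hist: (B, T, 4) past relative motions; visual_feat: (B, feat_dim)
        _, h = self.motion_enc(motion_hist)           # h: (1, B, hidden)
        m = h.squeeze(0)
        v = torch.relu(self.visual_enc(visual_feat))
        return self.head(torch.cat([m, v], dim=-1))   # predicted relative motion

# Usage (illustrative): offset = model(motion_hist, feat); box = last_box + offset
```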
2211.13435
GuanLin Li
Guanlin Li, Guowen Xu, Tianwei Zhang
A Benchmark of Long-tailed Instance Segmentation with Noisy Labels
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we consider the instance segmentation task on a long-tailed dataset that contains label noise, i.e., some of the annotations are incorrect. There are two main reasons why this setting is realistic. First, datasets collected from the real world usually follow a long-tailed distribution. Second, for instance segmentation datasets, since there are many instances in one image and some of them are tiny, it is easier to introduce noise into the annotations. Specifically, we propose a new dataset, a large-vocabulary long-tailed dataset containing label noise for instance segmentation. Furthermore, we evaluate previously proposed instance segmentation algorithms on this dataset. The results indicate that noise in the training dataset hampers the model in learning rare categories and decreases overall performance, and they inspire us to explore more effective approaches to address this practical challenge. The code and dataset are available at https://github.com/GuanlinLee/Noisy-LVIS.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 06:34:29 GMT" }, { "version": "v2", "created": "Sat, 15 Jul 2023 08:42:40 GMT" } ]
2023-07-18T00:00:00
[ [ "Li", "Guanlin", "" ], [ "Xu", "Guowen", "" ], [ "Zhang", "Tianwei", "" ] ]
new_dataset
0.997387
2212.05468
Maxwell Pirtle
Maxwell Pirtle (Northeastern University, USA), Luka Jovanovic (Northeastern University, USA), Gene Cooperman (Northeastern University, USA)
McMini: A Programmable DPOR-Based Model Checker for Multithreaded Programs
null
The Art, Science, and Engineering of Programming, 2024, Vol. 8, Issue 1, Article 1
10.22152/programming-journal.org/2024/8/1
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context: Model checking has become a key tool for gaining confidence in the correctness of multi-threaded programs. Unit tests and functional tests do not suffice because of race conditions that are not discovered by those tests. This problem is addressed by model checking tools. A simple model checker is useful for detecting race conditions prior to production. Inquiry: Current model checkers hardwire the behavior of common thread operations, and do not recognize application-dependent thread paradigms or functions built from simpler primitive operations. This introduces additional operations, causing current model checkers to be excessively slow. In addition, there is no mechanism to model the semantics of the actual thread wakeup policies implemented in the underlying thread library or operating system. Eliminating these constraints can make model checkers faster. Approach: McMini is an **extensible** model checker based on DPOR (Dynamic Partial Order Reduction). A mechanism was invented to declare to McMini new, primitive thread operations, typically in 100 lines or fewer of C code. The mechanism was extended to also allow a user of McMini to declare alternative thread wakeup policies, including spurious wakeups from condition variables. Knowledge: In McMini, the user defines new thread operations. The user optimizes these operations by declaring to the DPOR algorithm information that reduces the number of thread schedules to be searched. One declares: (i) under what conditions an operation is enabled; (ii) which thread operations are independent of each other; and (iii) when two operations can be considered co-enabled. (A Python sketch of these three declarations follows this record.) An optional wakeup policy is implemented by defining when a wait operation (on a semaphore, condition variable, etc.) is enabled. A new enqueue thread operation is described, allowing a user to declare alternative wakeup policies. Grounding: McMini was first confirmed to operate correctly and efficiently as a traditional, but extensible, model checker for mutexes, semaphores, condition variables, and reader-writer locks. McMini's extensibility was then tested on novel primitive operations representing other useful paradigms for multithreaded operations. An example is readers-and-two-writers. Model checking was found to be five times faster or more, as compared to traditional implementations on top of condition variables. Alternative wakeup policies (e.g., FIFO, LIFO, arbitrary, etc.) were then tested using an enqueue operation. Finally, spurious wakeups were tested with a program that exposes a bug **only** in the presence of a spurious wakeup. Importance: Many applications employ functions for multithreaded paradigms that go beyond the traditional mutex, semaphore, and condition variables. They are defined on top of basic operations. The ability to directly define new primitives for these paradigms makes model checkers run faster by searching fewer thread schedules. The ability to model particular thread wakeup policies, including spurious wakeups for condition variables, is also important. Note that POSIX leaves undefined the wakeup policies of `pthread_mutex_lock`, `sem_wait`, and `pthread_cond_wait`. The POSIX thread implementation then chooses a particular policy (e.g., FIFO, arbitrary), which can be directly modeled by McMini.
[ { "version": "v1", "created": "Sun, 11 Dec 2022 10:52:47 GMT" }, { "version": "v2", "created": "Fri, 14 Jul 2023 18:30:14 GMT" } ]
2023-07-18T00:00:00
[ [ "Pirtle", "Maxwell", "", "Northeastern University, USA" ], [ "Jovanovic", "Luka", "", "Northeastern University, USA" ], [ "Cooperman", "Gene", "", "Northeastern University, USA" ] ]
new_dataset
0.993082
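For intuition, here is a minimal Python sketch of the three DPOR declarations listed in the abstract above. This is a generic illustration, not McMini's actual C interface; the `Operation` record and the semaphore example are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Operation:
    """Generic DPOR-style declaration for a primitive thread operation:
    the three predicates mirror declarations (i)-(iii) in the abstract."""
    name: str
    enabled: Callable[[dict], bool]             # (i) may this op run in this state?
    independent: Callable[["Operation"], bool]  # (ii) does it commute with another op?
    co_enabled: Callable[["Operation"], bool]   # (iii) can both be enabled at once?

# Hypothetical example: sem_wait is enabled only when the count is positive,
# and waits on *different* semaphores are independent and co-enabled.
def make_sem_wait(sem_id):
    return Operation(
        name=f"sem_wait({sem_id})",
        enabled=lambda state: state["sem"][sem_id] > 0,
        independent=lambda other: other.name != f"sem_wait({sem_id})",
        co_enabled=lambda other: True,
    )
```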
2212.08071
Po-Yao Huang
Po-Yao Huang, Vasu Sharma, Hu Xu, Chaitanya Ryali, Haoqi Fan, Yanghao Li, Shang-Wen Li, Gargi Ghosh, Jitendra Malik, Christoph Feichtenhofer
MAViL: Masked Audio-Video Learners
Technical report
null
null
null
cs.CV cs.MM cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives (a generic sketch of the masked-reconstruction term follows this record). Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state of the art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks.
[ { "version": "v1", "created": "Thu, 15 Dec 2022 18:59:59 GMT" }, { "version": "v2", "created": "Mon, 17 Jul 2023 05:44:35 GMT" } ]
2023-07-18T00:00:00
[ [ "Huang", "Po-Yao", "" ], [ "Sharma", "Vasu", "" ], [ "Xu", "Hu", "" ], [ "Ryali", "Chaitanya", "" ], [ "Fan", "Haoqi", "" ], [ "Li", "Yanghao", "" ], [ "Li", "Shang-Wen", "" ], [ "Ghosh", "Gargi", "" ], [ "Malik", "Jitendra", "" ], [ "Feichtenhofer", "Christoph", "" ] ]
new_dataset
0.999598
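As a sketch of objective (1) above, here is a generic masked-autoencoding loss applicable per modality. The random token masking, the padding-based decode, and the MSE target are standard MAE-style choices assumed for illustration; MAViL's actual encoders and decoders differ.

```python
import torch
import torch.nn.functional as F

def masked_recon_loss(tokens, encoder, decoder, mask_ratio=0.8):
    """MAE-style masked reconstruction for one modality: hide most tokens,
    reconstruct the full sequence, and score only the masked positions.
    `encoder`/`decoder` are any modules mapping (B, N, D) -> (B, N, D)."""
    B, N, D = tokens.shape
    n_keep = max(1, int(N * (1 - mask_ratio)))
    perm = torch.rand(B, N, device=tokens.device).argsort(dim=1)
    keep = perm[:, :n_keep]                                   # indices of visible tokens
    visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, D))
    ctx = encoder(visible)                                    # (B, n_keep, D)
    recon = decoder(F.pad(ctx, (0, 0, 0, N - n_keep)))        # pad back to N positions
    masked = torch.ones(B, N, dtype=torch.bool, device=tokens.device)
    masked.scatter_(1, keep, False)                           # True where tokens were hidden
    return F.mse_loss(recon[masked], tokens[masked])

# Usage (illustrative): loss = masked_recon_loss(audio_tokens, enc_a, dec_a) + \
#                              masked_recon_loss(video_tokens, enc_v, dec_v)
```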
2212.12912
Israel Leyva-Mayorga
Israel Leyva-Mayorga, Marc M. Gost, Marco Moretti, Ana Pérez-Neira, Miguel Ángel Vázquez, Petar Popovski, and Beatriz Soret
Satellite edge computing for real-time and very-high resolution Earth observation
To be published in IEEE Transactions on Communications
null
10.1109/TCOMM.2023.3296584
null
cs.NI astro-ph.IM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In real-time and high-resolution Earth observation imagery, Low Earth Orbit (LEO) satellites capture images that are subsequently transmitted to ground to create an updated map of an area of interest. Such maps provide valuable information for meteorology or environmental monitoring, but can also be employed in near-real-time operation for disaster detection, identification, and management. However, the amount of data generated by these applications can easily exceed the communication capabilities of LEO satellites, leading to congestion and packet dropping. To avoid these problems, Inter-Satellite Links (ISLs) can be used to distribute the data among the satellites for processing. In this paper, we address an energy minimization problem based on a general satellite mobile edge computing (SMEC) framework for real-time and very-high-resolution Earth observation. Our results illustrate that the optimal allocation of data and selection of the compression parameters increase the number of images that the system can support by a factor of 12 when compared to directly downloading the data. Further, energy savings greater than 11% were observed in a real-life scenario of imaging a volcanic island, while a sensitivity analysis of the image acquisition process demonstrates that potential energy savings can be as high as 92%.
[ { "version": "v1", "created": "Sun, 25 Dec 2022 14:36:37 GMT" }, { "version": "v2", "created": "Sun, 16 Jul 2023 16:27:55 GMT" } ]
2023-07-18T00:00:00
[ [ "Leyva-Mayorga", "Israel", "" ], [ "Gost", "Marc M.", "" ], [ "Moretti", "Marco", "" ], [ "Pérez-Neira", "Ana", "" ], [ "Vázquez", "Miguel Ángel", "" ], [ "Popovski", "Petar", "" ], [ "Soret", "Beatriz", "" ] ]
new_dataset
0.992708
2301.01319
Bryce Doerr
Bryce Doerr, Keenan Albee, Monica Ekal, Rodrigo Ventura, Richard Linares
The ReSWARM Microgravity Flight Experiments: Planning, Control, and Model Estimation for On-Orbit Close Proximity Operations
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On-orbit close proximity operations involve robotic spacecraft maneuvering and making decisions for a growing number of mission scenarios demanding autonomy, including on-orbit assembly, repair, and astronaut assistance. Of these scenarios, on-orbit assembly is an enabling technology that will allow large space structures to be built in-situ, using smaller building block modules. However, robotic on-orbit assembly involves a number of technical hurdles such as changing system models. For instance, grappled modules moved by a free-flying "assembler" robot can cause significant shifts in system inertial properties, which have cascading impacts on the motion planning and control portions of the autonomy stack. Further, on-orbit assembly and other scenarios require collision-avoiding motion planning, particularly when operating in a "construction site" scenario of multiple assembler robots and structures. These complicating factors, relevant to many autonomous microgravity robotics use cases, are tackled in the ReSWARM flight experiments as a set of tests on the International Space Station using NASA's Astrobee robots. RElative Satellite sWarming and Robotic Maneuvering, or ReSWARM, demonstrates multiple key technologies for close proximity operations and on-orbit assembly: (1) global long-horizon planning, accomplished using offline and online sampling-based planner options that consider the system dynamics; (2) on-orbit reconfiguration model learning, using the recently-proposed RATTLE information-aware planning framework; and (3) robust control tools to provide low-level control robustness using current system knowledge. These approaches are detailed individually and in an "on-orbit assembly scenario" of multi-waypoint tracking on-orbit. Additionally, details are provided on the practicalities of the hardware implementation and the unique aspects of working with Astrobee in microgravity.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 19:18:04 GMT" }, { "version": "v2", "created": "Sun, 16 Jul 2023 18:22:21 GMT" } ]
2023-07-18T00:00:00
[ [ "Doerr", "Bryce", "" ], [ "Albee", "Keenan", "" ], [ "Ekal", "Monica", "" ], [ "Ventura", "Rodrigo", "" ], [ "Linares", "Richard", "" ] ]
new_dataset
0.99978
2302.06494
Yanjun Liu
Yanjun Liu and Wenming Yang
Explicit3D: Graph Network with Spatial Inference for Single Image 3D Object Detection
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indoor 3D object detection is an essential task in single-image scene understanding, and it fundamentally affects spatial cognition in visual reasoning. Existing works on 3D object detection from a single image either pursue this goal through independent predictions of each object or implicitly reason over all possible objects, failing to harness the relational geometric information between objects. To address this problem, we propose a dynamic sparse graph pipeline named Explicit3D based on object geometry and semantics features. Taking efficiency into consideration, we further define a relatedness score and design a novel dynamic pruning algorithm followed by a cluster sampling method for sparse scene graph generation and updating. Furthermore, our Explicit3D introduces homogeneous matrices and defines a new relative loss and corner loss to model the spatial difference between target pairs explicitly (a toy version of these losses is sketched after this record). Instead of using ground-truth labels as direct supervision, our relative and corner losses are derived from the homogeneous transformation, which encourages the model to learn geometric consistency between objects. The experimental results on the SUN RGB-D dataset demonstrate that our Explicit3D achieves a better performance balance than the state of the art.
[ { "version": "v1", "created": "Mon, 13 Feb 2023 16:19:54 GMT" }, { "version": "v2", "created": "Sat, 15 Jul 2023 10:25:29 GMT" } ]
2023-07-18T00:00:00
[ [ "Liu", "Yanjun", "" ], [ "Yang", "Wenming", "" ] ]
new_dataset
0.957856
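The following PyTorch sketch shows one plausible reading of losses built from 4x4 homogeneous object poses: supervise the relative transform between a pair of objects, and the 3D box corners, rather than each absolute pose. The L1 penalties and shapes are assumptions for illustration; the paper's exact weighting and parameterization may differ.

```python
import torch

def relative_pose_loss(T_pred_i, T_pred_j, T_gt_i, T_gt_j):
    """Relative loss over a pair of 4x4 homogeneous transforms: compare the
    predicted object-i-to-object-j transform with the ground-truth one."""
    rel_pred = torch.linalg.inv(T_pred_i) @ T_pred_j
    rel_gt = torch.linalg.inv(T_gt_i) @ T_gt_j
    return torch.mean(torch.abs(rel_pred - rel_gt))

def corner_loss(corners_pred, corners_gt):
    """Corner loss: L1 distance between predicted and ground-truth 3D box
    corners, each of shape (..., 8, 3)."""
    return torch.mean(torch.abs(corners_pred - corners_gt))
```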
2302.13509
Jing Liang
Jing Liang, Sanghyun Son, Ming Lin, Dinesh Manocha
GeoLCR: Attention-based Geometric Loop Closure and Registration
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel algorithm, specially designed for loop detection and registration, that utilizes Lidar-based perception. Our approach to loop detection involves voxelizing point clouds, followed by an overlap calculation to confirm whether a vehicle has completed a loop (a minimal version of this overlap test is sketched after this record). We further enhance the accuracy of the current pose via an innovative point-level registration model. The efficacy of our algorithm has been assessed across a range of well-known datasets, including KITTI, KITTI-360, Nuscenes, Complex Urban, NCLT, and MulRan. Compared with prior methods, ours exhibits up to a twofold increase in the precision of both translation and rotation estimates. Particularly noteworthy is our method's performance on challenging sequences, where it outperforms others and is the first to achieve a perfect 100% success rate in loop detection.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 04:16:16 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 01:56:31 GMT" }, { "version": "v3", "created": "Wed, 1 Mar 2023 18:54:09 GMT" }, { "version": "v4", "created": "Thu, 2 Mar 2023 15:14:05 GMT" }, { "version": "v5", "created": "Sat, 4 Mar 2023 03:08:17 GMT" }, { "version": "v6", "created": "Mon, 17 Jul 2023 02:33:00 GMT" } ]
2023-07-18T00:00:00
[ [ "Liang", "Jing", "" ], [ "Son", "Sanghyun", "" ], [ "Lin", "Ming", "" ], [ "Manocha", "Dinesh", "" ] ]
new_dataset
0.999423
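Here is a minimal NumPy sketch of the voxelize-then-overlap test for loop detection described above. The voxel size, the normalization by the smaller cloud, and the acceptance threshold are illustrative assumptions, not GeoLCR's tuned values.

```python
import numpy as np

def voxel_overlap(cloud_a, cloud_b, voxel=0.5):
    """Voxelize two (N, 3) Lidar point clouds and measure how many occupied
    voxels they share; a high ratio suggests the place has been revisited."""
    keys_a = {tuple(k) for k in np.floor(cloud_a / voxel).astype(int)}
    keys_b = {tuple(k) for k in np.floor(cloud_b / voxel).astype(int)}
    shared = len(keys_a & keys_b)
    return shared / max(1, min(len(keys_a), len(keys_b)))

# Illustrative use: accept a loop candidate when the overlap is high enough,
# e.g. if voxel_overlap(current_scan, candidate_scan) > 0.3: run registration.
```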
2303.03323
Hritik Bansal
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
22 pages. Accepted at ICCV 2023
null
null
null
cs.CV cs.AI cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal contrastive pretraining has been used to train multimodal representation models, such as CLIP, on large amounts of paired image-text data. However, previous studies have revealed that such models are vulnerable to backdoor attacks. Specifically, when trained on backdoored examples, CLIP learns spurious correlations between the embedded backdoor trigger and the target label, aligning their representations in the joint embedding space. Injecting even a small number of poisoned examples, such as 75 examples among 3 million pretraining pairs, can significantly manipulate the model's behavior, making it difficult to detect or unlearn such correlations. To address this issue, we propose CleanCLIP, a finetuning framework that weakens the learned spurious associations introduced by backdoor attacks by independently re-aligning the representations for individual modalities. We demonstrate that unsupervised finetuning using a combination of multimodal contrastive and unimodal self-supervised objectives for individual modalities can significantly reduce the impact of the backdoor attack (a sketch of this combined objective follows this record). Additionally, we show that supervised finetuning on task-specific labeled image data removes the backdoor trigger from the CLIP vision encoder. We show empirically that CleanCLIP maintains model performance on benign examples while erasing a range of backdoor attacks on multimodal contrastive learning. The code and checkpoints are available at https://github.com/nishadsinghi/CleanCLIP.
[ { "version": "v1", "created": "Mon, 6 Mar 2023 17:48:32 GMT" }, { "version": "v2", "created": "Wed, 8 Mar 2023 07:04:14 GMT" }, { "version": "v3", "created": "Mon, 17 Jul 2023 06:03:16 GMT" } ]
2023-07-18T00:00:00
[ [ "Bansal", "Hritik", "" ], [ "Singhi", "Nishad", "" ], [ "Yang", "Yu", "" ], [ "Yin", "Fan", "" ], [ "Grover", "Aditya", "" ], [ "Chang", "Kai-Wei", "" ] ]
new_dataset
0.977964
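To illustrate the combined objective above, here is a self-contained PyTorch sketch: a standard symmetric image-text InfoNCE term plus per-modality contrastive terms between two augmented views, which re-align each modality independently. The weighting `lam` and the view-generation details are illustrative assumptions, not CleanCLIP's exact recipe.

```python
import torch
import torch.nn.functional as F

def info_nce(x, y, temperature=0.07):
    """Symmetric InfoNCE: matched rows of x and y are positives."""
    x = F.normalize(x, dim=1)
    y = F.normalize(y, dim=1)
    logits = x @ y.T / temperature
    labels = torch.arange(x.size(0), device=x.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

def cleanclip_style_loss(img, txt, img_aug, txt_aug, lam=1.0):
    """Multimodal alignment plus independent unimodal re-alignment:
    the unimodal terms weaken spurious trigger-label associations."""
    l_multimodal = info_nce(img, txt)          # CLIP-style image-text term
    l_image = info_nce(img, img_aug)           # two augmented image views
    l_text = info_nce(txt, txt_aug)            # two augmented text views
    return l_multimodal + lam * (l_image + l_text)
```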