id
stringlengths 9
10
| submitter
stringlengths 2
52
⌀ | authors
stringlengths 4
6.51k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
345
⌀ | doi
stringlengths 11
120
⌀ | report-no
stringlengths 2
243
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | abstract
stringlengths 33
3.33k
| versions
list | update_date
timestamp[s] | authors_parsed
list | prediction
stringclasses 1
value | probability
float64 0.95
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.05612
|
D. Adriana G\'omez-Rosal
|
D. Adriana G\'omez-Rosal, Max Bergau, Georg K.J. Fischer, Andreas
Wachaja, Johannes Gr\"ater, Matthias Odenweller, Uwe Piechottka, Fabian
Hoeflinger, Nikhil Gosala, Niklas Wetzel, Daniel B\"uscher, Abhinav Valada,
Wolfram Burgard
|
A Smart Robotic System for Industrial Plant Supervision
|
Final submission for IEEE Sensors 2023
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's chemical plants, human field operators perform frequent integrity
checks to guarantee high safety standards, and thus are possibly the first to
encounter dangerous operating conditions. To alleviate their task, we present a
system consisting of an autonomously navigating robot integrated with various
sensors and intelligent data processing. It is able to detect methane leaks and
estimate its flow rate, detect more general gas anomalies, recognize oil films,
localize sound sources and detect failure cases, map the environment in 3D, and
navigate autonomously, employing recognition and avoidance of dynamic
obstacles. We evaluate our system at a wastewater facility in full working
conditions. Our results demonstrate that the system is able to robustly
navigate the plant and provide useful information about critical operating
conditions.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 14:54:21 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Sep 2023 15:50:30 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Gómez-Rosal",
"D. Adriana",
""
],
[
"Bergau",
"Max",
""
],
[
"Fischer",
"Georg K. J.",
""
],
[
"Wachaja",
"Andreas",
""
],
[
"Gräter",
"Johannes",
""
],
[
"Odenweller",
"Matthias",
""
],
[
"Piechottka",
"Uwe",
""
],
[
"Hoeflinger",
"Fabian",
""
],
[
"Gosala",
"Nikhil",
""
],
[
"Wetzel",
"Niklas",
""
],
[
"Büscher",
"Daniel",
""
],
[
"Valada",
"Abhinav",
""
],
[
"Burgard",
"Wolfram",
""
]
] |
new_dataset
| 0.991628 |
2308.13963
|
Palash Roy
|
Ajmain Inqiad Alam, Palash Ranjan Roy, Farouq Al-omari, Chanchal Kumar
Roy, Banani Roy, Kevin Schneider
|
GPTCloneBench: A comprehensive benchmark of semantic clones and
cross-language clones using GPT-3 model and SemanticCloneBench
|
Accepted in 39th IEEE International Conference on Software
Maintenance and Evolution(ICSME 2023)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the emergence of Machine Learning, there has been a surge in leveraging
its capabilities for problem-solving across various domains. In the code clone
realm, the identification of type-4 or semantic clones has emerged as a crucial
yet challenging task. Researchers aim to utilize Machine Learning to tackle
this challenge, often relying on the BigCloneBench dataset. However, it's worth
noting that BigCloneBench, originally not designed for semantic clone
detection, presents several limitations that hinder its suitability as a
comprehensive training dataset for this specific purpose. Furthermore, CLCDSA
dataset suffers from a lack of reusable examples aligning with real-world
software systems, rendering it inadequate for cross-language clone detection
approaches. In this work, we present a comprehensive semantic clone and
cross-language clone benchmark, GPTCloneBench by exploiting SemanticCloneBench
and OpenAI's GPT-3 model. In particular, using code fragments from
SemanticCloneBench as sample inputs along with appropriate prompt engineering
for GPT-3 model, we generate semantic and cross-language clones for these
specific fragments and then conduct a combination of extensive manual analysis,
tool-assisted filtering, functionality testing and automated validation in
building the benchmark. From 79,928 clone pairs of GPT-3 output, we created a
benchmark with 37,149 true semantic clone pairs, 19,288 false semantic
pairs(Type-1/Type-2), and 20,770 cross-language clones across four languages
(Java, C, C#, and Python). Our benchmark is 15-fold larger than
SemanticCloneBench, has more functional code examples for software systems and
programming language support than CLCDSA, and overcomes BigCloneBench's
qualities, quantification, and language variety limitations.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 21:50:34 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Sep 2023 17:44:38 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Alam",
"Ajmain Inqiad",
""
],
[
"Roy",
"Palash Ranjan",
""
],
[
"Al-omari",
"Farouq",
""
],
[
"Roy",
"Chanchal Kumar",
""
],
[
"Roy",
"Banani",
""
],
[
"Schneider",
"Kevin",
""
]
] |
new_dataset
| 0.999592 |
2308.14221
|
Zinuo Li
|
Zinuo Li, Xuhang Chen, Chi-Man Pun, Xiaodong Cun
|
High-Resolution Document Shadow Removal via A Large-Scale Real-World
Dataset and A Frequency-Aware Shadow Erasing Net
|
Accepted by International Conference on Computer Vision 2023 (ICCV
2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shadows often occur when we capture the documents with casual equipment,
which influences the visual quality and readability of the digital copies.
Different from the algorithms for natural shadow removal, the algorithms in
document shadow removal need to preserve the details of fonts and figures in
high-resolution input. Previous works ignore this problem and remove the
shadows via approximate attention and small datasets, which might not work in
real-world situations. We handle high-resolution document shadow removal
directly via a larger-scale real-world dataset and a carefully designed
frequency-aware network. As for the dataset, we acquire over 7k couples of
high-resolution (2462 x 3699) images of real-world document pairs with various
samples under different lighting circumstances, which is 10 times larger than
existing datasets. As for the design of the network, we decouple the
high-resolution images in the frequency domain, where the low-frequency details
and high-frequency boundaries can be effectively learned via the carefully
designed network structure. Powered by our network and dataset, the proposed
method clearly shows a better performance than previous methods in terms of
visual quality and numerical results. The code, models, and dataset are
available at: https://github.com/CXH-Research/DocShadow-SD7K
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 22:45:24 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Aug 2023 02:50:25 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Sep 2023 04:16:20 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Li",
"Zinuo",
""
],
[
"Chen",
"Xuhang",
""
],
[
"Pun",
"Chi-Man",
""
],
[
"Cun",
"Xiaodong",
""
]
] |
new_dataset
| 0.999678 |
2309.00005
|
Ali Zia
|
Yajie Sun, Ali Zia and Jun Zhou
|
High Spectral Spatial Resolution Synthetic HyperSpectral Dataset form
multi-source fusion
|
IJCNN workshop on Multimodal Synthetic Data for Deep Neural Networks
(MSynD), 2023
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This research paper introduces a synthetic hyperspectral dataset that
combines high spectral and spatial resolution imaging to achieve a
comprehensive, accurate, and detailed representation of observed scenes or
objects. Obtaining such desirable qualities is challenging when relying on a
single camera. The proposed dataset addresses this limitation by leveraging
three modalities: RGB, push-broom visible hyperspectral camera, and snapshot
infrared hyperspectral camera, each offering distinct spatial and spectral
resolutions. Different camera systems exhibit varying photometric properties,
resulting in a trade-off between spatial and spectral resolution. RGB cameras
typically offer high spatial resolution but limited spectral resolution, while
hyperspectral cameras possess high spectral resolution at the expense of
spatial resolution. Moreover, hyperspectral cameras themselves employ different
capturing techniques and spectral ranges, further complicating the acquisition
of comprehensive data. By integrating the photometric properties of these
modalities, a single synthetic hyperspectral image can be generated,
facilitating the exploration of broader spectral-spatial relationships for
improved analysis, monitoring, and decision-making across various fields. This
paper emphasizes the importance of multi-modal fusion in producing a
high-quality synthetic hyperspectral dataset with consistent spectral intervals
between bands.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 11:17:12 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Sun",
"Yajie",
""
],
[
"Zia",
"Ali",
""
],
[
"Zhou",
"Jun",
""
]
] |
new_dataset
| 0.996924 |
2309.00119
|
Xinyi Wang
|
Xinyi Wang, Paolo Arcaini, Tao Yue, Shaukat Ali
|
QuCAT: A Combinatorial Testing Tool for Quantum Software
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increased developments in quantum computing, the availability of
systematic and automatic testing approaches for quantum programs is becoming
increasingly essential. To this end, we present the quantum software testing
tool QuCAT for combinatorial testing of quantum programs. QuCAT provides two
functionalities of use. With the first functionality, the tool generates a test
suite of a given strength (e.g., pair-wise). With the second functionality, it
generates test suites with increasing strength until a failure is triggered or
a maximum strength is reached. QuCAT uses two test oracles to check the
correctness of test outputs. We assess the cost and effectiveness of QuCAT with
3 faulty versions of 5 quantum programs. Results show that combinatorial test
suites with a low strength can find faults with limited cost, while a higher
strength performs better to trigger some difficult faults with relatively
higher cost. Repository: https://github.com/Simula-COMPLEX/qucat-tool Video:
https://youtu.be/UsqgOudKLio
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 20:17:38 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Wang",
"Xinyi",
""
],
[
"Arcaini",
"Paolo",
""
],
[
"Yue",
"Tao",
""
],
[
"Ali",
"Shaukat",
""
]
] |
new_dataset
| 0.999163 |
2309.00123
|
Erick Rodrigues
|
Jo\~ao V. C. Mazzochin and Gustavo Tiecker and Erick O. Rodrigues
|
Segmenta\c{c}\~ao e contagem de troncos de madeira utilizando deep
learning e processamento de imagens
|
in Portuguese language, International Conference on Production
Engineering - Americas 2022
| null | null | null |
cs.CV cs.GR cs.MS cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Counting objects in images is a pattern recognition problem that focuses on
identifying an element to determine its incidence and is approached in the
literature as Visual Object Counting (VOC). In this work, we propose a
methodology to count wood logs. First, wood logs are segmented from the image
background. This first segmentation step is obtained using the Pix2Pix
framework that implements Conditional Generative Adversarial Networks (CGANs).
Second, the clusters are counted using Connected Components. The average
accuracy of the segmentation exceeds 89% while the average amount of wood logs
identified based on total accounted is over 97%.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 20:24:14 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Mazzochin",
"João V. C.",
""
],
[
"Tiecker",
"Gustavo",
""
],
[
"Rodrigues",
"Erick O.",
""
]
] |
new_dataset
| 0.990355 |
2309.00149
|
Lino Rodriguez-Coayahuitl PhD
|
Lino Rodriguez-Coayahuitl, Alicia Morales-Reyes, Hugo Jair Escalante
|
TurboGP: A flexible and advanced python based GP library
| null | null | null | null |
cs.NE cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce TurboGP, a Genetic Programming (GP) library fully written in
Python and specifically designed for machine learning tasks. TurboGP implements
modern features not available in other GP implementations, such as island and
cellular population schemes, different types of genetic operations (migration,
protected crossovers), online learning, among other features. TurboGP's most
distinctive characteristic is its native support for different types of GP
nodes to allow different abstraction levels, this makes TurboGP particularly
useful for processing a wide variety of data sources.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 21:50:23 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Rodriguez-Coayahuitl",
"Lino",
""
],
[
"Morales-Reyes",
"Alicia",
""
],
[
"Escalante",
"Hugo Jair",
""
]
] |
new_dataset
| 0.999612 |
2309.00166
|
Reid Priedhorsky
|
Reid Priedhorsky (1), Jordan Ogas (1), Claude H. (Rusty) Davis IV (1),
Z. Noah Hounshel (1 and 2), Ashlyn Lee (1 and 3), Benjamin Stormer (1 and 4),
R. Shane Goff (1) ((1) Los Alamos National Laboratory, (2) University of
North Carolina Wilmington, (3) Colorado State University, (4) University of
Texas at Austin)
|
Charliecloud's layer-free, Git-based container build cache
|
12 pages, 12 figures
| null | null |
LA-UR 23-29388
|
cs.SE cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A popular approach to deploying scientific applications in high performance
computing (HPC) is Linux containers, which package an application and all its
dependencies as a single unit. This image is built by interpreting instructions
in a machine-readable recipe, which is faster with a build cache that stores
instruction results for re-use. The standard approach (used e.g. by Docker and
Podman) is a many-layered union filesystem, encoding differences between layers
as tar archives.
Our experiments show this performs similarly to layered caches on both build
time and disk usage, with a considerable advantage for many-instruction
recipes. Our approach also has structural advantages: better diff format, lower
cache overhead, and better file de-duplication. These results show that a
Git-based cache for layer-free container implementations is not only possible
but may outperform the layered approach on important dimensions.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 23:05:16 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Priedhorsky",
"Reid",
"",
"Rusty"
],
[
"Ogas",
"Jordan",
"",
"Rusty"
],
[
"H.",
"Claude",
"",
"Rusty"
],
[
"IV",
"Davis",
"",
"1 and 2"
],
[
"Hounshel",
"Z. Noah",
"",
"1 and 2"
],
[
"Lee",
"Ashlyn",
"",
"1 and 3"
],
[
"Stormer",
"Benjamin",
"",
"1 and 4"
],
[
"Goff",
"R. Shane",
""
]
] |
new_dataset
| 0.999191 |
2309.00216
|
Fei Gao
|
Fei Gao, Yifan Zhu, Chang Jiang, Nannan Wang
|
Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
|
To appear on ICCV'23
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a
given facial photo. Existing FSS methods merely rely on 2D representations of
facial semantic or appearance. However, professional human artists usually use
outlines or shadings to covey 3D geometry. Thus facial 3D geometry (e.g. depth
map) is extremely important for FSS. Besides, different artists may use diverse
drawing techniques and create multiple styles of sketches; but the style is
globally consistent in a sketch. Inspired by such observations, in this paper,
we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method. Specially,
we propose to dynamically modulate neuron activations based on a joint
consideration of both facial 3D geometry and 2D appearance, as well as globally
consistent style control. Besides, we use deformable convolutions at
coarse-scales to align deep features, for generating abstract and distinct
outlines. Experiments show that HIDA can generate high-quality sketches in
multiple styles, and significantly outperforms previous methods, over a large
range of challenging faces. Besides, HIDA allows precise style control of the
synthesized sketch, and generalizes well to natural scenes and other artistic
styles. Our code and results have been released online at:
https://github.com/AiArt-HDU/HIDA.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 02:27:05 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Gao",
"Fei",
""
],
[
"Zhu",
"Yifan",
""
],
[
"Jiang",
"Chang",
""
],
[
"Wang",
"Nannan",
""
]
] |
new_dataset
| 0.962591 |
2309.00230
|
Wai Chung Kwan
|
Wai-Chung Kwan, Huimin Wang, Hongru Wang, Zezhong Wang, Xian Wu,
Yefeng Zheng, Kam-Fai Wong
|
JoTR: A Joint Transformer and Reinforcement Learning Framework for
Dialog Policy Learning
|
Our code, models and other related resources are publicly available
at https://github.com/KwanWaiChung/JoTR
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
Its primary role is to determine the appropriate abstract response, commonly
referred to as the "dialogue action". Traditional DPL methodologies have
treated this as a sequential decision problem, using pre-defined action
candidates extracted from a corpus. However, these incomplete candidates can
significantly limit the diversity of responses and pose challenges when dealing
with edge cases, which are scenarios that occur only at extreme operating
parameters. To address these limitations, we introduce a novel framework, JoTR.
This framework is unique as it leverages a text-to-text Transformer-based model
to generate flexible dialogue actions. Unlike traditional methods, JoTR
formulates a word-level policy that allows for a more dynamic and adaptable
dialogue action generation, without the need for any action templates. This
setting enhances the diversity of responses and improves the system's ability
to handle edge cases effectively. In addition, JoTR employs reinforcement
learning with a reward-shaping mechanism to efficiently finetune the word-level
dialogue policy, which allows the model to learn from its interactions,
improving its performance over time. We conducted an extensive evaluation of
JoTR to assess its effectiveness. Our extensive evaluation shows that JoTR
achieves state-of-the-art performance on two benchmark dialogue modelling
tasks, as assessed by both user simulators and human evaluators.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 03:19:53 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Kwan",
"Wai-Chung",
""
],
[
"Wang",
"Huimin",
""
],
[
"Wang",
"Hongru",
""
],
[
"Wang",
"Zezhong",
""
],
[
"Wu",
"Xian",
""
],
[
"Zheng",
"Yefeng",
""
],
[
"Wong",
"Kam-Fai",
""
]
] |
new_dataset
| 0.980276 |
2309.00241
|
Saleh ValizadehSotubadi
|
Vahid Pashaei Rad, Vahid Azimi Rad, Saleh Valizadeh Sotubadi
|
Spiking based Cellular Learning Automata (SCLA) algorithm for mobile
robot motion formulation
| null | null | null | null |
cs.RO cs.NE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper a new method called SCLA which stands for Spiking based
Cellular Learning Automata is proposed for a mobile robot to get to the target
from any random initial point. The proposed method is a result of the
integration of both cellular automata and spiking neural networks. The
environment consists of multiple squares of the same size and the robot only
observes the neighboring squares of its current square. It should be stated
that the robot only moves either up and down or right and left. The environment
returns feedback to the learning automata to optimize its decision making in
the next steps resulting in cellular automata training. Simultaneously a
spiking neural network is trained to implement long term improvements and
reductions on the paths. The results show that the integration of both cellular
automata and spiking neural network ends up in reinforcing the proper paths and
training time reduction at the same time.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 04:16:23 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Rad",
"Vahid Pashaei",
""
],
[
"Rad",
"Vahid Azimi",
""
],
[
"Sotubadi",
"Saleh Valizadeh",
""
]
] |
new_dataset
| 0.99852 |
2309.00242
|
Sepideh Aghamolaei
|
Sepideh Aghamolaei and Mohammad Ghodsi
|
A Massively Parallel Dynamic Programming for Approximate Rectangle
Escape Problem
| null | null | null | null |
cs.CG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sublinear time complexity is required by the massively parallel computation
(MPC) model. Breaking dynamic programs into a set of sparse dynamic programs
that can be divided, solved, and merged in sublinear time.
The rectangle escape problem (REP) is defined as follows: For $n$
axis-aligned rectangles inside an axis-aligned bounding box $B$, extend each
rectangle in only one of the four directions: up, down, left, or right until it
reaches $B$ and the density $k$ is minimized, where $k$ is the maximum number
of extensions of rectangles to the boundary that pass through a point inside
bounding box $B$. REP is NP-hard for $k>1$. If the rectangles are points of a
grid (or unit squares of a grid), the problem is called the square escape
problem (SEP) and it is still NP-hard.
We give a $2$-approximation algorithm for SEP with $k\geq2$ with time
complexity $O(n^{3/2}k^2)$. This improves the time complexity of existing
algorithms which are at least quadratic. Also, the approximation ratio of our
algorithm for $k\geq 3$ is $3/2$ which is tight. We also give a
$8$-approximation algorithm for REP with time complexity $O(n\log n+nk)$ and
give a MPC version of this algorithm for $k=O(1)$ which is the first parallel
algorithm for this problem.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 04:23:15 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Aghamolaei",
"Sepideh",
""
],
[
"Ghodsi",
"Mohammad",
""
]
] |
new_dataset
| 0.982221 |
2309.00246
|
Areej Alhothali
|
Asma Abdulsalam, Areej Alhothali, Saleh Al-Ghamdi
|
Detecting Suicidality in Arabic Tweets Using Machine Learning and Deep
Learning Techniques
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media platforms have revolutionized traditional communication
techniques by enabling people globally to connect instantaneously, openly, and
frequently. People use social media to share personal stories and express their
opinion. Negative emotions such as thoughts of death, self-harm, and hardship
are commonly expressed on social media, particularly among younger generations.
As a result, using social media to detect suicidal thoughts will help provide
proper intervention that will ultimately deter others from self-harm and
committing suicide and stop the spread of suicidal ideation on social media. To
investigate the ability to detect suicidal thoughts in Arabic tweets
automatically, we developed a novel Arabic suicidal tweets dataset, examined
several machine learning models, including Na\"ive Bayes, Support Vector
Machine, K-Nearest Neighbor, Random Forest, and XGBoost, trained on word
frequency and word embedding features, and investigated the ability of
pre-trained deep learning models, AraBert, AraELECTRA, and AraGPT2, to identify
suicidal thoughts in Arabic tweets. The results indicate that SVM and RF models
trained on character n-gram features provided the best performance in the
machine learning models, with 86% accuracy and an F1 score of 79%. The results
of the deep learning models show that AraBert model outperforms other machine
and deep learning models, achieving an accuracy of 91\% and an F1-score of 88%,
which significantly improves the detection of suicidal ideation in the Arabic
tweets dataset. To the best of our knowledge, this is the first study to
develop an Arabic suicidality detection dataset from Twitter and to use
deep-learning approaches in detecting suicidality in Arabic posts.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 04:30:59 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Abdulsalam",
"Asma",
""
],
[
"Alhothali",
"Areej",
""
],
[
"Al-Ghamdi",
"Saleh",
""
]
] |
new_dataset
| 0.992777 |
2309.00297
|
Minghao Zhu
|
Minghao Zhu, Xiao Lin, Ronghao Dang, Chengju Liu, and Qijun Chen
|
Fine-Grained Spatiotemporal Motion Alignment for Contrastive Video
Representation Learning
|
ACM MM 2023 Camera Ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the most essential property in a video, motion information is critical to
a robust and generalized video representation. To inject motion dynamics,
recent works have adopted frame difference as the source of motion information
in video contrastive learning, considering the trade-off between quality and
cost. However, existing works align motion features at the instance level,
which suffers from spatial and temporal weak alignment across modalities. In
this paper, we present a \textbf{Fi}ne-grained \textbf{M}otion
\textbf{A}lignment (FIMA) framework, capable of introducing well-aligned and
significant motion information. Specifically, we first develop a dense
contrastive learning framework in the spatiotemporal domain to generate
pixel-level motion supervision. Then, we design a motion decoder and a
foreground sampling strategy to eliminate the weak alignments in terms of time
and space. Moreover, a frame-level motion contrastive loss is presented to
improve the temporal diversity of the motion features. Extensive experiments
demonstrate that the representations learned by FIMA possess great
motion-awareness capabilities and achieve state-of-the-art or competitive
results on downstream tasks across UCF101, HMDB51, and Diving48 datasets. Code
is available at \url{https://github.com/ZMHH-H/FIMA}.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 07:03:27 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Zhu",
"Minghao",
""
],
[
"Lin",
"Xiao",
""
],
[
"Dang",
"Ronghao",
""
],
[
"Liu",
"Chengju",
""
],
[
"Chen",
"Qijun",
""
]
] |
new_dataset
| 0.992323 |
2309.00320
|
Edgar Anarossi
|
Edgar Anarossi, Hirotaka Tahara, Naoto Komeno, and Takamitsu Matsubara
|
Deep Segmented DMP Networks for Learning Discontinuous Motions
|
7 pages, Accepted by the 2023 International Conference on Automation
Science and Engineering (CASE 2023)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Discontinuous motion which is a motion composed of multiple continuous
motions with sudden change in direction or velocity in between, can be seen in
state-aware robotic tasks. Such robotic tasks are often coordinated with sensor
information such as image. In recent years, Dynamic Movement Primitives (DMP)
which is a method for generating motor behaviors suitable for robotics has
garnered several deep learning based improvements to allow associations between
sensor information and DMP parameters. While the implementation of deep
learning framework does improve upon DMP's inability to directly associate to
an input, we found that it has difficulty learning DMP parameters for complex
motion which requires large number of basis functions to reconstruct. In this
paper we propose a novel deep learning network architecture called Deep
Segmented DMP Network (DSDNet) which generates variable-length segmented motion
by utilizing the combination of multiple DMP parameters predicting network
architecture, double-stage decoder network, and number of segments predictor.
The proposed method is evaluated on both artificial data (object cutting &
pick-and-place) and real data (object cutting) where our proposed method could
achieve high generalization capability, task-achievement, and data-efficiency
compared to previous method on generating discontinuous long-horizon motions.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 08:08:11 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Anarossi",
"Edgar",
""
],
[
"Tahara",
"Hirotaka",
""
],
[
"Komeno",
"Naoto",
""
],
[
"Matsubara",
"Takamitsu",
""
]
] |
new_dataset
| 0.996769 |
2309.00333
|
Junyi Shi
|
Junyi Shi and Tomasz Piotr Kucner
|
Learning State-Space Models for Mapping Spatial Motion Patterns
|
6 pages, 5 figures, to be published in ECMR 2023 conference
proceedings
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Mapping the surrounding environment is essential for the successful operation
of autonomous robots. While extensive research has focused on mapping geometric
structures and static objects, the environment is also influenced by the
movement of dynamic objects. Incorporating information about spatial motion
patterns can allow mobile robots to navigate and operate successfully in
populated areas. In this paper, we propose a deep state-space model that learns
the map representations of spatial motion patterns and how they change over
time at a certain place. To evaluate our methods, we use two different
datasets: one generated dataset with specific motion patterns and another with
real-world pedestrian data. We test the performance of our model by evaluating
its learning ability, mapping quality, and application to downstream tasks. The
results demonstrate that our model can effectively learn the corresponding
motion pattern, and has the potential to be applied to robotic application
tasks.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 08:40:15 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Shi",
"Junyi",
""
],
[
"Kucner",
"Tomasz Piotr",
""
]
] |
new_dataset
| 0.976783 |
2309.00348
|
Lingxiao Huang
|
Lingxiao Huang, Jung-Hsuan Wu, Chiching Wei, Wilson Li
|
MuraNet: Multi-task Floor Plan Recognition with Relation Attention
|
Document Analysis and Recognition - ICDAR 2023 Workshops. ICDAR 2023.
Lecture Notes in Computer Science, vol 14193. Springer, Cham
| null |
10.1007/978-3-031-41498-5_10
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recognition of information in floor plan data requires the use of
detection and segmentation models. However, relying on several single-task
models can result in ineffective utilization of relevant information when there
are multiple tasks present simultaneously. To address this challenge, we
introduce MuraNet, an attention-based multi-task model for segmentation and
detection tasks in floor plan data. In MuraNet, we adopt a unified encoder
called MURA as the backbone with two separated branches: an enhanced
segmentation decoder branch and a decoupled detection head branch based on
YOLOX, for segmentation and detection tasks respectively. The architecture of
MuraNet is designed to leverage the fact that walls, doors, and windows usually
constitute the primary structure of a floor plan's architecture. By jointly
training the model on both detection and segmentation tasks, we believe MuraNet
can effectively extract and utilize relevant features for both tasks. Our
experiments on the CubiCasa5k public dataset show that MuraNet improves
convergence speed during training compared to single-task models like U-Net and
YOLOv3. Moreover, we observe improvements in the average AP and IoU in
detection and segmentation tasks, respectively.Our ablation experiments
demonstrate that the attention-based unified backbone of MuraNet achieves
better feature extraction in floor plan recognition tasks, and the use of
decoupled multi-head branches for different tasks further improves model
performance. We believe that our proposed MuraNet model can address the
disadvantages of single-task models and improve the accuracy and efficiency of
floor plan data recognition.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 09:10:04 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Huang",
"Lingxiao",
""
],
[
"Wu",
"Jung-Hsuan",
""
],
[
"Wei",
"Chiching",
""
],
[
"Li",
"Wilson",
""
]
] |
new_dataset
| 0.999525 |
2309.00438
|
Anastassia Vybornova
|
Martin Fleischmann and Anastassia Vybornova
|
A shape-based heuristic for the detection of urban block artifacts in
street networks
|
Zenodo: https://doi.org/10.5281/zenodo.8300730 ; GitHub:
https://github.com/martinfleis/urban-block-artifacts
| null | null | null |
cs.CY physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Street networks are ubiquitous components of cities, guiding their
development and enabling movement from place to place; street networks are also
the critical components of many urban analytical methods. However, their graph
representation is often designed primarily for transportation purposes. This
representation is less suitable for other use cases where transportation
networks need to be simplified as a mandatory pre-processing step, e.g., in the
case of morphological analysis, visual navigation, or drone flight routing.
While the urgent demand for automated pre-processing methods comes from various
fields, it is still an unsolved challenge. In this article, we tackle this
challenge by proposing a cheap computational heuristic for the identification
of "face artifacts", i.e., geometries that are enclosed by transportation edges
but do not represent urban blocks. The heuristic is based on combining the
frequency distributions of shape compactness metrics and area measurements of
street network face polygons. We test our method on 131 globally sampled large
cities and show that it successfully identifies face artifacts in 89% of
analyzed cities. Our heuristic of detecting artifacts caused by data being
collected for another purpose is the first step towards an automated street
network simplification workflow. Moreover, the proposed face artifact index
uncovers differences in structural rules guiding the development of cities in
different world regions.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 13:11:35 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Fleischmann",
"Martin",
""
],
[
"Vybornova",
"Anastassia",
""
]
] |
new_dataset
| 0.982106 |
2309.00460
|
Johannes Flotzinger
|
Johannes Flotzinger, Philipp J. R\"osch, Thomas Braml
|
dacl10k: Benchmark for Semantic Bridge Damage Segmentation
|
23 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reliably identifying reinforced concrete defects (RCDs)plays a crucial role
in assessing the structural integrity, traffic safety, and long-term durability
of concrete bridges, which represent the most common bridge type worldwide.
Nevertheless, available datasets for the recognition of RCDs are small in terms
of size and class variety, which questions their usability in real-world
scenarios and their role as a benchmark. Our contribution to this problem is
"dacl10k", an exceptionally diverse RCD dataset for multi-label semantic
segmentation comprising 9,920 images deriving from real-world bridge
inspections. dacl10k distinguishes 12 damage classes as well as 6 bridge
components that play a key role in the building assessment and recommending
actions, such as restoration works, traffic load limitations or bridge
closures. In addition, we examine baseline models for dacl10k which are
subsequently evaluated. The best model achieves a mean intersection-over-union
of 0.42 on the test set. dacl10k, along with our baselines, will be openly
accessible to researchers and practitioners, representing the currently biggest
dataset regarding number of images and class diversity for semantic
segmentation in the bridge inspection domain.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 13:46:24 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Flotzinger",
"Johannes",
""
],
[
"Rösch",
"Philipp J.",
""
],
[
"Braml",
"Thomas",
""
]
] |
new_dataset
| 0.999832 |
2309.00465
|
Antony Della Vecchia
|
Antony Della Vecchia, Michael Joswig and Benjamin Lorenz
|
A FAIR File Format for Mathematical Software
| null | null | null | null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a generic JSON based file format which is suitable for
computations in computer algebra. This is implemented in the computer algebra
system OSCAR, but we also indicate how it can be used in a different context.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 14:03:44 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Della Vecchia",
"Antony",
""
],
[
"Joswig",
"Michael",
""
],
[
"Lorenz",
"Benjamin",
""
]
] |
new_dataset
| 0.999442 |
2309.00505
|
Quan Sun
|
Quan Sun, Wanjing Li and Qi Zhou
|
Rural Access Index: A global study
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Rural Access Index (RAI), one of the UN Sustainable Development Goal
indicators (SDG 9.1.1), represents the proportion of the rural population
residing within 2 km of all-season roads. It reflects the accessibility of
rural residents to transportation services and could provide guidance for the
improvement of road infrastructure. The primary deficiencies in assessing the
RAI include the limited studying area, its incomplete meaning and the absence
of correlation analysis with other influencing factors. To address these
issues, this study proposes the "Not-served Rural Population (NSRP)" as a
complementary indicator to RAI. Utilizing multi-source open data, we analysed
the spatial patterns of RAI and NSRP indicators for 203 countries and then
explored the correlation between these 2 indicators and other 10 relevant
factors. The main findings are as follows: 1) North America, Europe, and
Oceania exhibit relatively high RAI values (>80%) and low NSRP values (<1
million). In contrast, African regions have relatively low RAI values (<40%)
and high NSRP values (>5 million). There is a negative correlation between RAI
and NSRP. 2) There is spatial autocorrelation and significant imbalances in the
distribution of these two indicators. 3) The RAI exhibit a positive correlation
with the factors showing levels of the development of countries such as GDP,
education, indicating that improving the road infrastructure could reduce the
poverty rates and enhance access to education. And in contrast with RAI, NSRP
exhibit the completely negative correlations with these factors.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 14:52:14 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Sun",
"Quan",
""
],
[
"Li",
"Wanjing",
""
],
[
"Zhou",
"Qi",
""
]
] |
new_dataset
| 0.99282 |
2309.00526
|
YouHong Wang
|
Youhong Wang, Yunji Liang, Hao Xu, Shaohui Jiao, Hongkai Yu
|
SQLdepth: Generalizable Self-Supervised Fine-Structured Monocular Depth
Estimation
|
14 pages, 9 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, self-supervised monocular depth estimation has gained popularity
with numerous applications in autonomous driving and robotics. However,
existing solutions primarily seek to estimate depth from immediate visual
features, and struggle to recover fine-grained scene details with limited
generalization. In this paper, we introduce SQLdepth, a novel approach that can
effectively learn fine-grained scene structures from motion. In SQLdepth, we
propose a novel Self Query Layer (SQL) to build a self-cost volume and infer
depth from it, rather than inferring depth from feature maps. The self-cost
volume implicitly captures the intrinsic geometry of the scene within a single
frame. Each individual slice of the volume signifies the relative distances
between points and objects within a latent space. Ultimately, this volume is
compressed to the depth map via a novel decoding approach. Experimental results
on KITTI and Cityscapes show that our method attains remarkable
state-of-the-art performance (AbsRel = $0.082$ on KITTI, $0.052$ on KITTI with
improved ground-truth and $0.106$ on Cityscapes), achieves $9.9\%$, $5.5\%$ and
$4.5\%$ error reduction from the previous best. In addition, our approach
showcases reduced training complexity, computational efficiency, improved
generalization, and the ability to recover fine-grained scene details.
Moreover, the self-supervised pre-trained and metric fine-tuned SQLdepth can
surpass existing supervised methods by significant margins (AbsRel = $0.043$,
$14\%$ error reduction). self-matching-oriented relative distance querying in
SQL improves the robustness and zero-shot generalization capability of
SQLdepth. Code and the pre-trained weights will be publicly available. Code is
available at
\href{https://github.com/hisfog/SQLdepth-Impl}{https://github.com/hisfog/SQLdepth-Impl}.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 15:27:45 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Wang",
"Youhong",
""
],
[
"Liang",
"Yunji",
""
],
[
"Xu",
"Hao",
""
],
[
"Jiao",
"Shaohui",
""
],
[
"Yu",
"Hongkai",
""
]
] |
new_dataset
| 0.979115 |
2309.00550
|
Andreea Iana
|
Andreea Iana, Mehwish Alam, Alexander Grote, Nevena Nikolajevic,
Katharina Ludwig, Philipp M\"uller, Christof Weinhardt, Heiko Paulheim
|
NeMig -- A Bilingual News Collection and Knowledge Graph about Migration
|
Accepted at the 11th International Workshop on News Recommendation
and Analytics (INRA 2023) in conjunction with ACM RecSys 2023
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
News recommendation plays a critical role in shaping the public's worldviews
through the way in which it filters and disseminates information about
different topics. Given the crucial impact that media plays in opinion
formation, especially for sensitive topics, understanding the effects of
personalized recommendation beyond accuracy has become essential in today's
digital society. In this work, we present NeMig, a bilingual news collection on
the topic of migration, and corresponding rich user data. In comparison to
existing news recommendation datasets, which comprise a large variety of
monolingual news, NeMig covers articles on a single controversial topic,
published in both Germany and the US. We annotate the sentiment polarization of
the articles and the political leanings of the media outlets, in addition to
extracting subtopics and named entities disambiguated through Wikidata. These
features can be used to analyze the effects of algorithmic news curation beyond
accuracy-based performance, such as recommender biases and the creation of
filter bubbles. We construct domain-specific knowledge graphs from the news
text and metadata, thus encoding knowledge-level connections between articles.
Importantly, while existing datasets include only click behavior, we collect
user socio-demographic and political information in addition to explicit click
feedback. We demonstrate the utility of NeMig through experiments on the tasks
of news recommenders benchmarking, analysis of biases in recommenders, and news
trends analysis. NeMig aims to provide a useful resource for the news
recommendation community and to foster interdisciplinary research into the
multidimensional effects of algorithmic news curation.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 15:59:14 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Iana",
"Andreea",
""
],
[
"Alam",
"Mehwish",
""
],
[
"Grote",
"Alexander",
""
],
[
"Nikolajevic",
"Nevena",
""
],
[
"Ludwig",
"Katharina",
""
],
[
"Müller",
"Philipp",
""
],
[
"Weinhardt",
"Christof",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
new_dataset
| 0.997278 |
2309.00610
|
Haozhe Xie
|
Haozhe Xie, Zhaoxi Chen, Fangzhou Hong, Ziwei Liu
|
CityDreamer: Compositional Generative Model of Unbounded 3D Cities
|
Project page: https://haozhexie.com/project/city-dreamer
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, extensive research has focused on 3D natural scene
generation, but the domain of 3D city generation has not received as much
exploration. This is due to the greater challenges posed by 3D city generation,
mainly because humans are more sensitive to structural distortions in urban
environments. Additionally, generating 3D cities is more complex than 3D
natural scenes since buildings, as objects of the same class, exhibit a wider
range of appearances compared to the relatively consistent appearance of
objects like trees in natural scenes. To address these challenges, we propose
CityDreamer, a compositional generative model designed specifically for
unbounded 3D cities, which separates the generation of building instances from
other background objects, such as roads, green lands, and water areas, into
distinct modules. Furthermore, we construct two datasets, OSM and GoogleEarth,
containing a vast amount of real-world city imagery to enhance the realism of
the generated 3D cities both in their layouts and appearances. Through
extensive experiments, CityDreamer has proven its superiority over
state-of-the-art methods in generating a wide range of lifelike 3D cities.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 17:57:02 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Xie",
"Haozhe",
""
],
[
"Chen",
"Zhaoxi",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.997126 |
2309.00615
|
Ziyu Guo
|
Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma,
Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, Pheng-Ann Heng
|
Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D
Understanding, Generation, and Instruction Following
|
Work in progress. Code is available at
https://github.com/ZiyuGuo99/Point-Bind_Point-LLM
| null | null | null |
cs.CV cs.AI cs.CL cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Point-Bind, a 3D multi-modality model aligning point clouds with
2D image, language, audio, and video. Guided by ImageBind, we construct a joint
embedding space between 3D and multi-modalities, enabling many promising
applications, e.g., any-to-3D generation, 3D embedding arithmetic, and 3D
open-world understanding. On top of this, we further present Point-LLM, the
first 3D large language model (LLM) following 3D multi-modal instructions. By
parameter-efficient fine-tuning techniques, Point-LLM injects the semantics of
Point-Bind into pre-trained LLMs, e.g., LLaMA, which requires no 3D instruction
data, but exhibits superior 3D and multi-modal question-answering capacity. We
hope our work may cast a light on the community for extending 3D point clouds
to multi-modality applications. Code is available at
https://github.com/ZiyuGuo99/Point-Bind_Point-LLM.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 17:59:47 GMT"
}
] | 2023-09-04T00:00:00 |
[
[
"Guo",
"Ziyu",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Zhu",
"Xiangyang",
""
],
[
"Tang",
"Yiwen",
""
],
[
"Ma",
"Xianzheng",
""
],
[
"Han",
"Jiaming",
""
],
[
"Chen",
"Kexin",
""
],
[
"Gao",
"Peng",
""
],
[
"Li",
"Xianzhi",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.998739 |
2011.01710
|
Guang Lin
|
Guang Lin, Jianhai Zhang, Yuxi Liu, Tianyang Gao, Wanzeng Kong, Xu
Lei, Tao Qiu
|
BCGGAN: Ballistocardiogram artifact removal in simultaneous EEG-fMRI
using generative adversarial network
| null |
Journal of Neuroscience Methods, Volume 371, 2022, 109498
|
10.1016/j.jneumeth.2022.109498
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to its advantages of high temporal and spatial resolution, the technology
of simultaneous electroencephalogram-functional magnetic resonance imaging
(EEG-fMRI) acquisition and analysis has attracted much attention, and has been
widely used in various research fields of brain science. However, during the
fMRI of the brain, ballistocardiogram (BCG) artifacts can seriously contaminate
the EEG. As an unpaired problem, BCG artifact removal now remains a
considerable challenge. Aiming to provide a solution, this paper proposed a
novel modular generative adversarial network (GAN) and corresponding training
strategy to improve the network performance by optimizing the parameters of
each module. In this manner, we hope to improve the local representation
ability of the network model, thereby improving its overall performance and
obtaining a reliable generator for BCG artifact removal. Moreover, the proposed
method does not rely on additional reference signal or complex hardware
equipment. Experimental results show that, compared with multiple methods, the
technique presented in this paper can remove the BCG artifact more effectively
while retaining essential EEG information.
|
[
{
"version": "v1",
"created": "Tue, 3 Nov 2020 13:54:01 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Nov 2020 01:39:34 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Aug 2023 06:39:04 GMT"
},
{
"version": "v4",
"created": "Wed, 30 Aug 2023 05:08:47 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Lin",
"Guang",
""
],
[
"Zhang",
"Jianhai",
""
],
[
"Liu",
"Yuxi",
""
],
[
"Gao",
"Tianyang",
""
],
[
"Kong",
"Wanzeng",
""
],
[
"Lei",
"Xu",
""
],
[
"Qiu",
"Tao",
""
]
] |
new_dataset
| 0.995379 |
2202.06201
|
Michael Rotman
|
Michael Rotman, Amit Dekel, Shir Gur, Yaron Oz, Lior Wolf
|
Unsupervised Disentanglement with Tensor Product Representations on the
Torus
|
Accepted to ICLR 2022
| null | null | null |
cs.LG cond-mat.dis-nn cs.CV quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The current methods for learning representations with auto-encoders almost
exclusively employ vectors as the latent representations. In this work, we
propose to employ a tensor product structure for this purpose. This way, the
obtained representations are naturally disentangled. In contrast to the
conventional variations methods, which are targeted toward normally distributed
features, the latent space in our representation is distributed uniformly over
a set of unit circles. We argue that the torus structure of the latent space
captures the generative factors effectively. We employ recent tools for
measuring unsupervised disentanglement, and in an extensive set of experiments
demonstrate the advantage of our method in terms of disentanglement,
completeness, and informativeness. The code for our proposed method is
available at https://github.com/rotmanmi/Unsupervised-Disentanglement-Torus.
|
[
{
"version": "v1",
"created": "Sun, 13 Feb 2022 04:23:12 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Rotman",
"Michael",
""
],
[
"Dekel",
"Amit",
""
],
[
"Gur",
"Shir",
""
],
[
"Oz",
"Yaron",
""
],
[
"Wolf",
"Lior",
""
]
] |
new_dataset
| 0.967332 |
2209.07745
|
Martin Zimmermann
|
Enzo Erlich, Shibashis Guha, Isma\"el Jecker, Karoliina Lehtinen,
Martin Zimmermann
|
History-deterministic Parikh Automata
|
arXiv admin note: text overlap with arXiv:2207.07694
| null | null | null |
cs.FL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Parikh automata extend finite automata by counters that can be tested for
membership in a semilinear set, but only at the end of a run. Thereby, they
preserve many of the desirable properties of finite automata. Deterministic
Parikh automata are strictly weaker than nondeterministic ones, but enjoy
better closure and algorithmic properties. This state of affairs motivates the
study of intermediate forms of nondeterminism. Here, we investigate
history-deterministic Parikh automata, i.e., automata whose nondeterminism can
be resolved on the fly. This restricted form of nondeterminism is well-suited
for applications which classically call for determinism, e.g., solving games
and composition. We show that history-deterministic Parikh automata are
strictly more expressive than deterministic ones, incomparable to unambiguous
ones, and enjoy almost all of the closure properties of deterministic automata.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 07:03:40 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 15:15:43 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Erlich",
"Enzo",
""
],
[
"Guha",
"Shibashis",
""
],
[
"Jecker",
"Ismaël",
""
],
[
"Lehtinen",
"Karoliina",
""
],
[
"Zimmermann",
"Martin",
""
]
] |
new_dataset
| 0.982205 |
2210.17484
|
Santiago Miret
|
Santiago Miret, Kin Long Kelvin Lee, Carmelo Gonzales, Marcel Nassar,
Matthew Spellings
|
The Open MatSci ML Toolkit: A Flexible Framework for Machine Learning in
Materials Science
|
Paper accompanying Open-Source Software from
https://github.com/IntelLabs/matsciml
|
Transactions on Machine Learning Research (2023)
| null |
2835-8856
|
cs.LG cond-mat.mtrl-sci cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Open MatSci ML Toolkit: a flexible, self-contained, and
scalable Python-based framework to apply deep learning models and methods on
scientific data with a specific focus on materials science and the OpenCatalyst
Dataset. Our toolkit provides: 1. A scalable machine learning workflow for
materials science leveraging PyTorch Lightning, which enables seamless scaling
across different computation capabilities (laptop, server, cluster) and
hardware platforms (CPU, GPU, XPU). 2. Deep Graph Library (DGL) support for
rapid graph neural network prototyping and development. By publishing and
sharing this toolkit with the research community via open-source release, we
hope to: 1. Lower the entry barrier for new machine learning researchers and
practitioners that want to get started with the OpenCatalyst dataset, which
presently comprises the largest computational materials science dataset. 2.
Enable the scientific community to apply advanced machine learning tools to
high-impact scientific challenges, such as modeling of materials behavior for
clean energy applications. We demonstrate the capabilities of our framework by
enabling three new equivariant neural network models for multiple OpenCatalyst
tasks and arrive at promising results for compute scaling and model
performance.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 17:11:36 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Miret",
"Santiago",
""
],
[
"Lee",
"Kin Long Kelvin",
""
],
[
"Gonzales",
"Carmelo",
""
],
[
"Nassar",
"Marcel",
""
],
[
"Spellings",
"Matthew",
""
]
] |
new_dataset
| 0.98892 |
2301.00454
|
Zhibin Zou
|
Zhibin Zou and Aveek Dutta
|
Waveforms for xG Non-stationary Channels
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Waveform design for interference cancellation in next-generation wireless
systems, which includes precoding and modulation, aims to achieve orthogonality
among data signals/symbols across all Degrees of Freedom (DoF). Conventional
methods struggle with non-stationary channel states due to high mobility,
density, and time-varying multipath propagation. In this article, we review the
HOGMT-Precoding and MEM modulations for non-stationary channels. We also
discuss practical challenges and future directions.
|
[
{
"version": "v1",
"created": "Sun, 1 Jan 2023 18:08:45 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2023 13:06:28 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Aug 2023 21:07:36 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Zou",
"Zhibin",
""
],
[
"Dutta",
"Aveek",
""
]
] |
new_dataset
| 0.996019 |
2302.00049
|
Simon Geisler
|
Simon Geisler, Yujia Li, Daniel Mankowitz, Ali Taylan Cemgil, Stephan
G\"unnemann, Cosmin Paduraru
|
Transformers Meet Directed Graphs
|
29 pages
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers were originally proposed as a sequence-to-sequence model for
text but have become vital for a wide range of modalities, including images,
audio, video, and undirected graphs. However, transformers for directed graphs
are a surprisingly underexplored topic, despite their applicability to
ubiquitous domains, including source code and logic circuits. In this work, we
propose two direction- and structure-aware positional encodings for directed
graphs: (1) the eigenvectors of the Magnetic Laplacian - a direction-aware
generalization of the combinatorial Laplacian; (2) directional random walk
encodings. Empirically, we show that the extra directionality information is
useful in various downstream tasks, including correctness testing of sorting
networks and source code understanding. Together with a data-flow-centric graph
construction, our model outperforms the prior state of the art on the Open
Graph Benchmark Code2 relatively by 14.7%.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 19:33:14 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 12:47:34 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Aug 2023 14:38:57 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Geisler",
"Simon",
""
],
[
"Li",
"Yujia",
""
],
[
"Mankowitz",
"Daniel",
""
],
[
"Cemgil",
"Ali Taylan",
""
],
[
"Günnemann",
"Stephan",
""
],
[
"Paduraru",
"Cosmin",
""
]
] |
new_dataset
| 0.999128 |
2302.03022
|
Alistair Weld
|
Joao Cartucho, Alistair Weld, Samyakh Tukra, Haozheng Xu, Hiroki
Matsuzaki, Taiyo Ishikawa, Minjun Kwon, Yong Eun Jang, Kwang-Ju Kim, Gwang
Lee, Bizhe Bai, Lueder Kahrs, Lars Boecking, Simeon Allmendinger, Leopold
Muller, Yitong Zhang, Yueming Jin, Sophia Bano, Francisco Vasconcelos,
Wolfgang Reiter, Jonas Hajek, Bruno Silva, Estevao Lima, Joao L. Vilaca,
Sandro Queiros, Stamatia Giannarou
|
SurgT challenge: Benchmark of Soft-Tissue Trackers for Robotic Surgery
| null | null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the ``SurgT: Surgical Tracking" challenge which was
organised in conjunction with MICCAI 2022. There were two purposes for the
creation of this challenge: (1) the establishment of the first standardised
benchmark for the research community to assess soft-tissue trackers; and (2) to
encourage the development of unsupervised deep learning methods, given the lack
of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20
clinical cases, along with stereo camera calibration parameters, have been
provided. Participants were assigned the task of developing algorithms to track
the movement of soft tissues, represented by bounding boxes, in stereo
endoscopic videos. At the end of the challenge, the developed methods were
assessed on a previously hidden test subset. This assessment uses benchmarking
metrics that were purposely developed for this challenge, to verify the
efficacy of unsupervised deep learning algorithms in tracking soft-tissue. The
metric used for ranking the methods was the Expected Average Overlap (EAO)
score, which measures the average overlap between a tracker's and the ground
truth bounding boxes. Coming first in the challenge was the deep learning
submission by ICVS-2Ai with a superior EAO score of 0.617. This method employs
ARFlow to estimate unsupervised dense optical flow from cropped images, using
photometric and regularization losses. Second, Jmees with an EAO of 0.583, uses
deep learning for surgical tool segmentation on top of a non-deep learning
baseline method: CSRT. CSRT by itself scores a similar EAO of 0.563. The
results from this challenge show that currently, non-deep learning methods are
still competitive. The dataset and benchmarking tool created for this challenge
have been made publicly available at https://surgt.grand-challenge.org/.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 18:57:30 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2023 15:09:40 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Aug 2023 20:36:09 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Cartucho",
"Joao",
""
],
[
"Weld",
"Alistair",
""
],
[
"Tukra",
"Samyakh",
""
],
[
"Xu",
"Haozheng",
""
],
[
"Matsuzaki",
"Hiroki",
""
],
[
"Ishikawa",
"Taiyo",
""
],
[
"Kwon",
"Minjun",
""
],
[
"Jang",
"Yong Eun",
""
],
[
"Kim",
"Kwang-Ju",
""
],
[
"Lee",
"Gwang",
""
],
[
"Bai",
"Bizhe",
""
],
[
"Kahrs",
"Lueder",
""
],
[
"Boecking",
"Lars",
""
],
[
"Allmendinger",
"Simeon",
""
],
[
"Muller",
"Leopold",
""
],
[
"Zhang",
"Yitong",
""
],
[
"Jin",
"Yueming",
""
],
[
"Bano",
"Sophia",
""
],
[
"Vasconcelos",
"Francisco",
""
],
[
"Reiter",
"Wolfgang",
""
],
[
"Hajek",
"Jonas",
""
],
[
"Silva",
"Bruno",
""
],
[
"Lima",
"Estevao",
""
],
[
"Vilaca",
"Joao L.",
""
],
[
"Queiros",
"Sandro",
""
],
[
"Giannarou",
"Stamatia",
""
]
] |
new_dataset
| 0.99962 |
2302.08761
|
Moritz Neun
|
Moritz Neun, Christian Eichenberger, Yanan Xin, Cheng Fu, Nina
Wiedemann, Henry Martin, Martin Tomko, Lukas Amb\"uhl, Luca Hermes, Michael
Kopp
|
Metropolitan Segment Traffic Speeds from Massive Floating Car Data in 10
Cities
|
Accepted by IEEE Transactions on Intelligent Transportation Systems
(T-ITS), DOI: https://doi.org/10.1109/TITS.2023.3291737
|
IEEE Transactions on Intelligent Transportation Systems (T-ITS),
2023
|
10.1109/TITS.2023.3291737
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic analysis is crucial for urban operations and planning, while the
availability of dense urban traffic data beyond loop detectors is still scarce.
We present a large-scale floating vehicle dataset of per-street segment traffic
information, Metropolitan Segment Traffic Speeds from Massive Floating Car Data
in 10 Cities (MeTS-10), available for 10 global cities with a 15-minute
resolution for collection periods ranging between 108 and 361 days in 2019-2021
and covering more than 1500 square kilometers per metropolitan area. MeTS-10
features traffic speed information at all street levels from main arterials to
local streets for Antwerp, Bangkok, Barcelona, Berlin, Chicago, Istanbul,
London, Madrid, Melbourne and Moscow. The dataset leverages the
industrial-scale floating vehicle Traffic4cast data with speeds and vehicle
counts provided in a privacy-preserving spatio-temporal aggregation. We detail
the efficient matching approach mapping the data to the OpenStreetMap road
graph. We evaluate the dataset by comparing it with publicly available
stationary vehicle detector data (for Berlin, London, and Madrid) and the Uber
traffic speed dataset (for Barcelona, Berlin, and London). The comparison
highlights the differences across datasets in spatio-temporal coverage and
variations in the reported traffic caused by the binning method. MeTS-10
enables novel, city-wide analysis of mobility and traffic patterns for ten
major world cities, overcoming current limitations of spatially sparse vehicle
detector data. The large spatial and temporal coverage offers an opportunity
for joining the MeTS-10 with other datasets, such as traffic surveys in traffic
planning studies or vehicle detector data in traffic control settings.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 08:56:07 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 08:28:46 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Aug 2023 16:21:10 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Neun",
"Moritz",
""
],
[
"Eichenberger",
"Christian",
""
],
[
"Xin",
"Yanan",
""
],
[
"Fu",
"Cheng",
""
],
[
"Wiedemann",
"Nina",
""
],
[
"Martin",
"Henry",
""
],
[
"Tomko",
"Martin",
""
],
[
"Ambühl",
"Lukas",
""
],
[
"Hermes",
"Luca",
""
],
[
"Kopp",
"Michael",
""
]
] |
new_dataset
| 0.999877 |
2303.13241
|
Maximilian Ulmer
|
Maximilian Ulmer, Maximilian Durner, Martin Sundermeyer, Manuel
Stoiber, and Rudolph Triebel
|
6D Object Pose Estimation from Approximate 3D Models for Orbital
Robotics
|
Proceedings of IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a novel technique to estimate the 6D pose of objects from single
images where the 3D geometry of the object is only given approximately and not
as a precise 3D model. To achieve this, we employ a dense 2D-to-3D
correspondence predictor that regresses 3D model coordinates for every pixel.
In addition to the 3D coordinates, our model also estimates the pixel-wise
coordinate error to discard correspondences that are likely wrong. This allows
us to generate multiple 6D pose hypotheses of the object, which we then refine
iteratively using a highly efficient region-based approach. We also introduce a
novel pixel-wise posterior formulation by which we can estimate the probability
for each hypothesis and select the most likely one. As we show in experiments,
our approach is capable of dealing with extreme visual conditions including
overexposure, high contrast, or low signal-to-noise ratio. This makes it a
powerful technique for the particularly challenging task of estimating the pose
of tumbling satellites for in-orbit robotic applications. Our method achieves
state-of-the-art performance on the SPEED+ dataset and has won the SPEC2021
post-mortem competition.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 13:18:05 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2023 07:30:23 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2023 14:36:42 GMT"
},
{
"version": "v4",
"created": "Thu, 31 Aug 2023 14:15:53 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Ulmer",
"Maximilian",
""
],
[
"Durner",
"Maximilian",
""
],
[
"Sundermeyer",
"Martin",
""
],
[
"Stoiber",
"Manuel",
""
],
[
"Triebel",
"Rudolph",
""
]
] |
new_dataset
| 0.972302 |
2304.01559
|
Jianlin Liu
|
Lixia Wu, Jianlin Liu, Junhong Lou, Haoyuan Hu, Jianbin Zheng, Haomin
Wen, Chao Song, Shu He
|
G2PTL: A Pre-trained Model for Delivery Address and its Applications in
Logistics System
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Text-based delivery addresses, as the data foundation for logistics systems,
contain abundant and crucial location information. How to effectively encode
the delivery address is a core task to boost the performance of downstream
tasks in the logistics system. Pre-trained Models (PTMs) designed for Natural
Language Process (NLP) have emerged as the dominant tools for encoding semantic
information in text. Though promising, those NLP-based PTMs fall short of
encoding geographic knowledge in the delivery address, which considerably trims
down the performance of delivery-related tasks in logistic systems such as
Cainiao. To tackle the above problem, we propose a domain-specific pre-trained
model, named G2PTL, a Geography-Graph Pre-trained model for delivery address in
Logistics field. G2PTL combines the semantic learning capabilities of text
pre-training with the geographical-relationship encoding abilities of graph
modeling. Specifically, we first utilize real-world logistics delivery data to
construct a large-scale heterogeneous graph of delivery addresses, which
contains abundant geographic knowledge and delivery information. Then, G2PTL is
pre-trained with subgraphs sampled from the heterogeneous graph. Comprehensive
experiments are conducted to demonstrate the effectiveness of G2PTL through
four downstream tasks in logistics systems on real-world datasets. G2PTL has
been deployed in production in Cainiao's logistics system, which significantly
improves the performance of delivery-related tasks. The code of G2PTL is
available at https://huggingface.co/Cainiao-AI/G2PTL.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 06:33:03 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 07:41:23 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Aug 2023 11:14:51 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Wu",
"Lixia",
""
],
[
"Liu",
"Jianlin",
""
],
[
"Lou",
"Junhong",
""
],
[
"Hu",
"Haoyuan",
""
],
[
"Zheng",
"Jianbin",
""
],
[
"Wen",
"Haomin",
""
],
[
"Song",
"Chao",
""
],
[
"He",
"Shu",
""
]
] |
new_dataset
| 0.999772 |
2304.05821
|
Deyu An
|
Deyu An, Qiang Zhang, Jianshu Chao, Ting Li, Feng Qiao, Yong Deng,
Zhenpeng Bian
|
DUFormer: Solving Power Line Detection Task in Aerial Images using
Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) are frequently used for inspecting power
lines and capturing high-resolution aerial images. However, detecting power
lines in aerial images is difficult,as the foreground data(i.e, power lines) is
small and the background information is abundant.To tackle this problem, we
introduce DUFormer, a semantic segmentation algorithm explicitly designed to
detect power lines in aerial images. We presuppose that it is advantageous to
train an efficient Transformer model with sufficient feature extraction using a
convolutional neural network(CNN) with a strong inductive bias.With this goal
in mind, we introduce a heavy token encoder that performs overlapping feature
remodeling and tokenization. The encoder comprises a pyramid CNN feature
extraction module and a power line feature enhancement module.After successful
local feature extraction for power lines, feature fusion is conducted.Then,the
Transformer block is used for global modeling. The final segmentation result is
achieved by amalgamating local and global features in the decode head.Moreover,
we demonstrate the importance of the joint multi-weight loss function in power
line segmentation. Our experimental results show that our proposed method
outperforms all state-of-the-art methods in power line segmentation on the
publicly accessible TTPLA dataset.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 12:59:02 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 14:15:51 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"An",
"Deyu",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Chao",
"Jianshu",
""
],
[
"Li",
"Ting",
""
],
[
"Qiao",
"Feng",
""
],
[
"Deng",
"Yong",
""
],
[
"Bian",
"Zhenpeng",
""
]
] |
new_dataset
| 0.966749 |
2304.11938
|
Haoye Tian
|
Haoye Tian, Weiqi Lu, Tsz On Li, Xunzhu Tang, Shing-Chi Cheung,
Jacques Klein, Tegawend\'e F. Bissyand\'e
|
Is ChatGPT the Ultimate Programming Assistant -- How far is it?
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the ChatGPT LLM has received great attention: it can be used as a
bot for discussing source code, prompting it to suggest changes, provide
descriptions or even generate code. Typical demonstrations generally focus on
existing benchmarks, which may have been used in model training (i.e., data
leakage). To assess the feasibility of using an LLM as a useful assistant bot
for programmers, we must assess its realistic capabilities on unseen problems
as well as its capabilities on various tasks. In this paper, we present an
empirical study of ChatGPT's potential as a fully automated programming
assistant, focusing on the tasks of code generation, program repair, and code
summariziation. The study investigates ChatGPT's performance on common
programming problems and compares it with state-of-the-art approaches on two
benchmarks. Among several findings, our study shows that ChatGPT is effective
in dealing with common programming problems. However, our experiments also
reveal limitations in terms of its attention span: detailed descriptions will
constrain the focus of ChatGPT and prevent it from leveraging its vast
knowledge to solve the actual problem. Surprisingly, we have identified the
ability of ChatGPT to reason the original intention of the code. We expect
future work to build on this insight for dealing with the open question of the
oracle problem. Our findings contribute interesting insights to the development
of LLMs for programming assistance, notably by demonstrating the importance of
prompt engineering, and providing a better understanding of ChatGPT's practical
applications for software engineering.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 09:20:13 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 09:02:16 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Tian",
"Haoye",
""
],
[
"Lu",
"Weiqi",
""
],
[
"Li",
"Tsz On",
""
],
[
"Tang",
"Xunzhu",
""
],
[
"Cheung",
"Shing-Chi",
""
],
[
"Klein",
"Jacques",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
]
] |
new_dataset
| 0.991827 |
2305.06966
|
Zhanhong Huang
|
Zhanhong Huang, Xiao Zhang and Xinming Huang
|
Real-Time Joint Simulation of LiDAR Perception and Motion Planning for
Automated Driving
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time perception and motion planning are two crucial tasks for autonomous
driving. While there are many research works focused on improving the
performance of perception and motion planning individually, it is still not
clear how a perception error may adversely impact the motion planning results.
In this work, we propose a joint simulation framework with LiDAR-based
perception and motion planning for real-time automated driving. Taking the
sensor input from the CARLA simulator with additive noise, a LiDAR perception
system is designed to detect and track all surrounding vehicles and to provide
precise orientation and velocity information. Next, we introduce a new
collision bound representation that relaxes the communication cost between the
perception module and the motion planner. A novel collision checking algorithm
is implemented using line intersection checking that is more efficient for long
distance range in comparing to the traditional method of occupancy grid. We
evaluate the joint simulation framework in CARLA for urban driving scenarios.
Experiments show that our proposed automated driving system can execute at 25
Hz, which meets the real-time requirement. The LiDAR perception system has high
accuracy within 20 meters when evaluated with the ground truth. The motion
planning results in consistent safe distance keeping when tested in CARLA urban
driving scenarios.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 16:46:47 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 18:08:15 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Huang",
"Zhanhong",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Huang",
"Xinming",
""
]
] |
new_dataset
| 0.996613 |
2306.05109
|
Robin van de Water
|
Robin van de Water, Hendrik Schmidt, Paul Elbers, Patrick Thoral, Bert
Arnrich, Patrick Rockenschaub
|
Yet Another ICU Benchmark: A Flexible Multi-Center Framework for
Clinical ML
|
Main benchmark: https://github.com/rvandewater/YAIB, Cohort
generation: https://github.com/rvandewater/YAIB-cohorts, Models:
https://github.com/rvandewater/YAIB-models
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Medical applications of machine learning (ML) have experienced a surge in
popularity in recent years. The intensive care unit (ICU) is a natural habitat
for ML given the abundance of available data from electronic health records.
Models have been proposed to address numerous ICU prediction tasks like the
early detection of complications. While authors frequently report
state-of-the-art performance, it is challenging to verify claims of
superiority. Datasets and code are not always published, and cohort
definitions, preprocessing pipelines, and training setups are difficult to
reproduce. This work introduces Yet Another ICU Benchmark (YAIB), a modular
framework that allows researchers to define reproducible and comparable
clinical ML experiments; we offer an end-to-end solution from cohort definition
to model evaluation. The framework natively supports most open-access ICU
datasets (MIMIC III/IV, eICU, HiRID, AUMCdb) and is easily adaptable to future
ICU datasets. Combined with a transparent preprocessing pipeline and extensible
training code for multiple ML and deep learning models, YAIB enables unified
model development. Our benchmark comes with five predefined established
prediction tasks (mortality, acute kidney injury, sepsis, kidney function, and
length of stay) developed in collaboration with clinicians. Adding further
tasks is straightforward by design. Using YAIB, we demonstrate that the choice
of dataset, cohort definition, and preprocessing have a major impact on the
prediction performance - often more so than model class - indicating an urgent
need for YAIB as a holistic benchmarking tool. We provide our work to the
clinical ML community to accelerate method development and enable real-world
clinical implementations. Software Repository:
https://github.com/rvandewater/YAIB.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 11:16:20 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 10:13:12 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"van de Water",
"Robin",
""
],
[
"Schmidt",
"Hendrik",
""
],
[
"Elbers",
"Paul",
""
],
[
"Thoral",
"Patrick",
""
],
[
"Arnrich",
"Bert",
""
],
[
"Rockenschaub",
"Patrick",
""
]
] |
new_dataset
| 0.999178 |
2308.11155
|
Junyu Liu
|
Zihan Pengmei, Yinan Shu, Junyu Liu
|
xxMD: Benchmarking Neural Force Fields Using Extended Dynamics beyond
Equilibrium
|
19 pages, many figures. Data available at
https://github.com/zpengmei/xxMD
| null | null | null |
cs.LG cs.AI physics.chem-ph quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural force fields (NFFs) have gained prominence in computational chemistry
as surrogate models, superseding quantum-chemistry calculations in ab initio
molecular dynamics. The prevalent benchmark for NFFs has been the MD17 dataset
and its subsequent extension. These datasets predominantly comprise geometries
from the equilibrium region of the ground electronic state potential energy
surface, sampling from direct adiabatic dynamics. However, many chemical
reactions entail significant molecular deformations, notably bond breaking. We
demonstrate the constrained distribution of internal coordinates and energies
in the MD17 datasets, underscoring their inadequacy for representing systems
undergoing chemical reactions. Addressing this sampling limitation, we
introduce the xxMD (Extended Excited-state Molecular Dynamics) dataset, derived
from non-adiabatic dynamics. This dataset encompasses energies and forces
ascertained from both multireference wave function theory and density
functional theory. Furthermore, its nuclear configuration spaces authentically
depict chemical reactions, making xxMD a more chemically relevant dataset. Our
re-assessment of equivariant models on the xxMD datasets reveals notably higher
mean absolute errors than those reported for MD17 and its variants. This
observation underscores the challenges faced in crafting a generalizable NFF
model with extrapolation capability. Our proposed xxMD-CASSCF and xxMD-DFT
datasets are available at https://github.com/zpengmei/xxMD.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 03:23:36 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 20:55:07 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Pengmei",
"Zihan",
""
],
[
"Shu",
"Yinan",
""
],
[
"Liu",
"Junyu",
""
]
] |
new_dataset
| 0.988654 |
2308.14500
|
Di Yang
|
Di Yang, Yaohui Wang, Antitza Dantcheva, Quan Kong, Lorenzo Garattoni,
Gianpiero Francesca, Francois Bremond
|
LAC: Latent Action Composition for Skeleton-based Action Segmentation
|
ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Skeleton-based action segmentation requires recognizing composable actions in
untrimmed videos. Current approaches decouple this problem by first extracting
local visual features from skeleton sequences and then processing them by a
temporal model to classify frame-wise actions. However, their performances
remain limited as the visual features cannot sufficiently express composable
actions. In this context, we propose Latent Action Composition (LAC), a novel
self-supervised framework aiming at learning from synthesized composable
motions for skeleton-based action segmentation. LAC is composed of a novel
generation module towards synthesizing new sequences. Specifically, we design a
linear latent space in the generator to represent primitive motion. New
composed motions can be synthesized by simply performing arithmetic operations
on latent representations of multiple input skeleton sequences. LAC leverages
such synthesized sequences, which have large diversity and complexity, for
learning visual representations of skeletons in both sequence and frame spaces
via contrastive learning. The resulting visual encoder has a high expressive
power and can be effectively transferred onto action segmentation tasks by
end-to-end fine-tuning without the need for additional temporal models. We
conduct a study focusing on transfer-learning and we show that representations
learned from pre-trained LAC outperform the state-of-the-art by a large margin
on TSU, Charades, PKU-MMD datasets.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 11:20:48 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 14:18:58 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Aug 2023 12:02:47 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Yang",
"Di",
""
],
[
"Wang",
"Yaohui",
""
],
[
"Dantcheva",
"Antitza",
""
],
[
"Kong",
"Quan",
""
],
[
"Garattoni",
"Lorenzo",
""
],
[
"Francesca",
"Gianpiero",
""
],
[
"Bremond",
"Francois",
""
]
] |
new_dataset
| 0.986043 |
2308.15690
|
Byunghyun Ban
|
Byunghyun Ban, Donghun Ryu, Su-won Hwang
|
CongNaMul: A Dataset for Advanced Image Processing of Soybean Sprouts
|
Accepted to International Conference on ICT Convergence 2023
| null | null | null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We present 'CongNaMul', a comprehensive dataset designed for various tasks in
soybean sprouts image analysis. The CongNaMul dataset is curated to facilitate
tasks such as image classification, semantic segmentation, decomposition, and
measurement of length and weight. The classification task provides four classes
to determine the quality of soybean sprouts: normal, broken, spotted, and
broken and spotted, for the development of AI-aided automatic quality
inspection technology. For semantic segmentation, images with varying
complexity, from single sprout images to images with multiple sprouts, along
with human-labelled mask images, are included. The label has 4 different
classes: background, head, body, tail. The dataset also provides images and
masks for the image decomposition task, including two separate sprout images
and their combined form. Lastly, 5 physical features of sprouts (head length,
body length, body thickness, tail length, weight) are provided for image-based
measurement tasks. This dataset is expected to be a valuable resource for a
wide range of research and applications in the advanced analysis of images of
soybean sprouts. Also, we hope that this dataset can assist researchers
studying classification, semantic segmentation, decomposition, and physical
feature measurement in other industrial fields, in evaluating their models. The
dataset is available at the authors' repository. (https://bhban.kr/data)
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 01:14:32 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 02:21:20 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Ban",
"Byunghyun",
""
],
[
"Ryu",
"Donghun",
""
],
[
"Hwang",
"Su-won",
""
]
] |
new_dataset
| 0.999812 |
2308.15975
|
Mel Vecerik
|
Mel Vecerik and Carl Doersch and Yi Yang and Todor Davchev and Yusuf
Aytar and Guangyao Zhou and Raia Hadsell and Lourdes Agapito and Jon Scholz
|
RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation
|
Project website: https://robotap.github.io
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For robots to be useful outside labs and specialized factories we need a way
to teach them new useful behaviors quickly. Current approaches lack either the
generality to onboard new tasks without task-specific engineering, or else lack
the data-efficiency to do so in an amount of time that enables practical use.
In this work we explore dense tracking as a representational vehicle to allow
faster and more general learning from demonstration. Our approach utilizes
Track-Any-Point (TAP) models to isolate the relevant motion in a demonstration,
and parameterize a low-level controller to reproduce this motion across changes
in the scene configuration. We show this results in robust robot policies that
can solve complex object-arrangement tasks such as shape-matching, stacking,
and even full path-following tasks such as applying glue and sticking objects
together, all from demonstrations that can be collected in minutes.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 11:57:04 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 15:29:44 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Vecerik",
"Mel",
""
],
[
"Doersch",
"Carl",
""
],
[
"Yang",
"Yi",
""
],
[
"Davchev",
"Todor",
""
],
[
"Aytar",
"Yusuf",
""
],
[
"Zhou",
"Guangyao",
""
],
[
"Hadsell",
"Raia",
""
],
[
"Agapito",
"Lourdes",
""
],
[
"Scholz",
"Jon",
""
]
] |
new_dataset
| 0.991485 |
2308.16145
|
Erkang Cheng
|
Hengxu Zhang, Pengpeng Liang, Zhiyong Sun, Bo Song, Erkang Cheng
|
CircleFormer: Circular Nuclei Detection in Whole Slide Images with
Circle Queries and Attention
|
Accepted at MICCAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Both CNN-based and Transformer-based object detection with bounding box
representation have been extensively studied in computer vision and medical
image analysis, but circular object detection in medical images is still
underexplored. Inspired by the recent anchor free CNN-based circular object
detection method (CircleNet) for ball-shape glomeruli detection in renal
pathology, in this paper, we present CircleFormer, a Transformer-based circular
medical object detection with dynamic anchor circles. Specifically, queries
with circle representation in Transformer decoder iteratively refine the
circular object detection results, and a circle cross attention module is
introduced to compute the similarity between circular queries and image
features. A generalized circle IoU (gCIoU) is proposed to serve as a new
regression loss of circular object detection as well. Moreover, our approach is
easy to generalize to the segmentation task by adding a simple segmentation
branch to CircleFormer. We evaluate our method in circular nuclei detection and
segmentation on the public MoNuSeg dataset, and the experimental results show
that our method achieves promising performance compared with the
state-of-the-art approaches. The effectiveness of each component is validated
via ablation studies as well. Our code is released at
https://github.com/zhanghx-iim-ahu/CircleFormer.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 17:01:01 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 01:29:35 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Zhang",
"Hengxu",
""
],
[
"Liang",
"Pengpeng",
""
],
[
"Sun",
"Zhiyong",
""
],
[
"Song",
"Bo",
""
],
[
"Cheng",
"Erkang",
""
]
] |
new_dataset
| 0.999308 |
2308.16154
|
Yiqi Zhong
|
Yiqi Zhong, Luming Liang, Ilya Zharkov, Ulrich Neumann
|
MMVP: Motion-Matrix-based Video Prediction
|
ICCV 2023 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A central challenge of video prediction lies where the system has to reason
the objects' future motions from image frames while simultaneously maintaining
the consistency of their appearances across frames. This work introduces an
end-to-end trainable two-stream video prediction framework, Motion-Matrix-based
Video Prediction (MMVP), to tackle this challenge. Unlike previous methods that
usually handle motion prediction and appearance maintenance within the same set
of modules, MMVP decouples motion and appearance information by constructing
appearance-agnostic motion matrices. The motion matrices represent the temporal
similarity of each and every pair of feature patches in the input frames, and
are the sole input of the motion prediction module in MMVP. This design
improves video prediction in both accuracy and efficiency, and reduces the
model size. Results of extensive experiments demonstrate that MMVP outperforms
state-of-the-art systems on public data sets by non-negligible large margins
(about 1 db in PSNR, UCF Sports) in significantly smaller model sizes (84% the
size or smaller).
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 17:20:46 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2023 00:51:45 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Zhong",
"Yiqi",
""
],
[
"Liang",
"Luming",
""
],
[
"Zharkov",
"Ilya",
""
],
[
"Neumann",
"Ulrich",
""
]
] |
new_dataset
| 0.998455 |
2308.16289
|
Marta Misiaszek-Schreyner
|
Marta Misiaszek-Schreyner, Miriam Kosik, Mirek Sopek
|
Time-Bin CKA as a tool for blockchain technology
|
9 pages, 3 figures
| null | null | null |
cs.CR quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We explore the potential of Time-Bin Conference Key Agreement (TB CKA)
protocol as a means to achieve consensus among multiple parties. We provide an
explanation of the underlying physical implementation, i.e. TB CKA fundamentals
and illustrate how this process can be seen as a natural realization of the
global common coin primitive. Next, we present how TB CKA could be embodied in
classical consensus algorithms to create hybrid classical-quantum solutions to
the Byzantine Agreement problem.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 19:36:50 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Misiaszek-Schreyner",
"Marta",
""
],
[
"Kosik",
"Miriam",
""
],
[
"Sopek",
"Mirek",
""
]
] |
new_dataset
| 0.996135 |
2308.16336
|
\"Omer Veysel \c{C}a\u{g}atan
|
Omer Veysel Cagatan
|
ToddlerBERTa: Exploiting BabyBERTa for Grammar Learning and Language
Understanding
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ToddlerBERTa, a BabyBERTa-like language model, exploring its
capabilities through five different models with varied hyperparameters.
Evaluating on BLiMP, SuperGLUE, MSGS, and a Supplement benchmark from the
BabyLM challenge, we find that smaller models can excel in specific tasks,
while larger models perform well with substantial data. Despite training on a
smaller dataset, ToddlerBERTa demonstrates commendable performance, rivalling
the state-of-the-art RoBERTa-base. The model showcases robust language
understanding, even with single-sentence pretraining, and competes with
baselines that leverage broader contextual information. Our work provides
insights into hyperparameter choices, and data utilization, contributing to the
advancement of language models.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 21:56:36 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Cagatan",
"Omer Veysel",
""
]
] |
new_dataset
| 0.968867 |
2308.16380
|
Xiao Pan
|
Elmira Faraji Zonouz, Xiao Pan, Yu-Cheng Hsu, Tony Yang
|
3D vision-based structural masonry damage detection
|
10 pages, accepted in the Canadian Conference - Pacific Conference on
Earthquake Engineering 2023, Vancouver, British Columbia
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The detection of masonry damage is essential for preventing potentially
disastrous outcomes. Manual inspection can, however, take a long time and be
hazardous to human inspectors. Automation of the inspection process using novel
computer vision and machine learning algorithms can be a more efficient and
safe solution to prevent further deterioration of the masonry structures. Most
existing 2D vision-based methods are limited to qualitative damage
classification, 2D localization, and in-plane quantification. In this study, we
present a 3D vision-based methodology for accurate masonry damage detection,
which offers a more robust solution with a greater field of view, depth of
vision, and the ability to detect failures in complex environments. First,
images of the masonry specimens are collected to generate a 3D point cloud.
Second, 3D point clouds processing methods are developed to evaluate the
masonry damage. We demonstrate the effectiveness of our approach through
experiments on structural masonry components. Our experiments showed the
proposed system can effectively classify damage states and localize and
quantify critical damage features. The result showed the proposed method can
improve the level of autonomy during the inspection of masonry structures.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 00:48:05 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Zonouz",
"Elmira Faraji",
""
],
[
"Pan",
"Xiao",
""
],
[
"Hsu",
"Yu-Cheng",
""
],
[
"Yang",
"Tony",
""
]
] |
new_dataset
| 0.967676 |
2308.16404
|
Xixuan Hao
|
Xixuan Hao, Aozhong Zhang, Xianze Meng and Bin Fu
|
Deformation Robust Text Spotting with Geometric Prior
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of text spotting is to perform text detection and recognition in an
end-to-end manner. Although the diversity of luminosity and orientation in
scene texts has been widely studied, the font diversity and shape variance of
the same character are ignored in recent works, since most characters in
natural images are rendered in standard fonts. To solve this problem, we
present a Chinese Artistic Dataset, termed as ARText, which contains 33,000
artistic images with rich shape deformation and font diversity. Based on this
database, we develop a deformation robust text spotting method (DR TextSpotter)
to solve the recognition problem of complex deformation of characters in
different fonts. Specifically, we propose a geometric prior module to highlight
the important features based on the unsupervised landmark detection
sub-network. A graph convolution network is further constructed to fuse the
character features and landmark features, and then performs semantic reasoning
to enhance the discrimination for different characters. The experiments are
conducted on ARText and IC19-ReCTS datasets. Our results demonstrate the
effectiveness of our proposed method.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 02:13:15 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Hao",
"Xixuan",
""
],
[
"Zhang",
"Aozhong",
""
],
[
"Meng",
"Xianze",
""
],
[
"Fu",
"Bin",
""
]
] |
new_dataset
| 0.999076 |
2308.16406
|
Zehao Dong
|
Zehao Dong, Weidong Cao, Muhan Zhang, Dacheng Tao, Yixin Chen, Xuan
Zhang
|
CktGNN: Circuit Graph Neural Network for Electronic Design Automation
|
Accepted by ICLR (International Conference on Learning
Representations) 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The electronic design automation of analog circuits has been a longstanding
challenge in the integrated circuit field due to the huge design space and
complex design trade-offs among circuit specifications. In the past decades,
intensive research efforts have mostly been paid to automate the transistor
sizing with a given circuit topology. By recognizing the graph nature of
circuits, this paper presents a Circuit Graph Neural Network (CktGNN) that
simultaneously automates the circuit topology generation and device sizing
based on the encoder-dependent optimization subroutines. Particularly, CktGNN
encodes circuit graphs using a two-level GNN framework (of nested GNN) where
circuits are represented as combinations of subgraphs in a known subgraph
basis. In this way, it significantly improves design efficiency by reducing the
number of subgraphs to perform message passing. Nonetheless, another critical
roadblock to advancing learning-assisted circuit design automation is a lack of
public benchmarks to perform canonical assessment and reproducible research. To
tackle the challenge, we introduce Open Circuit Benchmark (OCB), an
open-sourced dataset that contains $10$K distinct operational amplifiers with
carefully-extracted circuit specifications. OCB is also equipped with
communicative circuit generation and evaluation capabilities such that it can
help to generalize CktGNN to design various analog circuits by producing
corresponding datasets. Experiments on OCB show the extraordinary advantages of
CktGNN through representation-based optimization frameworks over other recent
powerful GNN baselines and human experts' manual designs. Our work paves the
way toward a learning-based open-sourced design automation for analog circuits.
Our source code is available at \url{https://github.com/zehao-dong/CktGNN}.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 02:20:25 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Dong",
"Zehao",
""
],
[
"Cao",
"Weidong",
""
],
[
"Zhang",
"Muhan",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Chen",
"Yixin",
""
],
[
"Zhang",
"Xuan",
""
]
] |
new_dataset
| 0.999803 |
2308.16417
|
Peng Yang
|
Yan Cheng, Peng Yang, Ning Zhang, Jiawei Hou
|
Edge-Assisted Lightweight Region-of-Interest Extraction and Transmission
for Vehicle Perception
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To enhance on-road environmental perception for autonomous driving, accurate
and real-time analytics on high-resolution video frames generated from on-board
cameras be-comes crucial. In this paper, we design a lightweight object
location method based on class activation mapping (CAM) to rapidly capture the
region of interest (RoI) boxes that contain driving safety related objects from
on-board cameras, which can not only improve the inference accuracy of vision
tasks, but also reduce the amount of transmitted data. Considering the limited
on-board computation resources, the RoI boxes extracted from the raw image are
offloaded to the edge for further processing. Considering both the dynamics of
vehicle-to-edge communications and the limited edge resources, we propose an
adaptive RoI box offloading algorithm to ensure prompt and accurate inference
by adjusting the down-sampling rate of each box. Extensive experimental results
on four high-resolution video streams demonstrate that our approach can
effectively improve the overall accuracy by up to 16% and reduce the
transmission demand by up to 49%, compared with other benchmarks.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 03:03:29 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Cheng",
"Yan",
""
],
[
"Yang",
"Peng",
""
],
[
"Zhang",
"Ning",
""
],
[
"Hou",
"Jiawei",
""
]
] |
new_dataset
| 0.996466 |
2308.16426
|
Yasuaki Kobayashi
|
Yasuaki Kobayashi, Kazuhiro Kurita, Yasuko Matsui, Hirotaka Ono
|
Enumerating minimal vertex covers and dominating sets with capacity
and/or connectivity constraints
|
13 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the problems of enumerating minimal vertex covers
and minimal dominating sets with capacity and/or connectivity constraints. We
develop polynomial-delay enumeration algorithms for these problems on
bounded-degree graphs. For the case of minimal connected vertex cover, our
algorithm runs in polynomial delay even on the class of $d$-claw free graphs,
which extends the result on bounded-degree graphs. To complement these
algorithmic results, we show that the problems of enumerating minimal connected
vertex covers and minimal capacitated vertex covers in bipartite graphs are at
least as hard as enumerating minimal transversals in hypergraphs.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 03:30:43 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Kobayashi",
"Yasuaki",
""
],
[
"Kurita",
"Kazuhiro",
""
],
[
"Matsui",
"Yasuko",
""
],
[
"Ono",
"Hirotaka",
""
]
] |
new_dataset
| 0.995342 |
2308.16435
|
Cara Appel
|
Jonathan S. Koning, Ashwin Subramanian, Mazen Alotaibi, Cara L. Appel,
Christopher M. Sullivan, Thon Chao, Lisa Truong, Robyn L. Tanguay, Pankaj
Jaiswal, Taal Levi, Damon B. Lesmeister
|
Njobvu-AI: An open-source tool for collaborative image labeling and
implementation of computer vision models
|
13 pages, 6 figures. For code and documentation, see
https://github.com/sullichrosu/Njobvu-AI/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Practitioners interested in using computer vision models lack user-friendly
and open-source software that combines features to label training data, allow
multiple users, train new algorithms, review output, and implement new models.
Labeling training data, such as images, is a key step to developing accurate
object detection algorithms using computer vision. This step is often not
compatible with many cloud-based services for marking or labeling image and
video data due to limited internet bandwidth in many regions of the world.
Desktop tools are useful for groups working in remote locations, but users
often do not have the capability to combine projects developed locally by
multiple collaborators. Furthermore, many tools offer features for labeling
data or using pre-trained models for classification, but few allow researchers
to combine these steps to create and apply custom models. Free, open-source,
and user-friendly software that offers a full suite of features (e.g., ability
to work locally and online, and train custom models) is desirable to field
researchers and conservationists that may have limited coding skills. We
developed Njobvu-AI, a free, open-source tool that can be run on both desktop
and server hardware using Node.js, allowing users to label data, combine
projects for collaboration and review, train custom algorithms, and implement
new computer vision models. The name Njobvu-AI (pronounced N-joh-voo AI),
incorporating the Chichewa word for elephant, is inspired by a wildlife
monitoring program in Malawi that was a primary impetus for the development of
this tool and references similarities between the powerful memory of elephants
and properties of computer vision models.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 03:49:41 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Koning",
"Jonathan S.",
""
],
[
"Subramanian",
"Ashwin",
""
],
[
"Alotaibi",
"Mazen",
""
],
[
"Appel",
"Cara L.",
""
],
[
"Sullivan",
"Christopher M.",
""
],
[
"Chao",
"Thon",
""
],
[
"Truong",
"Lisa",
""
],
[
"Tanguay",
"Robyn L.",
""
],
[
"Jaiswal",
"Pankaj",
""
],
[
"Levi",
"Taal",
""
],
[
"Lesmeister",
"Damon B.",
""
]
] |
new_dataset
| 0.999482 |
2308.16437
|
Xiaolu Zhang
|
Zhaoxin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang
Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan
Zhang
|
AntM$^{2}$C: A Large Scale Dataset For Multi-Scenario Multi-Modal CTR
Prediction
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Click-through rate (CTR) prediction is a crucial issue in recommendation
systems. There has been an emergence of various public CTR datasets. However,
existing datasets primarily suffer from the following limitations. Firstly,
users generally click different types of items from multiple scenarios, and
modeling from multiple scenarios can provide a more comprehensive understanding
of users. Existing datasets only include data for the same type of items from a
single scenario. Secondly, multi-modal features are essential in multi-scenario
prediction as they address the issue of inconsistent ID encoding between
different scenarios. The existing datasets are based on ID features and lack
multi-modal features. Third, a large-scale dataset can provide a more reliable
evaluation of models, fully reflecting the performance differences between
models. The scale of existing datasets is around 100 million, which is
relatively small compared to the real-world CTR prediction. To address these
limitations, we propose AntM$^{2}$C, a Multi-Scenario Multi-Modal CTR dataset
based on industrial data from Alipay. Specifically, AntM$^{2}$C provides the
following advantages: 1) It covers CTR data of 5 different types of items,
providing insights into the preferences of users for different items, including
advertisements, vouchers, mini-programs, contents, and videos. 2) Apart from
ID-based features, AntM$^{2}$C also provides 2 multi-modal features, raw text
and image features, which can effectively establish connections between items
with different IDs. 3) AntM$^{2}$C provides 1 billion CTR data with 200
features, including 200 million users and 6 million items. It is currently the
largest-scale CTR dataset available. Based on AntM$^{2}$C, we construct several
typical CTR tasks and provide comparisons with baseline methods. The dataset
homepage is available at https://www.atecup.cn/home.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 03:52:57 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Huan",
"Zhaoxin",
""
],
[
"Ding",
"Ke",
""
],
[
"Li",
"Ang",
""
],
[
"Zhang",
"Xiaolu",
""
],
[
"Min",
"Xu",
""
],
[
"He",
"Yong",
""
],
[
"Zhang",
"Liang",
""
],
[
"Zhou",
"Jun",
""
],
[
"Mo",
"Linjian",
""
],
[
"Gu",
"Jinjie",
""
],
[
"Liu",
"Zhongyi",
""
],
[
"Zhong",
"Wenliang",
""
],
[
"Zhang",
"Guannan",
""
]
] |
new_dataset
| 0.997591 |
2308.16451
|
Jingwei Song
|
Keke Yang, Zheng Zhang, Meng Li, Tuoyu Cao, Maani Ghaffari, and
Jingwei Song
|
Optical flow-based vascular respiratory motion compensation
|
This manuscript has been accepted by IEEE Robotics and Automation
Letters
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper develops a new vascular respiratory motion compensation algorithm,
Motion-Related Compensation (MRC), to conduct vascular respiratory motion
compensation by extrapolating the correlation between invisible vascular and
visible non-vascular. Robot-assisted vascular intervention can significantly
reduce the radiation exposure of surgeons. In robot-assisted image-guided
intervention, blood vessels are constantly moving/deforming due to respiration,
and they are invisible in the X-ray images unless contrast agents are injected.
The vascular respiratory motion compensation technique predicts 2D vascular
roadmaps in live X-ray images. When blood vessels are visible after contrast
agents injection, vascular respiratory motion compensation is conducted based
on the sparse Lucas-Kanade feature tracker. An MRC model is trained to learn
the correlation between vascular and non-vascular motions. During the
intervention, the invisible blood vessels are predicted with visible tissues
and the trained MRC model. Moreover, a Gaussian-based outlier filter is adopted
for refinement. Experiments on in-vivo data sets show that the proposed method
can yield vascular respiratory motion compensation in 0.032 sec, with an
average error 1.086 mm. Our real-time and accurate vascular respiratory motion
compensation approach contributes to modern vascular intervention and surgical
robots.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 04:38:12 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Yang",
"Keke",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Li",
"Meng",
""
],
[
"Cao",
"Tuoyu",
""
],
[
"Ghaffari",
"Maani",
""
],
[
"Song",
"Jingwei",
""
]
] |
new_dataset
| 0.995413 |
2308.16464
|
Anas Nadeem
|
Anas Nadeem, Muhammad Usman Sarwar, Muhammad Zubair Malik
|
MaintainoMATE: A GitHub App for Intelligent Automation of Maintenance
Activities
| null | null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Software development projects rely on issue tracking systems at the core of
tracking maintenance tasks such as bug reports, and enhancement requests.
Incoming issue-reports on these issue tracking systems must be managed in an
effective manner. First, they must be labelled and then assigned to a
particular developer with relevant expertise. This handling of issue-reports is
critical and requires thorough scanning of the text entered in an issue-report
making it a labor-intensive task. In this paper, we present a unified framework
called MaintainoMATE, which is capable of automatically categorizing the
issue-reports in their respective category and further assigning the
issue-reports to a developer with relevant expertise. We use the Bidirectional
Encoder Representations from Transformers (BERT), as an underlying model for
MaintainoMATE to learn the contextual information for automatic issue-report
labeling and assignment tasks. We deploy the framework used in this work as a
GitHub application. We empirically evaluate our approach on GitHub
issue-reports to show its capability of assigning labels to the issue-reports.
We were able to achieve an F1-score close to 80\%, which is comparable to
existing state-of-the-art results. Similarly, our initial evaluations show that
we can assign relevant developers to the issue-reports with an F1 score of
54\%, which is a significant improvement over existing approaches. Our initial
findings suggest that MaintainoMATE has the potential of improving software
quality and reducing maintenance costs by accurately automating activities
involved in the maintenance processes. Our future work would be directed
towards improving the issue-assignment module.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 05:15:42 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Nadeem",
"Anas",
""
],
[
"Sarwar",
"Muhammad Usman",
""
],
[
"Malik",
"Muhammad Zubair",
""
]
] |
new_dataset
| 0.999854 |
2308.16495
|
EPTCS
|
Gejza Jen\v{c}a (Slovak University of Technology, Bratislava), Bert
Lindenhovius (Slovak Academy of Sciences, Bratislava)
|
Quantum Suplattices
|
In Proceedings QPL 2023, arXiv:2308.15489
|
EPTCS 384, 2023, pp. 58-74
|
10.4204/EPTCS.384.4
| null |
cs.DM cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Building on the theory of quantum posets, we introduce a non-commutative
version of suplattices, i.e., complete lattices whose morphisms are
supremum-preserving maps, which form a step towards a new notion of quantum
topological spaces. We show that the theory of these quantum suplattices
resembles the classical theory: the opposite quantum poset of a quantum
suplattice is again a quantum suplattice, and quantum suplattices arise as
algebras of a non-commutative version of the monad of downward-closed subsets
of a poset. The existence of this monad is proved by introducing a
non-commutative generalization of monotone relations between quantum posets,
which form a compact closed category. Moreover, we introduce a non-commutative
generalization of Galois connections and we prove that an upper Galois adjoint
of a monotone map between quantum suplattices exists if and only if the map is
a morphism of quantum suplattices. Finally, we prove a quantum version of the
Knaster-Tarski fixpoint theorem: the quantum set of fixpoints of a monotone
endomap on a quantum suplattice form a quantum suplattice.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 06:57:39 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Jenča",
"Gejza",
"",
"Slovak University of Technology, Bratislava"
],
[
"Lindenhovius",
"Bert",
"",
"Slovak Academy of Sciences, Bratislava"
]
] |
new_dataset
| 0.972906 |
2308.16497
|
EPTCS
|
Robin Cockett (University of Calgary), Jean-Simon Pacaud Lemay
(Macquarie University)
|
Moore-Penrose Dagger Categories
|
In Proceedings QPL 2023, arXiv:2308.15489
|
EPTCS 384, 2023, pp. 171-186
|
10.4204/EPTCS.384.10
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
The notion of a Moore-Penrose inverse (M-P inverse) was introduced by Moore
in 1920 and rediscovered by Penrose in 1955. The M-P inverse of a complex
matrix is a special type of inverse which is unique, always exists, and can be
computed using singular value decomposition. In a series of papers in the
1980s, Puystjens and Robinson studied M-P inverses more abstractly in the
context of dagger categories. Despite the fact that dagger categories are now a
fundamental notion in categorical quantum mechanics, the notion of a M-P
inverse has not (to our knowledge) been revisited since their work. One purpose
of this paper is, thus, to renew the study of M-P inverses in dagger
categories.
Here we introduce the notion of a Moore-Penrose dagger category and provide
many examples including complex matrices, finite Hilbert spaces, dagger
groupoids, and inverse categories. We also introduce generalized versions of
singular value decomposition, compact singular value decomposition, and polar
decomposition for maps in a dagger category, and show how, having such a
decomposition is equivalent to having M-P inverses. This allows us to provide
precise characterizations of which maps have M-P inverses in a dagger
idempotent complete category, a dagger kernel category with dagger biproducts
(and negatives), and a dagger category with unique square roots.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 07:00:02 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Cockett",
"Robin",
"",
"University of Calgary"
],
[
"Lemay",
"Jean-Simon Pacaud",
"",
"Macquarie University"
]
] |
new_dataset
| 0.973056 |
2308.16527
|
Ruohuan Fang
|
Ruohuan Fang, Guansong Pang, Lei Zhou, Xiao Bai, Jin Zheng
|
Unsupervised Recognition of Unknown Objects for Open-World Object
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-World Object Detection (OWOD) extends object detection problem to a
realistic and dynamic scenario, where a detection model is required to be
capable of detecting both known and unknown objects and incrementally learning
newly introduced knowledge. Current OWOD models, such as ORE and OW-DETR, focus
on pseudo-labeling regions with high objectness scores as unknowns, whose
performance relies heavily on the supervision of known objects. While they can
detect the unknowns that exhibit similar features to the known objects, they
suffer from a severe label bias problem that they tend to detect all regions
(including unknown object regions) that are dissimilar to the known objects as
part of the background. To eliminate the label bias, this paper proposes a
novel approach that learns an unsupervised discriminative model to recognize
true unknown objects from raw pseudo labels generated by unsupervised region
proposal methods. The resulting model can be further refined by a
classification-free self-training method which iteratively extends pseudo
unknown objects to the unlabeled regions. Experimental results show that our
method 1) significantly outperforms the prior SOTA in detecting unknown objects
while maintaining competitive performance of detecting known object classes on
the MS COCO dataset, and 2) achieves better generalization ability on the LVIS
and Objects365 datasets.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 08:17:29 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Fang",
"Ruohuan",
""
],
[
"Pang",
"Guansong",
""
],
[
"Zhou",
"Lei",
""
],
[
"Bai",
"Xiao",
""
],
[
"Zheng",
"Jin",
""
]
] |
new_dataset
| 0.985691 |
2308.16528
|
Ning Gao
|
Ning Gao, Ngo Anh Vien, Hanna Ziesche, Gerhard Neumann
|
SA6D: Self-Adaptive Few-Shot 6D Pose Estimator for Novel and Occluded
Objects
| null |
Conference on Robot Learning (CoRL), 2023
| null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To enable meaningful robotic manipulation of objects in the real-world, 6D
pose estimation is one of the critical aspects. Most existing approaches have
difficulties to extend predictions to scenarios where novel object instances
are continuously introduced, especially with heavy occlusions. In this work, we
propose a few-shot pose estimation (FSPE) approach called SA6D, which uses a
self-adaptive segmentation module to identify the novel target object and
construct a point cloud model of the target object using only a small number of
cluttered reference images. Unlike existing methods, SA6D does not require
object-centric reference images or any additional object information, making it
a more generalizable and scalable solution across categories. We evaluate SA6D
on real-world tabletop object datasets and demonstrate that SA6D outperforms
existing FSPE methods, particularly in cluttered scenes with occlusions, while
requiring fewer reference images.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 08:19:26 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Gao",
"Ning",
""
],
[
"Vien",
"Ngo Anh",
""
],
[
"Ziesche",
"Hanna",
""
],
[
"Neumann",
"Gerhard",
""
]
] |
new_dataset
| 0.977826 |
2308.16529
|
Yoon Kyung Lee
|
Yoon Kyung Lee, Yoonwon Jung, Gyuyi Kang, Sowon Hahn
|
Developing Social Robots with Empathetic Non-Verbal Cues Using Large
Language Models
| null |
In Proceedings of 2023 IEEE International Conference on Robot &
Human Interactive Communication (RO-MAN)
| null | null |
cs.RO cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose augmenting the empathetic capacities of social robots by
integrating non-verbal cues. Our primary contribution is the design and
labeling of four types of empathetic non-verbal cues, abbreviated as SAFE:
Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
These cues are generated using a Large Language Model (LLM). We developed an
LLM-based conversational system for the robot and assessed its alignment with
social cues as defined by human counselors. Preliminary results show distinct
patterns in the robot's responses, such as a preference for calm and positive
social emotions like 'joy' and 'lively', and frequent nodding gestures. Despite
these tendencies, our approach has led to the development of a social robot
capable of context-aware and more authentic interactions. Our work lays the
groundwork for future studies on human-robot interactions, emphasizing the
essential role of both verbal and non-verbal cues in creating social and
empathetic robots.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 08:20:04 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Lee",
"Yoon Kyung",
""
],
[
"Jung",
"Yoonwon",
""
],
[
"Kang",
"Gyuyi",
""
],
[
"Hahn",
"Sowon",
""
]
] |
new_dataset
| 0.998964 |
2308.16562
|
Maria Rigaki
|
Maria Rigaki, Sebastian Garcia
|
The Power of MEME: Adversarial Malware Creation with Model-Based
Reinforcement Learning
|
12 pages, 3 figures, 3 tables. Accepted at ESORICS 2023
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Due to the proliferation of malware, defenders are increasingly turning to
automation and machine learning as part of the malware detection tool-chain.
However, machine learning models are susceptible to adversarial attacks,
requiring the testing of model and product robustness. Meanwhile, attackers
also seek to automate malware generation and evasion of antivirus systems, and
defenders try to gain insight into their methods. This work proposes a new
algorithm that combines Malware Evasion and Model Extraction (MEME) attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows
executable binary samples while simultaneously training a surrogate model with
a high agreement with the target model to evade. To evaluate this method, we
compare it with two state-of-the-art attacks in adversarial malware creation,
using three well-known published models and one antivirus product as targets.
Results show that MEME outperforms the state-of-the-art methods in terms of
evasion capabilities in almost all cases, producing evasive malware with an
evasion rate in the range of 32-73%. It also produces surrogate models with a
prediction label agreement with the respective target models between 97-99%.
The surrogate could be used to fine-tune and improve the evasion rate in the
future.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 08:55:27 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Rigaki",
"Maria",
""
],
[
"Garcia",
"Sebastian",
""
]
] |
new_dataset
| 0.977937 |
2308.16570
|
Bruno Sousa Miguel
|
Duarte Dias, Bruno Sousa, Nuno Antunes
|
MONDEO: Multistage Botnet Detection
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile devices have widespread to become the most used piece of technology.
Due to their characteristics, they have become major targets for botnet-related
malware. FluBot is one example of botnet malware that infects mobile devices.
In particular, FluBot is a DNS-based botnet that uses Domain Generation
Algorithms (DGA) to establish communication with the Command and Control Server
(C2). MONDEO is a multistage mechanism with a flexible design to detect
DNS-based botnet malware. MONDEO is lightweight and can be deployed without
requiring the deployment of software, agents, or configuration in mobile
devices, allowing easy integration in core networks. MONDEO comprises four
detection stages: Blacklisting/Whitelisting, Query rate analysis, DGA analysis,
and Machine learning evaluation. It was created with the goal of processing
streams of packets to identify attacks with high efficiency, in the distinct
phases. MONDEO was tested against several datasets to measure its efficiency
and performance, being able to achieve high performance with RandomForest
classifiers. The implementation is available at github.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 09:12:30 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Dias",
"Duarte",
""
],
[
"Sousa",
"Bruno",
""
],
[
"Antunes",
"Nuno",
""
]
] |
new_dataset
| 0.999562 |
2308.16571
|
Asif Azad
|
Ashrafur Rahman Khan, Asif Azad
|
Document Layout Analysis on BaDLAD Dataset: A Comprehensive MViTv2 Based
Approach
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In the rapidly evolving digital era, the analysis of document layouts plays a
pivotal role in automated information extraction and interpretation. In our
work, we have trained MViTv2 transformer model architecture with cascaded mask
R-CNN on BaDLAD dataset to extract text box, paragraphs, images and tables from
a document. After training on 20365 document images for 36 epochs in a 3 phase
cycle, we achieved a training loss of 0.2125 and a mask loss of 0.19. Our work
extends beyond training, delving into the exploration of potential enhancement
avenues. We investigate the impact of rotation and flip augmentation, the
effectiveness of slicing input images pre-inference, the implications of
varying the resolution of the transformer backbone, and the potential of
employing a dual-pass inference to uncover missed text-boxes. Through these
explorations, we observe a spectrum of outcomes, where some modifications
result in tangible performance improvements, while others offer unique insights
for future endeavors.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 09:12:34 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Khan",
"Ashrafur Rahman",
""
],
[
"Azad",
"Asif",
""
]
] |
new_dataset
| 0.998875 |
2308.16615
|
Lossan Bonde
|
Lossan Bonde, Severin Dembele
|
High Accuracy Location Information Extraction from Social Network Texts
Using Natural Language Processing
| null |
International Journal on Natural Language Computing (IJNLC)
Vol.12, No.4, August 2023
|
10.5121/ijnlc.2023.12401
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Terrorism has become a worldwide plague with severe consequences for the
development of nations. Besides killing innocent people daily and preventing
educational activities from taking place, terrorism is also hindering economic
growth. Machine Learning (ML) and Natural Language Processing (NLP) can
contribute to fighting terrorism by predicting in real-time future terrorist
attacks if accurate data is available. This paper is part of a research project
that uses text from social networks to extract necessary information to build
an adequate dataset for terrorist attack prediction. We collected a set of 3000
social network texts about terrorism in Burkina Faso and used a subset to
experiment with existing NLP solutions. The experiment reveals that existing
solutions have poor accuracy for location recognition, which our solution
resolves. We will extend the solution to extract dates and action information
to achieve the project's goal.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 10:21:24 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Bonde",
"Lossan",
""
],
[
"Dembele",
"Severin",
""
]
] |
new_dataset
| 0.99452 |
2308.16632
|
Changli Wu
|
Changli Wu, Yiwei Ma, Qi Chen, Haowei Wang, Gen Luo, Jiayi Ji,
Xiaoshuai Sun
|
3D-STMN: Dependency-Driven Superpoint-Text Matching Network for
End-to-End 3D Referring Expression Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 3D Referring Expression Segmentation (3D-RES), the earlier approach adopts
a two-stage paradigm, extracting segmentation proposals and then matching them
with referring expressions. However, this conventional paradigm encounters
significant challenges, most notably in terms of the generation of lackluster
initial proposals and a pronounced deceleration in inference speed. Recognizing
these limitations, we introduce an innovative end-to-end Superpoint-Text
Matching Network (3D-STMN) that is enriched by dependency-driven insights. One
of the keystones of our model is the Superpoint-Text Matching (STM) mechanism.
Unlike traditional methods that navigate through instance proposals, STM
directly correlates linguistic indications with their respective superpoints,
clusters of semantically related points. This architectural decision empowers
our model to efficiently harness cross-modal semantic relationships, primarily
leveraging densely annotated superpoint-text pairs, as opposed to the more
sparse instance-text pairs. In pursuit of enhancing the role of text in guiding
the segmentation process, we further incorporate the Dependency-Driven
Interaction (DDI) module to deepen the network's semantic comprehension of
referring expressions. Using the dependency trees as a beacon, this module
discerns the intricate relationships between primary terms and their associated
descriptors in expressions, thereby elevating both the localization and
segmentation capacities of our model. Comprehensive experiments on the
ScanRefer benchmark reveal that our model not only set new performance
standards, registering an mIoU gain of 11.7 points but also achieve a
staggering enhancement in inference speed, surpassing traditional methods by
95.7 times. The code and models are available at
https://github.com/sosppxo/3D-STMN.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 11:00:03 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Wu",
"Changli",
""
],
[
"Ma",
"Yiwei",
""
],
[
"Chen",
"Qi",
""
],
[
"Wang",
"Haowei",
""
],
[
"Luo",
"Gen",
""
],
[
"Ji",
"Jiayi",
""
],
[
"Sun",
"Xiaoshuai",
""
]
] |
new_dataset
| 0.997719 |
2308.16687
|
Avi Shmidman
|
Shaltiel Shmidman, Avi Shmidman, Moshe Koppel
|
DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present DictaBERT, a new state-of-the-art pre-trained BERT model for
modern Hebrew, outperforming existing models on most benchmarks. Additionally,
we release two fine-tuned versions of the model, designed to perform two
specific foundational tasks in the analysis of Hebrew texts: prefix
segmentation and morphological tagging. These fine-tuned models allow any
developer to perform prefix segmentation and morphological tagging of a Hebrew
sentence with a single call to a HuggingFace model, without the need to
integrate any additional libraries or code. In this paper we describe the
details of the training as well and the results on the different benchmarks. We
release the models to the community, along with sample code demonstrating their
use. We release these models as part of our goal to help further research and
development in Hebrew NLP.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 12:43:18 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Shmidman",
"Shaltiel",
""
],
[
"Shmidman",
"Avi",
""
],
[
"Koppel",
"Moshe",
""
]
] |
new_dataset
| 0.99975 |
2308.16692
|
Dong Zhang Zhang
|
Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, Xipeng Qiu
|
SpeechTokenizer: Unified Speech Tokenizer for Speech Large Language
Models
|
SpeechTokenizer project page is
https://0nutation.github.io/SpeechTokenizer.github.io/
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Current speech large language models build upon discrete speech
representations, which can be categorized into semantic tokens and acoustic
tokens. However, existing speech tokens are not specifically designed for
speech language modeling. To assess the suitability of speech tokens for
building speech language models, we established the first benchmark,
SLMTokBench. Our results indicate that neither semantic nor acoustic tokens are
ideal for this purpose. Therefore, we propose SpeechTokenizer, a unified speech
tokenizer for speech large language models. SpeechTokenizer adopts the
Encoder-Decoder architecture with residual vector quantization (RVQ). Unifying
semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of
speech information hierarchically across different RVQ layers. Furthermore, We
construct a Unified Speech Language Model (USLM) leveraging SpeechTokenizer.
Experiments show that SpeechTokenizer performs comparably to EnCodec in speech
reconstruction and demonstrates strong performance on the SLMTokBench
benchmark. Also, USLM outperforms VALL-E in zero-shot Text-to-Speech tasks.
Code and models are available at
https://github.com/ZhangXInFD/SpeechTokenizer/.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 12:53:09 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Zhang",
"Xin",
""
],
[
"Zhang",
"Dong",
""
],
[
"Li",
"Shimin",
""
],
[
"Zhou",
"Yaqian",
""
],
[
"Qiu",
"Xipeng",
""
]
] |
new_dataset
| 0.997288 |
2308.16705
|
Nayeon Lee
|
Nayeon Lee, Chani Jung, Junho Myung, Jiho Jin, Juho Kim, Alice Oh
|
CReHate: Cross-cultural Re-annotation of English Hate Speech Dataset
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
English datasets predominantly reflect the perspectives of certain
nationalities, which can lead to cultural biases in models and datasets. This
is particularly problematic in tasks heavily influenced by subjectivity, such
as hate speech detection. To delve into how individuals from different
countries perceive hate speech, we introduce CReHate, a cross-cultural
re-annotation of the sampled SBIC dataset. This dataset includes annotations
from five distinct countries: Australia, Singapore, South Africa, the United
Kingdom, and the United States. Our thorough statistical analysis highlights
significant differences based on nationality, with only 59.4% of the samples
achieving consensus among all countries. We also introduce a culturally
sensitive hate speech classifier via transfer learning, adept at capturing
perspectives of different nationalities. These findings underscore the need to
re-evaluate certain aspects of NLP research, especially with regard to the
nuanced nature of hate speech in the English language.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 13:14:47 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Lee",
"Nayeon",
""
],
[
"Jung",
"Chani",
""
],
[
"Myung",
"Junho",
""
],
[
"Jin",
"Jiho",
""
],
[
"Kim",
"Juho",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.999694 |
2308.16743
|
Jacopo Panerati
|
Spencer Teetaert (1), Wenda Zhao (1), Niu Xinyuan (2), Hashir Zahir
(2), Huiyu Leong (2), Michel Hidalgo (3), Gerardo Puga (3), Tomas Lorente
(3), Nahuel Espinosa (3), John Alejandro Duarte Carrasco (3), Kaizheng Zhang
(4), Jian Di (4), Tao Jin (4), Xiaohan Li (4), Yijia Zhou (4), Xiuhua Liang
(4), Chenxu Zhang (4), Antonio Loquercio (5), Siqi Zhou (1 and 6), Lukas
Brunke (1 and 6), Melissa Greeff (1), Wolfgang Hoenig (7), Jacopo Panerati
(1), Angela P. Schoellig (1 and 6) ((1) University of Toronto Institute for
Aerospace Studies, (2) Team H2, (3) Team Ekumen, (4) University of Science
and Technology of China, (5) University of California Berkeley, (6) Technical
University of Munich, (7) Technical University of Berlin)
|
A Remote Sim2real Aerial Competition: Fostering Reproducibility and
Solutions' Diversity in Robotics Challenges
|
13 pages, 16 figures, 4 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shared benchmark problems have historically been a fundamental driver of
progress for scientific communities. In the context of academic conferences,
competitions offer the opportunity to researchers with different origins,
backgrounds, and levels of seniority to quantitatively compare their ideas. In
robotics, a hot and challenging topic is sim2real-porting approaches that work
well in simulation to real robot hardware. In our case, creating a hybrid
competition with both simulation and real robot components was also dictated by
the uncertainties around travel and logistics in the post-COVID-19 world.
Hence, this article motivates and describes an aerial sim2real robot
competition that ran during the 2022 IEEE/RSJ International Conference on
Intelligent Robots and Systems, from the specification of the competition task,
to the details of the software infrastructure supporting simulation and
real-life experiments, to the approaches of the top-placed teams and the
lessons learned by participants and organizers.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 14:02:41 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Teetaert",
"Spencer",
"",
"1 and 6"
],
[
"Zhao",
"Wenda",
"",
"1 and 6"
],
[
"Xinyuan",
"Niu",
"",
"1 and 6"
],
[
"Zahir",
"Hashir",
"",
"1 and 6"
],
[
"Leong",
"Huiyu",
"",
"1 and 6"
],
[
"Hidalgo",
"Michel",
"",
"1 and 6"
],
[
"Puga",
"Gerardo",
"",
"1 and 6"
],
[
"Lorente",
"Tomas",
"",
"1 and 6"
],
[
"Espinosa",
"Nahuel",
"",
"1 and 6"
],
[
"Carrasco",
"John Alejandro Duarte",
"",
"1 and 6"
],
[
"Zhang",
"Kaizheng",
"",
"1 and 6"
],
[
"Di",
"Jian",
"",
"1 and 6"
],
[
"Jin",
"Tao",
"",
"1 and 6"
],
[
"Li",
"Xiaohan",
"",
"1 and 6"
],
[
"Zhou",
"Yijia",
"",
"1 and 6"
],
[
"Liang",
"Xiuhua",
"",
"1 and 6"
],
[
"Zhang",
"Chenxu",
"",
"1 and 6"
],
[
"Loquercio",
"Antonio",
"",
"1 and 6"
],
[
"Zhou",
"Siqi",
"",
"1 and 6"
],
[
"Brunke",
"Lukas",
"",
"1 and 6"
],
[
"Greeff",
"Melissa",
"",
"1 and 6"
],
[
"Hoenig",
"Wolfgang",
"",
"1 and 6"
],
[
"Panerati",
"Jacopo",
"",
"1 and 6"
],
[
"Schoellig",
"Angela P.",
"",
"1 and 6"
]
] |
new_dataset
| 0.984827 |
2308.16744
|
Mohsen Koohi Esfahani
|
Mohsen Koohi Esfahani, Paolo Boldi, Hans Vandierendonck, Peter
Kilpatrick, Sebastiano Vigna
|
MS-BioGraphs: Sequence Similarity Graph Datasets
| null | null | null | null |
cs.DC cs.AR cs.CE cs.DM cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Progress in High-Performance Computing in general, and High-Performance Graph
Processing in particular, is highly dependent on the availability of
publicly-accessible, relevant, and realistic data sets.
To ensure continuation of this progress, we (i) investigate and optimize the
process of generating large sequence similarity graphs as an HPC challenge and
(ii) demonstrate this process in creating MS-BioGraphs, a new family of
publicly available real-world edge-weighted graph datasets with up to $2.5$
trillion edges, that is, $6.6$ times greater than the largest graph published
recently. The largest graph is created by matching (i.e., all-to-all similarity
aligning) $1.7$ billion protein sequences. The MS-BioGraphs family includes
also seven subgraphs with different sizes and direction types.
We describe two main challenges we faced in generating large graph datasets
and our solutions, that are, (i) optimizing data structures and algorithms for
this multi-step process and (ii) WebGraph parallel compression technique. We
present a comparative study of structural characteristics of MS-BioGraphs.
The datasets are available online on
https://blogs.qub.ac.uk/DIPSA/MS-BioGraphs .
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 14:04:28 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Esfahani",
"Mohsen Koohi",
""
],
[
"Boldi",
"Paolo",
""
],
[
"Vandierendonck",
"Hans",
""
],
[
"Kilpatrick",
"Peter",
""
],
[
"Vigna",
"Sebastiano",
""
]
] |
new_dataset
| 0.991711 |
2308.16813
|
Tim Scargill
|
Tim Scargill and Ying Chen and Tianyi Hu and Maria Gorlatova
|
SiTAR: Situated Trajectory Analysis for In-the-Wild Pose Error
Estimation
|
To appear in Proceedings of IEEE ISMAR 2023
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Virtual content instability caused by device pose tracking error remains a
prevalent issue in markerless augmented reality (AR), especially on smartphones
and tablets. However, when examining environments which will host AR
experiences, it is challenging to determine where those instability artifacts
will occur; we rarely have access to ground truth pose to measure pose error,
and even if pose error is available, traditional visualizations do not connect
that data with the real environment, limiting their usefulness. To address
these issues we present SiTAR (Situated Trajectory Analysis for Augmented
Reality), the first situated trajectory analysis system for AR that
incorporates estimates of pose tracking error. We start by developing the first
uncertainty-based pose error estimation method for visual-inertial simultaneous
localization and mapping (VI-SLAM), which allows us to obtain pose error
estimates without ground truth; we achieve an average accuracy of up to 96.1%
and an average F1 score of up to 0.77 in our evaluations on four VI-SLAM
datasets. Next we present our SiTAR system, implemented for ARCore devices,
combining a backend that supplies uncertainty-based pose error estimates with a
frontend that generates situated trajectory visualizations. Finally, we
evaluate the efficacy of SiTAR in realistic conditions by testing three
visualization techniques in an in-the-wild study with 15 users and 13 diverse
environments; this study reveals the impact both environment scale and the
properties of surfaces present can have on user experience and task
performance.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 15:41:21 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Scargill",
"Tim",
""
],
[
"Chen",
"Ying",
""
],
[
"Hu",
"Tianyi",
""
],
[
"Gorlatova",
"Maria",
""
]
] |
new_dataset
| 0.999792 |
2308.16857
|
Md Simul Hasan Talukder
|
Md Sakib Ullah Sourav, Mohammad Sultan Mahmud, Md Simul Hasan
Talukder, Rejwan Bin Sulaiman, Abdullah Yasin
|
IoMT-Blockchain based Secured Remote Patient Monitoring Framework for
Neuro-Stimulation Device
|
8 Figures and 2 Tables
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Biomedical Engineering's Internet of Medical Things (IoMT) is helping to
improve the accuracy, dependability, and productivity of electronic equipment
in the healthcare business. Real-time sensory data from patients may be
delivered and subsequently analyzed through rapid development of wearable IoMT
devices, such as neuro-stimulation devices with a range of functions. Data from
the Internet of Things is gathered, analyzed, and stored in a single location.
However, single-point failure, data manipulation, privacy difficulties, and
other challenges might arise as a result of centralization. Due to its
decentralized nature, blockchain (BC) can alleviate these issues. The viability
of establishing a non-invasive remote neurostimulation system employing
IoMT-based transcranial Direct Current Stimulation is investigated in this work
(tDCS). A hardware-based prototype tDCS device has been developed that can be
operated over the internet using an android application. Our suggested
framework addresses the problems of IoMTBC-based systems, meets the criteria of
real-time remote patient monitoring systems, and incorporates literature best
practices in the relevant fields.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 16:59:58 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Sourav",
"Md Sakib Ullah",
""
],
[
"Mahmud",
"Mohammad Sultan",
""
],
[
"Talukder",
"Md Simul Hasan",
""
],
[
"Sulaiman",
"Rejwan Bin",
""
],
[
"Yasin",
"Abdullah",
""
]
] |
new_dataset
| 0.99635 |
2308.16876
|
Jiaben Chen
|
Jiaben Chen, Huaizu Jiang
|
SportsSloMo: A New Benchmark and Baselines for Human-centric Video Frame
Interpolation
|
Project Page: https://neu-vi.github.io/SportsSlomo/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human-centric video frame interpolation has great potential for improving
people's entertainment experiences and finding commercial applications in the
sports analysis industry, e.g., synthesizing slow-motion videos. Although there
are multiple benchmark datasets available in the community, none of them is
dedicated for human-centric scenarios. To bridge this gap, we introduce
SportsSloMo, a benchmark consisting of more than 130K video clips and 1M video
frames of high-resolution ($\geq$720p) slow-motion sports videos crawled from
YouTube. We re-train several state-of-the-art methods on our benchmark, and the
results show a decrease in their accuracy compared to other datasets. It
highlights the difficulty of our benchmark and suggests that it poses
significant challenges even for the best-performing methods, as human bodies
are highly deformable and occlusions are frequent in sports videos. To improve
the accuracy, we introduce two loss terms considering the human-aware priors,
where we add auxiliary supervision to panoptic segmentation and human keypoints
detection, respectively. The loss terms are model agnostic and can be easily
plugged into any video frame interpolation approaches. Experimental results
validate the effectiveness of our proposed loss terms, leading to consistent
performance improvement over 5 existing models, which establish strong baseline
models on our benchmark. The dataset and code can be found at:
https://neu-vi.github.io/SportsSlomo/.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:23:50 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Chen",
"Jiaben",
""
],
[
"Jiang",
"Huaizu",
""
]
] |
new_dataset
| 0.999208 |
2308.16877
|
Zane Fink
|
Zane Fink, Konstantinos Parasyris, Giorgis Georgakoudis, Harshitha
Menon
|
HPAC-Offload: Accelerating HPC Applications with Portable Approximate
Computing on the GPU
|
12 pages, 12 pages. Accepted at SC23
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The end of Dennard scaling and the slowdown of Moore's law led to a shift in
technology trends toward parallel architectures, particularly in HPC systems.
To continue providing performance benefits, HPC should embrace Approximate
Computing (AC), which trades application quality loss for improved performance.
However, existing AC techniques have not been extensively applied and evaluated
in state-of-the-art hardware architectures such as GPUs, the primary execution
vehicle for HPC applications today.
This paper presents HPAC-Offload, a pragma-based programming model that
extends OpenMP offload applications to support AC techniques, allowing portable
approximations across different GPU architectures. We conduct a comprehensive
performance analysis of HPAC-Offload across GPU-accelerated HPC applications,
revealing that AC techniques can significantly accelerate HPC applications
(1.64x LULESH on AMD, 1.57x NVIDIA) with minimal quality loss (0.1%). Our
analysis offers deep insights into the performance of GPU-based AC that guide
the future development of AC algorithms and systems for these architectures.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:32:44 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Fink",
"Zane",
""
],
[
"Parasyris",
"Konstantinos",
""
],
[
"Georgakoudis",
"Giorgis",
""
],
[
"Menon",
"Harshitha",
""
]
] |
new_dataset
| 0.977665 |
2308.16880
|
Inwoo Hwang
|
Inwoo Hwang, Hyeonwoo Kim, Young Min Kim
|
Text2Scene: Text-driven Indoor Scene Stylization with Part-aware Details
|
Accepted to CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Text2Scene, a method to automatically create realistic textures
for virtual scenes composed of multiple objects. Guided by a reference image
and text descriptions, our pipeline adds detailed texture on labeled 3D
geometries in the room such that the generated colors respect the hierarchical
structure or semantic parts that are often composed of similar materials.
Instead of applying flat stylization on the entire scene at a single step, we
obtain weak semantic cues from geometric segmentation, which are further
clarified by assigning initial colors to segmented parts. Then we add texture
details for individual objects such that their projections on image space
exhibit feature embedding aligned with the embedding of the input. The
decomposition makes the entire pipeline tractable to a moderate amount of
computation resources and memory. As our framework utilizes the existing
resources of image and text embedding, it does not require dedicated datasets
with high-quality textures designed by skillful artists. To the best of our
knowledge, it is the first practical and scalable approach that can create
detailed and realistic textures of the desired style that maintain structural
context for scenes with multiple objects.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:37:23 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Hwang",
"Inwoo",
""
],
[
"Kim",
"Hyeonwoo",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.999458 |
2308.16884
|
Lucas Bandarkar
|
Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya
Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke
Zettlemoyer, Madian Khabsa
|
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122
Language Variants
|
27 pages, 13 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present Belebele, a multiple-choice machine reading comprehension (MRC)
dataset spanning 122 language variants. Significantly expanding the language
coverage of natural language understanding (NLU) benchmarks, this dataset
enables the evaluation of text models in high-, medium-, and low-resource
languages. Each question is based on a short passage from the Flores-200
dataset and has four multiple-choice answers. The questions were carefully
curated to discriminate between models with different levels of general
language comprehension. The English dataset on its own proves difficult enough
to challenge state-of-the-art language models. Being fully parallel, this
dataset enables direct comparison of model performance across all languages. We
use this dataset to evaluate the capabilities of multilingual masked language
models (MLMs) and large language models (LLMs). We present extensive results
and find that despite significant cross-lingual transfer in English-centric
LLMs, much smaller MLMs pretrained on balanced multilingual data still
understand far more languages. We also observe that larger vocabulary size and
conscious vocabulary construction correlate with better performance on
low-resource languages. Overall, Belebele opens up new avenues for evaluating
and analyzing the multilingual capabilities of NLP systems.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:43:08 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Bandarkar",
"Lucas",
""
],
[
"Liang",
"Davis",
""
],
[
"Muller",
"Benjamin",
""
],
[
"Artetxe",
"Mikel",
""
],
[
"Shukla",
"Satya Narayan",
""
],
[
"Husa",
"Donald",
""
],
[
"Goyal",
"Naman",
""
],
[
"Krishnan",
"Abhinandan",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Khabsa",
"Madian",
""
]
] |
new_dataset
| 0.999817 |
2308.16894
|
Manuel Kaufmann
|
Manuel Kaufmann, Jie Song, Chen Guo, Kaiyue Shen, Tianjian Jiang,
Chengcheng Tang, Juan Zarate, Otmar Hilliges
|
EMDB: The Electromagnetic Database of Global 3D Human Pose and Shape in
the Wild
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present EMDB, the Electromagnetic Database of Global 3D Human Pose and
Shape in the Wild. EMDB is a novel dataset that contains high-quality 3D SMPL
pose and shape parameters with global body and camera trajectories for
in-the-wild videos. We use body-worn, wireless electromagnetic (EM) sensors and
a hand-held iPhone to record a total of 58 minutes of motion data, distributed
over 81 indoor and outdoor sequences and 10 participants. Together with
accurate body poses and shapes, we also provide global camera poses and body
root trajectories. To construct EMDB, we propose a multi-stage optimization
procedure, which first fits SMPL to the 6-DoF EM measurements and then refines
the poses via image observations. To achieve high-quality results, we leverage
a neural implicit avatar model to reconstruct detailed human surface geometry
and appearance, which allows for improved alignment and smoothness via a dense
pixel-level objective. Our evaluations, conducted with a multi-view volumetric
capture system, indicate that EMDB has an expected accuracy of 2.3 cm
positional and 10.6 degrees angular error, surpassing the accuracy of previous
in-the-wild datasets. We evaluate existing state-of-the-art monocular RGB
methods for camera-relative and global pose estimation on EMDB. EMDB is
publicly available under https://ait.ethz.ch/emdb
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:56:19 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Kaufmann",
"Manuel",
""
],
[
"Song",
"Jie",
""
],
[
"Guo",
"Chen",
""
],
[
"Shen",
"Kaiyue",
""
],
[
"Jiang",
"Tianjian",
""
],
[
"Tang",
"Chengcheng",
""
],
[
"Zarate",
"Juan",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
new_dataset
| 0.999874 |
2308.16905
|
Sirui Xu
|
Sirui Xu, Zhengyuan Li, Yu-Xiong Wang, Liang-Yan Gui
|
InterDiff: Generating 3D Human-Object Interactions with Physics-Informed
Diffusion
|
ICCV 2023; Project Page: https://sirui-xu.github.io/InterDiff/
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses a novel task of anticipating 3D human-object
interactions (HOIs). Most existing research on HOI synthesis lacks
comprehensive whole-body interactions with dynamic objects, e.g., often limited
to manipulating small or static objects. Our task is significantly more
challenging, as it requires modeling dynamic objects with various shapes,
capturing whole-body motion, and ensuring physically valid interactions. To
this end, we propose InterDiff, a framework comprising two key steps: (i)
interaction diffusion, where we leverage a diffusion model to encode the
distribution of future human-object interactions; (ii) interaction correction,
where we introduce a physics-informed predictor to correct denoised HOIs in a
diffusion step. Our key insight is to inject prior knowledge that the
interactions under reference with respect to contact points follow a simple
pattern and are easily predictable. Experiments on multiple human-object
interaction datasets demonstrate the effectiveness of our method for this task,
capable of producing realistic, vivid, and remarkably long-term 3D HOI
predictions.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 17:59:08 GMT"
}
] | 2023-09-01T00:00:00 |
[
[
"Xu",
"Sirui",
""
],
[
"Li",
"Zhengyuan",
""
],
[
"Wang",
"Yu-Xiong",
""
],
[
"Gui",
"Liang-Yan",
""
]
] |
new_dataset
| 0.9893 |
1811.03325
|
Xiaoshi Zhong
|
Xiaoshi Zhong and Xiang Yu and Erik Cambria and Jagath C. Rajapakse
|
Marshall-Olkin Power-Law Distributions in Length-Frequency of Entities
|
33 pages, 3 figures (30 subfigures), 8 tables. To appear in
Knowledge-Based Systems
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entities involve important concepts with concrete meanings and play important
roles in numerous linguistic tasks. Entities have different forms in different
linguistic tasks and researchers treat those different forms as different
concepts. In this paper, we are curious to know whether there are some common
characteristics that connect those different forms of entities. Specifically,
we investigate the underlying distributions of entities from different types
and different languages, trying to figure out some common characteristics
behind those diverse entities. After analyzing twelve datasets about different
types of entities and eighteen datasets about entities in different languages,
we find that while these entities are dramatically diverse from each other in
many aspects, their length-frequencies can be well characterized by a family of
Marshall-Olkin power-law (MOPL) distributions. We conduct experiments on those
thirty datasets about entities in different types and different languages, and
experimental results demonstrate that MOPL models characterize the
length-frequencies of entities much better than two state-of-the-art power-law
models and an alternative log-normal model. Experimental results also
demonstrate that MOPL models are scalable to the length-frequency of entities
in large-scale real-world datasets.
|
[
{
"version": "v1",
"created": "Thu, 8 Nov 2018 09:16:19 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Nov 2018 14:23:31 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Dec 2018 15:27:40 GMT"
},
{
"version": "v4",
"created": "Wed, 10 May 2023 08:47:37 GMT"
},
{
"version": "v5",
"created": "Wed, 30 Aug 2023 04:39:22 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Zhong",
"Xiaoshi",
""
],
[
"Yu",
"Xiang",
""
],
[
"Cambria",
"Erik",
""
],
[
"Rajapakse",
"Jagath C.",
""
]
] |
new_dataset
| 0.990223 |
2211.02423
|
Aleksandr Chuklin
|
Aleksandr Chuklin, Justin Zhao, Mihir Kale
|
CLSE: Corpus of Linguistically Significant Entities
|
Proceedings of the 2nd Workshop on Natural Language Generation,
Evaluation, and Metrics (GEM 2022) at EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
One of the biggest challenges of natural language generation (NLG) is the
proper handling of named entities. Named entities are a common source of
grammar mistakes such as wrong prepositions, wrong article handling, or
incorrect entity inflection. Without factoring linguistic representation, such
errors are often underrepresented when evaluating on a small set of arbitrarily
picked argument values, or when translating a dataset from a linguistically
simpler language, like English, to a linguistically complex language, like
Russian. However, for some applications, broadly precise grammatical
correctness is critical -- native speakers may find entity-related grammar
errors silly, jarring, or even offensive.
To enable the creation of more linguistically diverse NLG datasets, we
release a Corpus of Linguistically Significant Entities (CLSE) annotated by
linguist experts. The corpus includes 34 languages and covers 74 different
semantic types to support various applications from airline ticketing to video
games. To demonstrate one possible use of CLSE, we produce an augmented version
of the Schema-Guided Dialog Dataset, SGD-CLSE. Using the CLSE's entities and a
small number of human translations, we create a linguistically representative
NLG evaluation benchmark in three languages: French (high-resource), Marathi
(low-resource), and Russian (highly inflected language). We establish quality
baselines for neural, template-based, and hybrid NLG systems and discuss the
strengths and weaknesses of each approach.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 12:56:12 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 12:30:33 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Chuklin",
"Aleksandr",
""
],
[
"Zhao",
"Justin",
""
],
[
"Kale",
"Mihir",
""
]
] |
new_dataset
| 0.997753 |
2211.12436
|
Beerend Gerats
|
Beerend G.A. Gerats, Jelmer M. Wolterink, Ivo A.M.J. Broeders
|
Dynamic Depth-Supervised NeRF for Multi-View RGB-D Operating Room Images
|
Accepted to the Workshop on Ambient Intelligence for HealthCare 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The operating room (OR) is an environment of interest for the development of
sensing systems, enabling the detection of people, objects, and their semantic
relations. Due to frequent occlusions in the OR, these systems often rely on
input from multiple cameras. While increasing the number of cameras generally
increases algorithm performance, there are hard limitations to the number and
locations of cameras in the OR. Neural Radiance Fields (NeRF) can be used to
render synthetic views from arbitrary camera positions, virtually enlarging the
number of cameras in the dataset. In this work, we explore the use of NeRF for
view synthesis of dynamic scenes in the OR, and we show that regularisation
with depth supervision from RGB-D sensor data results in higher image quality.
We optimise a dynamic depth-supervised NeRF with up to six synchronised cameras
that capture the surgical field in five distinct phases before and during a
knee replacement surgery. We qualitatively inspect views rendered by a virtual
camera that moves 180 degrees around the surgical field at differing time
values. Quantitatively, we evaluate view synthesis from an unseen camera
position in terms of PSNR, SSIM and LPIPS for the colour channels and in MAE
and error percentage for the estimated depth. We find that NeRFs can be used to
generate geometrically consistent views, also from interpolated camera
positions and at interpolated time intervals. Views are generated from an
unseen camera pose with an average PSNR of 18.2 and a depth estimation error of
2.0%. Our results show the potential of a dynamic NeRF for view synthesis in
the OR and stress the relevance of depth supervision in a clinical setting.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 17:45:06 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 08:40:16 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Gerats",
"Beerend G. A.",
""
],
[
"Wolterink",
"Jelmer M.",
""
],
[
"Broeders",
"Ivo A. M. J.",
""
]
] |
new_dataset
| 0.951138 |
2211.12542
|
Yan Xia
|
Yan Xia, Mariia Gladkova, Rui Wang, Qianyun Li, Uwe Stilla, Jo\~ao F.
Henriques, Daniel Cremers
|
CASSPR: Cross Attention Single Scan Place Recognition
|
Accepted by ICCV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition based on point clouds (LiDAR) is an important component for
autonomous robots or self-driving vehicles. Current SOTA performance is
achieved on accumulated LiDAR submaps using either point-based or voxel-based
structures. While voxel-based approaches nicely integrate spatial context
across multiple scales, they do not exhibit the local precision of point-based
methods. As a result, existing methods struggle with fine-grained matching of
subtle geometric features in sparse single-shot Li- DAR scans. To overcome
these limitations, we propose CASSPR as a method to fuse point-based and
voxel-based approaches using cross attention transformers. CASSPR leverages a
sparse voxel branch for extracting and aggregating information at lower
resolution and a point-wise branch for obtaining fine-grained local
information. CASSPR uses queries from one branch to try to match structures in
the other branch, ensuring that both extract self-contained descriptors of the
point cloud (rather than one branch dominating), but using both to inform the
output global descriptor of the point cloud. Extensive experiments show that
CASSPR surpasses the state-of-the-art by a large margin on several datasets
(Oxford RobotCar, TUM, USyd). For instance, it achieves AR@1 of 85.6% on the
TUM dataset, surpassing the strongest prior model by ~15%. Our code is publicly
available.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 19:18:30 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Aug 2023 18:40:19 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Xia",
"Yan",
""
],
[
"Gladkova",
"Mariia",
""
],
[
"Wang",
"Rui",
""
],
[
"Li",
"Qianyun",
""
],
[
"Stilla",
"Uwe",
""
],
[
"Henriques",
"João F.",
""
],
[
"Cremers",
"Daniel",
""
]
] |
new_dataset
| 0.979223 |
2212.03741
|
Ronghui Li
|
Ronghui Li, Junfan Zhao, Yachao Zhang, Mingyang Su, Zeping Ren, Han
Zhang, Yansong Tang, Xiu Li
|
FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance
Generation
|
Accepted by ICCV 2023
| null | null | null |
cs.CV cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating full-body and multi-genre dance sequences from given music is a
challenging task, due to the limitations of existing datasets and the inherent
complexity of the fine-grained hand motion and dance genres. To address these
problems, we propose FineDance, which contains 14.6 hours of music-dance paired
data, with fine-grained hand motions, fine-grained genres (22 dance genres),
and accurate posture. To the best of our knowledge, FineDance is the largest
music-dance paired dataset with the most dance genres. Additionally, to address
monotonous and unnatural hand movements existing in previous methods, we
propose a full-body dance generation network, which utilizes the diverse
generation capabilities of the diffusion model to solve monotonous problems,
and use expert nets to solve unreal problems. To further enhance the
genre-matching and long-term stability of generated dances, we propose a
Genre&Coherent aware Retrieval Module. Besides, we propose a novel metric named
Genre Matching Score to evaluate the genre-matching degree between dance and
music. Quantitative and qualitative experiments demonstrate the quality of
FineDance, and the state-of-the-art performance of FineNet. The FineDance
Dataset and more qualitative samples can be found at our website.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 16:10:08 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 15:49:30 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Mar 2023 07:09:41 GMT"
},
{
"version": "v4",
"created": "Wed, 30 Aug 2023 04:18:50 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Li",
"Ronghui",
""
],
[
"Zhao",
"Junfan",
""
],
[
"Zhang",
"Yachao",
""
],
[
"Su",
"Mingyang",
""
],
[
"Ren",
"Zeping",
""
],
[
"Zhang",
"Han",
""
],
[
"Tang",
"Yansong",
""
],
[
"Li",
"Xiu",
""
]
] |
new_dataset
| 0.999921 |
2303.02862
|
Jianping Jiang
|
Jianping Jiang, Jiahe Li, Baowen Zhang, Xiaoming Deng, Boxin Shi
|
EvHandPose: Event-based 3D Hand Pose Estimation with Sparse Supervision
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event camera shows great potential in 3D hand pose estimation, especially
addressing the challenges of fast motion and high dynamic range in a low-power
way. However, due to the asynchronous differential imaging mechanism, it is
challenging to design event representation to encode hand motion information
especially when the hands are not moving (causing motion ambiguity), and it is
infeasible to fully annotate the temporally dense event stream. In this paper,
we propose EvHandPose with novel hand flow representations in Event-to-Pose
module for accurate hand pose estimation and alleviating the motion ambiguity
issue. To solve the problem under sparse annotation, we design contrast
maximization and hand-edge constraints in Pose-to-IWE (Image with Warped
Events) module and formulate EvHandPose in a weakly-supervision framework. We
further build EvRealHands, the first large-scale real-world event-based hand
pose dataset on several challenging scenes to bridge the real-synthetic domain
gap. Experiments on EvRealHands demonstrate that EvHandPose outperforms
previous event-based methods under all evaluation scenes, achieves accurate and
stable hand pose estimation with high temporal resolution in fast motion and
strong light scenes compared with RGB-based methods, generalizes well to
outdoor scenes and another type of event camera, and shows the potential for
the hand gesture recognition task.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 03:27:17 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 03:21:29 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Jiang",
"Jianping",
""
],
[
"Li",
"Jiahe",
""
],
[
"Zhang",
"Baowen",
""
],
[
"Deng",
"Xiaoming",
""
],
[
"Shi",
"Boxin",
""
]
] |
new_dataset
| 0.987025 |
2305.09438
|
Nadav Schneider
|
Nadav Schneider, Tal Kadosh, Niranjan Hasabnis, Timothy Mattson, Yuval
Pinter, Gal Oren
|
MPI-rical: Data-Driven MPI Distributed Parallelism Assistance with
Transformers
| null | null | null | null |
cs.DC cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Message Passing Interface (MPI) plays a crucial role in distributed memory
parallelization across multiple nodes. However, parallelizing MPI code
manually, and specifically, performing domain decomposition, is a challenging,
error-prone task. In this paper, we address this problem by developing
MPI-RICAL, a novel data-driven, programming-assistance tool that assists
programmers in writing domain decomposition based distributed memory
parallelization code. Specifically, we train a supervised language model to
suggest MPI functions and their proper locations in the code on the fly. We
also introduce MPICodeCorpus, the first publicly available corpus of MPI-based
parallel programs that is created by mining more than 15,000 open-source
repositories on GitHub. Experimental results have been done on MPICodeCorpus
and more importantly, on a compiled benchmark of MPI-based parallel programs
for numerical computations that represent real-world scientific applications.
MPI-RICAL achieves F1 scores between 0.87-0.91 on these programs, demonstrating
its accuracy in suggesting correct MPI functions at appropriate code
locations.. The source code used in this work, as well as other relevant
sources, are available at:
https://github.com/Scientific-Computing-Lab-NRCN/MPI-rical
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 13:50:24 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Aug 2023 04:54:10 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Aug 2023 14:56:16 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Schneider",
"Nadav",
""
],
[
"Kadosh",
"Tal",
""
],
[
"Hasabnis",
"Niranjan",
""
],
[
"Mattson",
"Timothy",
""
],
[
"Pinter",
"Yuval",
""
],
[
"Oren",
"Gal",
""
]
] |
new_dataset
| 0.998105 |
2305.12596
|
Shivangi Yadav
|
Shivangi Yadav and Arun Ross
|
iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris
Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generative Adversarial Networks (GANs) have shown success in approximating
complex distributions for synthetic image generation. However, current
GAN-based methods for generating biometric images, such as iris, have certain
limitations: (a) the synthetic images often closely resemble images in the
training dataset; (b) the generated images lack diversity in terms of the
number of unique identities represented in them; and (c) it is difficult to
generate multiple images pertaining to the same identity. To overcome these
issues, we propose iWarpGAN that disentangles identity and style in the context
of the iris modality by using two transformation pathways: Identity
Transformation Pathway to generate unique identities from the training set, and
Style Transformation Pathway to extract the style code from a reference image
and output an iris image using this style. By concatenating the transformed
identity code and reference style code, iWarpGAN generates iris images with
both inter- and intra-class variations. The efficacy of the proposed method in
generating such iris DeepFakes is evaluated both qualitatively and
quantitatively using ISO/IEC 29794-6 Standard Quality Metrics and the VeriEye
iris matcher. Further, the utility of the synthetically generated images is
demonstrated by improving the performance of deep learning based iris matchers
that augment synthetic data with real data during the training process.
|
[
{
"version": "v1",
"created": "Sun, 21 May 2023 23:10:14 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 03:55:54 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Yadav",
"Shivangi",
""
],
[
"Ross",
"Arun",
""
]
] |
new_dataset
| 0.998544 |
2305.13820
|
Laurie Burchell
|
Laurie Burchell, Alexandra Birch, Nikolay Bogoychev and Kenneth
Heafield
|
An Open Dataset and Model for Language Identification
|
To be published in ACL 2023
| null |
10.18653/v1/2023.acl-short.75
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language identification (LID) is a fundamental step in many natural language
processing pipelines. However, current LID systems are far from perfect,
particularly on lower-resource languages. We present a LID model which achieves
a macro-average F1 score of 0.93 and a false positive rate of 0.033 across 201
languages, outperforming previous work. We achieve this by training on a
curated dataset of monolingual data, the reliability of which we ensure by
auditing a sample from each source and each language manually. We make both the
model and the dataset available to the research community. Finally, we carry
out detailed analysis into our model's performance, both in comparison to
existing open models and by language class.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 08:43:42 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Burchell",
"Laurie",
""
],
[
"Birch",
"Alexandra",
""
],
[
"Bogoychev",
"Nikolay",
""
],
[
"Heafield",
"Kenneth",
""
]
] |
new_dataset
| 0.999526 |
2305.18221
|
Bin Wang
|
Bin Wang, Hongyi Pan, Armstrong Aboah, Zheyuan Zhang, Elif Keles, Drew
Torigian, Baris Turkbey, Elizabeth Krupinski, Jayaram Udupa, Ulas Bagci
|
GazeGNN: A Gaze-Guided Graph Neural Network for Chest X-ray
Classification
|
WACV 2024
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Eye tracking research is important in computer vision because it can help us
understand how humans interact with the visual world. Specifically for
high-risk applications, such as in medical imaging, eye tracking can help us to
comprehend how radiologists and other medical professionals search, analyze,
and interpret images for diagnostic and clinical purposes. Hence, the
application of eye tracking techniques in disease classification has become
increasingly popular in recent years. Contemporary works usually transform gaze
information collected by eye tracking devices into visual attention maps (VAMs)
to supervise the learning process. However, this is a time-consuming
preprocessing step, which stops us from applying eye tracking to radiologists'
daily work. To solve this problem, we propose a novel gaze-guided graph neural
network (GNN), GazeGNN, to leverage raw eye-gaze data without being converted
into VAMs. In GazeGNN, to directly integrate eye gaze into image
classification, we create a unified representation graph that models both
images and gaze pattern information. With this benefit, we develop a real-time,
real-world, end-to-end disease classification algorithm for the first time in
the literature. This achievement demonstrates the practicality and feasibility
of integrating real-time eye tracking techniques into the daily work of
radiologists. To our best knowledge, GazeGNN is the first work that adopts GNN
to integrate image and eye-gaze data. Our experiments on the public chest X-ray
dataset show that our proposed method exhibits the best classification
performance compared to existing methods. The code is available at
https://github.com/ukaukaaaa/GazeGNN.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 17:01:54 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 01:03:20 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Aug 2023 20:52:57 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Wang",
"Bin",
""
],
[
"Pan",
"Hongyi",
""
],
[
"Aboah",
"Armstrong",
""
],
[
"Zhang",
"Zheyuan",
""
],
[
"Keles",
"Elif",
""
],
[
"Torigian",
"Drew",
""
],
[
"Turkbey",
"Baris",
""
],
[
"Krupinski",
"Elizabeth",
""
],
[
"Udupa",
"Jayaram",
""
],
[
"Bagci",
"Ulas",
""
]
] |
new_dataset
| 0.998669 |
2305.18415
|
Johann Brehmer Mr
|
Johann Brehmer, Pim de Haan, S\"onke Behrends, Taco Cohen
|
Geometric Algebra Transformers
|
v2: more experiments, more baselines
| null | null | null |
cs.LG cs.RO stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Problems involving geometric data arise in physics, chemistry, robotics,
computer vision, and many other fields. Such data can take numerous forms, such
as points, direction vectors, translations, or rotations, but to date there is
no single architecture that can be applied to such a wide variety of geometric
types while respecting their symmetries. In this paper we introduce the
Geometric Algebra Transformer (GATr), a general-purpose architecture for
geometric data. GATr represents inputs, outputs, and hidden states in the
projective geometric (or Clifford) algebra, which offers an efficient
16-dimensional vector-space representation of common geometric objects as well
as operators acting on them. GATr is equivariant with respect to E(3), the
symmetry group of 3D Euclidean space. As a Transformer, GATr is versatile,
efficient, and scalable. We demonstrate GATr in problems from n-body modeling
to wall-shear-stress estimation on large arterial meshes to robotic motion
planning. GATr consistently outperforms both non-geometric and equivariant
baselines in terms of error, data efficiency, and scalability.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 18:48:50 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 07:39:14 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Brehmer",
"Johann",
""
],
[
"de Haan",
"Pim",
""
],
[
"Behrends",
"Sönke",
""
],
[
"Cohen",
"Taco",
""
]
] |
new_dataset
| 0.997905 |
2305.19773
|
Matteo Nerini
|
Matteo Nerini, Bruno Clerckx
|
Pareto Frontier for the Performance-Complexity Trade-off in Beyond
Diagonal Reconfigurable Intelligent Surfaces
|
Accepted by IEEE for publication
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Reconfigurable intelligent surface (RIS) is an emerging technology allowing
to control the propagation environment in wireless communications. Recently,
beyond diagonal RIS (BD-RIS) has been proposed to reach higher performance than
conventional RIS, at the expense of higher circuit complexity. Multiple BD-RIS
architectures have been developed with the goal of reaching a favorable
trade-off between performance and circuit complexity. However, the fundamental
limits of this trade-off are still unexplored. In this paper, we fill this gap
by deriving the expression of the Pareto frontier for the
performance-complexity trade-off in BD-RIS. Additionally, we characterize the
optimal BD-RIS architectures reaching this Pareto frontier.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 12:06:47 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 17:18:36 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Nerini",
"Matteo",
""
],
[
"Clerckx",
"Bruno",
""
]
] |
new_dataset
| 0.983916 |
2306.00549
|
Stephan-Daniel Gravert
|
Stephan-Daniel Gravert, Elia Varini, Amirhossein Kazemipour, Mike Y.
Michelis, Thomas Buchner, Ronan Hinchet, Robert K. Katzschmann
|
Low Voltage Electrohydraulic Actuators for Untethered Robotics
|
Stephan-Daniel Gravert and Elia Varini contributed equally to this
work
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rigid robots can be precise in repetitive tasks, but struggle in unstructured
environments. Nature's versatility in such environments inspires researchers to
develop biomimetic robots that incorporate compliant and contracting artificial
muscles. Among the recently proposed artificial muscle technologies,
electrohydraulic actuators are promising since they offer performance
comparable to that of mammalian muscles in terms of speed and power density.
However, they require high driving voltages and have safety concerns due to
exposed electrodes. These high voltages lead to either bulky or inefficient
driving electronics that make untethered, high-degree-of-freedom bio-inspired
robots difficult to realize. Here, we present hydraulically amplified low
voltage electrostatic (HALVE) actuators that match mammalian skeletal muscles
in average power density (50.5 W kg-1) and peak strain rate (971 % s-1) at a
driving voltage of just 1100 V. This driving voltage is approx. 5-7 times lower
compared to other electrohydraulic actuators using paraelectric dielectrics.
Furthermore, HALVE actuators are safe to touch, waterproof, and self-clearing,
which makes them easy to implement in wearables and robotics. We characterize,
model, and physically validate key performance metrics of the actuator and
compare its performance to state-of-the-art electrohydraulic designs. Finally,
we demonstrate the utility of our actuators on two muscle-based
electrohydraulic robots: an untethered soft robotic swimmer and a robotic
gripper. We foresee that HALVE actuators can become a key building block for
future highly-biomimetic untethered robots and wearables with many independent
artificial muscles such as biomimetic hands, faces, or exoskeletons.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 11:10:05 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 14:40:43 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Gravert",
"Stephan-Daniel",
""
],
[
"Varini",
"Elia",
""
],
[
"Kazemipour",
"Amirhossein",
""
],
[
"Michelis",
"Mike Y.",
""
],
[
"Buchner",
"Thomas",
""
],
[
"Hinchet",
"Ronan",
""
],
[
"Katzschmann",
"Robert K.",
""
]
] |
new_dataset
| 0.999379 |
2306.03204
|
Levente Juhasz
|
Levente Juh\'asz and Peter Mooney and Hartwig H. Hochmair and Boyuan
Guan
|
ChatGPT as a mapping assistant: A novel method to enrich maps with
generative AI and content derived from street-level photographs
|
Submitted to The Fourth Spatial Data Science Symposium
|
Spatial Data Science Symposium 2023
|
10.25436/E2ZW27
| null |
cs.CY cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper explores the concept of leveraging generative AI as a mapping
assistant for enhancing the efficiency of collaborative mapping. We present
results of an experiment that combines multiple sources of volunteered
geographic information (VGI) and large language models (LLMs). Three analysts
described the content of crowdsourced Mapillary street-level photographs taken
along roads in a small test area in Miami, Florida. GPT-3.5-turbo was
instructed to suggest the most appropriate tagging for each road in
OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a
state-of-the-art multimodal pre-training method as an artificial analyst of
street-level photographs in addition to human analysts. Results demonstrate two
ways to effectively increase the accuracy of mapping suggestions without
modifying the underlying AI models: by (1) providing a more detailed
description of source photographs, and (2) combining prompt engineering with
additional context (e.g. location and objects detected along a road). The first
approach increases the suggestion accuracy by up to 29%, and the second one by
up to 20%.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 19:26:21 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Juhász",
"Levente",
""
],
[
"Mooney",
"Peter",
""
],
[
"Hochmair",
"Hartwig H.",
""
],
[
"Guan",
"Boyuan",
""
]
] |
new_dataset
| 0.990175 |
2306.08637
|
Carl Doersch
|
Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf
Aytar, Joao Carreira, Andrew Zisserman
|
TAPIR: Tracking Any Point with per-frame Initialization and temporal
Refinement
|
Published at ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel model for Tracking Any Point (TAP) that effectively tracks
any queried point on any physical surface throughout a video sequence. Our
approach employs two stages: (1) a matching stage, which independently locates
a suitable candidate point match for the query point on every other frame, and
(2) a refinement stage, which updates both the trajectory and query features
based on local correlations. The resulting model surpasses all baseline methods
by a significant margin on the TAP-Vid benchmark, as demonstrated by an
approximate 20% absolute average Jaccard (AJ) improvement on DAVIS. Our model
facilitates fast inference on long and high-resolution video sequences. On a
modern GPU, our implementation has the capacity to track points faster than
real-time, and can be flexibly extended to higher-resolution videos. Given the
high-quality trajectories extracted from a large dataset, we demonstrate a
proof-of-concept diffusion model which generates trajectories from static
images, enabling plausible animations. Visualizations, source code, and
pretrained models can be found on our project webpage.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 17:07:51 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 14:28:37 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Doersch",
"Carl",
""
],
[
"Yang",
"Yi",
""
],
[
"Vecerik",
"Mel",
""
],
[
"Gokay",
"Dilara",
""
],
[
"Gupta",
"Ankush",
""
],
[
"Aytar",
"Yusuf",
""
],
[
"Carreira",
"Joao",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.995186 |
2306.10799
|
Ziqiao Peng
|
Ziqiao Peng, Yihao Luo, Yue Shi, Hao Xu, Xiangyu Zhu, Jun He, Hongyan
Liu, Zhaoxin Fan
|
SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend
3D Talking Faces
|
Accepted by ACM MM 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech-driven 3D face animation technique, extending its applications to
various multimedia fields. Previous research has generated promising realistic
lip movements and facial expressions from audio signals. However, traditional
regression models solely driven by data face several essential problems, such
as difficulties in accessing precise labels and domain gaps between different
modalities, leading to unsatisfactory results lacking precision and coherence.
To enhance the visual accuracy of generated lip movement while reducing the
dependence on labeled data, we propose a novel framework SelfTalk, by involving
self-supervision in a cross-modals network system to learn 3D talking faces.
The framework constructs a network system consisting of three modules: facial
animator, speech recognizer, and lip-reading interpreter. The core of SelfTalk
is a commutative training diagram that facilitates compatible features exchange
among audio, text, and lip shape, enabling our models to learn the intricate
connection between these factors. The proposed framework leverages the
knowledge learned from the lip-reading interpreter to generate more plausible
lip shapes. Extensive experiments and user studies demonstrate that our
proposed approach achieves state-of-the-art performance both qualitatively and
quantitatively. We recommend watching the supplementary video.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 09:39:10 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 05:01:31 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Peng",
"Ziqiao",
""
],
[
"Luo",
"Yihao",
""
],
[
"Shi",
"Yue",
""
],
[
"Xu",
"Hao",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"He",
"Jun",
""
],
[
"Liu",
"Hongyan",
""
],
[
"Fan",
"Zhaoxin",
""
]
] |
new_dataset
| 0.994984 |
2308.01889
|
Bavo Van Kerrebroeck
|
Bavo Van Kerrebroeck, Kristel Cromb\'e, St\'ephanie Wilain, Marc
Leman, Pieter-Jan Maes
|
The virtual drum circle: polyrhythmic music interactions in extended
reality
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Emerging technologies in the domain of extended reality offer rich, new
possibilities for the study and practice of joint music performance. Apart from
the technological challenges, bringing music players together in extended
reality raises important questions on their performance and embodied
coordination. In this study, we designed an extended reality platform to assess
a remote, bidirectional polyrhythmic interaction between two players, mediated
in real time by their three-dimensional embodied avatars and a shared, virtual
drum circle. We leveraged a multi-layered analysis framework to assess their
performance quality, embodied co-regulation and first-person interaction
experience, using statistical techniques for time-series analysis and
mixed-effect regression and focusing on contrasts of visual coupling (not
seeing / seeing as avatars / seeing as real) and auditory context (metronome /
music). Results reveal that an auditory context with music improved the
performance output as measured by a prediction error, increased movement energy
and levels of experienced agency. Visual coupling impacted experiential
qualities and induced prosocial effects with increased levels of partner
realism resulting in increased levels of shared agency and self-other merging.
Embodied co-regulation between players was impacted by auditory context and
visual coupling, suggesting prediction-based compensatory mechanisms to deal
with the novelty, difficulty, and expressivity in the musical interaction. This
study contributes to the understanding of music performance in extended reality
by using a methodological approach to demonstrate how co-regulation between
players is impacted by visual coupling and auditory context and provides a
basis and future directions for further action-oriented research.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 17:31:55 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 14:30:03 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Van Kerrebroeck",
"Bavo",
""
],
[
"Crombé",
"Kristel",
""
],
[
"Wilain",
"Stéphanie",
""
],
[
"Leman",
"Marc",
""
],
[
"Maes",
"Pieter-Jan",
""
]
] |
new_dataset
| 0.999079 |
2308.07016
|
Tan Yuedong
|
Yuedong Tan
|
HHTrack: Hyperspectral Object Tracking Using Hybrid Attention
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperspectral imagery provides abundant spectral information beyond the
visible RGB bands, offering rich discriminative details about objects in a
scene. Leveraging such data has the potential to enhance visual tracking
performance. In this paper, we propose a hyperspectral object tracker based on
hybrid attention (HHTrack). The core of HHTrack is a hyperspectral hybrid
attention (HHA) module that unifies feature extraction and fusion within one
component through token interactions. A hyperspectral bands fusion (HBF) module
is also introduced to selectively aggregate spatial and spectral signatures
from the full hyperspectral input. Extensive experiments demonstrate the
state-of-the-art performance of HHTrack on benchmark Near Infrared (NIR), Red
Near Infrared (Red-NIR), and Visible (VIS) hyperspectral tracking datasets. Our
work provides new insights into harnessing the strengths of transformers and
hyperspectral fusion to advance robust object tracking.
|
[
{
"version": "v1",
"created": "Mon, 14 Aug 2023 09:04:06 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 07:01:42 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Tan",
"Yuedong",
""
]
] |
new_dataset
| 0.990883 |
2308.08176
|
Qi Lv
|
Siqi Song, Qi Lv, Lei Geng, Ziqiang Cao, and Guohong Fu
|
RSpell: Retrieval-augmented Framework for Domain Adaptive Chinese
Spelling Check
| null |
NLPCC 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Chinese Spelling Check (CSC) refers to the detection and correction of
spelling errors in Chinese texts. In practical application scenarios, it is
important to make CSC models have the ability to correct errors across
different domains. In this paper, we propose a retrieval-augmented spelling
check framework called RSpell, which searches corresponding domain terms and
incorporates them into CSC models. Specifically, we employ pinyin fuzzy
matching to search for terms, which are combined with the input and fed into
the CSC model. Then, we introduce an adaptive process control mechanism to
dynamically adjust the impact of external knowledge on the model. Additionally,
we develop an iterative strategy for the RSpell framework to enhance reasoning
capabilities. We conducted experiments on CSC datasets in three domains: law,
medicine, and official document writing. The results demonstrate that RSpell
achieves state-of-the-art performance in both zero-shot and fine-tuning
scenarios, demonstrating the effectiveness of the retrieval-augmented CSC
framework. Our code is available at https://github.com/47777777/Rspell.
|
[
{
"version": "v1",
"created": "Wed, 16 Aug 2023 07:12:23 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Song",
"Siqi",
""
],
[
"Lv",
"Qi",
""
],
[
"Geng",
"Lei",
""
],
[
"Cao",
"Ziqiang",
""
],
[
"Fu",
"Guohong",
""
]
] |
new_dataset
| 0.969308 |
2308.10028
|
Zhihao Wen
|
Zhihao Wen, Yuan Fang, Yihan Liu, Yang Guo, Shuji Hao
|
Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural
Networks
|
7 pages, Accepted by CIKM23 Applied Research Track
| null |
10.1145/3583780.3615505
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Voucher abuse detection is an important anomaly detection problem in
E-commerce. While many GNN-based solutions have emerged, the supervised
paradigm depends on a large quantity of labeled data. A popular alternative is
to adopt self-supervised pre-training using label-free data, and further
fine-tune on a downstream task with limited labels. Nevertheless, the
"pre-train, fine-tune" paradigm is often plagued by the objective gap between
pre-training and downstream tasks. Hence, we propose VPGNN, a prompt-based
fine-tuning framework on GNNs for voucher abuse detection. We design a novel
graph prompting function to reformulate the downstream task into a similar
template as the pretext task in pre-training, thereby narrowing the objective
gap. Extensive experiments on both proprietary and public datasets demonstrate
the strength of VPGNN in both few-shot and semi-supervised scenarios. Moreover,
an online deployment of VPGNN in a production environment shows a 23.4%
improvement over two existing deployed models.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 14:25:59 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 06:33:32 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Wen",
"Zhihao",
""
],
[
"Fang",
"Yuan",
""
],
[
"Liu",
"Yihan",
""
],
[
"Guo",
"Yang",
""
],
[
"Hao",
"Shuji",
""
]
] |
new_dataset
| 0.9561 |
2308.10421
|
Guanglei Yang
|
Jian Zou, Tianyu Huang, Guanglei Yang, Zhenhua Guo, Wangmeng Zuo
|
UniM$^2$AE: Multi-modal Masked Autoencoders with Unified 3D
Representation for 3D Perception in Autonomous Driving
|
Code available at https://github.com/hollow-503/UniM2AE
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Masked Autoencoders (MAE) play a pivotal role in learning potent
representations, delivering outstanding results across various 3D perception
tasks essential for autonomous driving. In real-world driving scenarios, it's
commonplace to deploy multiple sensors for comprehensive environment
perception. While integrating multi-modal features from these sensors can
produce rich and powerful features, there is a noticeable gap in MAE methods
addressing this integration. This research delves into multi-modal Masked
Autoencoders tailored for a unified representation space in autonomous driving,
aiming to pioneer a more efficient fusion of two distinct modalities. To
intricately marry the semantics inherent in images with the geometric
intricacies of LiDAR point clouds, the UniM$^2$AE is proposed. This model
stands as a potent yet straightforward, multi-modal self-supervised
pre-training framework, mainly consisting of two designs. First, it projects
the features from both modalities into a cohesive 3D volume space, ingeniously
expanded from the bird's eye view (BEV) to include the height dimension. The
extension makes it possible to back-project the informative features, obtained
by fusing features from both modalities, into their native modalities to
reconstruct the multiple masked inputs. Second, the Multi-modal 3D Interactive
Module (MMIM) is invoked to facilitate the efficient inter-modal interaction
during the interaction process. Extensive experiments conducted on the nuScenes
Dataset attest to the efficacy of UniM$^2$AE, indicating enhancements in 3D
object detection and BEV map segmentation by 1.2\%(NDS) and 6.5\% (mIoU),
respectively. Code is available at https://github.com/hollow-503/UniM2AE.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 02:13:40 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 02:32:08 GMT"
}
] | 2023-08-31T00:00:00 |
[
[
"Zou",
"Jian",
""
],
[
"Huang",
"Tianyu",
""
],
[
"Yang",
"Guanglei",
""
],
[
"Guo",
"Zhenhua",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
new_dataset
| 0.993284 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.