id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
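The columns above mirror the standard arXiv metadata schema, extended with two model-output fields (`prediction` and `probability`). As a minimal sketch of how such rows could be consumed, assuming they are exported as JSON Lines (the file name `arxiv_metadata.jsonl` below is hypothetical, not part of this dump):

```python
import json

# Minimal sketch: load rows that follow the schema above. The file name
# "arxiv_metadata.jsonl" is hypothetical; each line is assumed to hold one
# record with fields such as id, title, abstract, categories, prediction,
# and probability.
with open("arxiv_metadata.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Keep only rows labeled "new_dataset" with a probability of at least 0.99.
confident = [
    r for r in records
    if r.get("prediction") == "new_dataset" and float(r.get("probability", 0.0)) >= 0.99
]

for r in confident[:5]:
    print(r["id"], "-", r["title"])
```

Filtering on `probability` is only one illustrative use; each row below carries the full metadata for one paper, with fields listed in the header order.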
2307.11319
|
Yiqing Xu
|
Yiqing Xu, David Hsu
|
"Tidy Up the Table": Grounding Common-sense Objective for Tabletop
Object Rearrangement
|
RSSLRL2023 Workshop, Under review for conference
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Tidying up a messy table may appear simple for humans, but articulating clear
criteria for tidiness is challenging due to the ambiguous nature of common
sense reasoning. Large Language Models (LLMs) have proven capable of capturing
common sense knowledge to reason over this vague concept of tidiness. However,
they alone may struggle with table tidying due to their limited grasp of the
spatio-visual aspects of tidiness. In this work, we aim to ground the
common-sense concept of tidiness within the context of object arrangement. Our
survey reveals that humans usually factorize tidiness into semantic and
visual-spatial tidiness; our grounding approach aligns with this decomposition.
We connect a language-based policy generator with an image-based tidiness score
function: the policy generator utilizes the LLM's commonsense knowledge to
cluster objects by their implicit types and functionalities for semantic
tidiness; meanwhile, the tidiness score function assesses the visual-spatial
relations of the objects to achieve visual-spatial tidiness. Our tidiness score
is trained using synthetic data generated cheaply from customized random walks,
which inherently encode the order of tidiness, thereby bypassing the need for
labor-intensive human demonstrations. The simulated experiment shows that our
approach successfully generates tidy arrangements, predominantly in 2D, with
potential for 3D stacking, for tables with various novel objects.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 03:00:31 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 07:48:34 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Xu",
"Yiqing",
""
],
[
"Hsu",
"David",
""
]
] |
new_dataset
| 0.959677 |
2307.16700
|
EPTCS
|
Giovanni Pighizzini, Luca Prigioniero
|
Forgetting 1-Limited Automata
|
In Proceedings NCMA 2023, arXiv:2309.07333
|
EPTCS 388, 2023, pp. 95-109
|
10.4204/EPTCS.388.10
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce and investigate forgetting 1-limited automata, which are
single-tape Turing machines that, when visiting a cell for the first time,
replace the input symbol in it by a fixed symbol, thus forgetting the original
contents. These devices have the same computational power as finite automata,
namely they characterize the class of regular languages. We study the cost in
size of the conversions of forgetting 1-limited automata, in both
nondeterministic and deterministic cases, into equivalent one-way
nondeterministic and deterministic automata, providing optimal bounds in terms
of exponential or superpolynomial functions. We also discuss the size
relationships with two-way finite automata. In this respect, we prove the
existence of a language for which forgetting 1-limited automata are
exponentially larger than equivalent minimal deterministic two-way automata.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 14:18:42 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 19:14:48 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Pighizzini",
"Giovanni",
""
],
[
"Prigioniero",
"Luca",
""
]
] |
new_dataset
| 0.997028 |
2308.00937
|
Xiaofeng Gao
|
Ran Gong, Xiaofeng Gao, Qiaozi Gao, Suhaila Shakiah, Govind Thattai,
Gaurav S. Sukhatme
|
LEMMA: Learning Language-Conditioned Multi-Robot Manipulation
|
8 pages, 3 figures, accepted by RA-L
|
IEEE Robotics and Automation Letters, vol. 8, no. 10, pp.
6835-6842, Oct. 2023
|
10.1109/LRA.2023.3313058
| null |
cs.RO cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Complex manipulation tasks often require robots with complementary
capabilities to collaborate. We introduce a benchmark for LanguagE-Conditioned
Multi-robot MAnipulation (LEMMA) focused on task allocation and long-horizon
object manipulation based on human language instructions in a tabletop setting.
LEMMA features 8 types of procedurally generated tasks with varying degrees of
complexity, some of which require the robots to use tools and pass tools to
each other. For each task, we provide 800 expert demonstrations and human
instructions for training and evaluations. LEMMA poses greater challenges
compared to existing benchmarks, as it requires the system to identify each
manipulator's limitations and assign sub-tasks accordingly while also handling
strong temporal dependencies in each task. To address these challenges, we
propose a modular hierarchical planning approach as a baseline. Our results
highlight the potential of LEMMA for developing future language-conditioned
multi-robot systems.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 04:37:07 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 00:53:25 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Gong",
"Ran",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Gao",
"Qiaozi",
""
],
[
"Shakiah",
"Suhaila",
""
],
[
"Thattai",
"Govind",
""
],
[
"Sukhatme",
"Gaurav S.",
""
]
] |
new_dataset
| 0.998955 |
2308.02663
|
Csaba D. Toth
|
Csaba D. T\'oth
|
On RAC Drawings of Graphs with Two Bends per Edge
|
Presented at the 31st International Symposium on Graph Drawing and
Network Visualization (GD 2023)
| null | null | null |
cs.DM cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is shown that every $n$-vertex graph that admits a 2-bend RAC drawing in
the plane, where the edges are polylines with two bends per edge and any pair
of edges can only cross at a right angle, has at most $20n-24$ edges for $n\geq
3$. This improves upon the previous upper bound of $74.2n$; this is the first
improvement in more than 12 years. A crucial ingredient of the proof is an
upper bound on the size of plane multigraphs with polyline edges in which the
first and last segments are either parallel or orthogonal.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2023 18:50:30 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 09:40:21 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Tóth",
"Csaba D.",
""
]
] |
new_dataset
| 0.999106 |
2308.04992
|
Jiaan Wang
|
Jingdan Zhang, Jiaan Wang, Xiaodan Wang, Zhixu Li, Yanghua Xiao
|
AspectMMKG: A Multi-modal Knowledge Graph with Aspect-aware Entities
|
Accepted by CIKM 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal knowledge graphs (MMKGs) combine different modal data (e.g., text
and image) for a comprehensive understanding of entities. Despite the recent
progress of large-scale MMKGs, existing MMKGs neglect the multi-aspect nature
of entities, limiting the ability to comprehend entities from various
perspectives. In this paper, we construct AspectMMKG, the first MMKG with
aspect-related images by matching images to different entity aspects.
Specifically, we collect aspect-related images from a knowledge base, and
further extract aspect-related sentences from the knowledge base as queries to
retrieve a large number of aspect-related images via an online image search
engine. Finally, AspectMMKG contains 2,380 entities, 18,139 entity aspects, and
645,383 aspect-related images. We demonstrate the usability of AspectMMKG in
entity aspect linking (EAL) downstream task and show that previous EAL models
achieve a new state-of-the-art performance with the help of AspectMMKG. To
facilitate the research on aspect-related MMKG, we further propose an
aspect-related image retrieval (AIR) model that aims to correct and expand
aspect-related images in AspectMMKG. We train an AIR model to learn the
relationship between entity image and entity aspect-related images by
incorporating entity image, aspect, and aspect image information. Experimental
results indicate that the AIR model could retrieve suitable images for a given
entity w.r.t. different aspects.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 14:45:13 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 14:51:20 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhang",
"Jingdan",
""
],
[
"Wang",
"Jiaan",
""
],
[
"Wang",
"Xiaodan",
""
],
[
"Li",
"Zhixu",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
new_dataset
| 0.993884 |
2308.12915
|
Zhouyi Li
|
Yuqian Sun, Zhouyi Li, Ke Fang, Chang Hee Lee, Ali Asadipour
|
Language as Reality: A Co-Creative Storytelling Game Experience in 1001
Nights using Generative AI
|
The paper was accepted by The 19th AAAI Conference on Artificial
Intelligence and Interactive Digital Entertainment (AIIDE 23)
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present "1001 Nights", an AI-native game that allows
players to lead in-game reality through co-created storytelling with a character
driven by a large language model. The concept is inspired by Wittgenstein's idea
of the limits of one's world being determined by the bounds of their language.
Using advanced AI tools like GPT-4 and Stable Diffusion, the second iteration
of the game enables the protagonist, Shahrzad, to realize words and stories in
her world. The player can steer the conversation with the AI King towards
specific keywords, which then become battle equipment in the game. This blend
of interactive narrative and text-to-image transformation challenges the
conventional border between the game world and reality through a dual
perspective. We focus on Shahrzad, who seeks to alter her fate compared to the
original folklore, and the player, who collaborates with AI to craft narratives
and shape the game world. We explore the technical and design elements of
implementing such a game with an objective to enhance the narrative game genre
with AI-generated content and to delve into AI-native gameplay possibilities.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 16:42:23 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 15:16:04 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Sun",
"Yuqian",
""
],
[
"Li",
"Zhouyi",
""
],
[
"Fang",
"Ke",
""
],
[
"Lee",
"Chang Hee",
""
],
[
"Asadipour",
"Ali",
""
]
] |
new_dataset
| 0.995395 |
2308.14450
|
Faezeh Nasrabadi
|
Faezeh Nasrabadi, Robert K\"unnemann, Hamed Nemati
|
CryptoBap: A Binary Analysis Platform for Cryptographic Protocols
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CryptoBap, a platform to verify weak secrecy and authentication
for the (ARMv8 and RISC-V) machine code of cryptographic protocols. We achieve
this by first transpiling the binary of protocols into an intermediate
representation and then performing a crypto-aware symbolic execution to
automatically extract a model of the protocol that represents all its execution
paths. Our symbolic execution resolves indirect jumps and supports bounded
loops using the loop-summarization technique, which we fully automate. The
extracted model is then translated into models amenable to automated
verification via ProVerif and CryptoVerif using a third-party toolchain. We
prove the soundness of the proposed approach and use CryptoBap to verify
multiple case studies ranging from toy examples to real-world protocols,
including TinySSH, an implementation of SSH, and WireGuard, a modern VPN protocol.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 09:41:45 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 06:16:02 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nasrabadi",
"Faezeh",
""
],
[
"Künnemann",
"Robert",
""
],
[
"Nemati",
"Hamed",
""
]
] |
new_dataset
| 0.998933 |
2308.15930
|
Yu Shu
|
Yu Shu, Siwei Dong, Guangyao Chen, Wenhao Huang, Ruihua Zhang, Daochen
Shi, Qiqi Xiang, Yemin Shi
|
LLaSM: Large Language and Speech Model
| null | null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal large language models have garnered significant interest
recently. Most of this work, though, focuses on vision-language multi-modal
models that provide strong capabilities in following vision-and-language instructions.
However, we claim that speech is also an important modality through which
humans interact with the world. Hence, it is crucial for a general-purpose
assistant to be able to follow multi-modal speech-and-language instructions. In
this work, we propose Large Language and Speech Model (LLaSM). LLaSM is an
end-to-end trained large multi-modal speech-language model with cross-modal
conversational abilities, capable of following speech-and-language
instructions. Our early experiments show that LLaSM demonstrates a more
convenient and natural way for humans to interact with artificial intelligence.
Specifically, we also release a large Speech Instruction Following dataset
LLaSM-Audio-Instructions. Code and demo are available at
https://github.com/LinkSoul-AI/LLaSM and
https://huggingface.co/spaces/LinkSoul/LLaSM. The LLaSM-Audio-Instructions
dataset is available at
https://huggingface.co/datasets/LinkSoul/LLaSM-Audio-Instructions.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 10:12:39 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 03:41:35 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Sep 2023 06:14:54 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Shu",
"Yu",
""
],
[
"Dong",
"Siwei",
""
],
[
"Chen",
"Guangyao",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Zhang",
"Ruihua",
""
],
[
"Shi",
"Daochen",
""
],
[
"Xiang",
"Qiqi",
""
],
[
"Shi",
"Yemin",
""
]
] |
new_dataset
| 0.999828 |
2308.16776
|
Xiaorang Guo
|
Xiaorang Guo, Kun Qin and Martin Schulz
|
HiSEP-Q: A Highly Scalable and Efficient Quantum Control Processor for
Superconducting Qubits
|
The paper is accepted by the 41st IEEE International Conference on
Computer Design (ICCD), 2023
| null | null | null |
cs.AR cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum computing promises an effective way to solve targeted problems that
are classically intractable. Among them, quantum computers built with
superconducting qubits are considered one of the most advanced technologies,
but they suffer from short coherence times. This can be exacerbated when they
are controlled directly by general-purpose host machines, which leads to the
loss of quantum information. To mitigate this, we need quantum control
processors (QCPs) positioned between quantum processing units and host machines
to reduce latencies. However, existing QCPs are built on top of designs with no
or inefficient scalability, requiring a large number of instructions when
scaling to more qubits. In addition, interactions between current QCPs and host
machines require frequent data transmissions and offline computations to obtain
final results, which limits the performance of quantum computers.
In this paper, we propose a QCP called HiSEP-Q featuring a novel quantum
instruction set architecture (QISA) and its microarchitecture implementation.
For efficient control, we utilize mixed-type addressing modes and mixed-length
instructions in HiSEP-Q, which provides an efficient way to concurrently
address more than 100 qubits. Further, for efficient read-out and analysis, we
develop a novel onboard accumulation and sorting unit, which eliminates the
data transmission of raw data between the QCPs and host machines and enables
real-time result processing. Compared to the state-of-the-art, our proposed
QISA achieves at least 62% and 28% improvements in encoding efficiency with
real and synthetic quantum circuits, respectively. We also validate the
microarchitecture on a field-programmable gate array, which exhibits low power
and resource consumption. Both hardware and ISA evaluations demonstrate that
HiSEP-Q features high scalability and efficiency toward the number of
controlled qubits.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 14:54:40 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Guo",
"Xiaorang",
""
],
[
"Qin",
"Kun",
""
],
[
"Schulz",
"Martin",
""
]
] |
new_dataset
| 0.983399 |
2309.03852
|
Yequan Wang
|
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan,
Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang
|
FLM-101B: An Open LLM and How to Train It with $100K Budget
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have achieved remarkable success in NLP and
multimodal tasks, among others. Despite these successes, two main challenges
remain in developing LLMs: (i) high computational cost, and (ii) fair and
objective evaluations. In this paper, we report a solution to significantly
reduce LLM training cost through a growth strategy. We demonstrate that a
101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US
dollars. Inspired by IQ tests, we also consolidate an additional range of
evaluations on top of existing evaluations that focus on knowledge-oriented
abilities. These IQ evaluations include symbolic mapping, rule understanding,
pattern mining, and anti-interference. Such evaluations minimize the potential
impact of memorization. Experimental results show that our model, named
FLM-101B, trained with a budget of 100K US dollars, achieves performance
comparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,
especially on the additional range of IQ evaluations. The checkpoint of
FLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 17:07:36 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 07:38:10 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Li",
"Xiang",
""
],
[
"Yao",
"Yiqun",
""
],
[
"Jiang",
"Xin",
""
],
[
"Fang",
"Xuezhi",
""
],
[
"Meng",
"Xuying",
""
],
[
"Fan",
"Siqi",
""
],
[
"Han",
"Peng",
""
],
[
"Li",
"Jing",
""
],
[
"Du",
"Li",
""
],
[
"Qin",
"Bowen",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Sun",
"Aixin",
""
],
[
"Wang",
"Yequan",
""
]
] |
new_dataset
| 0.997383 |
2309.06415
|
Ashiqur Rahman KhudaBukhsh
|
Adel Khorramrouz and Sujan Dutta and Arka Dutta and Ashiqur R.
KhudaBukhsh
|
Down the Toxicity Rabbit Hole: Investigating PaLM 2 Guardrails
| null | null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper conducts a robustness audit of the safety feedback of PaLM 2
through a novel toxicity rabbit hole framework introduced here. Starting with a
stereotype, the framework instructs PaLM 2 to generate more toxic content than
the stereotype. In every subsequent iteration, it continues instructing PaLM 2 to
generate more toxic content than the previous iteration until PaLM 2's safety
guardrails throw a safety violation. Our experiments uncover highly disturbing
antisemitic, Islamophobic, racist, homophobic, and misogynistic (to list a few)
generated content that PaLM 2 safety guardrails do not evaluate as highly
unsafe.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 03:59:02 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 16:56:40 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Khorramrouz",
"Adel",
""
],
[
"Dutta",
"Sujan",
""
],
[
"Dutta",
"Arka",
""
],
[
"KhudaBukhsh",
"Ashiqur R.",
""
]
] |
new_dataset
| 0.997855 |
2309.06789
|
Yu Cheng
|
Yu Cheng, Yunzhu Pan, Jiaqi Zhang, Yongxin Ni, Aixin Sun, Fajie Yuan
|
An Image Dataset for Benchmarking Recommender Systems with Raw Pixels
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems (RS) have achieved significant success by leveraging
explicit identification (ID) features. However, the full potential of content
features, especially the pure image pixel features, remains relatively
unexplored. The limited availability of large, diverse, and content-driven
image recommendation datasets has hindered the use of raw images as item
representations. In this regard, we present PixelRec, a massive image-centric
recommendation dataset that includes approximately 200 million user-image
interactions, 30 million users, and 400,000 high-quality cover images. By
providing direct access to raw image pixels, PixelRec enables recommendation
models to learn item representation directly from them. To demonstrate its
utility, we begin by presenting the results of several classical pure ID-based
baseline models, termed IDNet, trained on PixelRec. Then, to show the
effectiveness of the dataset's image features, we substitute the itemID
embeddings (from IDNet) with a powerful vision encoder that represents items
using their raw image pixels. This new model is dubbed PixelNet. Our findings
indicate that even in standard, non-cold start recommendation settings where
IDNet is recognized as highly effective, PixelNet can already perform equally
well or even better than IDNet. Moreover, PixelNet has several other notable
advantages over IDNet, such as being more effective in cold-start and
cross-domain recommendation scenarios. These results underscore the importance
of visual features in PixelRec. We believe that PixelRec can serve as a
critical resource and testing ground for research on recommendation models that
emphasize image pixel content. The dataset, code, and leaderboard will be
available at https://github.com/westlake-repl/PixelRec.
|
[
{
"version": "v1",
"created": "Wed, 13 Sep 2023 08:22:56 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Sep 2023 04:09:04 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Cheng",
"Yu",
""
],
[
"Pan",
"Yunzhu",
""
],
[
"Zhang",
"Jiaqi",
""
],
[
"Ni",
"Yongxin",
""
],
[
"Sun",
"Aixin",
""
],
[
"Yuan",
"Fajie",
""
]
] |
new_dataset
| 0.999762 |
2309.07984
|
Johnathan Alsop
|
Johnathan Alsop, Shaizeen Aga, Mohamed Ibrahim, Mahzabeen Islam,
Andrew Mccrabb, Nuwan Jayasena
|
Inclusive-PIM: Hardware-Software Co-design for Broad Acceleration on
Commercial PIM Architectures
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual demand for memory bandwidth has made it worthwhile for memory
vendors to reassess processing in memory (PIM), which enables higher bandwidth
by placing compute units in/near-memory. As such, memory vendors have recently
proposed commercially viable PIM designs. However, these proposals are largely
driven by the needs of (a narrow set of) machine learning (ML) primitives.
While such proposals are reasonable given the growing importance of ML, memory
is a pervasive component, and there is a case for a more inclusive PIM design
that can accelerate primitives across domains.
In this work, we ascertain the capabilities of commercial PIM proposals to
accelerate various primitives across domains. We first begin with outlining a
set of characteristics, termed PIM-amenability-test, which aid in assessing if
a given primitive is likely to be accelerated by PIM. Next, we apply this test
to primitives under study to ascertain efficient data-placement and
orchestration to map the primitives to underlying PIM architecture. We observe
here that, even though primitives under study are largely PIM-amenable,
existing commercial PIM proposals do not realize their performance potential
for these primitives. To address this, we identify bottlenecks that arise in
PIM execution and propose hardware and software optimizations which stand to
broaden the acceleration reach of commercial PIM designs (improving average PIM
speedups from 1.12x to 2.49x relative to a GPU baseline). Overall, while we
believe emerging commercial PIM proposals add a necessary and complementary
design point in the application acceleration space, hardware-software co-design
is necessary to deliver their benefits broadly.
|
[
{
"version": "v1",
"created": "Thu, 14 Sep 2023 18:42:29 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 17:55:24 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Alsop",
"Johnathan",
""
],
[
"Aga",
"Shaizeen",
""
],
[
"Ibrahim",
"Mohamed",
""
],
[
"Islam",
"Mahzabeen",
""
],
[
"Mccrabb",
"Andrew",
""
],
[
"Jayasena",
"Nuwan",
""
]
] |
new_dataset
| 0.998908 |
2309.08610
|
Hannes Fassold
|
Hannes Fassold
|
Do the Frankenstein, or how to achieve better out-of-distribution
performance with manifold mixing model soup
|
Accepted for IMVIP 2023 conference
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The standard recipe applied in transfer learning is to finetune a pretrained
model on the task-specific dataset with different hyperparameter settings and
pick the model with the highest accuracy on the validation dataset.
Unfortunately, this leads to models which do not perform well under
distribution shifts, e.g. when the model is given graphical sketches of the
object as input instead of photos. In order to address this, we propose the
manifold mixing model soup, an algorithm which mixes together the latent space
manifolds of multiple finetuned models in an optimal way in order to generate a
fused model. We show that the fused model gives significantly better
out-of-distribution performance (+3.5 % compared to best individual model) when
finetuning a CLIP model for image classification. In addition, it also provides
better accuracy on the original dataset on which the finetuning was done.
|
[
{
"version": "v1",
"created": "Mon, 28 Aug 2023 06:13:32 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Fassold",
"Hannes",
""
]
] |
new_dataset
| 0.971379 |
2309.08649
|
Rongfang He
|
Rongfang He and Weibin Zhang and Guofang Gao
|
An inspection technology of inner surface of the fine hole based on
machine vision
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine holes are an important structural feature of industrial components,
and their inner surface quality is closely related to their function. In order
to detect the quality of the inner surface of a fine hole, a special optical
measurement system was investigated in this paper. A sight pipe is employed to
guide the external illumination light into the fine hole and output the
relevant images simultaneously. A flexible light array is introduced to suit
the narrow space, and the effective field of view is analyzed. Besides, the arc
surface projection error and manufacturing assembly error of the device are
analyzed, then compensated or ignored if small enough. In tests of
prefabricated circular defects with diameters of {\phi}0.1mm and {\phi}0.2mm in a
0.4mm-spaced distribution, and fissure defects with a width of 0.3mm, the
maximum measurement error standard deviations are all about 10{\mu}m. The
minimum diameter of the measured fine hole is 4mm and the depth can reach 47mm.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 13:40:33 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"He",
"Rongfang",
""
],
[
"Zhang",
"Weibin",
""
],
[
"Gao",
"Guofang",
""
]
] |
new_dataset
| 0.998861 |
2309.08696
|
Qianfeng Shen
|
Qianfeng (Clark) Shen, Jun Zheng, Paul Chow
|
RIFL: A Reliable Link Layer Network Protocol for Data Center
Communication
|
15 pages, 9 figures, journal
|
Journal of Optical Communications and Networking (JOCN) 2022
|
10.1364/JOCN.443448
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
More and more latency-sensitive services and applications are being deployed
into the data center. Performance can be limited by the high latency of the
network interconnect. Because the conventional network stack is designed not
only for LAN, but also for WAN, it carries a great amount of redundancy that is
not required in a data center network. This paper introduces the concept of a
three-layer protocol stack that can fulfill the exact demands of data center
network communications. The detailed design and implementation of the first
layer of the stack, which we call RIFL, is presented. A novel low latency
in-band hop-by-hop re-transmission protocol is proposed and adopted in RIFL,
which guarantees lossless transmission in a data center environment.
Experimental results show that RIFL achieves 110 nanoseconds point-to-point
latency on 10-meter Active Optical Cables, at a line rate of 112 Gbps. RIFL is
a multi-lane protocol with scalable throughput up to multi-hundred gigabits per
second. It can be the enabler of low latency, high throughput, flexible,
scalable, and lossless data center networks.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 18:38:16 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Qianfeng",
"",
"",
"Clark"
],
[
"Shen",
"",
""
],
[
"Zheng",
"Jun",
""
],
[
"Chow",
"Paul",
""
]
] |
new_dataset
| 0.993816 |
2309.08720
|
EPTCS
|
Carlo Mereghetti (University of Milan, Dept. Comp. Sci.), Beatrice
Palano (University of Milan, Dept. Comp. Sci.), Priscilla Raucci (University
of Milan, Dept. Comp. Sci.)
|
Latvian Quantum Finite State Automata for Unary Languages
|
In Proceedings NCMA 2023, arXiv:2309.07333
|
EPTCS 388, 2023, pp. 63-78
|
10.4204/EPTCS.388.8
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We design Latvian quantum finite state automata (LQFAs for short) recognizing
unary regular languages with isolated cut point 1/2. From an architectural
point of view, we combine two LQFAs recognizing with isolated cut point,
respectively, the finite part and the ultimately periodic part of any given
unary regular language L. In both modules, we use a component addressed in the
literature and here suitably adapted to the unary case, to discriminate strings
on the basis of their length. The number of basis states and the isolation
around the cut point of the resulting LQFA for L exponentially depends on the
size of the minimal deterministic finite state automaton for L.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 19:14:08 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Mereghetti",
"Carlo",
"",
"University of Milan, Dept. Comp. Sci."
],
[
"Palano",
"Beatrice",
"",
"University of Milan, Dept. Comp. Sci."
],
[
"Raucci",
"Priscilla",
"",
"University\n of Milan, Dept. Comp. Sci."
]
] |
new_dataset
| 0.999148 |
2309.08723
|
EPTCS
|
Maria Radionova (St. Petersburg State University), Alexander Okhotin
(St. Petersburg State University)
|
Sweeping Permutation Automata
|
In Proceedings NCMA 2023, arXiv:2309.07333
|
EPTCS 388, 2023, pp. 110-124
|
10.4204/EPTCS.388.11
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces sweeping permutation automata, which move over an input
string in alternating left-to-right and right-to-left sweeps and have a
bijective transition function. It is proved that these automata recognize the
same family of languages as the classical one-way permutation automata
(Thierrin, "Permutation automata", Mathematical Systems Theory, 1968). An
$n$-state two-way permutation automaton is transformed to a one-way permutation
automaton with $F(n)=\max_{k+l=n,\ m \le l} k \binom{l}{m} \binom{k-1}{l-m} (l-m)!$
states. This number of states is proved to be necessary in the worst case, and
its growth rate is estimated as $F(n) = n^{n/2 - \frac{1+\ln 2}{2} \cdot \frac{n}{\ln n} \cdot (1 + o(1))}$.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 19:15:07 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Radionova",
"Maria",
"",
"St. Petersburg State University"
],
[
"Okhotin",
"Alexander",
"",
"St. Petersburg State University"
]
] |
new_dataset
| 0.995621 |
2309.08742
|
Yohan John
|
Yohan John, Connor Hughes, Gilberto Diaz-Garcia, Jason R. Marden,
Francesco Bullo
|
RoSSO: A High-Performance Python Package for Robotic Surveillance
Strategy Optimization Using JAX
|
7 pages, 4 figures, 3 tables, submitted to the 2024 IEEE
International Conference on Robotics and Automation. See
https://github.com/conhugh/RoSSO for associated codebase
| null | null | null |
cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To enable the computation of effective randomized patrol routes for single-
or multi-robot teams, we present RoSSO, a Python package designed for solving
Markov chain optimization problems. We exploit machine-learning techniques such
as reverse-mode automatic differentiation and constraint parametrization to
achieve superior efficiency compared to general-purpose nonlinear programming
solvers. Additionally, we supplement a game-theoretic stochastic surveillance
formulation in the literature with a novel greedy algorithm and multi-robot
extension. We close with numerical results for a police district in downtown
San Francisco that demonstrate RoSSO's capabilities on our new formulations and
the prior work.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 20:05:18 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"John",
"Yohan",
""
],
[
"Hughes",
"Connor",
""
],
[
"Diaz-Garcia",
"Gilberto",
""
],
[
"Marden",
"Jason R.",
""
],
[
"Bullo",
"Francesco",
""
]
] |
new_dataset
| 0.995141 |
2309.08766
|
Malcolm Tisdale
|
Malcolm G. A. Tisdale, Joel W. Burdick
|
The Fractal Hand-II: Reviving a Classic Mechanism for Contemporary
Grasping Challenges
|
This paper is prepared for ICRA 2024
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper, and its companion, propose a new fractal robotic gripper, drawing
inspiration from the century-old Fractal Vise. The unusual synergistic
properties allow it to passively conform to diverse objects using only one
actuator. Designed to be easily integrated with prevailing parallel jaw
grippers, it alleviates the complexities tied to perception and grasp planning,
especially when dealing with unpredictable object poses and geometries. We
extend the foundational principles of the Fractal Vise to a broader class of
gripping mechanisms, and also address the limitations that had led to its
obscurity. Two Fractal Fingers, coupled by a closing actuator, can form an
adaptive and synergistic Fractal Hand. We articulate a design methodology for
low cost, easy to fabricate, large workspace, and compliant Fractal Fingers.
The companion paper delves into the kinematics and grasping properties of a
specific class of Fractal Fingers and Hands.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 21:15:09 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Tisdale",
"Malcolm G. A.",
""
],
[
"Burdick",
"Joel W.",
""
]
] |
new_dataset
| 0.958623 |
2309.08769
|
Jongwon Lee
|
Jongwon Lee, Su Yeon Choi, Timothy Bretl
|
The Use of Multi-Scale Fiducial Markers To Aid Takeoff and Landing
Navigation by Rotorcraft
|
Extended abstract accepted at the 2024 AIAA SciTech
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper quantifies the impact of adverse environmental conditions on the
detection of fiducial markers (i.e., artificial landmarks) by color cameras
mounted on rotorcraft. We restrict our attention to square markers with a
black-and-white pattern of grid cells that can be nested to allow detection at
multiple scales. These markers have the potential to enhance the reliability of
precision takeoff and landing at vertiports by flying vehicles in urban
settings. Prior work has shown, in particular, that these markers can be
detected with high precision (i.e., few false positives) and high recall (i.e.,
few false negatives). However, most of this prior work has been based on image
sequences collected indoors with hand-held cameras. Our work is based on image
sequences collected outdoors with cameras mounted on a quadrotor during
semi-autonomous takeoff and landing operations under adverse environmental
conditions that include variations in temperature, illumination, wind speed,
humidity, visibility, and precipitation. In addition to precision and recall,
performance measures include continuity, availability, robustness, resiliency,
and coverage volume. We release both our dataset and the code we used for
analysis to the public as open source.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 21:22:51 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Lee",
"Jongwon",
""
],
[
"Choi",
"Su Yeon",
""
],
[
"Bretl",
"Timothy",
""
]
] |
new_dataset
| 0.963862 |
2309.08793
|
Aman Rangapur
|
Aman Rangapur, Haoran Wang and Kai Shu
|
Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and
Explanation Generation
|
8 pages, 4 figures, 4 tables
| null | null | null |
cs.AI cs.CE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Fact-checking in the financial domain is underexplored, and there is a
shortage of quality datasets in this domain. In this paper, we propose Fin-Fact, a
benchmark dataset for multimodal fact-checking within the financial domain.
Notably, it includes professional fact-checker annotations and justifications,
providing expertise and credibility. With its multimodal nature encompassing
both textual and visual content, Fin-Fact provides complementary information
sources to enhance factuality analysis. Its primary objective is combating
misinformation in finance, fostering transparency, and building trust in
financial reporting and news dissemination. By offering insightful
explanations, Fin-Fact empowers users, including domain experts and end-users,
to understand the reasoning behind fact-checking decisions, validate claim
credibility, and foster trust in the fact-checking process. The Fin-Fact
dataset, along with our experimental codes is available at
https://github.com/IIT-DM/Fin-Fact/.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 22:24:00 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Rangapur",
"Aman",
""
],
[
"Wang",
"Haoran",
""
],
[
"Shu",
"Kai",
""
]
] |
new_dataset
| 0.999755 |
2309.08816
|
Zhicheng Yan
|
Chenchen Zhu, Fanyi Xiao, Andres Alvarado, Yasmine Babaei, Jiabo Hu,
Hichem El-Mohri, Sean Chang Culatana, Roshan Sumbaly, Zhicheng Yan
|
EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object
Understanding
|
ICCV 2023 final version and supplement. See more details in project
page: https://github.com/facebookresearch/EgoObjects
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Object understanding in egocentric visual data is arguably a fundamental
research topic in egocentric vision. However, existing object datasets are
either non-egocentric or have limitations in object categories, visual content,
and annotation granularities. In this work, we introduce EgoObjects, a
large-scale egocentric dataset for fine-grained object understanding. Its Pilot
version contains over 9K videos collected by 250 participants from 50+
countries using 4 wearable devices, and over 650K object annotations from 368
object categories. Unlike prior datasets containing only object category
labels, EgoObjects also annotates each object with an instance-level
identifier, and includes over 14K unique object instances. EgoObjects was
designed to capture the same object under diverse background complexities,
surrounding objects, distance, lighting and camera motion. In parallel to the
data collection, we conducted data annotation by developing a multi-stage
federated annotation process to accommodate the growing nature of the dataset.
To bootstrap the research on EgoObjects, we present a suite of 4 benchmark
tasks around egocentric object understanding, including a novel instance-level
and the classical category-level object detection. Moreover, we also
introduce 2 novel continual learning object detection tasks. The dataset and
API are available at https://github.com/facebookresearch/EgoObjects.
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 23:55:43 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhu",
"Chenchen",
""
],
[
"Xiao",
"Fanyi",
""
],
[
"Alvarado",
"Andres",
""
],
[
"Babaei",
"Yasmine",
""
],
[
"Hu",
"Jiabo",
""
],
[
"El-Mohri",
"Hichem",
""
],
[
"Culatana",
"Sean Chang",
""
],
[
"Sumbaly",
"Roshan",
""
],
[
"Yan",
"Zhicheng",
""
]
] |
new_dataset
| 0.999737 |
2309.08817
|
Joyce Zhou
|
Joyce Zhou, Thorsten Joachims
|
GPT as a Baseline for Recommendation Explanation Texts
|
8 pages, 4 tables/figures. Accepted in current form to
IntRS@RecSys2023 workshop. Intending on making noticeable in-place revisions
on ArXiv for future submission, including potential title change
| null | null | null |
cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we establish a baseline potential for how modern
model-generated text explanations of movie recommendations may help users, and
explore what different components of these text explanations that users like or
dislike, especially in contrast to existing human movie reviews. We found that
participants gave no significantly different rankings between movies, nor did
they give significantly different individual quality scores to reviews of
movies that they had never seen before. However, participants did mark reviews
as significantly better when they were for movies they had seen before. We also
explore specific aspects of movie review texts that participants marked as
important for each quality. Overall, we establish that modern LLMs are a
promising source of recommendation explanations, and we intend on further
exploring personalizable text explanations in the future.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 00:00:44 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhou",
"Joyce",
""
],
[
"Joachims",
"Thorsten",
""
]
] |
new_dataset
| 0.998462 |
2309.08838
|
Xulong Zhang
|
Yazhong Si, Xulong Zhang, Fan Yang, Jianzong Wang, Ning Cheng, Jing
Xiao
|
AOSR-Net: All-in-One Sandstorm Removal Network
|
Accepted by The 35th IEEE International Conference on Tools with
Artificial Intelligence. (ICTAI 2023)
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing sandstorm image enhancement methods are based on traditional
theory and prior knowledge, which often restrict their applicability in
real-world scenarios. In addition, these approaches often adopt a strategy of
color correction followed by dust removal, which makes the algorithm structure
too complex. To solve the issue, we introduce a novel image restoration model,
named all-in-one sandstorm removal network (AOSR-Net). This model is developed
based on a re-formulated sandstorm scattering model, which directly establishes
the image mapping relationship by integrating intermediate parameters. Such
integration scheme effectively addresses the problems of over-enhancement and
weak generalization in the field of sand dust image enhancement. Experimental
results on synthetic and real-world sandstorm images demonstrate the
superiority of the proposed AOSR-Net over state-of-the-art (SOTA) algorithms.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 02:11:24 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Si",
"Yazhong",
""
],
[
"Zhang",
"Xulong",
""
],
[
"Yang",
"Fan",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
]
] |
new_dataset
| 0.995076 |
2309.08842
|
Cheng Chen
|
Cheng Chen, Juzheng Miao, Dufan Wu, Zhiling Yan, Sekeun Kim, Jiang Hu,
Aoxiao Zhong, Zhengliang Liu, Lichao Sun, Xiang Li, Tianming Liu, Pheng-Ann
Heng, Quanzheng Li
|
MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Segment Anything Model (SAM), a foundation model for general image
segmentation, has demonstrated impressive zero-shot performance across numerous
natural image segmentation tasks. However, SAM's performance significantly
declines when applied to medical images, primarily due to the substantial
disparity between natural and medical image domains. To effectively adapt SAM
to medical images, it is important to incorporate critical third-dimensional
information, i.e., volumetric or temporal knowledge, during fine-tuning.
Simultaneously, we aim to harness SAM's pre-trained weights within its original
2D backbone to the fullest extent. In this paper, we introduce a
modality-agnostic SAM adaptation framework, named as MA-SAM, that is applicable
to various volumetric and video medical data. Our method roots in the
parameter-efficient fine-tuning strategy to update only a small portion of
weight increments while preserving the majority of SAM's pre-trained weights.
By injecting a series of 3D adapters into the transformer blocks of the image
encoder, our method enables the pre-trained 2D backbone to extract
third-dimensional information from input data. The effectiveness of our method
has been comprehensively evaluated on four medical image segmentation tasks, by
using 10 public datasets across CT, MRI, and surgical video data. Remarkably,
without using any prompt, our method consistently outperforms various
state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in
Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical
scene segmentation respectively. Our model also demonstrates strong
generalization, and excels in challenging tumor segmentation when prompts are
used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 02:41:53 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Chen",
"Cheng",
""
],
[
"Miao",
"Juzheng",
""
],
[
"Wu",
"Dufan",
""
],
[
"Yan",
"Zhiling",
""
],
[
"Kim",
"Sekeun",
""
],
[
"Hu",
"Jiang",
""
],
[
"Zhong",
"Aoxiao",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Sun",
"Lichao",
""
],
[
"Li",
"Xiang",
""
],
[
"Liu",
"Tianming",
""
],
[
"Heng",
"Pheng-Ann",
""
],
[
"Li",
"Quanzheng",
""
]
] |
new_dataset
| 0.983127 |
2309.08860
|
Won Kyung Do
|
Won Kyung Do, Ankush Kundan Dhawan, Mathilda Kitzmann, and Monroe
Kennedy III
|
DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale
Objects From Flat Surfaces
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Dexterous manipulation, especially of small daily objects, continues to pose
complex challenges in robotics. This paper introduces the DenseTact-Mini, an
optical tactile sensor with a soft, rounded, smooth gel surface and compact
design equipped with a synthetic fingernail. We propose three distinct grasping
strategies: tap grasping using adhesion forces such as electrostatic and van
der Waals, fingernail grasping leveraging rolling/sliding contact between the
object and fingernail, and fingertip grasping with two soft fingertips. Through
comprehensive evaluations, the DenseTact-Mini demonstrates a lifting success
rate exceeding 90.2% when grasping various objects, ranging from 1mm basil
seeds and small paperclips to items nearly 15mm in size. This work demonstrates
the potential of soft optical tactile sensors for dexterous manipulation and
grasping.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 03:43:10 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Do",
"Won Kyung",
""
],
[
"Dhawan",
"Ankush Kundan",
""
],
[
"Kitzmann",
"Mathilda",
""
],
[
"Kennedy",
"Monroe",
"III"
]
] |
new_dataset
| 0.995585 |
2309.08861
|
Davide Villa
|
Davide Villa, Daniel Uvaydov, Leonardo Bonati, Pedram Johari, Josep
Miquel Jornet, Tommaso Melodia
|
Demo: Intelligent Radar Detection in CBRS Band in the Colosseum Wireless
Network Emulator
|
2 pages, 4 figures
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ever-growing number of wireless communication devices and technologies
demands spectrum-sharing techniques. Effective coexistence management is
crucial to avoid harmful interference, especially with critical systems like
nautical and aerial radars in which incumbent radios operate mission-critical
communication links. In this demo, we showcase a framework that leverages
Colosseum, the world's largest wireless network emulator with
hardware-in-the-loop, as a playground to study commercial radar waveforms
coexisting with a cellular network in CBRS band in complex environments. We
create an ad-hoc high-fidelity spectrum-sharing scenario for this purpose. We
deploy a cellular network to collect IQ samples with the aim of training an ML
agent that runs at the base station. The agent has the goal of detecting
incumbent radar transmissions and vacating the cellular bandwidth to avoid
interfering with the radar operations. Our experiment results show an average
detection accuracy of 88%, with an average detection time of 137 ms.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 03:47:06 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Villa",
"Davide",
""
],
[
"Uvaydov",
"Daniel",
""
],
[
"Bonati",
"Leonardo",
""
],
[
"Johari",
"Pedram",
""
],
[
"Jornet",
"Josep Miquel",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.990254 |
2309.08863
|
Payam Nourizadeh
|
Payam Nourizadeh, Fiona J Stevens McFadden, Will N Browne
|
Trajectory Tracking Control of Skid-Steering Mobile Robots with Slip and
Skid Compensation using Sliding-Mode Control and Deep Learning
| null | null | null | null |
cs.RO cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Slip and skid compensation is crucial for mobile robots' navigation in
outdoor environments and uneven terrains. In addition to the general slipping
and skidding hazards for mobile robots in outdoor environments, slip and skid
cause uncertainty for the trajectory tracking system and put the validity of
stability analysis at risk. Despite research in this field, having a real-world
feasible online slip and skid compensation is still challenging due to the
complexity of wheel-terrain interaction in outdoor environments. This paper
presents a novel trajectory tracking technique with real-world feasible online
slip and skid compensation at the vehicle-level for skid-steering mobile robots
in outdoor environments. The sliding mode control technique is utilized to
design a robust trajectory tracking system to be able to consider the parameter
uncertainty of this type of robot. Two previously developed deep learning
models [1], [2] are integrated into the control feedback loop to estimate the
robot's slipping and undesired skidding and feed the compensator in a real-time
manner. The main advantages of the proposed technique are (1) considering two
slip-related parameters rather than the conventional three slip parameters at
the wheel-level, and (2) having an online real-world feasible slip and skid
compensator to be able to reduce the tracking errors in unforeseen
environments. The experimental results show that the proposed controller with
the slip and skid compensator improves the performance of the trajectory
tracking system by more than 27%.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 03:58:03 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nourizadeh",
"Payam",
""
],
[
"McFadden",
"Fiona J Stevens",
""
],
[
"Browne",
"Will N",
""
]
] |
new_dataset
| 0.998587 |
2309.08865
|
Sathvika Kotha
|
Sathvika Kotha, Hrishikesh Viswanath, Kshitij Tiwari, Aniket Bera
|
ARTEMIS: AI-driven Robotic Triage Labeling and Emergency Medical
Information System
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mass casualty incidents (MCIs) pose a formidable challenge to emergency
medical services by overwhelming available resources and personnel. Effective
victim assessment is paramount to minimizing casualties during such a crisis.
In this paper, we introduce ARTEMIS, an AI-driven Robotic Triage Labeling and
Emergency Medical Information System. This system comprises a deep learning
model for acuity labeling that is integrated with a robot, that performs the
preliminary assessment of injury severity in patients and assigns appropriate
triage labels. Additionally, we have developed a frontend (graphical user
interface) that is updated by the robots in real time and is accessible to the
first responders. To validate the reliability of our proposed algorithmic
triage protocol, we employed an off-the-shelf robot kit equipped with sensors
for vital sign acquisition. A controlled laboratory simulation of an MCI was
conducted to assess the system's performance and effectiveness in real-world
scenarios resulting in a triage-level classification accuracy of 92%. This
noteworthy achievement underscores the model's proficiency in discerning
crucial patterns for accurate triage classification, showcasing its promising
potential in healthcare applications.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 04:01:34 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Kotha",
"Sathvika",
""
],
[
"Viswanath",
"Hrishikesh",
""
],
[
"Tiwari",
"Kshitij",
""
],
[
"Bera",
"Aniket",
""
]
] |
new_dataset
| 0.999243 |
2309.08873
|
Juan Diego Rodriguez
|
Juan Diego Rodriguez, Katrin Erk, Greg Durrett
|
X-PARADE: Cross-Lingual Textual Entailment and Information Divergence
across Paragraphs
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding when two pieces of text convey the same information is a goal
touching many subproblems in NLP, including textual entailment and
fact-checking. This problem becomes more complex when those two pieces of text
are in different languages. Here, we introduce X-PARADE (Cross-lingual
Paragraph-level Analysis of Divergences and Entailments), the first
cross-lingual dataset of paragraph-level information divergences. Annotators
label a paragraph in a target language at the span level and evaluate it with
respect to a corresponding paragraph in a source language, indicating whether a
given piece of information is the same, new, or new but can be inferred. This
last notion establishes a link with cross-language NLI. Aligned paragraphs are
sourced from Wikipedia pages in different languages, reflecting real
information divergences observed in the wild. Armed with our dataset, we
investigate a diverse set of approaches for this problem, including classic
token alignment from machine translation, textual entailment methods that
localize their decisions, and prompting of large language models. Our results
show that these methods vary in their capability to handle inferable
information, but they all fall short of human performance.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 04:34:55 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Rodriguez",
"Juan Diego",
""
],
[
"Erk",
"Katrin",
""
],
[
"Durrett",
"Greg",
""
]
] |
new_dataset
| 0.999142 |
2309.08881
|
Mikhail Kats
|
Tanuj Kumar and Mikhail A. Kats
|
ChatGPT-4 with Code Interpreter can be used to solve introductory
college-level vector calculus and electromagnetism problems
|
Main text and appendices
| null | null | null |
cs.AI cs.CE physics.ed-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We evaluated ChatGPT 3.5, 4, and 4 with Code Interpreter on a set of
college-level engineering-math and electromagnetism problems, such as those
often given to sophomore electrical engineering majors. We selected a set of 13
problems, and had ChatGPT solve them multiple times, using a fresh instance
(chat) each time. We found that ChatGPT-4 with Code Interpreter was able to
satisfactorily solve most problems we tested most of the time -- a major
improvement over the performance of ChatGPT-4 (or 3.5) without Code
Interpreter. The performance of ChatGPT was observed to be somewhat stochastic,
and we found that solving the same problem N times in new ChatGPT instances and
taking the most-common answer was an effective strategy. Based on our findings
and observations, we provide some recommendations for instructors and students
of classes at this level.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 05:19:39 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Kumar",
"Tanuj",
""
],
[
"Kats",
"Mikhail A.",
""
]
] |
new_dataset
| 0.979833 |
2309.08889
|
Benjamin Stoler
|
Benjamin Stoler and Ingrid Navarro and Meghdeep Jana and Soonmin Hwang
and Jonathan Francis and Jean Oh
|
SafeShift: Safety-Informed Distribution Shifts for Robust Trajectory
Prediction in Autonomous Driving
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
As autonomous driving technology matures, safety and robustness of its key
components, including trajectory prediction, is vital. Though real-world
datasets, such as Waymo Open Motion, provide realistic recorded scenarios for
model development, they often lack truly safety-critical situations. Rather
than utilizing unrealistic simulation or dangerous real-world testing, we
instead propose a framework to characterize such datasets and find hidden
safety-relevant scenarios within. Our approach expands the spectrum of
safety-relevance, allowing us to study trajectory prediction models under a
safety-informed, distribution shift setting. We contribute a generalized
scenario characterization method, a novel scoring scheme to find subtly-avoided
risky scenarios, and an evaluation of trajectory prediction models in this
setting. We further contribute a remediation strategy, achieving a 10% average
reduction in prediction collision rates. To facilitate future research, we
release our code to the public: github.com/cmubig/SafeShift
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 06:01:42 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Stoler",
"Benjamin",
""
],
[
"Navarro",
"Ingrid",
""
],
[
"Jana",
"Meghdeep",
""
],
[
"Hwang",
"Soonmin",
""
],
[
"Francis",
"Jonathan",
""
],
[
"Oh",
"Jean",
""
]
] |
new_dataset
| 0.999348 |
2309.08891
|
Zhongyang Zhang
|
Zhongyang Zhang, Shuyang Cui, Kaidong Chai, Haowen Yu, Subhasis
Dasgupta, Upal Mahbub, Tauhidur Rahman
|
V2CE: Video to Continuous Events Simulator
|
6 pages, 7 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic Vision Sensor (DVS)-based solutions have recently garnered
significant interest across various computer vision tasks, offering notable
benefits in terms of dynamic range, temporal resolution, and inference speed.
However, as a relatively nascent vision sensor compared to Active Pixel Sensor
(APS) devices such as RGB cameras, DVS suffers from a dearth of labeled
datasets. Prior efforts to convert APS data into events often grapple with
issues such as a considerable domain shift from real events, the absence of
quantified validation, and layering problems within the time axis. In this
paper, we present a novel method for video-to-events stream conversion from
multiple perspectives, considering the specific characteristics of DVS. A
series of carefully designed losses helps enhance the quality of generated
event voxels significantly. We also propose a novel local dynamic-aware
timestamp inference strategy to accurately recover event timestamps from event
voxels in a continuous fashion and eliminate the temporal layering problem.
Results from rigorous validation through quantified metrics at all stages of
the pipeline establish our method unquestionably as the current
state-of-the-art (SOTA).
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 06:06:53 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhang",
"Zhongyang",
""
],
[
"Cui",
"Shuyang",
""
],
[
"Chai",
"Kaidong",
""
],
[
"Yu",
"Haowen",
""
],
[
"Dasgupta",
"Subhasis",
""
],
[
"Mahbub",
"Upal",
""
],
[
"Rahman",
"Tauhidur",
""
]
] |
new_dataset
| 0.998152 |
2309.08897
|
Yoonchang Sung
|
Yoonchang Sung, Rahul Shome, Peter Stone
|
Asynchronous Task Plan Refinement for Multi-Robot Task and Motion
Planning
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper explores general multi-robot task and motion planning, where
multiple robots in close proximity manipulate objects while satisfying
constraints and a given goal. In particular, we formulate the plan refinement
problem--which, given a task plan, finds valid assignments of variables
corresponding to solution trajectories--as a hybrid constraint satisfaction
problem. The proposed algorithm follows several design principles that yield
the following features: (1) efficient solution finding due to sequential
heuristics and implicit time and roadmap representations, and (2) maximized
feasible solution space obtained by introducing minimally necessary
coordination-induced constraints and not relying on prevalent simplifications
that exist in the literature. The evaluation results demonstrate the planning
efficiency of the proposed algorithm, outperforming the synchronous approach in
terms of makespan.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 06:35:22 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Sung",
"Yoonchang",
""
],
[
"Shome",
"Rahul",
""
],
[
"Stone",
"Peter",
""
]
] |
new_dataset
| 0.994912 |
2309.08909
|
Yuhang Han
|
Yuhang Han, Zhengtao Liu, Shuo Sun, Dongen Li, Jiawei Sun, Ziye Hong,
Marcelo H. Ang Jr
|
CARLA-Loc: Synthetic SLAM Dataset with Full-stack Sensor Setup in
Challenging Weather and Dynamic Environments
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The robustness of SLAM algorithms in challenging environmental conditions is
crucial for autonomous driving, but the impact of these conditions is unknown,
given the difficulty of arbitrarily changing the relevant environmental
parameters of the same environment in the real world. Therefore, we propose
CARLA-Loc, a synthetic dataset of challenging and dynamic environments built on
the CARLA simulator. We integrate multiple sensors into the dataset with strict
calibration, synchronization and precise timestamping. 7 maps and 42 sequences
are provided in our dataset with different dynamic levels and weather conditions.
Objects in both stereo images and point clouds are well-segmented with their
class labels. We evaluate 5 visual-based and 4 LiDAR-based approaches on various
sequences and analyze the effect of challenging environmental factors on the
localization accuracy, showing the applicability of the proposed dataset for
validating SLAM algorithms.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 07:24:21 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Han",
"Yuhang",
""
],
[
"Liu",
"Zhengtao",
""
],
[
"Sun",
"Shuo",
""
],
[
"Li",
"Dongen",
""
],
[
"Sun",
"Jiawei",
""
],
[
"Hong",
"Ziye",
""
],
[
"Ang",
"Marcelo H.",
"Jr"
]
] |
new_dataset
| 0.999811 |
2309.08915
|
Bocong Chen
|
Chunyan Qin, Bocong Chen and Gaojun Luo
|
On non-expandable cross-bifix-free codes
|
This paper has been submitted to IEEE T-IT for possible publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A cross-bifix-free code of length $n$ over $\mathbb{Z}_q$ is defined as a
non-empty subset of $\mathbb{Z}_q^n$ satisfying that the prefix set of each
codeword is disjoint from the suffix set of every codeword. Cross-bifix-free
codes have found important applications in digital communication systems. One
of the main research problems on cross-bifix-free codes is to construct
cross-bifix-free codes as large as possible in size. Recently, Wang and Wang
introduced a family of cross-bifix-free codes $S_{I,J}^{(k)}(n)$, which is a
generalization of the classical cross-bifix-free codes studied earlier by
Levenshtein, Gilbert and Chee {\it et al.}. It is known that $S_{I,J}^{(k)}(n)$
is nearly optimal in size and $S_{I,J}^{(k)}(n)$ is non-expandable if $k=n-1$
or $1\leq k<n/2$. In this paper, we first show that $S_{I,J}^{(k)}(n)$ is
non-expandable if and only if $k=n-1$ or $1\leq k<n/2$, thereby improving the
results in [Chee {\it et al.}, IEEE-TIT, 2013] and [Wang and Wang, IEEE-TIT,
2022]. We then construct a new family of cross-bifix-free codes
$U^{(t)}_{I,J}(n)$ to expand $S_{I,J}^{(k)}(n)$ such that the resulting larger
code $S_{I,J}^{(k)}(n)\bigcup U^{(t)}_{I,J}(n)$ is a non-expandable
cross-bifix-free code whenever $S_{I,J}^{(k)}(n)$ is expandable. Finally, we
present an explicit formula for the size of $S_{I,J}^{(k)}(n)\bigcup
U^{(t)}_{I,J}(n)$.
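As an illustration of the defining property stated above (the prefix set of each codeword is disjoint from the suffix set of every codeword), a brute-force membership check can be written directly from the definition; the sketch below assumes proper (non-empty, non-full) prefixes and suffixes and is unrelated to the constructions $S_{I,J}^{(k)}(n)$ and $U^{(t)}_{I,J}(n)$ themselves.

```python
def is_cross_bifix_free(codewords):
    # Collect proper prefixes and proper suffixes of all codewords.
    prefixes = {w[:i] for w in codewords for i in range(1, len(w))}
    suffixes = {w[-i:] for w in codewords for i in range(1, len(w))}
    # Cross-bifix-free: no proper prefix of any codeword equals a proper
    # suffix of any codeword.
    return prefixes.isdisjoint(suffixes)

print(is_cross_bifix_free(["100"]))         # True: {1, 10} vs {0, 00}
print(is_cross_bifix_free(["100", "001"]))  # False: "00" is both
```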
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 07:48:01 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Qin",
"Chunyan",
""
],
[
"Chen",
"Bocong",
""
],
[
"Luo",
"Gaojun",
""
]
] |
new_dataset
| 0.996361 |
2309.08920
|
Xu Pan
|
Pan Xu, Ling San, Liu Hongwei
|
New bounds for $b$-Symbol Distances of Matrix Product Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matrix product codes are generalizations of some well-known constructions of
codes, such as Reed-Muller codes, $[u+v,u-v]$-construction, etc. Recently, a
bound for the symbol-pair distance of a matrix product code was given in
\cite{LEL}, and new families of MDS symbol-pair codes were constructed by using
this bound. In this paper, we generalize this bound to the $b$-symbol distance
of a matrix product code and determine all minimum $b$-symbol distances of
Reed-Muller codes. We also give a bound for the minimum $b$-symbol distance of
codes obtained from the $[u+v,u-v]$-construction, and use this bound to
construct some $[2n,2n-2]_q$-linear $b$-symbol almost MDS codes with arbitrary
length. All the minimum $b$-symbol distances of $[n,n-1]_q$-linear codes and
$[n,n-2]_q$-linear codes for $1\leq b\leq n$ are determined. Some examples are
presented to illustrate these results.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 08:14:10 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Xu",
"Pan",
""
],
[
"San",
"Ling",
""
],
[
"Hongwei",
"Liu",
""
]
] |
new_dataset
| 0.996912 |
2309.08942
|
Juntao Jian
|
Juntao Jian, Xiuping Liu, Manyi Li, Ruizhen Hu, Jian Liu
|
AffordPose: A Large-scale Dataset of Hand-Object Interactions with
Affordance-driven Hand Pose
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How humans interact with objects depends on the functional roles of the target
objects, which introduces the problem of affordance-aware hand-object
interaction. It requires a large number of human demonstrations for the
learning and understanding of plausible and appropriate hand-object
interactions. In this work, we present AffordPose, a large-scale dataset of
hand-object interactions with affordance-driven hand pose. We first annotate
the specific part-level affordance labels for each object, e.g. twist, pull,
handle-grasp, etc, instead of the general intents such as use or handover, to
indicate the purpose and guide the localization of the hand-object
interactions. The fine-grained hand-object interactions reveal the influence of
hand-centered affordances on the detailed arrangement of the hand poses, yet
also exhibit a certain degree of diversity. We collect a total of 26.7K
hand-object interactions, each including the 3D object shape, the part-level
affordance label, and the manually adjusted hand poses. The comprehensive data
analysis shows the common characteristics and diversity of hand-object
interactions per affordance via the parameter statistics and contact
computation. We also conduct experiments on the tasks of hand-object affordance
understanding and affordance-oriented hand-object interaction generation, to
validate the effectiveness of our dataset in learning the fine-grained
hand-object interactions. Project page:
https://github.com/GentlesJan/AffordPose.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 10:25:28 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Jian",
"Juntao",
""
],
[
"Liu",
"Xiuping",
""
],
[
"Li",
"Manyi",
""
],
[
"Hu",
"Ruizhen",
""
],
[
"Liu",
"Jian",
""
]
] |
new_dataset
| 0.999832 |
2309.08955
|
Christian Narcia-Macias
|
Christian I. Narcia-Macias, Joselito Guardado, Jocell Rodriguez,
Joanne Rampersad-Ammons, Erik Enriquez, Dong-Chul Kim
|
IntelliBeeHive: An Automated Honey Bee, Pollen, and Varroa Destructor
Monitoring System
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Utilizing computer vision and the latest technological advancements, in this
study, we developed a honey bee monitoring system that aims to enhance our
understanding of Colony Collapse Disorder, honey bee behavior, population
decline, and overall hive health. The system is positioned at the hive entrance
providing real-time data, enabling beekeepers to closely monitor the hive's
activity and health through an account-based website. Using machine learning,
our monitoring system can accurately track honey bees, monitor pollen-gathering
activity, and detect Varroa mites, all without causing any disruption to the
honey bees. Moreover, we have ensured that the development of this monitoring
system utilizes cost-effective technology, making it accessible to apiaries of
various scales, including hobbyists, commercial beekeeping businesses, and
researchers. The inference models used to detect honey bees, pollen, and mites
are based on the YOLOv7-tiny architecture trained with our own data. The
F1-score for the honey bee recognition model is 0.95, with a precision and recall
value of 0.981. For our pollen and mite object detection model, the F1-score is
0.95, and the precision and recall values are 0.821 for pollen and 0.996 for mite.
The overall performance of our IntelliBeeHive system demonstrates its
effectiveness in monitoring the honey bees' activity, achieving an accuracy of
96.28% in tracking, while our pollen model achieved an F1-score of 0.831.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 11:13:47 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Narcia-Macias",
"Christian I.",
""
],
[
"Guardado",
"Joselito",
""
],
[
"Rodriguez",
"Jocell",
""
],
[
"Rampersad-Ammons",
"Joanne",
""
],
[
"Enriquez",
"Erik",
""
],
[
"Kim",
"Dong-Chul",
""
]
] |
new_dataset
| 0.996338 |
2309.08960
|
Yijie Zhou
|
Yijie Zhou, Kejian Shi, Wencai Zhang, Yixin Liu, Yilun Zhao, Arman
Cohan
|
ODSum: New Benchmarks for Open Domain Multi-Document Summarization
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-domain Multi-Document Summarization (ODMDS) is a critical tool for
condensing vast arrays of documents into coherent, concise summaries. With a
more interrelated document set, there does not necessarily exist a single correct
answer for the retrieval, making retrieval performance hard to measure.
We propose a rule-based method to process query-based document summarization
datasets into ODMDS datasets. Based on this method, we introduce a novel
dataset, ODSum, a sophisticated case with its document index interdependent and
often interrelated. We tackle ODMDS with the \textit{retrieve-then-summarize}
method, and the performance of a list of retrievers and summarizers is
investigated. Through extensive experiments, we identify variances in
evaluation metrics and provide insights into their reliability. We also found
that LLMs suffer a substantial performance loss from retrieval errors. We further
experimented with methods to improve the performance and investigated their
robustness against imperfect retrieval. We will release our data and code at
https://github.com/yale-nlp/ODSum.
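As a rough illustration only, the retrieve-then-summarize pipeline evaluated above reduces to composing two components; both callables in the sketch are placeholders for whichever retriever and summarizer are under test, not code released with ODSum.

```python
def retrieve_then_summarize(query, corpus, retriever, summarizer, k=5):
    # retriever(query, corpus, k) -> top-k documents (placeholder callable)
    # summarizer(query, docs)     -> summary text   (placeholder callable)
    docs = retriever(query, corpus, k)
    return summarizer(query, docs)
```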
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 11:27:34 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhou",
"Yijie",
""
],
[
"Shi",
"Kejian",
""
],
[
"Zhang",
"Wencai",
""
],
[
"Liu",
"Yixin",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Cohan",
"Arman",
""
]
] |
new_dataset
| 0.999216 |
2309.08966
|
Mohan Wang
|
Nan Ma, Mohan Wang, Yiheng Han, Yong-Jin Liu
|
FF-LOGO: Cross-Modality Point Cloud Registration with Feature Filtering
and Local to Global Optimization
|
7 pages, 2 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Cross-modality point cloud registration is confronted with significant
challenges due to inherent differences in modalities between different sensors.
We propose a cross-modality point cloud registration framework FF-LOGO: a
cross-modality point cloud registration method with feature filtering and
local-global optimization. The cross-modality feature correlation filtering
module extracts geometric transformation-invariant features from cross-modality
point clouds and achieves point selection by feature matching. We also
introduce a cross-modality optimization process, including a local adaptive key
region aggregation module and a global modality consistency fusion optimization
module. Experimental results demonstrate that our two-stage optimization
significantly improves the registration accuracy of the feature association and
selection module. Our method achieves a substantial increase in recall rate
compared to the current state-of-the-art methods on the 3DCSR dataset,
improving from 40.59% to 75.74%. Our code will be available at
https://github.com/wangmohan17/FFLOGO.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 11:42:41 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Ma",
"Nan",
""
],
[
"Wang",
"Mohan",
""
],
[
"Han",
"Yiheng",
""
],
[
"Liu",
"Yong-Jin",
""
]
] |
new_dataset
| 0.998021 |
2309.08987
|
Zihan Chen
|
Zihan Chen, Tianrui Liu, Jun-Jie Huang, Wentao Zhao, Xing Bi and Meng
Wang
|
Invertible Mosaic Image Hiding Network for Very Large Capacity Image
Steganography
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The existing image steganography methods either sequentially conceal secret
images or conceal a concatenation of multiple images. In such ways, the
interference of information among multiple images becomes increasingly
severe as the number of secret images grows, thus restricting the
development of very large capacity image steganography. In this paper, we
propose an Invertible Mosaic Image Hiding Network (InvMIHNet) which realizes
very large capacity image steganography with high quality by concealing a
single mosaic secret image. InvMIHNet consists of an Invertible Image Rescaling
(IIR) module and an Invertible Image Hiding (IIH) module. The IIR module
downscales the single mosaic secret image formed by spatially splicing the
multiple secret images, and the IIH module then conceals this mosaic image under
the cover image. The proposed InvMIHNet successfully conceals and reveals up to
16 secret images with a small number of parameters and low memory consumption.
Extensive experiments on ImageNet-1K, COCO and DIV2K show InvMIHNet outperforms
state-of-the-art methods in terms of both the imperceptibility of the stego image
and the recovery accuracy of the secret images.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 13:03:43 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Chen",
"Zihan",
""
],
[
"Liu",
"Tianrui",
""
],
[
"Huang",
"Jun-Jie",
""
],
[
"Zhao",
"Wentao",
""
],
[
"Bi",
"Xing",
""
],
[
"Wang",
"Meng",
""
]
] |
new_dataset
| 0.978185 |
2309.09003
|
Zhirui Wang Dr
|
Yuelei Wang, Ting Zhang, Liangjin Zhao, Lin Hu, Zhechao Wang, Ziqing
Niu, Peirui Cheng, Kaiqiang Chen, Xuan Zeng, Zhirui Wang, Hongqi Wang and
Xian Sun
|
RingMo-lite: A Remote Sensing Multi-task Lightweight Network with
CNN-Transformer Hybrid Framework
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, remote sensing (RS) vision foundation models such as RingMo
have emerged and achieved excellent performance in various downstream tasks.
However, the high demand for computing resources limits the application of
these models on edge devices. It is necessary to design a more lightweight
foundation model to support on-orbit RS image interpretation. Existing methods
face challenges in achieving lightweight solutions while retaining
generalization in RS image interpretation. This is due to the complex high and
low-frequency spectral components in RS images, which make traditional single
CNN or Vision Transformer methods unsuitable for the task. Therefore, this
paper proposes RingMo-lite, an RS multi-task lightweight network with a
CNN-Transformer hybrid framework, which effectively exploits the
frequency-domain properties of RS to optimize the interpretation process. It
combines the Transformer module, acting as a low-pass filter to extract global
features of RS images through a dual-branch structure, and the CNN module,
acting as a stacked high-pass filter to extract fine-grained details effectively.
Furthermore, in the pretraining stage, the designed frequency-domain masked
image modeling (FD-MIM) combines each image patch's high-frequency and
low-frequency characteristics, effectively capturing the latent feature
representation in RS data. As shown in Fig. 1, compared with RingMo, the
proposed RingMo-lite reduces the parameters by over 60% in various RS image
interpretation tasks, the average accuracy drops by less than 2% in most of the
scenes, and it achieves SOTA performance compared to models of similar size. In
addition, our work will be integrated into the MindSpore computing platform in
the near future.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 14:15:59 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wang",
"Yuelei",
""
],
[
"Zhang",
"Ting",
""
],
[
"Zhao",
"Liangjin",
""
],
[
"Hu",
"Lin",
""
],
[
"Wang",
"Zhechao",
""
],
[
"Niu",
"Ziqing",
""
],
[
"Cheng",
"Peirui",
""
],
[
"Chen",
"Kaiqiang",
""
],
[
"Zeng",
"Xuan",
""
],
[
"Wang",
"Zhirui",
""
],
[
"Wang",
"Hongqi",
""
],
[
"Sun",
"Xian",
""
]
] |
new_dataset
| 0.999409 |
2309.09022
|
Boris Shminke
|
Boris Shminke
|
gym-saturation: Gymnasium environments for saturation provers (System
description)
|
13 pages, 3 figures. This version of the contribution has been
accepted for publication, after peer review but is not the Version of Record
and does not reflect post-acceptance improvements, or any corrections. The
Version of Record is available online at:
https://doi.org/10.1007/978-3-031-43513-3_11
| null |
10.1007/978-3-031-43513-3_11
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This work describes a new version of a previously published Python package -
gym-saturation: a collection of OpenAI Gym environments for guiding
saturation-style provers based on the given clause algorithm with reinforcement
learning. We contribute usage examples with two different provers: Vampire and
iProver. We also have decoupled the proof state representation from
reinforcement learning per se and provided examples of using a known ast2vec
Python code embedding model as a first-order logic representation. In addition,
we demonstrate how environment wrappers can transform a prover into a problem
similar to a multi-armed bandit. We applied two reinforcement learning
algorithms (Thompson sampling and Proximal policy optimisation) implemented in
Ray RLlib to show the ease of experimentation with the new release of our
package.
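For readers unfamiliar with the Gymnasium interaction loop these environments follow, the generic pattern is sketched below; the environment id string is a hypothetical placeholder rather than a documented gym-saturation id, and the random action choice merely stands in for a learned clause-selection policy.

```python
import gymnasium as gym

# Hypothetical id for illustration; see the gym-saturation docs for real ids.
env = gym.make("SaturationProver-v0")

observation, info = env.reset(seed=0)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # stand-in for a clause-selection policy
    observation, reward, terminated, truncated, info = env.step(action)
env.close()
```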
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 15:25:39 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Shminke",
"Boris",
""
]
] |
new_dataset
| 0.993082 |
2309.09058
|
Alexy Skoutnev
|
Alexy Skoutnev, Andrew Cinar, Praful Sigdel, Forrest Laine
|
QTOS: An Open-Source Quadruped Trajectory Optimization Stack
|
Submitted to ICRA 2024
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new open-source framework, Quadruped Trajectory Optimization
Stack (QTOS), which integrates a global planner, local planner, simulator,
controller, and robot interface into a single package. QTOS serves as a
full-stack interface, simplifying continuous motion planning on an open-source
quadruped platform by bridging the gap between middleware and gait planning. It
empowers users to effortlessly translate high-level navigation objectives into
low-level robot commands. Furthermore, QTOS enhances the stability and
adaptability of long-distance gait planning across challenging terrain.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 17:49:17 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Skoutnev",
"Alexy",
""
],
[
"Cinar",
"Andrew",
""
],
[
"Sigdel",
"Praful",
""
],
[
"Laine",
"Forrest",
""
]
] |
new_dataset
| 0.999558 |
2309.09071
|
Ha Thanh Nguyen
|
Hai-Long Nguyen, Thi-Kieu-Trang Pham, Thai-Son Le, Tan-Minh Nguyen,
Thi-Hai-Yen Vuong, Ha-Thanh Nguyen
|
RMDM: A Multilabel Fakenews Dataset for Vietnamese Evidence Verification
|
ISAILD@KSE 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we present a novel and challenging multilabel Vietnamese
dataset (RMDM) designed to assess the performance of large language models
(LLMs), in verifying electronic information related to legal contexts, focusing
on fake news as potential input for electronic evidence. The RMDM dataset
comprises four labels: real, mis, dis, and mal, representing real information,
misinformation, disinformation, and mal-information, respectively. By including
these diverse labels, RMDM captures the complexities of differing fake news
categories and offers insights into the abilities of different language models
to handle various types of information that could be part of electronic
evidence. The dataset consists of a total of 1,556 samples, with 389 samples
for each label. Preliminary tests on the dataset using GPT-based and BERT-based
models reveal variations in the models' performance across different labels,
indicating that the dataset effectively challenges the ability of various
language models to verify the authenticity of such information. Our findings
suggest that verifying electronic information related to legal contexts,
including fake news, remains a difficult problem for language models,
warranting further attention from the research community to advance toward more
reliable AI models for potential legal applications.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 18:35:08 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nguyen",
"Hai-Long",
""
],
[
"Pham",
"Thi-Kieu-Trang",
""
],
[
"Le",
"Thai-Son",
""
],
[
"Nguyen",
"Tan-Minh",
""
],
[
"Vuong",
"Thi-Hai-Yen",
""
],
[
"Nguyen",
"Ha-Thanh",
""
]
] |
new_dataset
| 0.999826 |
2309.09083
|
Qiqian Fu
|
Qiqian Fu, Guanhong Wang, Gaoang Wang
|
FrameRS: A Video Frame Compression Model Composed by Self supervised
Video Frame Reconstructor and Key Frame Selector
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a frame reconstruction model: FrameRS. It consists of
a self-supervised video frame reconstructor and a key frame selector. The frame
reconstructor, FrameMAE, is developed by adapting the principles of the Masked
Autoencoder for Images (MAE) to the video context. The key frame selector, Frame
Selector, is built on a CNN architecture. By taking the high-level semantic
information from the encoder of FrameMAE as its input, it can predict the key
frames at low computation cost. Integrated with our bespoke Frame Selector,
FrameMAE can effectively compress a video clip by retaining approximately 30%
of its pivotal frames. Performance-wise, our model showcases computational
efficiency and competitive accuracy, marking a notable improvement over
traditional key frame extraction algorithms. The implementation is available on
GitHub.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 19:30:05 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Fu",
"Qiqian",
""
],
[
"Wang",
"Guanhong",
""
],
[
"Wang",
"Gaoang",
""
]
] |
new_dataset
| 0.991848 |
2309.09100
|
Joseph Tafese
|
Joseph Tafese and Isabel Garcia-Contreras and Arie Gurfinkel
|
Btor2MLIR: A Format and Toolchain for Hardware Verification
|
Formal Methods in Computer-Aided Design 2023
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Formats for representing and manipulating verification problems are extremely
important for supporting the ecosystem of tools, developers, and practitioners.
A good format allows representing many different types of problems, has a
strong toolchain for manipulating and translating problems, and can grow with
the community. In the world of hardware verification, and, specifically, the
Hardware Model Checking Competition (HWMCC), the Btor2 format has emerged as
the dominating format. It is supported by Btor2Tools, verification tools, and
Verilog design tools like Yosys. In this paper, we present an alternative
format and toolchain, called Btor2MLIR, based on the recent MLIR framework. The
advantage of Btor2MLIR is in reusing existing components from a mature compiler
infrastructure, including parsers, text and binary formats, converters to a
variety of intermediate representations, and executable semantics of LLVM. We
hope that the format and our tooling will lead to rapid prototyping of
verification and related tools for hardware verification.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 21:49:24 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Tafese",
"Joseph",
""
],
[
"Garcia-Contreras",
"Isabel",
""
],
[
"Gurfinkel",
"Arie",
""
]
] |
new_dataset
| 0.997924 |
2309.09102
|
Jeremy Morgan
|
Jeremy Morgan, David Millard, Gaurav S. Sukhatme
|
CppFlow: Generative Inverse Kinematics for Efficient and Robust
Cartesian Path Planning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we present CppFlow - a novel and performant planner for the
Cartesian Path Planning problem, which finds valid trajectories up to 129x
faster than current methods, while also succeeding on more difficult problems
where others fail. At the core of the proposed algorithm is the use of a
learned, generative Inverse Kinematics solver, which is able to efficiently
produce promising entire candidate solution trajectories on the GPU. Precise,
valid solutions are then found through classical approaches such as
differentiable programming, global search, and optimization. In combining
approaches from these two paradigms we get the best of both worlds - efficient
approximate solutions from generative AI which are made exact using the
guarantees of traditional planning and optimization. We evaluate our system
against other state of the art methods on a set of established baselines as
well as new ones introduced in this work and find that our method significantly
outperforms others in terms of the time to find a valid solution and planning
success rate, and performs comparably in terms of trajectory length over time.
The work is made open source and available for use upon acceptance.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 21:55:45 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Morgan",
"Jeremy",
""
],
[
"Millard",
"David",
""
],
[
"Sukhatme",
"Gaurav S.",
""
]
] |
new_dataset
| 0.989835 |
2309.09108
|
Kunal Garg
|
Kunal Garg and Chuchu Fan
|
Neural Network-based Fault Detection and Identification for Quadrotors
using Dynamic Symmetry
|
Accepted for 2023 Allerton Conference on Communication, Control, &
Computing
| null | null | null |
cs.RO cs.SY eess.SY math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous robotic systems, such as quadrotors, are susceptible to actuator
faults, and for the safe operation of such systems, timely detection and
isolation of these faults is essential. Neural networks can be used for
verification of actuator performance via online actuator fault detection with
high accuracy. In this paper, we develop a novel model-free fault detection and
isolation (FDI) framework for quadrotor systems using long-short-term memory
(LSTM) neural network architecture. The proposed framework only uses system
output data and the commanded control input and requires no knowledge of the
system model. Utilizing the symmetry in quadrotor dynamics, we train the FDI
for a fault in just one of the motors (e.g., motor $\# 2$), and the trained FDI
can predict faults in any of the motors. This reduction in search space enables
us to design an FDI for partial fault as well as complete fault scenarios.
Numerical experiments illustrate that the proposed NN-FDI correctly verifies
the actuator performance and identifies partial as well as complete faults with
over $90\%$ prediction accuracy. We also illustrate that model-free NN-FDI
performs at par with model-based FDI, and is robust to model uncertainties as
well as distribution shifts in input data.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 22:59:09 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Garg",
"Kunal",
""
],
[
"Fan",
"Chuchu",
""
]
] |
new_dataset
| 0.973675 |
2309.09131
|
Sasindu Wijeratne
|
Sasindu Wijeratne, Rajgopal Kannan, Viktor Prasanna
|
Dynasor: A Dynamic Memory Layout for Accelerating Sparse MTTKRP for
Tensor Decomposition on Multi-core CPU
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Sparse Matricized Tensor Times Khatri-Rao Product (spMTTKRP) is the most
time-consuming compute kernel in sparse tensor decomposition. In this paper, we
introduce a novel algorithm to minimize the execution time of spMTTKRP across
all modes of an input tensor on multi-core CPU platform. The proposed algorithm
leverages the FLYCOO tensor format to exploit data locality in external memory
accesses. It effectively utilizes computational resources by enabling lock-free
concurrent processing of independent partitions of the input tensor. The
proposed partitioning ensures load balancing among CPU threads. Our dynamic
tensor remapping technique leads to reduced communication overhead along all
the modes. On widely used real-world tensors, our work achieves 2.12x - 9.01x
speedup in total execution time across all modes compared with the
state-of-the-art CPU implementations.
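For background on the kernel named above, a dense reference implementation of a mode-0 MTTKRP for a 3-way tensor is a one-liner in NumPy; this sketch only illustrates what the operation computes and has no relation to the sparse FLYCOO-based algorithm proposed in the paper.

```python
import numpy as np

def mttkrp_mode0(X, B, C):
    # M[i, r] = sum_{j, k} X[i, j, k] * B[j, r] * C[k, r], i.e. the mode-0
    # matricization of X multiplied by the Khatri-Rao product of C and B.
    return np.einsum('ijk,jr,kr->ir', X, B, C)

rng = np.random.default_rng(0)
X, B, C = rng.random((4, 3, 2)), rng.random((3, 2)), rng.random((2, 2))
print(mttkrp_mode0(X, B, C).shape)  # (4, 2)
```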
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 01:49:31 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wijeratne",
"Sasindu",
""
],
[
"Kannan",
"Rajgopal",
""
],
[
"Prasanna",
"Viktor",
""
]
] |
new_dataset
| 0.987816 |
2309.09165
|
Xiwen Liu
|
Xiwen Liu, Keshava Katti, Yunfei He, Paul Jacob, Claudia Richter, Uwe
Schroeder, Santosh Kurinec, Pratik Chaudhari, Deep Jariwala
|
Analog Content-Addressable Memory from Complementary FeFETs
| null | null | null | null |
cs.ET cs.AR physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To address the increasing computational demands of artificial intelligence
(AI) and big data, compute-in-memory (CIM) integrates memory and processing
units into the same physical location, reducing the time and energy overhead of
the system. Despite advancements in non-volatile memory (NVM) for matrix
multiplication, other critical data-intensive operations, like parallel search,
have been overlooked. Current parallel search architectures, namely
content-addressable memory (CAM), often use binary, which restricts density and
functionality. We present an analog CAM (ACAM) cell, built on two complementary
ferroelectric field-effect transistors (FeFETs), that performs parallel search
in the analog domain with over 40 distinct match windows. We then deploy it to
calculate similarity between vectors, a building block in the following two
machine learning problems. ACAM outperforms ternary CAM (TCAM) when applied to
similarity search for few-shot learning on the Omniglot dataset, yielding
projected simulation results with improved inference accuracy by 5%, 3x denser
memory architecture, and more than 100x faster speed compared to central
processing unit (CPU) and graphics processing unit (GPU) per similarity search
on scaled CMOS nodes. We also demonstrate 1-step inference on a kernel
regression model by combining non-linear kernel computation and matrix
multiplication in ACAM, with simulation estimates indicating 1,000x faster
inference than CPU and GPU.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 05:40:00 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Liu",
"Xiwen",
""
],
[
"Katti",
"Keshava",
""
],
[
"He",
"Yunfei",
""
],
[
"Jacob",
"Paul",
""
],
[
"Richter",
"Claudia",
""
],
[
"Schroeder",
"Uwe",
""
],
[
"Kurinec",
"Santosh",
""
],
[
"Chaudhari",
"Pratik",
""
],
[
"Jariwala",
"Deep",
""
]
] |
new_dataset
| 0.998588 |
2309.09189
|
Luc Edixhoven
|
Luc Edixhoven
|
Shuffling posets on trajectories (technical report)
|
9 pages. Technical report of a paper to be published in the
conference proceedings of iFM 2023
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Choreographies describe possible sequences of interactions among a set of
agents. We aim to join two lines of research on choreographies: the use of the
shuffle on trajectories operator to design more expressive choreographic
languages, and the use of models featuring partial orders, to compactly
represent concurrency between agents. Specifically, in this paper, we explore
the application of the shuffle on trajectories operator to individual posets,
and we give a characterisation of shuffles of posets which again yield an
individual poset.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 07:30:17 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Edixhoven",
"Luc",
""
]
] |
new_dataset
| 0.971069 |
2309.09198
|
Yi Chen
|
Yi Chen, Haiyun Jiang, Wei Bi, Rui Wang, Longyue Wang, Shuming Shi,
Ruifeng Xu
|
A Benchmark for Text Expansion: Datasets, Metrics, and Baselines
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a new task of Text Expansion (TE), which aims to insert
fine-grained modifiers into proper locations of the plain text to concretize or
vivify human writings. Different from existing insertion-based writing
assistance tasks, TE requires the model to be more flexible in both locating
and generation, and also more cautious in keeping basic semantics. We leverage
four complementary approaches to construct a dataset with 12 million
automatically generated instances and 2K human-annotated references for both
English and Chinese. To facilitate automatic evaluation, we design various
metrics from multiple perspectives. In particular, we propose Info-Gain to
effectively measure the informativeness of expansions, which is an important
quality dimension in TE. On top of a pre-trained text-infilling model, we build
both pipelined and joint Locate&Infill models, which demonstrate the
superiority over the Text2Text baselines, especially in expansion
informativeness. Experiments verify the feasibility of the TE task and point
out potential directions for future research toward better automatic text
expansion.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 07:54:38 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Chen",
"Yi",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Bi",
"Wei",
""
],
[
"Wang",
"Rui",
""
],
[
"Wang",
"Longyue",
""
],
[
"Shi",
"Shuming",
""
],
[
"Xu",
"Ruifeng",
""
]
] |
new_dataset
| 0.999715 |
2309.09205
|
Yanrong Li
|
Yanrong Li, Juan Du, and Wei Jiang
|
MFRL-BI: Design of a Model-free Reinforcement Learning Process Control
Scheme by Using Bayesian Inference
|
31 pages, 7 figures, and 3 tables
| null | null | null |
cs.LG cs.SY eess.SY stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Design of process control scheme is critical for quality assurance to reduce
variations in manufacturing systems. Taking semiconductor manufacturing as an
example, extensive literature focuses on control optimization based on certain
process models (usually linear models), which are obtained by experiments
before a manufacturing process starts. However, in real applications,
pre-defined models may not be accurate, especially for a complex manufacturing
system. To tackle model inaccuracy, we propose a model-free reinforcement
learning (MFRL) approach to conduct experiments and optimize control
simultaneously according to real-time data. Specifically, we design a novel
MFRL control scheme by updating the distribution of disturbances using Bayesian
inference to reduce their large variations during manufacturing processes. As a
result, the proposed MFRL controller is demonstrated to perform well in a
nonlinear chemical mechanical planarization (CMP) process when the process
model is unknown. Theoretical properties are also guaranteed when disturbances
are additive. The numerical studies also demonstrate the effectiveness and
efficiency of our methodology.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 08:18:55 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Li",
"Yanrong",
""
],
[
"Du",
"Juan",
""
],
[
"Jiang",
"Wei",
""
]
] |
new_dataset
| 0.999393 |
2309.09212
|
V\'ictor Mayoral Vilches
|
V\'ictor Mayoral-Vilches, Jason Jabbour, Yu-Shun Hsiao, Zishen Wan,
Alejandra Mart\'inez-Fari\~na, Marti\~no Crespo-\'Alvarez, Matthew Stewart,
Juan Manuel Reina-Mu\~noz, Prateek Nagras, Gaurav Vikhe, Mohammad
Bakhshalipour, Martin Pinzger, Stefan Rass, Smruti Panigrahi, Giulio Corradi,
Niladri Roy, Phillip B. Gibbons, Sabrina M. Neuman, Brian Plancher and Vijay
Janapa Reddi
|
RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for
Evaluating Robotics Computing System Performance
| null | null | null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to
evaluate robotics computing performance across a diverse range of hardware
platforms using ROS 2 as its common baseline. The suite encompasses ROS 2
packages covering the full robotics pipeline and integrates two distinct
benchmarking approaches: black-box testing, which measures performance by
eliminating upper layers and replacing them with a test application, and
grey-box testing, an application-specific measure that observes internal system
states with minimal interference. Our benchmarking framework provides
ready-to-use tools and is easily adaptable for the assessment of custom ROS 2
computational graphs. Drawing from the knowledge of leading robot architects
and system architecture experts, RobotPerf establishes a standardized approach
to robotics benchmarking. As an open-source initiative, RobotPerf remains
committed to evolving with community input to advance the future of
hardware-accelerated robotics.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 08:41:11 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Mayoral-Vilches",
"Víctor",
""
],
[
"Jabbour",
"Jason",
""
],
[
"Hsiao",
"Yu-Shun",
""
],
[
"Wan",
"Zishen",
""
],
[
"Martínez-Fariña",
"Alejandra",
""
],
[
"Crespo-Álvarez",
"Martiño",
""
],
[
"Stewart",
"Matthew",
""
],
[
"Reina-Muñoz",
"Juan Manuel",
""
],
[
"Nagras",
"Prateek",
""
],
[
"Vikhe",
"Gaurav",
""
],
[
"Bakhshalipour",
"Mohammad",
""
],
[
"Pinzger",
"Martin",
""
],
[
"Rass",
"Stefan",
""
],
[
"Panigrahi",
"Smruti",
""
],
[
"Corradi",
"Giulio",
""
],
[
"Roy",
"Niladri",
""
],
[
"Gibbons",
"Phillip B.",
""
],
[
"Neuman",
"Sabrina M.",
""
],
[
"Plancher",
"Brian",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
new_dataset
| 0.999664 |
2309.09217
|
Bintao He
|
Bintao He, Fa Zhang, Chenjie Feng, Jianyi Yang, Xin Gao and Renmin Han
|
CryoAlign: feature-based method for global and local 3D alignment of EM
density maps
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in cryo-electron imaging technologies have led to a rapidly
increasing number of density maps. Alignment and comparison of density maps
play a crucial role in interpreting structural information, such as
conformational heterogeneity analysis using global alignment and atomic model
assembly through local alignment. Here, we propose a fast and accurate global
and local cryo-electron microscopy density map alignment method CryoAlign,
which leverages local density feature descriptors to capture spatial structure
similarities. CryoAlign is the first feature-based EM map alignment tool, in
which the employment of feature-based architecture enables the rapid
establishment of point pair correspondences and robust estimation of alignment
parameters. Extensive experimental evaluations demonstrate the superiority of
CryoAlign over the existing methods in both alignment accuracy and speed.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 09:07:57 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"He",
"Bintao",
""
],
[
"Zhang",
"Fa",
""
],
[
"Feng",
"Chenjie",
""
],
[
"Yang",
"Jianyi",
""
],
[
"Gao",
"Xin",
""
],
[
"Han",
"Renmin",
""
]
] |
new_dataset
| 0.990054 |
2309.09224
|
Zhi Zheng
|
Zhi Zheng, Qifeng Cai, Xinhang Xu, Muqing Cao, Huan Yu, Jihao Li,
Guodong Lu, and Jin Wang
|
CapsuleBot: A Novel Compact Hybrid Aerial-Ground Robot with Two
Actuated-wheel-rotors
|
7 pages, 10 figures, submitted to 2024 IEEE International Conference
on Robotics and Automation (ICRA). This work has been submitted to the IEEE
for possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the design, modeling, and experimental validation of
CapsuleBot, a compact hybrid aerial-ground vehicle designed for long-term
covert reconnaissance. CapsuleBot combines the manoeuvrability of a bicopter in
the air with the energy efficiency and noise reduction of ground vehicles on
the ground. To accomplish this, a structure named actuated-wheel-rotor has been
designed, utilizing a sole motor for both the unilateral rotor tilting in the
bicopter configuration and the wheel movement in ground mode. CapsuleBot comes
equipped with two of these structures, enabling it to attain hybrid
aerial-ground propulsion with just four motors. Importantly, the decoupling of
motion modes is achieved without the need for additional drivers, enhancing the
versatility and robustness of the system. Furthermore, we have designed the
full dynamics and control for aerial and ground locomotion based on the
bicopter model and the two-wheeled self-balancing vehicle model. The
performance of CapsuleBot has been validated through experiments. The results
demonstrate that CapsuleBot produces 40.53% less noise in ground mode and
consumes 99.35% less energy, highlighting its potential for long-term covert
reconnaissance applications.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 09:34:00 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zheng",
"Zhi",
""
],
[
"Cai",
"Qifeng",
""
],
[
"Xu",
"Xinhang",
""
],
[
"Cao",
"Muqing",
""
],
[
"Yu",
"Huan",
""
],
[
"Li",
"Jihao",
""
],
[
"Lu",
"Guodong",
""
],
[
"Wang",
"Jin",
""
]
] |
new_dataset
| 0.999696 |
2309.09228
|
Nikola Jedli\v{c}kov\'a
|
Nikola Jedli\v{c}kov\'a, Jan Kratochv\'il
|
Hamiltonian path and Hamiltonian cycle are solvable in polynomial time
in graphs of bounded independence number
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Hamiltonian path (a Hamiltonian cycle) in a graph is a path (a cycle,
respectively) that traverses all of its vertices. The problems of deciding
their existence in an input graph are well known to be NP-complete; in fact,
they are among the first problems shown to be computationally hard when the
theory of NP-completeness was being developed. A lot of research has been
devoted to the complexity of Hamiltonian path and Hamiltonian cycle problems
for special graph classes, yet only a handful of positive results are known.
The complexities of both of these problems have been open even for $4K_1$-free
graphs, i.e., graphs of independence number at most $3$. We answer this
question in the general setting of graphs of bounded independence number.
We also consider a newly introduced problem called
\emph{Hamiltonian-$\ell$-Linkage} which is related to the notions of a path
cover and of a linkage in a graph. This problem asks whether $\ell$ given pairs
of vertices in an input graph can be connected by disjoint paths that altogether
traverse all vertices of the graph. For $\ell=1$, Hamiltonian-1-Linkage asks
for existence of a Hamiltonian path connecting a given pair of vertices. Our
main result reads that for every pair of integers $k$ and $\ell$, the
Hamiltonian-$\ell$-Linkage problem is polynomial time solvable for graphs of
independence number not exceeding $k$. We further complement this general
polynomial time algorithm by a structural description of obstacles to
Hamiltonicity in graphs of independence number at most $k$ for small values of
$k$.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 09:59:47 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Jedličková",
"Nikola",
""
],
[
"Kratochvíl",
"Jan",
""
]
] |
new_dataset
| 0.98249 |
2309.09249
|
Qingmao Wei
|
Qingmao Wei, Bi Zeng, Jianqi Liu, Li He, Guotian Zeng
|
LiteTrack: Layer Pruning with Asynchronous Feature Extraction for
Lightweight and Efficient Visual Tracking
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The recent advancements in transformer-based visual trackers have led to
significant progress, attributed to their strong modeling capabilities.
However, as performance improves, running latency correspondingly increases,
presenting a challenge for real-time robotics applications, especially on edge
devices with computational constraints. In response to this, we introduce
LiteTrack, an efficient transformer-based tracking model optimized for
high-speed operations across various devices. It achieves a more favorable
trade-off between accuracy and efficiency than the other lightweight trackers.
The main innovations of LiteTrack encompass: 1) asynchronous feature extraction
and interaction between the template and search region for better feature
fusion and reduced redundant computation, and 2) pruning encoder layers from a
heavy tracker to refine the balance between performance and speed. As an
example, our fastest variant, LiteTrack-B4, achieves 65.2% AO on the GOT-10k
benchmark, surpassing all preceding efficient trackers, while running over 100
fps with ONNX on the Jetson Orin NX edge device. Moreover, our LiteTrack-B9
reaches competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and
operates at 171 fps on an NVIDIA 2080Ti GPU. The code and demo materials will
be available at https://github.com/TsingWei/LiteTrack.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 12:01:03 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wei",
"Qingmao",
""
],
[
"Zeng",
"Bi",
""
],
[
"Liu",
"Jianqi",
""
],
[
"He",
"Li",
""
],
[
"Zeng",
"Guotian",
""
]
] |
new_dataset
| 0.979071 |
2309.09276
|
Ke Yang
|
Junjie Zhu, Yiying Li, Chunping Qiu, Ke Yang, Naiyang Guan, Xiaodong
Yi
|
MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene
Classification
|
SUBMIT TO IEEE TRANSACTIONS
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformer (ViT) models have recently emerged as powerful and
versatile models for various visual tasks. Recently, a work called PMF has
achieved promising results in few-shot image classification by utilizing
pre-trained vision transformer models. However, PMF employs full fine-tuning
for learning the downstream tasks, leading to significant overfitting and
storage issues, especially in the remote sensing domain. In order to tackle
these issues, we turn to the recently proposed parameter-efficient tuning
methods, such as VPT, which updates only the newly added prompt parameters
while keeping the pre-trained backbone frozen. Inspired by VPT, we propose the
Meta Visual Prompt Tuning (MVP) method. Specifically, we integrate the VPT
method into the meta-learning framework and tailor it to the remote sensing
domain, resulting in an efficient framework for Few-Shot Remote Sensing Scene
Classification (FS-RSSC). Furthermore, we introduce a novel data augmentation
strategy based on patch embedding recombination to enhance the representation
and diversity of scenes for classification purposes. Experiment results on the
FS-RSSC benchmark demonstrate the superior performance of the proposed MVP over
existing methods in various settings, such as various-way-various-shot,
various-way-one-shot, and cross-domain adaptation.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 13:51:05 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhu",
"Junjie",
""
],
[
"Li",
"Yiying",
""
],
[
"Qiu",
"Chunping",
""
],
[
"Yang",
"Ke",
""
],
[
"Guan",
"Naiyang",
""
],
[
"Yi",
"Xiaodong",
""
]
] |
new_dataset
| 0.989922 |
2309.09291
|
Sidhartha Agrawal
|
Sidhartha Agrawal (1), Reto Achermann (1), Margo Seltzer (1) ((1)
University of British Columbia)
|
OSmosis: No more D\'ej\`a vu in OS isolation
|
6 pages, 1 figure
| null | null | null |
cs.CR cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
Operating systems provide an abstraction layer between the hardware and
higher-level software. Many abstractions, such as threads, processes,
containers, and virtual machines, are mechanisms to provide isolation. New
application scenarios frequently introduce new isolation mechanisms.
Implementing each isolation mechanism as an independent abstraction makes it
difficult to reason about the state and resources shared among different tasks,
leading to security vulnerabilities and performance interference. We present
OSmosis, an isolation model that expresses the precise level of resource
sharing, a framework in which to implement isolation mechanisms based on the
model, and an implementation of the framework on seL4. The OSmosis model lets
the user determine the degree of isolation guarantee that they need from the
system. This determination empowers developers to make informed decisions about
isolation and performance trade-offs, and the framework enables them to create
mechanisms with the desired degree of isolation.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 14:58:33 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Agrawal",
"Sidhartha",
""
],
[
"Achermann",
"Reto",
""
],
[
"Seltzer",
"Margo",
""
]
] |
new_dataset
| 0.963978 |
2309.09294
|
Yihao Zhi
|
Yihao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang,
Shenghua Gao
|
LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Gestures are non-verbal but important behaviors accompanying people's speech.
While previous methods are able to generate speech rhythm-synchronized
gestures, the semantic context of the speech is generally lacking in the
gesticulations. Although semantic gestures do not occur very regularly in human
speech, they are indeed the key for the audience to understand the speech
context in a more immersive environment. Hence, we introduce LivelySpeaker, a
framework that realizes semantics-aware co-speech gesture generation and offers
several control handles. In particular, our method decouples the task into two
stages: script-based gesture generation and audio-guided rhythm refinement.
Specifically, the script-based gesture generation leverages the pre-trained
CLIP text embeddings as the guidance for generating gestures that are highly
semantically aligned with the script. Then, we devise a simple but effective
diffusion-based gesture generation backbone simply using pure MLPs, that is
conditioned on only audio signals and learns to gesticulate with realistic
motions. We utilize such powerful prior to rhyme the script-guided gestures
with the audio signals, notably in a zero-shot setting. Our novel two-stage
generation framework also enables several applications, such as changing the
gesticulation style, editing the co-speech gestures via textual prompting, and
controlling the semantic awareness and rhythm alignment with guided diffusion.
Extensive experiments demonstrate the advantages of the proposed framework over
competing methods. In addition, our core diffusion-based generative model also
achieves state-of-the-art performance on two benchmarks. The code and model
will be released to facilitate future research.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 15:06:11 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhi",
"Yihao",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Chen",
"Xuelin",
""
],
[
"Shen",
"Xi",
""
],
[
"Guo",
"Wen",
""
],
[
"Huang",
"Shaoli",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.979656 |
2309.09295
|
Saimouli Katragadda
|
Saimouli Katragadda, Woosik Lee, Yuxiang Peng, Patrick Geneva, Chuchu
Chen, Chao Guo, Mingyang Li, Guoquan Huang
|
NeRF-VINS: A Real-time Neural Radiance Field Map-based Visual-Inertial
Navigation System
|
6 pages, 7 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Achieving accurate, efficient, and consistent localization within an a priori
environment map remains a fundamental challenge in robotics and computer
vision. Conventional map-based keyframe localization often suffers from
sub-optimal viewpoints due to limited field of view (FOV), thus degrading its
performance. To address this issue, in this paper, we design a real-time
tightly-coupled Neural Radiance Fields (NeRF)-aided visual-inertial navigation
system (VINS), termed NeRF-VINS. By effectively leveraging NeRF's potential to
synthesize novel views, essential for addressing limited viewpoints, the
proposed NeRF-VINS optimally fuses IMU and monocular image measurements along
with synthetically rendered images within an efficient filter-based framework.
This tightly coupled integration enables 3D motion tracking with bounded error.
We extensively compare the proposed NeRF-VINS against the state-of-the-art
methods that use prior map information, which is shown to achieve superior
performance. We also demonstrate the proposed method is able to perform
real-time estimation at 15 Hz, on a resource-constrained Jetson AGX Orin
embedded platform with impressive accuracy.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 15:06:12 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Katragadda",
"Saimouli",
""
],
[
"Lee",
"Woosik",
""
],
[
"Peng",
"Yuxiang",
""
],
[
"Geneva",
"Patrick",
""
],
[
"Chen",
"Chuchu",
""
],
[
"Guo",
"Chao",
""
],
[
"Li",
"Mingyang",
""
],
[
"Huang",
"Guoquan",
""
]
] |
new_dataset
| 0.954526 |
2309.09314
|
Deok-Kyeong Jang
|
Deok-Kyeong Jang, Dongseok Yang, Deok-Yun Jang, Byeoli Choi, Taeil
Jin, and Sung-Hee Lee
|
MOVIN: Real-time Motion Capture using a Single LiDAR
| null |
Computer Graphics Forum 2023, presented at Pacific Graphics 2023
| null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in technology have brought forth new forms of interactive
applications, such as the social metaverse, where end users interact with each
other through their virtual avatars. In such applications, precise full-body
tracking is essential for an immersive experience and a sense of embodiment
with the virtual avatar. However, current motion capture systems are not easily
accessible to end users due to their high cost, the requirement for special
skills to operate them, or the discomfort associated with wearable devices. In
this paper, we present MOVIN, the data-driven generative method for real-time
motion capture with global tracking, using a single LiDAR sensor. Our
autoregressive conditional variational autoencoder (CVAE) model learns the
distribution of pose variations conditioned on the given 3D point cloud from
LiDAR. As a central factor for high-accuracy motion capture, we propose a novel
feature encoder to learn the correlation between the historical 3D point cloud
data and global, local pose features, resulting in effective learning of the
pose prior. Global pose features include root translation, rotation, and foot
contacts, while local features comprise joint positions and rotations.
Subsequently, a pose generator takes into account the sampled latent variable
along with the features from the previous frame to generate a plausible current
pose. Our framework accurately predicts the performer's 3D global information
and local joint details while effectively considering temporally coherent
movements across frames. We demonstrate the effectiveness of our architecture
through quantitative and qualitative evaluations, comparing it against
state-of-the-art methods. Additionally, we implement a real-time application to
showcase our method in real-world scenarios. MOVIN dataset is available at
\url{https://movin3d.github.io/movin_pg2023/}.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 16:04:15 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Jang",
"Deok-Kyeong",
""
],
[
"Yang",
"Dongseok",
""
],
[
"Jang",
"Deok-Yun",
""
],
[
"Choi",
"Byeoli",
""
],
[
"Jin",
"Taeil",
""
],
[
"Lee",
"Sung-Hee",
""
]
] |
new_dataset
| 0.988453 |
2309.09326
|
Brenda Nogueira
|
Brenda Nogueira, Gui M. Menezes, Nuno Moniz
|
Experiential-Informed Data Reconstruction for Fishery Sustainability and
Policies in the Azores
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Fishery analysis is critical in maintaining the long-term sustainability of
species and the livelihoods of millions of people who depend on fishing for
food and income. The fishing gear, or metier, is a key factor significantly
impacting marine habitats, selectively targeting species and fish sizes.
Analysis of commercial catches or landings by metier in fishery stock
assessment and management is crucial, providing robust estimates of fishing
efforts and their impact on marine ecosystems. In this paper, we focus on a
unique data set from the Azores' fishing data collection programs between 2010
and 2017, where little information on metiers is available, and what exists is
sparse throughout our timeline. Our main objective is to tackle the task of data set
reconstruction, leveraging domain knowledge and machine learning methods to
retrieve or associate metier-related information to each fish landing. We
empirically validate the feasibility of this task using a diverse set of
modeling approaches and demonstrate how it provides new insights into different
fisheries' behavior and the impact of metiers over time, which are essential
for future fish population assessments, management, and conservation efforts.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 17:17:38 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nogueira",
"Brenda",
""
],
[
"Menezes",
"Gui M.",
""
],
[
"Moniz",
"Nuno",
""
]
] |
new_dataset
| 0.971864 |
2309.09328
|
Nikhil Chowdary Paleti
|
Paleti Nikhil Chowdary, Gorantla V N S L Vishnu Vardhan, Menta Sai
Akshay, Menta Sai Aashish, Vadlapudi Sai Aravind, Garapati Venkata Krishna
Rayalu, Aswathy P
|
Enhancing Knee Osteoarthritis severity level classification using
diffusion augmented images
|
Paper has been accepted to be presented at ICACECS 2023 and the final
version will be published by Atlantis Highlights in Computer Science (AHCS) ,
Atlantis Press(part of Springer Nature)
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This research paper explores the classification of knee osteoarthritis (OA)
severity levels using advanced computer vision models and augmentation
techniques. The study investigates the effectiveness of data preprocessing,
including Contrast-Limited Adaptive Histogram Equalization (CLAHE), and data
augmentation using diffusion models. Three experiments were conducted: training
models on the original dataset, training models on the preprocessed dataset,
and training models on the augmented dataset. The results show that data
preprocessing and augmentation significantly improve the accuracy of the
models. The EfficientNetB3 model achieved the highest accuracy of 84\% on the
augmented dataset. Additionally, attention visualization techniques, such as
Grad-CAM, are utilized to provide detailed attention maps, enhancing the
understanding and trustworthiness of the models. These findings highlight the
potential of combining advanced models with augmented data and attention
visualization for accurate knee OA severity classification.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 17:22:29 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Chowdary",
"Paleti Nikhil",
""
],
[
"Vardhan",
"Gorantla V N S L Vishnu",
""
],
[
"Akshay",
"Menta Sai",
""
],
[
"Aashish",
"Menta Sai",
""
],
[
"Aravind",
"Vadlapudi Sai",
""
],
[
"Rayalu",
"Garapati Venkata Krishna",
""
],
[
"P",
"Aswathy",
""
]
] |
new_dataset
| 0.989312 |
2309.09332
|
Nikhil Chowdary Paleti
|
Garapati Venkata Krishna Rayalu, Paleti Nikhil Chowdary, Manish
Nadella, Dabbara Harsha, Pingali Sathvika, B.Ganga Gowri
|
A Zigbee Based Cost-Effective Home Monitoring System Using WSN
|
Paper has been presented at ICCCNT 2023 and the final version will be
published in IEEE Digital Library Xplore
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
WSNs are vital in a variety of applications, including environmental
monitoring, industrial process control, and healthcare. A WSN is a network of
spatially scattered, dedicated sensors that monitor and record the physical
conditions of the environment. Significant obstacles to WSN efficiency include
the restricted power and processing capabilities of individual sensor nodes and
the issues with remote and inaccessible deployment sites. By maximising power
utilisation, enhancing network effectiveness, and ensuring adaptability and
durability through dispersed and decentralised operation, this study suggests a
comprehensive approach to dealing with these challenges. The suggested
methodology involves data compression, aggregation, and energy-efficient
protocols. Using these techniques, WSN lifetimes can be increased and overall
performance can be improved. In this study, we also provide methods to collect
data generated by several nodes in the WSN and store it in a remote cloud such
that it can be processed and analyzed whenever it is required.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 17:42:15 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Rayalu",
"Garapati Venkata Krishna",
""
],
[
"Chowdary",
"Paleti Nikhil",
""
],
[
"Nadella",
"Manish",
""
],
[
"Harsha",
"Dabbara",
""
],
[
"Sathvika",
"Pingali",
""
],
[
"Gowri",
"B. Ganga",
""
]
] |
new_dataset
| 0.99715 |
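Illustrative sketch for record 2309.09332 above (not from the paper): the in-network aggregation its abstract mentions typically cuts radio traffic by transmitting one summary per window of raw readings instead of every sample. The window size and summary statistics below are assumptions.

def aggregate_window(readings, window=10):
    # Collapse each window of raw sensor readings into (min, max, mean),
    # trading temporal resolution for far fewer transmitted bytes.
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append((min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return summaries

raw = [21.4, 21.5, 21.5, 21.6, 21.7, 21.7, 21.8, 21.8, 21.9, 22.0, 22.1, 22.1]
print(aggregate_window(raw, window=6))  # two (min, max, mean) tuples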
2309.09393
|
Ben Burgess-Limerick
|
Ben Burgess-Limerick, Jesse Haviland, Chris Lehnert, Peter Corke
|
Reactive Base Control for On-The-Move Mobile Manipulation in Dynamic
Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a reactive base control method that enables high performance
mobile manipulation on-the-move in environments with static and dynamic
obstacles. Performing manipulation tasks while the mobile base remains in
motion can significantly decrease the time required to perform multi-step
tasks, as well as improve the gracefulness of the robot's motion. Existing
approaches to manipulation on-the-move either ignore the obstacle avoidance
problem or rely on the execution of planned trajectories, which is not suitable
in environments with dynamic objects and obstacles. The presented controller
addresses both of these deficiencies and demonstrates robust performance of
pick-and-place tasks in dynamic environments. The performance is evaluated on
several simulated and real-world tasks. On a real-world task with static
obstacles, we outperform an existing method by 48\% in terms of total task
time. Further, we present real-world examples of our robot performing
manipulation tasks on-the-move while avoiding a second autonomous robot in the
workspace. See https://benburgesslimerick.github.io/MotM-BaseControl for
supplementary materials.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 23:04:34 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Burgess-Limerick",
"Ben",
""
],
[
"Haviland",
"Jesse",
""
],
[
"Lehnert",
"Chris",
""
],
[
"Corke",
"Peter",
""
]
] |
new_dataset
| 0.996508 |
2309.09400
|
Thien Nguyen
|
Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung
Ngo, Franck Dernoncourt, Ryan A. Rossi and Thien Huu Nguyen
|
CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large
Language Models in 167 Languages
|
Ongoing Work
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The driving factors behind the development of large language models (LLMs)
with impressive learning capabilities are their colossal model sizes and
extensive training datasets. Along with the progress in natural language
processing, LLMs have been frequently made accessible to the public to foster
deeper investigation and applications. However, when it comes to training
datasets for these LLMs, especially the recent state-of-the-art models, they
are often not fully disclosed. Creating training data for high-performing LLMs
involves extensive cleaning and deduplication to ensure the necessary level of
quality. The lack of transparency for training data has thus hampered research
on attributing and addressing hallucination and bias issues in LLMs, hindering
replication efforts and further advancements in the community. These challenges
become even more pronounced in multilingual learning scenarios, where the
available multilingual text datasets are often inadequately collected and
cleaned. Consequently, there is a lack of open-source and readily usable
datasets to effectively train LLMs in multiple languages. To overcome this
issue, we present CulturaX, a substantial multilingual dataset with 6.3
trillion tokens in 167 languages, tailored for LLM development. Our dataset
undergoes meticulous cleaning and deduplication through a rigorous pipeline of
multiple stages to accomplish the best quality for model training, including
language identification, URL-based filtering, metric-based cleaning, document
refinement, and data deduplication. CulturaX is fully released to the public in
HuggingFace to facilitate research and advancements in multilingual LLMs:
https://huggingface.co/datasets/uonlp/CulturaX.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 23:49:10 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nguyen",
"Thuat",
""
],
[
"Van Nguyen",
"Chien",
""
],
[
"Lai",
"Viet Dac",
""
],
[
"Man",
"Hieu",
""
],
[
"Ngo",
"Nghia Trung",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
new_dataset
| 0.9996 |
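Illustrative sketch for record 2309.09400 above: its abstract states that CulturaX is released on Hugging Face, so a natural first step is streaming a language subset with the datasets library. The config name "en", the split name, and the "text" field are assumptions to be checked against the dataset card at https://huggingface.co/datasets/uonlp/CulturaX.

# Stream a subset rather than downloading the full 6.3-trillion-token corpus.
from datasets import load_dataset

culturax_en = load_dataset("uonlp/CulturaX", "en", streaming=True, split="train")

for i, doc in enumerate(culturax_en):
    print(doc.get("text", "")[:200])  # "text" field name is an assumption
    if i >= 2:
        break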
2309.09441
|
Hossein Jamali
|
Hossein Jamali, Ponkoj Chandra Shill, David Feil-Seifer, Frederick C.
Harris, Jr., Sergiu M. Dascalu
|
A Schedule of Duties in the Cloud Space Using a Modified Salp Swarm
Algorithm
|
15 pages, 6 figures, 2023 IFIP International Internet of Things
Conference. Dallas-Fort Worth Metroplex, Texas, USA
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud computing is a concept introduced in the information technology era,
with the main components being the grid, distributed, and valuable computing.
The cloud is being developed continuously and, naturally, comes up with many
challenges, one of which is scheduling. A schedule or timeline is a mechanism
used to optimize the time for performing a duty or set of duties. A scheduling
process is accountable for choosing the best resources for performing a duty.
The main goal of a scheduling algorithm is to improve the efficiency and
quality of the service while at the same time ensuring the acceptability and
effectiveness of the targets. The task scheduling problem is one of the most
important NP-hard issues in the cloud domain and, so far, many techniques have
been proposed as solutions, including using genetic algorithms (GAs), particle
swarm optimization (PSO), and ant colony optimization (ACO). To address this
problem, in this paper, one of the collective intelligence algorithms, called
the Salp Swarm Algorithm (SSA), has been expanded, improved, and applied. The
performance of the proposed algorithm has been compared with that of GAs, PSO,
continuous ACO, and the basic SSA. The results show that our algorithm has
generally higher performance than the other algorithms. For example, compared
to the basic SSA, the proposed method has an average reduction of approximately
21% in makespan.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 02:48:41 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Jamali",
"Hossein",
""
],
[
"Shill",
"Ponkoj Chandra",
""
],
[
"Feil-Seifer",
"David",
""
],
[
"Harris,",
"Frederick C.",
"Jr."
],
[
"Dascalu",
"Sergiu M.",
""
]
] |
new_dataset
| 0.999053 |
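Illustrative sketch for record 2309.09441 above (not from the paper): the makespan objective its abstract reports improving is simply the completion time of the busiest resource under a task-to-resource assignment. Task lengths and VM speeds below are made up.

def makespan(task_lengths, speeds, assignment):
    # assignment[i] is the index of the VM that runs task i; a VM's finish
    # time is the sum of its tasks' execution times, and the makespan is the
    # maximum finish time over all VMs.
    finish = [0.0] * len(speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / speeds[vm]
    return max(finish)

lengths = [8, 3, 5, 6, 2]   # hypothetical task sizes (e.g., million instructions)
speeds = [4.0, 2.0]         # hypothetical VM processing rates
print(makespan(lengths, speeds, [0, 1, 0, 1, 0]))  # 4.5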
2309.09456
|
Chenming Zhu
|
Chenming Zhu, Wenwei Zhang, Tai Wang, Xihui Liu and Kai Chen
|
Object2Scene: Putting Objects in Context for Open-Vocabulary 3D
Detection
|
17 pages, 7 figures, 9 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud-based open-vocabulary 3D object detection aims to detect 3D
categories that do not have ground-truth annotations in the training set. It is
extremely challenging because of the limited data and annotations (bounding
boxes with class labels or text descriptions) of 3D scenes. Previous approaches
leverage large-scale richly-annotated image datasets as a bridge between 3D and
category semantics but require an extra alignment process between 2D images and
3D points, limiting the open-vocabulary ability of 3D detectors. Instead of
leveraging 2D images, we propose Object2Scene, the first approach that
leverages large-scale large-vocabulary 3D object datasets to augment existing
3D scene datasets for open-vocabulary 3D object detection. Object2Scene inserts
objects from different sources into 3D scenes to enrich the vocabulary of 3D
scene datasets and generates text descriptions for the newly inserted objects.
We further introduce a framework that unifies 3D detection and visual
grounding, named L3Det, and propose a cross-domain category-level contrastive
learning approach to mitigate the domain gap between 3D objects from different
datasets. Extensive experiments on existing open-vocabulary 3D object detection
benchmarks show that Object2Scene obtains superior performance over existing
methods. We further verify the effectiveness of Object2Scene on a new benchmark
OV-ScanNet-200, by holding out all rare categories as novel categories not seen
during training.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 03:31:53 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhu",
"Chenming",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Wang",
"Tai",
""
],
[
"Liu",
"Xihui",
""
],
[
"Chen",
"Kai",
""
]
] |
new_dataset
| 0.99985 |
2309.09518
|
Arturo Miguel Russell Bernal
|
Arturo Miguel Russell Bernal, Walter Scheirer, Jane Cleland-Huang
|
NOMAD: A Natural, Occluded, Multi-scale Aerial Dataset, for Emergency
Response Scenarios
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing reliance on small Unmanned Aerial Systems (sUAS) for
Emergency Response Scenarios, such as Search and Rescue, the integration of
computer vision capabilities has become a key factor in mission success.
Nevertheless, computer vision performance for detecting humans severely
degrades when shifting from ground to aerial views. Several aerial datasets
have been created to mitigate this problem, however, none of them has
specifically addressed the issue of occlusion, a critical component in
Emergency Response Scenarios. Natural Occluded Multi-scale Aerial Dataset
(NOMAD) presents a benchmark for human detection under occluded aerial views,
with five different aerial distances and rich imagery variance. NOMAD is
composed of 100 different Actors, all performing sequences of walking, laying
and hiding. It includes 42,825 frames, extracted from 5.4k resolution videos,
and manually annotated with a bounding box and a label describing 10 different
visibility levels, categorized according to the percentage of the human body
visible inside the bounding box. This allows computer vision models to be
evaluated on their detection performance across different ranges of occlusion.
NOMAD is designed to improve the effectiveness of aerial search and rescue and
to enhance collaboration between sUAS and humans, by providing a new benchmark
dataset for human detection under occluded aerial views.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 06:57:00 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Bernal",
"Arturo Miguel Russell",
""
],
[
"Scheirer",
"Walter",
""
],
[
"Cleland-Huang",
"Jane",
""
]
] |
new_dataset
| 0.999538 |
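Illustrative sketch for record 2309.09518 above: the abstract describes 10 visibility levels derived from the percentage of the body visible inside the bounding box, but does not give the bin edges; the uniform binning below is therefore an assumption, not NOMAD's actual labeling scheme.

def visibility_level(fraction_visible, num_levels=10):
    # Map a fraction in [0, 1] of the body visible inside the bounding box
    # to an ordinal level in {1, ..., num_levels}; bin edges assumed uniform.
    if not 0.0 <= fraction_visible <= 1.0:
        raise ValueError("fraction_visible must lie in [0, 1]")
    return min(num_levels, int(fraction_visible * num_levels) + 1)

print(visibility_level(0.05))  # 1  (heavily occluded)
print(visibility_level(0.95))  # 10 (almost fully visible)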
2309.09556
|
Xuechao Zhang
|
Xuechao Zhang, Dong Wang, Sun Han, Weichuang Li, Bin Zhao, Zhigang
Wang, Xiaoming Duan, Chongrong Fang, Xuelong Li, Jianping He
|
Affordance-Driven Next-Best-View Planning for Robotic Grasping
|
Conference on Robot Learning (CoRL) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grasping occluded objects in cluttered environments is an essential component
in complex robotic manipulation tasks. In this paper, we introduce an
AffordanCE-driven Next-Best-View planning policy (ACE-NBV) that tries to find a
feasible grasp for the target object by continuously observing scenes from new
viewpoints. This policy is motivated by the observation that the grasp
affordances of an occluded object can be measured better from the view whose
direction coincides with the grasp view. Specifically, our method
leverages the paradigm of novel view imagery to predict the grasp affordances
under previously unobserved views, and selects the next observation view based on the
gain of the highest imagined grasp quality of the target object. The
experimental results in simulation and on the real robot demonstrate the
effectiveness of the proposed affordance-driven next-best-view planning policy.
Additional results, code, and videos of real robot experiments can be found in
the supplementary materials.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 08:09:52 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Zhang",
"Xuechao",
""
],
[
"Wang",
"Dong",
""
],
[
"Han",
"Sun",
""
],
[
"Li",
"Weichuang",
""
],
[
"Zhao",
"Bin",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Duan",
"Xiaoming",
""
],
[
"Fang",
"Chongrong",
""
],
[
"Li",
"Xuelong",
""
],
[
"He",
"Jianping",
""
]
] |
new_dataset
| 0.96915 |
2309.09623
|
Shansong Liu
|
Shansong Liu, Xu Li, Dian Li, Ying Shan
|
HumTrans: A Novel Open-Source Dataset for Humming Melody Transcription
and Beyond
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the HumTrans dataset, which is publicly available and
primarily designed for humming melody transcription. The dataset can also serve
as a foundation for downstream tasks such as humming melody based music
generation. It consists of 500 musical compositions of different genres and
languages, with each composition divided into multiple segments. In total, the
dataset comprises 1000 music segments. To collect this humming dataset, we
employed 10 college students, all of whom are either music majors or proficient
in playing at least one musical instrument. Each of them hummed every segment
twice using the web recording interface provided by our designed website. The
humming recordings were sampled at a frequency of 44,100 Hz. During the humming
session, the main interface provides a musical score for students to reference,
with the melody audio playing simultaneously to aid in capturing both melody
and rhythm. The dataset encompasses approximately 56.22 hours of audio, making
it the largest known humming dataset to date. The dataset will be released on
Hugging Face, and we will provide a GitHub repository containing baseline
results and evaluation codes.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 09:52:54 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Liu",
"Shansong",
""
],
[
"Li",
"Xu",
""
],
[
"Li",
"Dian",
""
],
[
"Shan",
"Ying",
""
]
] |
new_dataset
| 0.999925 |
2309.09642
|
Ozdemir Can Kara
|
Ozdemir Can Kara, Jiaqi Xue, Nethra Venkatayogi, Tarunraj G. Mohanraj,
Yuki Hirata, Naruhiko Ikoma, S. Farokh Atashzar, Farshid Alambeigi
|
A Smart Handheld Edge Device for On-Site Diagnosis and Classification of
Texture and Stiffness of Excised Colorectal Cancer Polyps
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper proposes a smart handheld textural sensing medical device with
complementary Machine Learning (ML) algorithms to enable on-site Colorectal
Cancer (CRC) polyp diagnosis and pathology of excised tumors. The proposed
handheld edge device benefits from a unique tactile sensing module and a
dual-stage machine learning algorithm (composed of a dilated residual network
and a t-SNE engine) for polyp type and stiffness characterization. Solely
utilizing the occlusion-free, illumination-resilient textural images captured
by the proposed tactile sensor, the framework is able to sensitively and
reliably identify the type and stage of CRC polyps by classifying their texture
and stiffness, respectively. Moreover, the proposed handheld medical edge
device benefits from internet connectivity for enabling remote digital
pathology (boosting the diagnosis in operating rooms and promoting
accessibility and equity in medical diagnosis).
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 10:23:59 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Kara",
"Ozdemir Can",
""
],
[
"Xue",
"Jiaqi",
""
],
[
"Venkatayogi",
"Nethra",
""
],
[
"Mohanraj",
"Tarunraj G.",
""
],
[
"Hirata",
"Yuki",
""
],
[
"Ikoma",
"Naruhiko",
""
],
[
"Atashzar",
"S. Farokh",
""
],
[
"Alambeigi",
"Farshid",
""
]
] |
new_dataset
| 0.997662 |
2309.09646
|
Alexis W.M. Devillard
|
Alexis W.M. Devillard, Aruna Ramasamy, Damien Faux, Vincent Hayward,
Etienne Burdet
|
Concurrent Haptic, Audio, and Visual Data Set During Bare Finger
Interaction with Textured Surfaces
| null |
2023 IEEE World Haptics Conference (WHC)
|
10.1109/WHC56415.2023.10224372
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Perceptual processes are frequently multi-modal. This is the case of haptic
perception. Data sets of visual and haptic sensory signals have been compiled
in the past, especially when it comes to the exploration of textured surfaces.
These data sets were intended to be used in natural and artificial perception
studies and to provide training data sets for machine learning research. These
data sets were typically acquired with rigid probes or artificial robotic
fingers. Here, we collected visual, auditory, and haptic signals acquired when
a human finger explored textured surfaces. We assessed the data set via machine
learning classification techniques. Interestingly, multi-modal classification
performance could reach 97% when haptic classification was around 80%.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 10:30:27 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Devillard",
"Alexis W. M.",
""
],
[
"Ramasamy",
"Aruna",
""
],
[
"Faux",
"Damien",
""
],
[
"Hayward",
"Vincent",
""
],
[
"Burdet",
"Etienne",
""
]
] |
new_dataset
| 0.991417 |
2309.09671
|
Elisheva Shamash PhD
|
Elisheva Shamash and Zhong Fan
|
Contract Design for V2G Smart Energy Trading
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
The transition to a net zero energy system necessitates development in a
number of directions including developing advanced electricity trading markets.
Due to electricity markets being responsible for a large portion of carbon
emissions, improving the electricity markets' method for determining energy
transactions could have a significant impact on carbon reductions and thus
facilitate this transition. V2X technology can be applied to regulate different
energy markets, and thus reduce costs and carbon emissions by using the
batteries in electric vehicles to store energy during off-peak hours and export
it during peak hours.
We develop a novel contract based on the VCG-mechanism, for exporting and
importing electricity effectively, and show how this mechanism can raise
efficiency, facilitate the development of a sustainable and efficient
electricity market, and bring us nearer to our Net Zero Goal.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 11:19:26 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Shamash",
"Elisheva",
""
],
[
"Fan",
"Zhong",
""
]
] |
new_dataset
| 0.998212 |
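Illustrative sketch for record 2309.09671 above (not the paper's actual contract): the VCG rule it builds on charges each winner the externality imposed on others, which for a single export slot reduces to the familiar second-price outcome. Bidder names and values are hypothetical.

def vcg_single_slot(bids):
    # bids: dict bidder -> value for one export slot. The highest bidder wins
    # and pays the second-highest bid, i.e. the welfare others lose by its presence.
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    payment = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, payment

print(vcg_single_slot({"ev_a": 12.0, "ev_b": 9.5, "ev_c": 7.0}))  # ('ev_a', 9.5)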
2309.09721
|
Zhipeng Xue
|
Zhipeng Xue, Zhipeng Gao, Xing Hu, Shanping Li
|
ACWRecommender: A Tool for Validating Actionable Warnings with Weak
Supervision
|
accepted by ASE2023 Industry Track
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Static analysis tools have gained popularity among developers for finding
potential bugs, but their widespread adoption is hindered by the accompanying
high false alarm rates (up to 90%). To address this challenge, previous studies
proposed the concept of actionable warnings, and apply machine-learning methods
to distinguish actionable warnings from false alarms. Despite these efforts,
our preliminary study suggests that the current methods used to collect
actionable warnings are rather shaky and unreliable, resulting in a large
proportion of invalid actionable warnings. In this work, we mined 68,274
reversions from Top-500 GitHub C repositories to create a substantial actionable
warning dataset and assigned weak labels to each warning's likelihood of being
a real bug. To automatically identify actionable warnings and recommend those
with a high probability of being real bugs (AWHB), we propose a two-stage
framework called ACWRecommender. In the first stage, our tool uses a pre-trained
model, i.e., UniXcoder, to identify actionable warnings from a huge number of
SA tool's reported warnings. In the second stage, we rerank valid actionable
warnings to the top by using weakly supervised learning. Experimental results
showed that our tool outperformed several baselines for actionable warning
detection (in terms of F1-score) and performed better for AWHB recommendation
(in terms of nDCG and MRR). Additionally, we performed an in-the-wild
evaluation in which we manually validated 24 warnings out of 2,197 reported warnings on
10 randomly selected projects, 22 of which were confirmed by developers as real
bugs, demonstrating the practical usage of our tool.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 12:35:28 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Xue",
"Zhipeng",
""
],
[
"Gao",
"Zhipeng",
""
],
[
"Hu",
"Xing",
""
],
[
"Li",
"Shanping",
""
]
] |
new_dataset
| 0.996689 |
2309.09730
|
Meng Han
|
Meng Han, Xiangde Luo, Wenjun Liao, Shichuan Zhang, Shaoting Zhang,
Guotai Wang
|
Scribble-based 3D Multiple Abdominal Organ Segmentation via
Triple-branch Multi-dilated Network with Pixel- and Class-wise Consistency
|
10 pages, 3 figures, MICCAI2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-organ segmentation in abdominal Computed Tomography (CT) images is of
great importance for diagnosis of abdominal lesions and subsequent treatment
planning. Though deep learning based methods have attained high performance,
they rely heavily on large-scale pixel-level annotations that are
time-consuming and labor-intensive to obtain. Due to its low dependency on
annotation, weakly supervised segmentation has attracted great attention.
However, there is still a large performance gap between current
weakly-supervised methods and fully supervised learning, leaving room for
exploration. In this work, we propose a novel 3D framework with two consistency
constraints for scribble-supervised multiple abdominal organ segmentation from
CT. Specifically, we employ a Triple-branch multi-Dilated network (TDNet) with
one encoder and three decoders using different dilation rates to capture
features from different receptive fields that are complementary to each other
to generate high-quality soft pseudo labels. For more stable unsupervised
learning, we use voxel-wise uncertainty to rectify the soft pseudo labels and
then supervise the outputs of each decoder. To further regularize the network,
class relationship information is exploited by encouraging the generated class
affinity matrices to be consistent across different decoders under multi-view
projection. Experiments on the public WORD dataset show that our method
outperforms five existing scribble-supervised methods.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 12:50:58 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Han",
"Meng",
""
],
[
"Luo",
"Xiangde",
""
],
[
"Liao",
"Wenjun",
""
],
[
"Zhang",
"Shichuan",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Wang",
"Guotai",
""
]
] |
new_dataset
| 0.997798 |
2309.09737
|
Fangqiang Ding
|
Zhijun Pan, Fangqiang Ding, Hantao Zhong, Chris Xiaoxuan Lu
|
Moving Object Detection and Tracking with 4D Radar Point Cloud
|
8 pages, 4 figures. Co-first authorship for Zhijun Pan, Fangqiang
Ding and Hantao Zhong
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile autonomy relies on the precise perception of dynamic environments.
Robustly tracking moving objects in 3D world thus plays a pivotal role for
applications like trajectory prediction, obstacle avoidance, and path planning.
While most current methods utilize LiDARs or cameras for Multiple Object
Tracking (MOT), the capabilities of 4D imaging radars remain largely
unexplored. Recognizing the challenges posed by radar noise and point sparsity
in 4D radar data, we introduce RaTrack, an innovative solution tailored for
radar-based tracking. Bypassing the typical reliance on specific object types
and 3D bounding boxes, our method focuses on motion segmentation and
clustering, enriched by a motion estimation module. Evaluated on the
View-of-Delft dataset, RaTrack showcases superior tracking precision of moving
objects, largely surpassing the performance of the state of the art.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 13:02:29 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Pan",
"Zhijun",
""
],
[
"Ding",
"Fangqiang",
""
],
[
"Zhong",
"Hantao",
""
],
[
"Lu",
"Chris Xiaoxuan",
""
]
] |
new_dataset
| 0.975314 |
2309.09775
|
David Beskow
|
Haley Seaward, Jasmine Talley and David Beskow
|
ArxNet Model and Data: Building Social Networks from Image Archives
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
A corresponding explosion in digital images has accompanied the rapid
adoption of mobile technology around the world. People and their activities are
routinely captured in digital image and video files. By their very nature,
these images and videos often portray social and professional connections.
Individuals in the same picture are often connected in some meaningful way. Our
research seeks to identify and model social connections found in images using
modern face detection technology and social network analysis. The proposed
methods are then demonstrated on the public image repository associated with
the 2022 Emmy's Award Presentation.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 13:57:24 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Seaward",
"Haley",
""
],
[
"Talley",
"Jasmine",
""
],
[
"Beskow",
"David",
""
]
] |
new_dataset
| 0.993024 |
2309.09782
|
Xhani Marvin Sa{\ss}
|
Xhani Marvin Sa{\ss}, Thilo Krachenfels, Frederik Dermot Pustelnik,
Jean-Pierre Seifert, Christian Gro{\ss}e, Frank Altmann
|
Modulation to the Rescue: Identifying Sub-Circuitry in the Transistor
Morass for Targeted Analysis
|
6 pages, short paper at ASHES2023
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Physical attacks form one of the most severe threats against secure computing
platforms. Their criticality arises from their corresponding threat model: By,
e.g., passively measuring an integrated circuit's (IC's) environment during a
security-related operation, internal secrets may be disclosed. Furthermore, by
actively disturbing the physical runtime environment of an IC, an adversary can
cause a specific, exploitable misbehavior. The set of physical attacks consists
of techniques that apply either globally or locally. When compared to global
techniques, local techniques exhibit a much higher precision, hence having the
potential to be used in advanced attack scenarios. However, using physical
techniques with additional spatial dependency expands the parameter search
space exponentially. In this work, we present and compare two techniques,
namely laser logic state imaging (LLSI) and lock-in thermography (LIT), that
can be used to discover sub-circuitry of an entirely unknown IC based on
optical and thermal principles. We show that the time required to identify
specific regions can be drastically reduced, thus lowering the complexity of
physical attacks requiring positional information. Our case study on an Intel
H610 Platform Controller Hub showcases that, depending on the targeted voltage
rail, our technique reduces the search space by around 90 to 98 percent.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 13:59:57 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Saß",
"Xhani Marvin",
""
],
[
"Krachenfels",
"Thilo",
""
],
[
"Pustelnik",
"Frederik Dermot",
""
],
[
"Seifert",
"Jean-Pierre",
""
],
[
"Große",
"Christian",
""
],
[
"Altmann",
"Frank",
""
]
] |
new_dataset
| 0.994191 |
2309.09783
|
Nikola Ljube\v{s}i\'c
|
Michal Mochtak, Peter Rupnik, Nikola Ljube\v{s}i\'c
|
The ParlaSent multilingual training dataset for sentiment identification
in parliamentary proceedings
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Sentiments inherently drive politics. How we receive and process information
plays an essential role in political decision-making, shaping our judgment with
strategic consequences both on the level of legislators and the masses. If
sentiment plays such an important role in politics, how can we study and
measure it systematically? The paper presents a new dataset of
sentiment-annotated sentences, which are used in a series of experiments
focused on training a robust sentiment classifier for parliamentary
proceedings. The paper also introduces the first domain-specific LLM for
political science applications additionally pre-trained on 1.72 billion
domain-specific words from proceedings of 27 European parliaments. We present
experiments demonstrating how the additional pre-training of LLM on
parliamentary data can significantly improve the model downstream performance
on the domain-specific tasks, in our case, sentiment detection in parliamentary
proceedings. We further show that multilingual models perform very well on
unseen languages and that additional data from other languages significantly
improves the target parliament's results. The paper makes an important
contribution to multiple domains of social sciences and bridges them with
computer science and computational linguistics. Lastly, it sets up a more
robust approach to sentiment analysis of political texts in general, which
allows scholars to study political sentiment from a comparative perspective
using standardized tools and techniques.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 14:01:06 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Mochtak",
"Michal",
""
],
[
"Rupnik",
"Peter",
""
],
[
"Ljubešić",
"Nikola",
""
]
] |
new_dataset
| 0.97263 |
2309.09786
|
Erik Demaine
|
Erik D. Demaine, Kritkorn Karntikoon, Nipun Pitimanaaree
|
2-Colorable Perfect Matching is NP-complete in 2-Connected 3-Regular
Planar Graphs
|
11 pages, 10 figures
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The 2-colorable perfect matching problem asks whether a graph can be colored
with two colors so that each node has exactly one neighbor with the same color
as itself. We prove that this problem is NP-complete, even when restricted to
2-connected 3-regular planar graphs. In 1978, Schaefer proved that this problem
is NP-complete in general graphs, and claimed without proof that the same
result holds when restricted to 3-regular planar graphs. Thus we fill in the
missing proof of this claim, while simultaneously strengthening to 2-connected
graphs (which implies existence of a perfect matching). We also prove
NP-completeness of $k$-colorable perfect matching, for any fixed $k \geq 2$.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 14:04:07 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Demaine",
"Erik D.",
""
],
[
"Karntikoon",
"Kritkorn",
""
],
[
"Pitimanaaree",
"Nipun",
""
]
] |
new_dataset
| 0.994014 |
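Illustrative sketch for record 2309.09786 above (not from the paper): a brute-force checker for the property its abstract defines, a 2-coloring in which every node has exactly one neighbor of its own color. Feasible only for tiny graphs; the paper's point is that deciding existence is NP-complete even for 2-connected 3-regular planar graphs.

from itertools import product

def is_matching_coloring(adj, coloring):
    # adj: dict node -> set of neighbors; coloring: dict node -> 0 or 1.
    # Every node must have exactly one neighbor sharing its color.
    return all(
        sum(coloring[u] == coloring[v] for u in adj[v]) == 1
        for v in adj
    )

def has_2colorable_perfect_matching(adj):
    nodes = sorted(adj)
    return any(
        is_matching_coloring(adj, dict(zip(nodes, colors)))
        for colors in product((0, 1), repeat=len(nodes))
    )

# A 4-cycle works: color nodes {0, 1} one color and {2, 3} the other, so the
# same-colored edges (0,1) and (2,3) form a perfect matching.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_2colorable_perfect_matching(c4))  # True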
2309.09800
|
Abdelrahman E.M. Abdallah
|
Abdelrahman Abdallah, Mahmoud Abdalla, Mohamed Elkasaby, Yasser
Elbendary, Adam Jatowt
|
AMuRD: Annotated Multilingual Receipts Dataset for Cross-lingual Key
Information Extraction and Classification
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Key information extraction involves recognizing and extracting text from
scanned receipts, enabling retrieval of essential content, and organizing it
into structured documents. This paper presents a novel multilingual dataset for
receipt extraction, addressing key challenges in information extraction and
item classification. The dataset comprises $47,720$ samples, including
annotations for item names, attributes like (price, brand, etc.), and
classification into $44$ product categories. We introduce the InstructLLaMA
approach, achieving an F1 score of $0.76$ and an accuracy of $0.68$ for key
information extraction and item classification. We provide code, datasets, and
checkpoints.\footnote{\url{https://github.com/Update-For-Integrated-Business-AI/AMuRD}}.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 14:18:19 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Abdallah",
"Abdelrahman",
""
],
[
"Abdalla",
"Mahmoud",
""
],
[
"Elkasaby",
"Mohamed",
""
],
[
"Elbendary",
"Yasser",
""
],
[
"Jatowt",
"Adam",
""
]
] |
new_dataset
| 0.999789 |
2309.09818
|
Anh Nguyen
|
An Dinh Vuong, Minh Nhat Vu, Hieu Le, Baoru Huang, Binh Huynh, Thieu
Vo, Andreas Kugi, Anh Nguyen
|
Grasp-Anything: Large-scale Grasp Dataset from Foundation Models
|
Project page: https://grasp-anything-2023.github.io
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Foundation models such as ChatGPT have made significant strides in robotic
tasks due to their universal representation of real-world domains. In this
paper, we leverage foundation models to tackle grasp detection, a persistent
challenge in robotics with broad industrial applications. Despite numerous
grasp datasets, their object diversity remains limited compared to real-world
figures. Fortunately, foundation models possess an extensive repository of
real-world knowledge, including objects we encounter in our daily lives. As a
consequence, a promising solution to the limited representation in previous
grasp datasets is to harness the universal knowledge embedded in these
foundation models. We present Grasp-Anything, a new large-scale grasp dataset
synthesized from foundation models to implement this solution. Grasp-Anything
excels in diversity and magnitude, boasting 1M samples with text descriptions
and more than 3M objects, surpassing prior datasets. Empirically, we show that
Grasp-Anything successfully facilitates zero-shot grasp detection on
vision-based tasks and real-world robotic experiments. Our dataset and code are
available at https://grasp-anything-2023.github.io.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 14:39:26 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Vuong",
"An Dinh",
""
],
[
"Vu",
"Minh Nhat",
""
],
[
"Le",
"Hieu",
""
],
[
"Huang",
"Baoru",
""
],
[
"Huynh",
"Binh",
""
],
[
"Vo",
"Thieu",
""
],
[
"Kugi",
"Andreas",
""
],
[
"Nguyen",
"Anh",
""
]
] |
new_dataset
| 0.999782 |
2309.09867
|
Yunnong Chen
|
Liuqing Chen, Yunnong Chen, Shuhong Xiao, Yaxuan Song, Lingyun Sun,
Yankun Zhen, Tingting Zhou, Yanfang Chang
|
EGFE: End-to-end Grouping of Fragmented Elements in UI Designs with
Multimodal Learning
|
Accepted to 46th International Conference on Software Engineering
(ICSE 2024)
| null |
10.1145/3597503.3623313
| null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
When translating UI design prototypes to code in industry, automatically
generating code from design prototypes can expedite the development of
applications and GUI iterations. However, in design prototypes without strict
design specifications, UI components may be composed of fragmented elements.
Grouping these fragmented elements can greatly improve the readability and
maintainability of the generated code. Current methods employ a two-stage
strategy that introduces hand-crafted rules to group fragmented elements.
Unfortunately, the performance of these methods is not satisfying due to
visually overlapped and tiny UI elements. In this study, we propose EGFE, a
novel method for automatically End-to-end Grouping Fragmented Elements via UI
sequence prediction. To facilitate the UI understanding, we innovatively
construct a Transformer encoder to model the relationship between the UI
elements with multi-modal representation learning. The evaluation on a dataset
of 4606 UI prototypes collected from professional UI designers shows that our
method outperforms the state-of-the-art baselines in the precision (by
29.75\%), recall (by 31.07\%), and F1-score (by 30.39\%) at an edit distance
threshold of 4. In addition, we conduct an empirical study to assess the
improvement of the generated front-end code. The results demonstrate the
effectiveness of our method on a real software engineering application. Our
end-to-end fragmented elements grouping method creates opportunities for
improving UI-related software engineering tasks.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 15:28:12 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Chen",
"Liuqing",
""
],
[
"Chen",
"Yunnong",
""
],
[
"Xiao",
"Shuhong",
""
],
[
"Song",
"Yaxuan",
""
],
[
"Sun",
"Lingyun",
""
],
[
"Zhen",
"Yankun",
""
],
[
"Zhou",
"Tingting",
""
],
[
"Chang",
"Yanfang",
""
]
] |
new_dataset
| 0.997756 |
2309.09875
|
Daniele Cattaneo
|
Abhijeet Nayak, Daniele Cattaneo, Abhinav Valada
|
RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Localization is paramount for autonomous robots. While camera and LiDAR-based
approaches have been extensively investigated, they are affected by adverse
illumination and weather conditions. Therefore, radar sensors have recently
gained attention due to their intrinsic robustness to such conditions. In this
paper, we propose RaLF, a novel deep neural network-based approach for
localizing radar scans in a LiDAR map of the environment, by jointly learning
to address both place recognition and metric localization. RaLF is composed of
radar and LiDAR feature encoders, a place recognition head that generates
global descriptors, and a metric localization head that predicts the 3-DoF
transformation between the radar scan and the map. We tackle the place
recognition task by learning a shared embedding space between the two
modalities via cross-modal metric learning. Additionally, we perform metric
localization by predicting pixel-level flow vectors that align the query radar
scan with the LiDAR map. We extensively evaluate our approach on multiple
real-world driving datasets and show that RaLF achieves state-of-the-art
performance for both place recognition and metric localization. Moreover, we
demonstrate that our approach can effectively generalize to different cities
and sensor setups than the ones used during training. We make the code and
trained models publicly available at http://ralf.cs.uni-freiburg.de.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 15:37:01 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Nayak",
"Abhijeet",
""
],
[
"Cattaneo",
"Daniele",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.991467 |
2309.09879
|
Elia Bonetto
|
Chenghao Xu and Elia Bonetto and Aamir Ahmad
|
DynaPix SLAM: A Pixel-Based Dynamic SLAM Approach
|
19 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In static environments, visual simultaneous localization and mapping (V-SLAM)
methods achieve remarkable performance. However, moving objects severely affect
core modules of such systems like state estimation and loop closure detection.
To address this, dynamic SLAM approaches often use semantic information,
geometric constraints, or optical flow to mask features associated with dynamic
entities. These are limited by various factors such as a dependency on the
quality of the underlying method, poor generalization to unknown or unexpected
moving objects, and often produce noisy results, e.g. by masking static but
movable objects or making use of predefined thresholds. In this paper, to
address these trade-offs, we introduce a novel visual SLAM system, DynaPix,
based on per-pixel motion probability values. Our approach consists of a new
semantic-free probabilistic pixel-wise motion estimation module and an improved
pose optimization process. Our per-pixel motion probability estimation combines
a novel static background differencing method on both images and optical flows
from splatted frames. DynaPix fully integrates those motion probabilities into
both map point selection and weighted bundle adjustment within the tracking and
optimization modules of ORB-SLAM2. We evaluate DynaPix against ORB-SLAM2 and
DynaSLAM on both GRADE and TUM-RGBD datasets, obtaining lower errors and longer
trajectory tracking times. We will release both source code and data upon
acceptance of this work.
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 15:39:19 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Xu",
"Chenghao",
""
],
[
"Bonetto",
"Elia",
""
],
[
"Ahmad",
"Aamir",
""
]
] |
new_dataset
| 0.993319 |
2309.09969
|
Yen-Jen Wang
|
Yen-Jen Wang, Bike Zhang, Jianyu Chen, Koushil Sreenath
|
Prompt a Robot to Walk with Large Language Models
| null | null | null | null |
cs.RO cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) pre-trained on vast internet-scale data have
showcased remarkable capabilities across diverse domains. Recently, there has
been escalating interest in deploying LLMs for robotics, aiming to harness the
power of foundation models in real-world settings. However, this approach faces
significant challenges, particularly in grounding these models in the physical
world and in generating dynamic robot motions. To address these issues, we
introduce a novel paradigm in which we use few-shot prompts collected from the
physical environment, enabling the LLM to autoregressively generate low-level
control commands for robots without task-specific fine-tuning. Experiments
across various robots and environments validate that our method can effectively
prompt a robot to walk. We thus illustrate how LLMs can proficiently function
as low-level feedback controllers for dynamic motion control even in
high-dimensional robotic systems. The project website and source code can be
found at: https://prompt2walk.github.io/ .
|
[
{
"version": "v1",
"created": "Mon, 18 Sep 2023 17:50:17 GMT"
}
] | 2023-09-19T00:00:00 |
[
[
"Wang",
"Yen-Jen",
""
],
[
"Zhang",
"Bike",
""
],
[
"Chen",
"Jianyu",
""
],
[
"Sreenath",
"Koushil",
""
]
] |
new_dataset
| 0.950014 |
2104.00640
|
Max Glockner
|
Max Glockner, Ieva Stali\=unait\.e, James Thorne, Gisela Vallejo,
Andreas Vlachos, Iryna Gurevych
|
AmbiFC: Fact-Checking Ambiguous Claims with Evidence
|
Accepted at TACL; pre-MIT Press publication version
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated fact-checking systems verify claims against evidence to predict
their veracity. In real-world scenarios, the retrieved evidence may not
unambiguously support or refute the claim and yield conflicting but valid
interpretations. Existing fact-checking datasets assume that the models
developed with them predict a single veracity label for each claim, thus
discouraging the handling of such ambiguity. To address this issue we present
AmbiFC, a fact-checking dataset with 10k claims derived from real-world
information needs. It contains fine-grained evidence annotations of 50k
passages from 5k Wikipedia pages. We analyze the disagreements arising from
ambiguity when comparing claims against evidence in AmbiFC, observing a strong
correlation of annotator disagreement with linguistic phenomena such as
underspecification and probabilistic reasoning. We develop models for
predicting veracity handling this ambiguity via soft labels and find that a
pipeline that learns the label distribution for sentence-level evidence
selection and veracity prediction yields the best performance. We compare
models trained on different subsets of AmbiFC and show that models trained on
the ambiguous instances perform better when faced with the identified
linguistic phenomena.
|
[
{
"version": "v1",
"created": "Thu, 1 Apr 2021 17:40:08 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 11:18:24 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 06:41:39 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Glockner",
"Max",
""
],
[
"Staliūnaitė",
"Ieva",
""
],
[
"Thorne",
"James",
""
],
[
"Vallejo",
"Gisela",
""
],
[
"Vlachos",
"Andreas",
""
],
[
"Gurevych",
"Iryna",
""
]
] |
new_dataset
| 0.999824 |
2202.06851
|
Yong-Lu Li
|
Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Zuoyu Qiu, Liang Xu,
Yue Xu, Hao-Shu Fang, Cewu Lu
|
HAKE: A Knowledge Engine Foundation for Human Activity Understanding
|
HAKE 2.0; website: http://hake-mvig.cn/, code:
https://github.com/DirtyHarryLYL/HAKE-Action-Torch/tree/HAKE-Reason
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human activity understanding is of widespread interest in artificial
intelligence and spans diverse applications like health care and behavior
analysis. Although there have been advances in deep learning, it remains
challenging. The object recognition-like solutions usually try to map pixels to
semantics directly, but activity patterns are much different from object
patterns, thus hindering success. In this work, we propose a novel paradigm to
reformulate this task in two stages: first mapping pixels to an intermediate
space spanned by atomic activity primitives, then programming detected
primitives with interpretable logic rules to infer semantics. To afford a
representative primitive space, we build a knowledge base including 26+ M
primitive labels and logic rules from human priors or automatic discovering.
Our framework, the Human Activity Knowledge Engine (HAKE), exhibits superior
generalization ability and performance upon canonical methods on challenging
benchmarks. Code and data are available at http://hake-mvig.cn/.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 16:38:31 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 08:00:19 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Li",
"Yong-Lu",
""
],
[
"Liu",
"Xinpeng",
""
],
[
"Wu",
"Xiaoqian",
""
],
[
"Li",
"Yizhuo",
""
],
[
"Qiu",
"Zuoyu",
""
],
[
"Xu",
"Liang",
""
],
[
"Xu",
"Yue",
""
],
[
"Fang",
"Hao-Shu",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.985723 |
2203.10729
|
Jiaxu Wan
|
JiaXu Wan, Hong Zhang, Jin Zhang, Yuan Ding, Yifan Yang, Yan Li and
Xuliang Li
|
DSRRTracker: Dynamic Search Region Refinement for Attention-based
Siamese Multi-Object Tracking
|
The paper contained some errors in the legends and visualisations,
such as incorrectly using the visualisations of the next-generation model we
studied. We have rewritten our paper on its next-generation model based on
that paper. Since we do not want readers to misunderstand the next-generation
paper due to the errors in this preprint paper, we have decided to withdraw
this preprint paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many multi-object tracking (MOT) methods follow the framework of "tracking by
detection", which associates the target objects-of-interest based on the
detection results. However, due to the separate models for detection and
association, the tracking results are not optimal.Moreover, the speed is
limited by some cumbersome association methods to achieve high tracking
performance. In this work, we propose an end-to-end MOT method, with a Gaussian
filter-inspired dynamic search region refinement module to dynamically filter
and refine the search region by considering both the template information from
the past frames and the detection results from the current frame with little
computational burden, and a lightweight attention-based tracking head to
achieve the effective fine-grained instance association. Extensive experiments
and ablation study on MOT17 and MOT20 datasets demonstrate that our method can
achieve the state-of-the-art performance with reasonable speed.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 04:14:06 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 10:14:34 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Wan",
"JiaXu",
""
],
[
"Zhang",
"Hong",
""
],
[
"Zhang",
"Jin",
""
],
[
"Ding",
"Yuan",
""
],
[
"Yang",
"Yifan",
""
],
[
"Li",
"Yan",
""
],
[
"Li",
"Xuliang",
""
]
] |
new_dataset
| 0.99638 |
2204.11701
|
Maria Bauza
|
Maria Bauza, Antonia Bronars, Alberto Rodriguez
|
Tac2Pose: Tactile Object Pose Estimation from the First Touch
|
Submitted to IJRR, 22 pages + Appendix, 11 figures
| null |
10.1177/02783649231196925
| null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this paper, we present Tac2Pose, an object-specific approach to tactile
pose estimation from the first touch for known objects. Given the object
geometry, we learn a tailored perception model in simulation that estimates a
probability distribution over possible object poses given a tactile
observation. To do so, we simulate the contact shapes that a dense set of
object poses would produce on the sensor. Then, given a new contact shape
obtained from the sensor, we match it against the pre-computed set using an
object-specific embedding learned using contrastive learning. We obtain contact
shapes from the sensor with an object-agnostic calibration step that maps RGB
tactile observations to binary contact shapes. This mapping, which can be
reused across object and sensor instances, is the only step trained with real
sensor data. This results in a perception model that localizes objects from the
first real tactile observation. Importantly, it produces pose distributions and
can incorporate additional pose constraints coming from other perception
systems, contacts, or priors.
We provide quantitative results for 20 objects. Tac2Pose provides high
accuracy pose estimations from distinctive tactile observations while
regressing meaningful pose distributions to account for those contact shapes
that could result from different object poses. We also test Tac2Pose on object
models reconstructed from a 3D scanner, to evaluate the robustness to
uncertainty in the object model. Finally, we demonstrate the advantages of
Tac2Pose compared with three baseline methods for tactile pose estimation:
directly regressing the object pose with a neural network, matching an observed
contact to a set of possible contacts using a standard classification neural
network, and direct pixel comparison of an observed contact with a set of
possible contacts.
Website: http://mcube.mit.edu/research/tac2pose.html
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 14:43:48 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 10:05:41 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 22:52:50 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Bauza",
"Maria",
""
],
[
"Bronars",
"Antonia",
""
],
[
"Rodriguez",
"Alberto",
""
]
] |
new_dataset
| 0.999596 |
2205.11501
|
Yanan Wang
|
Yanan Wang, Michihiro Yasunaga, Hongyu Ren, Shinya Wada, Jure Leskovec
|
VQA-GNN: Reasoning with Multimodal Knowledge via Graph Neural Networks
for Visual Question Answering
|
Accepted at ICCV 2023
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Visual question answering (VQA) requires systems to perform concept-level
reasoning by unifying unstructured (e.g., the context in question and answer;
"QA context") and structured (e.g., knowledge graph for the QA context and
scene; "concept graph") multimodal knowledge. Existing works typically combine
a scene graph and a concept graph of the scene by connecting corresponding
visual nodes and concept nodes, then incorporate the QA context representation
to perform question answering. However, these methods only perform a
unidirectional fusion from unstructured knowledge to structured knowledge,
limiting their potential to capture joint reasoning over the heterogeneous
modalities of knowledge. To perform more expressive reasoning, we propose
VQA-GNN, a new VQA model that performs bidirectional fusion between
unstructured and structured multimodal knowledge to obtain unified knowledge
representations. Specifically, we inter-connect the scene graph and the concept
graph through a super node that represents the QA context, and introduce a new
multimodal GNN technique to perform inter-modal message passing for reasoning
that mitigates representational gaps between modalities. On two challenging VQA
tasks (VCR and GQA), our method outperforms strong baseline VQA methods by 3.2%
on VCR (Q-AR) and 4.6% on GQA, suggesting its strength in performing
concept-level reasoning. Ablation studies further demonstrate the efficacy of
the bidirectional fusion and multimodal GNN method in unifying unstructured and
structured multimodal knowledge.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 17:55:34 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 08:16:01 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Wang",
"Yanan",
""
],
[
"Yasunaga",
"Michihiro",
""
],
[
"Ren",
"Hongyu",
""
],
[
"Wada",
"Shinya",
""
],
[
"Leskovec",
"Jure",
""
]
] |
new_dataset
| 0.979963 |
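The key structural idea in the VQA-GNN abstract above is a super node that joins the scene graph and the concept graph so messages can flow in both directions. A toy sketch of that wiring follows; the mean-neighbor aggregation rule, graph sizes, and residual update are assumptions, not the paper's GNN layer.

# Toy sketch of joining two graphs through a shared "super node" and exchanging
# messages in both directions. The aggregation rule (mean over neighbors) and
# all sizes are assumptions for illustration.
import numpy as np

def message_passing_step(h: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """One round of mean-neighbor aggregation with a residual connection."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return h + (adj @ h) / deg

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_scene, n_concept = 4, 3
    n = n_scene + n_concept + 1                 # + 1 super node for the QA context
    h = rng.normal(size=(n, 16))
    adj = np.zeros((n, n))
    super_idx = n - 1
    adj[super_idx, :super_idx] = 1              # super node links to every node ...
    adj[:super_idx, super_idx] = 1              # ... and every node links back
    adj[0, 1] = adj[1, 0] = 1                   # an edge inside the scene graph
    adj[n_scene, n_scene + 1] = adj[n_scene + 1, n_scene] = 1  # an edge inside the concept graph
    for _ in range(2):                          # bidirectional exchange across both graphs
        h = message_passing_step(h, adj)
    print("updated QA-context (super node) embedding:", h[super_idx][:4].round(3))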
2208.10489
|
Xinrui Yan
|
Xinrui Yan, Jiangyan Yi, Chenglong Wang, Jianhua Tao, Junzuo Zhou, Hao
Gu, Ruibo Fu
|
System Fingerprint Recognition for Deepfake Audio: An Initial Dataset
and Investigation
|
13 pages, 4 figures. Submitted to IEEE Transactions on Audio, Speech and
Language Processing (TASLP). arXiv admin note: text overlap with
arXiv:2208.09646
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid progress of deep speech synthesis models has posed significant
threats to society such as malicious content manipulation. Therefore, many
studies have emerged to detect the so-called deepfake audio. However, existing
works focus on the binary detection of real audio and fake audio. In real-world
scenarios such as model copyright protection and digital evidence forensics, it
is necessary to know which tool or model generated the deepfake audio in order
to explain the decision. This motivates us to ask: Can we recognize the system
fingerprints of deepfake audio? In this paper, we present the first deepfake
audio dataset for system fingerprint recognition (SFR) and conduct an initial
investigation. We collected the dataset from the speech synthesis systems of
seven Chinese vendors that use the latest state-of-the-art deep learning
technologies, including both clean and compressed sets. In addition, to
facilitate the further development of system fingerprint recognition methods,
we provide extensive benchmarks for comparison and report our research findings.
The dataset will be made publicly available.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 05:15:40 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 06:45:50 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2023 07:19:46 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Yan",
"Xinrui",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Wang",
"Chenglong",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Zhou",
"Junzuo",
""
],
[
"Gu",
"Hao",
""
],
[
"Fu",
"Ruibo",
""
]
] |
new_dataset
| 0.988941 |
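System fingerprint recognition, as framed above, is a closed-set multi-class classification problem: given features of a deepfake utterance, predict which of the seven synthesis systems produced it. The sketch below shows that framing on synthetic features with a generic classifier; it is not the benchmark pipeline from the paper.

# Illustrative sketch only: seven "systems" leave slightly different signatures
# in a synthetic feature space (standing in for, e.g., averaged spectral
# statistics), and a generic classifier learns to tell them apart.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_systems, per_system, dim = 7, 50, 40
centers = rng.normal(scale=2.0, size=(n_systems, dim))   # one "fingerprint" per system
X = np.vstack([c + rng.normal(size=(per_system, dim)) for c in centers])
y = np.repeat(np.arange(n_systems), per_system)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))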
2209.06758
|
Jonathan K\"ulz
|
Jonathan K\"ulz, Matthias Mayer, and Matthias Althoff
|
Timor Python: A Toolbox for Industrial Modular Robotics
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modular Reconfigurable Robots (MRRs) represent an exciting path forward for
industrial robotics, opening up new possibilities for robot design. Compared to
monolithic manipulators, they promise greater flexibility, improved
maintainability, and cost-efficiency. However, there is no tool or standardized
way to model and simulate assemblies of modules in the same way it has been
done for robotic manipulators for decades. We introduce the Toolbox for
Industrial Modular Robotics (Timor), a Python toolbox to bridge this gap and
integrate modular robotics into existing simulation and optimization pipelines.
Our open-source library offers model generation and task-based configuration
optimization for MRRs. It can easily be integrated with existing simulation
tools - not least by offering URDF export of arbitrary modular robot
assemblies. Moreover, our experimental study demonstrates the effectiveness of
Timor as a tool for designing modular robots optimized for specific use cases.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 16:20:32 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 13:43:16 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Külz",
"Jonathan",
""
],
[
"Mayer",
"Matthias",
""
],
[
"Althoff",
"Matthias",
""
]
] |
new_dataset
| 0.999352 |
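As a rough illustration of what task-based configuration optimization over a module set can look like, the sketch below enumerates short chains of link modules and keeps the cheapest one whose reach covers a target. It is deliberately generic: the classes, numbers, and brute-force search are made up for illustration and are not Timor's actual API.

# Generic sketch of configuration optimization over a small module set.
# NOT Timor's API; all names and values are hypothetical.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class LinkModule:
    name: str
    length: float   # metres
    cost: float     # arbitrary cost units

MODULES = [LinkModule("short", 0.2, 1.0), LinkModule("long", 0.5, 2.2)]

def best_assembly(target_reach: float, max_modules: int = 4):
    """Brute-force search for the cheapest chain whose total length covers the target."""
    best = None
    for n in range(1, max_modules + 1):
        for chain in product(MODULES, repeat=n):
            reach = sum(m.length for m in chain)
            cost = sum(m.cost for m in chain)
            if reach >= target_reach and (best is None or cost < best[0]):
                best = (cost, chain)
    return best

if __name__ == "__main__":
    cost, chain = best_assembly(0.9)
    print("cheapest assembly:", [m.name for m in chain], "cost:", cost)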
2210.07592
|
Daeun Song
|
Daeun Song, Eunjung Lim, Jiyoon Park, Minjung Jung, Young J. Kim
|
TSP-Bot: Robotic TSP Pen Art using High-DoF Manipulators
|
Submitted to IEEE ICRA 2024
| null | null | null |
cs.RO cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
TSP art is an art form for drawing an image using piecewise-continuous line
segments. This paper presents a robotic pen drawing system capable of creating
complicated TSP pen art on a planar surface using multiple colors. The system
begins by converting a colored raster image into a set of points that represent
the image's tone, which can be controlled by adjusting the point density. Next,
the system finds a piecewise-continuous linear path that visits each point
exactly once, which is equivalent to solving a Traveling Salesman Problem
(TSP). The path is simplified to fewer points using bounded approximation, then
smoothed and optimized using Bezier spline curves with bounded curvature. Our
robotic drawing system, consisting of single or dual manipulators with fingered
grippers and a mobile platform, performs the drawing task by following the
resulting complex and sophisticated path composed of thousands of TSP sites. As
a result, our system can draw a complicated and visually pleasing TSP pen art.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 07:43:55 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Oct 2022 06:50:53 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Mar 2023 08:27:11 GMT"
},
{
"version": "v4",
"created": "Thu, 14 Sep 2023 19:35:06 GMT"
}
] | 2023-09-18T00:00:00 |
[
[
"Song",
"Daeun",
""
],
[
"Lim",
"Eunjung",
""
],
[
"Park",
"Jiyoon",
""
],
[
"Jung",
"Minjung",
""
],
[
"Kim",
"Young J.",
""
]
] |
new_dataset
| 0.999533 |
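The pipeline described in the TSP-Bot abstract above (tone-controlled point sampling, a TSP tour over the points, then curve smoothing) can be illustrated in miniature. The sketch below covers only the first two stages with a nearest-neighbour heuristic; the point count, image size, and heuristic are assumptions, and real TSP art uses far more points and a stronger solver.

# Illustrative sketch: sample points whose density follows image tone, then
# order them into one continuous path with a nearest-neighbour TSP heuristic.
import numpy as np

def sample_points(gray: np.ndarray, n_points: int, rng) -> np.ndarray:
    """Darker pixels receive proportionally more points (tone-controlled density)."""
    weights = (1.0 - gray).ravel()
    weights /= weights.sum()
    idx = rng.choice(gray.size, size=n_points, replace=False, p=weights)
    ys, xs = np.unravel_index(idx, gray.shape)
    return np.stack([xs, ys], axis=1).astype(float)

def nearest_neighbour_tour(pts: np.ndarray) -> list:
    """Greedy tour: always hop to the closest unvisited point."""
    unvisited = set(range(1, len(pts)))
    tour, cur = [0], 0
    while unvisited:
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pts[j] - pts[cur]))
        unvisited.remove(nxt)
        tour.append(nxt)
        cur = nxt
    return tour

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    gray = rng.random((32, 32))            # stand-in for a grayscale input image
    pts = sample_points(gray, 200, rng)
    tour = nearest_neighbour_tour(pts)
    length = sum(np.linalg.norm(pts[tour[i + 1]] - pts[tour[i]]) for i in range(len(tour) - 1))
    print(f"path visits {len(tour)} points, total length {length:.1f} px")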