---
pipeline_tag: reinforcement-learning
tags:
- deep
- reinforcement
- learning
- world
- models
library_name: pytorch
license: gpl-3.0
---

# M<sup>3</sup>: A Modular World Model over Streams of Tokens

📄 [Paper](https://arxiv.org/abs/2502.11537) ▪️ 💾 [Code](https://github.com/leor-c/M3) ▪️ 🧠 [Trained Model Weights](https://huggingface.co/leorc/M3)

M<sup>3</sup> is a modular world model that extends the token-based world model framework to handle diverse observation and action modalities through independent, modality-specific components. It incorporates improvements from the existing literature to enhance agent performance and achieves state-of-the-art sample efficiency among planning-free world models. It is the first method of this kind to reach a human-level median score on Atari 100K, exhibiting superhuman performance on 13 games. The model weights provided here cover Atari 100K, DeepMind Control Suite Proprioceptive 500K, and Craftax (Symbolic) 1M.
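As a minimal sketch, the pretrained weights can be fetched from this repository with the standard `huggingface_hub` API; the exact checkpoint layout and loading entry points are defined by the M<sup>3</sup> codebase, so refer to the [Code](https://github.com/leor-c/M3) repository for how to restore an agent from a downloaded checkpoint.

```python
from huggingface_hub import snapshot_download

# Download all checkpoint files from this model repo (leorc/M3).
# The returned path points to a local cache directory containing the weights;
# how a specific checkpoint is loaded into an agent is defined by the M3 code.
local_dir = snapshot_download(repo_id="leorc/M3")
print(f"Checkpoints downloaded to: {local_dir}")
```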

<div align="center">
<img src="https://github.com/user-attachments/assets/14734453-38dd-4bc0-a2e0-349e4eec37a2" height="220" />
<img src="https://github.com/user-attachments/assets/11beac5f-f8ee-48a7-94ec-2130087ed060" height="220" />
<img src="https://github.com/user-attachments/assets/a7e89c77-754f-43e3-982c-423c1257846c" height="220" />
<img src="https://github.com/user-attachments/assets/2ae2791a-9aec-4649-88ad-56e9840cd6b1" height="220" />
<img src="https://github.com/user-attachments/assets/2d5f9c98-2468-4206-95ac-e4d370798d7e" height="220" />
</div>

<div align="center">

<img src="https://github.com/user-attachments/assets/32f547d1-198c-4cf4-8f4e-9f683250399a" height="350" />
</div>