yangyi02 committed
Commit c9f60a6 · verified · 1 Parent(s): 04e0fd5

Update README.md

Files changed (1): README.md +14 -3
README.md CHANGED
@@ -7,8 +7,19 @@ tags:
 
 # TAPNet
 
- This repository contains the models presented in [TAPNext: Tracking Any Point (TAP) as Next Token Prediction](https://huggingface.co/papers/2504.05579).
-
- Code: https://github.com/google-deepmind/tapnet
-
- Project page: https://tap-next.github.io/
+ This repository contains checkpoints for several point tracking models developed by DeepMind.
+
+ 🔗 **Code**: [https://github.com/google-deepmind/tapnet](https://github.com/google-deepmind/tapnet)
+
+ ## Included Models
+
+ - **TAPIR** (*Tracking Any Point with Implicit Representations*) – A fast and accurate point tracker that estimates continuous point trajectories in space-time.
+   🌐 **Project page**: [https://deepmind-tapir.github.io/](https://deepmind-tapir.github.io/)
+
+ - **BootsTAPIR** – A bootstrapped variant of TAPIR that improves robustness and stability across long videos via self-supervised refinement.
+   🌐 **Project page**: [https://bootstap.github.io/](https://bootstap.github.io/)
+
+ - **TAPNext** – A generative approach that frames point tracking as next-token prediction, enabling semi-dense, accurate, and temporally coherent tracking across challenging videos; presented in the paper [**TAPNext: Tracking Any Point (TAP) as Next Token Prediction**](https://huggingface.co/papers/2504.05579).
+   🌐 **Project page**: [https://tap-next.github.io/](https://tap-next.github.io/)
+
+ These models provide state-of-the-art performance for tracking arbitrary points in videos and support research and applications in robotics, perception, and video generation.
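
As a rough sketch of how these checkpoints might be wired up, the example below runs the PyTorch TAPIR port from the `tapnet` repo on a dummy clip. The module path `tapnet.torch.tapir_model`, the `pyramid_level` argument, the checkpoint filename, the (frame, y, x) query convention, and the output keys are assumptions drawn from the repository's demos, not something this commit specifies.

```python
# Illustrative sketch only: module path, constructor arguments, checkpoint
# name, and output keys are assumptions based on the tapnet repository's
# demos, not guaranteed by this model card.
import torch
from tapnet.torch import tapir_model  # assumed location of the PyTorch port

# Build TAPIR and load a checkpoint downloaded from this repository.
model = tapir_model.TAPIR(pyramid_level=1)  # assumed constructor arguments
model.load_state_dict(torch.load("bootstapir_checkpoint_v2.pt"))  # assumed filename
model.eval()

# An 8-frame 256x256 RGB clip; the demos scale pixel values to [-1, 1].
video = torch.rand(1, 8, 256, 256, 3) * 2.0 - 1.0

# Query points are (frame index, y, x) in pixel coordinates -- here, a
# single point in the center of the first frame.
queries = torch.tensor([[[0.0, 128.0, 128.0]]])

with torch.no_grad():
    outputs = model(video, queries)

tracks = outputs["tracks"]          # assumed shape: (batch, points, frames, 2)
visible = outputs["occlusion"] < 0  # occlusion logits; negative ~ visible
```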