I run 20 AI coding agents locally on my desktop workstation at 400+ tokens/sec with MiniMax-M2. It's a Sonnet drop-in replacement in Cursor, Claude Code, Droid, Kilo, and Cline, peaking at 11k tok/s input and 433 tok/s output, and it can generate 1B+ tokens/month, all with a 196k context window. I've been running it for 6 days now with this config.
Today peak performance was stable at 490.2 tokens/sec across 48 concurrent clients on MiniMax-M2.
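For numbers like "tokens/sec across 48 concurrent clients," it helps to be explicit that this is aggregate throughput: total output tokens from all clients divided by wall-clock time. A minimal client-side sketch (the `fake_generate` stub stands in for a real completion call; none of this is tied to a particular serving stack):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_throughput(generate, n_clients, tokens_per_request):
    """Fire n_clients concurrent requests at `generate` and return
    aggregate output tokens/sec across all clients."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        token_counts = list(pool.map(lambda _: generate(tokens_per_request),
                                     range(n_clients)))
    elapsed = time.perf_counter() - start
    return sum(token_counts) / elapsed

# Stub standing in for a real API call: sleeps briefly, then
# reports how many tokens it "generated".
def fake_generate(n_tokens):
    time.sleep(0.05)
    return n_tokens

tps = measure_throughput(fake_generate, n_clients=48, tokens_per_request=500)
```

Note that per-client latency and aggregate throughput move in opposite directions as you add clients, which is why the single-stream 433 tok/s and the 48-client 490 tok/s figures are different measurements.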
**Training ACT on SO-101: From Woodpecker to 90% Success (All the Mistakes Included)**
I spent 3 weeks training Action Chunking Transformer on SO-101 for pick-and-place. Spoiler: the first attempt trained a woodpecker that just pecked the table. 🐦
**What's Different About This Post:** Most ACT tutorials show the success. I documented every failure, hardware issue, and debugging step. If you're new to SO-101/LeRobot/ACT, hopefully my mistakes save you time.
**Try 1: The Woodpecker**
- Followed the LeRobot tutorial, collected 50 episodes
- Beautiful loss curves ✅
- Robot learned to peck at the table ❌
- Rookie mistakes: moving cameras, arm calibration mismatch, limited data diversity, looking at the follower arm during teleop (it's cheating!)
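"Limited data diversity" is catchable before you burn a training run: check the spread of object start positions across episodes. A minimal sketch (the episode dict layout and `start_xy` field are my invention for illustration, not LeRobot's dataset schema):

```python
import statistics

def position_spread(episodes):
    """Population std-dev of object start positions (x, y) across
    episodes. Near-zero spread means the policy only ever sees one
    starting configuration and will memorize, not generalize."""
    xs = [ep["start_xy"][0] for ep in episodes]
    ys = [ep["start_xy"][1] for ep in episodes]
    return statistics.pstdev(xs), statistics.pstdev(ys)

# 50 episodes with the object always in the exact same spot:
episodes = [{"start_xy": (0.20, 0.10)} for _ in range(50)]
sx, sy = position_spread(episodes)  # both 0.0 -> red flag before training
```

The same check works on any logged quantity you care about varying: lighting, object pose, gripper approach angle.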
**Try 2: Engineering Upgrades**
- Fixed hardware setup (tape + markers everywhere)
- USB udev rules for camera stability
- Formal task definition with stratified sampling
- Built a proper eval pipeline with progress scoring
- Motor breakdown mid-collection (broke the gripper with excessive force)
- Results: 60% in-distribution success, 10% OOD (better, but not great)
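Progress scoring means giving partial credit for how far an attempt got instead of a binary pass/fail, which makes small eval runs far less noisy. A minimal sketch of one way to do it for pick-and-place (the stage names and equal weights are my assumptions, not the exact rubric used here):

```python
def progress_score(reached, grasped, lifted, placed):
    """Partial-credit score in [0, 1] for a pick-and-place attempt.
    Stages are treated as sequential: credit stops at the first
    stage that failed, since later stages can't happen without it."""
    stages = [(reached, 0.25), (grasped, 0.25), (lifted, 0.25), (placed, 0.25)]
    score = 0.0
    for done, weight in stages:
        if not done:
            break
        score += weight
    return score

score = progress_score(True, True, False, False)  # grasped, never lifted -> 0.5
```

Averaging this over rollouts distinguishes "never touches the object" from "drops it on the way," which a raw success rate can't.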
**Key Learnings:**
- Consistent hardware setup is everything
- Don't look at the follower arm during teleop
- Data diversity is key for generalization
- Debug infrastructure matters
- Real robots break in mysterious ways (buy spare motors!)
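One more thing worth internalizing with real-robot evals: success rates from a handful of rollouts carry wide error bars. A Wilson score interval makes that explicit (standard formula; the 20-episode count below is an assumption for illustration, not the eval size used here):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Approximate 95% Wilson score interval for a binomial
    success rate; better behaved than the naive p +/- z*sqrt(...)
    at small sample sizes."""
    if trials == 0:
        return (0.0, 0.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials
                           + z**2 / (4 * trials**2)) / denom
    return (center - margin, center + margin)

lo, hi = wilson_interval(12, 20)  # "60% success" is plausibly anywhere ~39-78%
```

So a jump from 60% to 90% measured on a few dozen episodes is real progress, but comparing, say, 55% vs 60% on 20 rollouts is mostly noise.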