AI & ML interests

None defined yet.

Recent Activity

Problems Prompt (5) · #12 opened 2 months ago by tonylog
What happened (1) · #3 opened 10 days ago by Dobby94
Not working (6) · #1 opened 10 days ago by fremen1
New Updated (2) · #1 opened about 2 months ago by seawolf2357

aiqtech posted an update about 1 month ago
🔥 HuggingFace Heatmap Leaderboard
Visualizing AI ecosystem activity at a glance

aiqtech/Heatmap-Leaderboard

🎯 Introduction
A leaderboard that visualizes the vibrant HuggingFace community activity through heatmaps.

✨ Key Features
📊 Real-time Tracking - Model/dataset/app releases from AI labs and developers
🏆 Auto Ranking - Rankings based on activity over the past year
🎨 Responsive UI - Unique colors per organization, mobile optimized
⚡ Auto Updates - Hourly data refresh for latest information
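
Since the leaderboard tracks release activity straight from the Hub, here is a minimal sketch of the kind of data collection a heatmap like this needs. It uses the public https://huggingface.co/api/models listing endpoint; the `createdAt` field name and the `sort` value are assumptions about the response schema, not confirmed details of this Space's implementation.

```python
# Hypothetical sketch: count one org's model releases per day, heatmap-style.
# The "createdAt" field and sort value are assumed, not a confirmed schema.
from collections import Counter
import requests

def release_heatmap(author: str, limit: int = 500) -> Counter:
    """Bucket an org's model releases by day using the public Hub listing API."""
    resp = requests.get(
        "https://huggingface.co/api/models",
        params={"author": author, "sort": "createdAt", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    days = Counter()
    for model in resp.json():
        created = model.get("createdAt")  # ISO timestamp, assumed field name
        if created:
            days[created[:10]] += 1  # bucket by YYYY-MM-DD
    return days

print(release_heatmap("openai").most_common(5))
```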

๐ŸŒ Major Participants
Big Tech: OpenAI, Google, Meta, Microsoft, Apple, NVIDIA
AI Startups: Anthropic, Mistral, Stability AI, Cohere, DeepSeek
Chinese Companies: Tencent, Baidu, ByteDance, Qwen
HuggingFace Official: HuggingFaceH4, HuggingFaceM4, lerobot, etc.
Active Developers: prithivMLmods, lllyasviel, multimodalart and many more

🚀 Value
📈 Trend Analysis - Real-time open-source contribution insights
💪 Inspiration - Learn from other developers' activity patterns
🌱 Ecosystem Growth - Visualize AI community development

@John6666 @Nymbo @MaziyarPanahi @prithivMLmods @fffiloni @gokaygokay @enzostvs @black-forest-labs @lllyasviel @briaai @multimodalart @unsloth @Xenova @mistralai @meta-llama @facebook @openai @Anthropic @google @allenai @apple @microsoft @nvidia @CohereLabs @ibm-granite @stabilityai @huggingface @OpenEvals @HuggingFaceTB @HuggingFaceH4 @HuggingFaceM4 @HuggingFaceFW @HuggingFaceFV @open-r1 @parler-tts @nanotron @lerobot @distilbert @kakaobrain @NCSOFT @upstage @moreh @LGAI-EXAONE @naver-hyperclovax @OnomaAIResearch @kakaocorp @Baidu @PaddlePaddle @tencent @BAAI @OpenGVLab @InternLM @Skywork @MiniMaxAI @stepfun-ai @ByteDance @Bytedance Seed @bytedance-research @openbmb @THUDM @rednote-hilab @deepseek-ai @Qwen @wan-ai @XiaomiMiMo @IndexTeam @agents-course
@Agents-MCP-Hackathon @akhaliq @alexnasa @Alibaba-NLP
@ArtificialAnalysis @bartowski @bibibi12345 @calcuis
@ChenDY @city96 @Comfy-Org @fancyfeast @fal @google

seawolf2357 posted an update about 2 months ago
🚀 VEO3 Real-Time: Real-time AI Video Generation with Self-Forcing

🎯 Core Innovation: Self-Forcing Technology
VEO3 Real-Time, an open-source project challenging Google's VEO3, achieves real-time video generation through revolutionary Self-Forcing technology.

Heartsync/VEO3-RealTime

⚡ What is Self-Forcing?
While traditional methods require 50-100 denoising steps, Self-Forcing achieves comparable quality in just 1-2 steps. This Distribution Matching Distillation (DMD) technique relies on self-correction and rapid convergence, delivering a 50x speed improvement without sacrificing quality.
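
To make the arithmetic behind that claim concrete, here is a toy illustration (not the project's actual code): sampling latency scales roughly linearly with the number of denoiser calls, which is where a 50x figure comes from when 50-100 steps drop to 1-2.

```python
# Toy illustration only: each "step" stands in for one denoiser forward pass,
# so end-to-end latency scales linearly with the step count.
import time

def sample(num_steps: int, step_cost_s: float = 0.05) -> float:
    """Simulate a diffusion sampling loop and return elapsed seconds."""
    start = time.perf_counter()
    for _ in range(num_steps):
        time.sleep(step_cost_s)  # placeholder for the model call
    return time.perf_counter() - start

print(f"50-step baseline:  {sample(50):.2f}s")
print(f" 2-step distilled: {sample(2):.2f}s")  # 25x fewer calls in this toy
```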

💡 Technical Advantages of Self-Forcing
1. Extreme Speed
Generates 4-second videos in under 30 seconds, with first frame streaming in just 3 seconds. This represents 50x faster performance than traditional diffusion methods.
2. Consistent Quality
Maintains cinematic quality despite fewer steps, ensures temporal consistency, and minimizes artifacts.
3. Efficient Resource Usage
Reduces GPU memory usage by 70% and heat generation by 30%, enabling smooth operation on mid-range GPUs like the RTX 3060.

๐Ÿ› ๏ธ Technology Stack Synergy
VEO3 Real-Time integrates multiple technologies organically around Self-Forcing DMD. Self-Forcing DMD handles ultra-fast video generation, Wan2.1-T2V-1.3B serves as the high-quality video backbone, PyAV streaming enables real-time transmission, and Qwen3 adds intelligent prompt enhancement for polished results.
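
As a concrete picture of the PyAV piece, here is a minimal sketch of muxing frames as they are generated, which is what lets the first frame go out long before the clip finishes; the frame rate, size, and frame source below are placeholder assumptions, not the project's actual settings.

```python
# Hedged sketch: encode frames incrementally with PyAV so output is streamable.
import av
import numpy as np

def stream_frames(frames, path="out.mp4", fps=16, width=1024, height=576):
    container = av.open(path, mode="w")
    stream = container.add_stream("h264", rate=fps)
    stream.width, stream.height = width, height
    stream.pix_fmt = "yuv420p"
    for array in frames:  # each frame: height x width x 3, uint8 RGB
        frame = av.VideoFrame.from_ndarray(array, format="rgb24")
        for packet in stream.encode(frame):
            container.mux(packet)  # packets leave as soon as they are ready
    for packet in stream.encode():  # flush any buffered packets
        container.mux(packet)
    container.close()

# Dummy usage: random frames standing in for generated video
stream_frames(np.random.randint(0, 256, (64, 576, 1024, 3), dtype=np.uint8))
```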

📊 Performance Comparison
Traditional methods require 50-100 steps, taking 2-5 minutes for the first frame and 5-10 minutes total. In contrast, Self-Forcing needs only 1-2 steps, delivering the first frame in 3 seconds and complete videos in 30 seconds while maintaining equal quality.

🔮 Future of Self-Forcing
Our next goal is real-time 1080p generation, with ongoing research to achieve

seawolf2357 posted an update 2 months ago
⚡ FusionX Enhanced Wan 2.1 I2V (14B) 🎬

🚀 Revolutionary Image-to-Video Generation Model
Generate cinematic-quality videos in just 8 steps!

Heartsync/WAN2-1-fast-T2V-FusioniX

✨ Key Features
🎯 Ultra-Fast Generation: Premium quality in just 8-10 steps
🎬 Cinematic Quality: Smooth motion with detailed textures
🔥 FusionX Technology: Enhanced with CausVid + MPS Rewards LoRA
📐 Optimized Resolution: 576×1024 default settings
⚡ 50% Speed Boost: Faster rendering compared to base models
🛠️ Technical Stack

Base Model: Wan2.1 I2V 14B
Enhancement Technologies:

🔗 CausVid LoRA (1.0 strength) - Motion modeling
🔗 MPS Rewards LoRA (0.7 strength) - Detail optimization

Scheduler: UniPC Multistep (flow_shift=8.0)
Auto Prompt Enhancement: Automatic cinematic keyword injection
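
Putting the pieces of that stack together, the sketch below shows how such a setup could look with diffusers' Wan 2.1 I2V support; the two LoRA repo ids are hypothetical placeholders (the real FusionX adapter locations are not given here), while the scheduler, strengths, steps, and resolution follow the numbers quoted above.

```python
# Hedged sketch of the stack above; LoRA repo ids are hypothetical placeholders.
import torch
from diffusers import WanImageToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
# UniPC Multistep with the flow_shift quoted in the post
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, flow_shift=8.0
)
# CausVid + MPS Rewards LoRAs at the strengths listed above
pipe.load_lora_weights("your-org/CausVid-lora", adapter_name="causvid")  # placeholder id
pipe.load_lora_weights("your-org/MPS-rewards-lora", adapter_name="mps")  # placeholder id
pipe.set_adapters(["causvid", "mps"], adapter_weights=[1.0, 0.7])
pipe.to("cuda")

image = load_image("input.png")  # your starting image
video = pipe(
    image=image,
    prompt="a slow camera pan, cinematic motion, smooth animation",
    width=576, height=1024,   # the 576x1024 default from the post
    num_frames=81,            # about 5 seconds at 16 fps
    num_inference_steps=8,    # the 8-step fast setting
).frames[0]
export_to_video(video, "output.mp4", fps=16)
```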

🎨 How to Use

1. Upload Image - Select your starting image
2. Enter Prompt - Describe desired motion and style
3. Adjust Settings - 8 steps, 2-5 seconds recommended
4. Generate - Complete in just minutes!

💡 Optimization Tips
✅ Recommended Settings: 8-10 steps, 576×1024 resolution
✅ Prompting: Use "cinematic motion, smooth animation" keywords
✅ Duration: 2-5 seconds for optimal quality
✅ Motion: Emphasize natural movement and camera work
🏆 FusionX Enhanced vs Standard Models
Performance Comparison: While standard models typically require 15-20 inference steps to achieve decent quality, our FusionX Enhanced version delivers premium results in just 8-10 steps - that's more than 50% faster! The rendering speed has been dramatically improved through optimized LoRA fusion, allowing creators to iterate quickly without sacrificing quality. Motion quality has been significantly enhanced with advanced causal modeling, producing smoother, more realistic animations compared to base implementations. Detail preservation is substantially better thanks to MPS Rewards training, maintaining crisp textures and consistent temporal coherence throughout the generated sequences.

seawolf2357 posted an update 2 months ago
🚀 Just Found an Interesting New Leaderboard for Medical AI Evaluation!

I recently stumbled upon a medical domain-specific FACTS Grounding leaderboard on Hugging Face, and the approach to evaluating AI accuracy in medical contexts is quite impressive, so I thought I'd share.

📊 What is FACTS Grounding?
It's originally a benchmark developed by Google DeepMind that measures how well LLMs generate answers based solely on provided documents. What's cool about this medical-focused version is that it's designed to test even small open-source models.

๐Ÿฅ Medical Domain Version Features

236 medical examples: Extracted from the original 860 examples
Tests small models like Qwen 3 1.7B: Great for resource-constrained environments
Uses Gemini 1.5 Flash for evaluation: Simplified to a single judge model

📈 The Evaluation Method is Pretty Neat

Grounding Score: Are all claims in the response supported by the provided document?
Quality Score: Does it properly answer the user's question?
Combined Score: Did it pass both checks?

Since medical information requires extreme accuracy, this thorough verification approach makes a lot of sense.
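
For intuition, here is a hedged sketch of what that two-check scheme can look like with Gemini 1.5 Flash as the single judge; the prompts and yes/no parsing are illustrative stand-ins, not the leaderboard's actual rubric.

```python
# Illustrative sketch of the grounding + quality + combined checks;
# prompts and parsing are assumptions, not the leaderboard's real rubric.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
judge = genai.GenerativeModel("gemini-1.5-flash")

def judge_yes(prompt: str) -> bool:
    """Ask the judge model a yes/no question and parse the reply."""
    reply = judge.generate_content(prompt).text.strip().lower()
    return reply.startswith("yes")

def facts_score(document: str, question: str, answer: str) -> bool:
    grounded = judge_yes(  # Grounding Score: every claim backed by the document?
        f"Document:\n{document}\n\nAnswer:\n{answer}\n\n"
        "Is every claim in the answer supported by the document? Answer yes or no."
    )
    quality = judge_yes(   # Quality Score: does it actually answer the question?
        f"Question: {question}\nAnswer: {answer}\n\n"
        "Does the answer properly address the question? Answer yes or no."
    )
    return grounded and quality  # Combined Score: must pass both checks
```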

🔗 Check It Out Yourself

The actual leaderboard: MaziyarPanahi/FACTS-Leaderboard

💭 My thoughts: As medical AI continues to evolve, evaluation tools like this are becoming increasingly important. The fact that it can test smaller models is particularly helpful for the open-source community!