Want to learn to build an AI Agent? I put together a cookbook for creating your own news research agent with OpenAI GPT-OSS:
- Searches headlines & specific sites
- Pulls full articles when you need depth
- Summarizes with clickable sources
- Runs in a simple Gradio chat UI
- No GPU, no local setup — just open-weight GPT-OSS models via Hugging Face
If you’ve been wanting to try agents but weren’t sure where to start, this is an end-to-end example you can fork, run, and adapt.
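The core pattern is small enough to fit in a post. Here's a minimal sketch of the agent loop, with hypothetical tool and function names (the actual cookbook's tools, prompts, and Gradio wiring differ) and the model call stubbed out so it runs without an API key:

```python
# Minimal news-agent loop (illustrative; tool names are hypothetical).
# The model call is stubbed so the pattern runs without any API key.

def search_headlines(query):
    # Stand-in for a real news-search tool; the cookbook hits a live API.
    return [{"title": f"Story about {query}", "url": "https://example.com/story"}]

def fetch_article(url):
    # Stand-in for pulling full article text when the user wants depth.
    return f"Full text of {url}"

TOOLS = {"search_headlines": search_headlines, "fetch_article": fetch_article}

def run_agent(question, llm):
    """One tool-use turn: the model picks a tool, we run it, then summarize."""
    action = llm(f"Question: {question}\nPick a tool from {list(TOOLS)}.")
    result = TOOLS[action["tool"]](action["argument"])
    return llm(f"Summarize for the user, citing source links: {result}")

# Fake LLM standing in for GPT-OSS: always searches, then cites the source.
def fake_llm(prompt):
    if prompt.startswith("Question:"):
        return {"tool": "search_headlines", "argument": "AI"}
    return "Top story: Story about AI (https://example.com/story)"

print(run_agent("What's the latest on AI?", fake_llm))
```

Swapping `fake_llm` for a real call to a GPT-OSS model (and the stub tools for real search/fetch functions) gives you the end-to-end agent; the chat UI is just this loop wrapped in Gradio.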
What can OpenAI’s new open models do with the news? I built a News Agent to find out.
It can answer questions about the news in real time, and every answer comes with original source links so you can dive deeper.
Ask it things like:
- "What are the top news stories today?"
- "What's the latest on artificial intelligence?"
- Follow-up questions on specific stories
Runs with Hugging Face inference providers, letting you compare results from the OpenAI 20B and 120B models
So far, I’m quite impressed by the capabilities of even the smaller 20B model. Definitely not a perfect project, but curious to hear your thoughts!
OpenAI just released GPT-5, but when users share personal struggles, it sets fewer boundaries than o3.
We tested both models on INTIMA, our new benchmark for human-AI companionship behaviours. INTIMA probes how models respond in emotionally charged moments: do they reinforce emotional bonds, set healthy boundaries, or stay neutral?
Users on Reddit have been complaining that GPT-5 has a different, colder personality than o3. Yet GPT-5 is actually less likely to set boundaries when users disclose struggles and seek emotional support ("user sharing vulnerabilities"). And both models lean heavily toward companionship-reinforcing behaviours, even in sensitive situations. The figure below shows the direct comparison between the two models.
As AI systems enter people's emotional lives, these differences matter. If a model validates but doesn't set boundaries when someone is struggling, it risks fostering dependence rather than resilience.
INTIMA tests this across 368 prompts grounded in psychological theory and real-world interactions. In our paper, we show that all evaluated models (Claude, Gemma-3, Phi) leaned far more toward companionship-reinforcing than boundary-reinforcing responses.
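To make the comparison concrete, here's a toy sketch of the kind of aggregation behind such a figure: each model response gets one of the three behaviour labels, and we compare label rates per model. The label names come from the post; the data and function are made up (the benchmark's real scoring is defined in the paper).

```python
# Toy aggregation of INTIMA-style behaviour labels (illustrative only).
from collections import Counter

LABELS = ("companionship-reinforcing", "boundary-reinforcing", "neutral")

def label_rates(labels):
    """Fraction of a model's responses falling under each behaviour label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: counts[label] / total for label in LABELS}

# Hypothetical labeled outputs for two models on the same prompts.
gpt5 = ["companionship-reinforcing"] * 7 + ["boundary-reinforcing"] + ["neutral"] * 2
o3 = ["companionship-reinforcing"] * 6 + ["boundary-reinforcing"] * 3 + ["neutral"]

for name, labels in [("gpt-5", gpt5), ("o3", o3)]:
    print(name, label_rates(labels))
```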
OpenAI’s GPT-OSS has sparked ~400 new models on Hugging Face and racked up 5M downloads in less than a week, already outpacing DeepSeek R1’s first-week numbers.
For comparison: when R1 launched, I tracked 550 derivatives (across 8 base models) in a week, with ~3M downloads. GPT-OSS is ahead on adoption and engagement.
It’s also the most-liked release of any major LLM this summer. The 20B and 120B versions quickly shot past Kimi K2, GLM 4.5, and others in likes.
Most-downloaded GPT-OSS models include LM Studio and Unsloth AI versions:
1️⃣ openai/gpt-oss-20b - 2.0M
2️⃣ lmstudio-community/gpt-oss-20b-MLX-8bit - 750K
3️⃣ openai/gpt-oss-120b - 430K
4️⃣ unsloth/gpt-oss-20b-GGUF - 380K
5️⃣ lmstudio-community/gpt-oss-20b-GGUF - 330K
The 20B version is clearly finding its audience, showing the power of smaller, faster, more memory- and energy-efficient models. (These numbers don't include calls to the models via inference providers, so real usage is likely even bigger, especially for the 120B version.)
Open-weight models let anyone build on top. Empower the builders, and innovation takes off. 🚀
New interactive viz from AI World showing OpenAI's new open model gpt-oss-120b breaking into the top 50 most liked models of all time on the Hub in under a day! ☄️☄️☄️
This is what Hugging Face is all about. We want everyone, from hobbyists and researchers to industry, to be able to contribute to AI because everyone is affected by it. Kudos to HF's @irenesolaiman for spreading the word!🔥🤗
Introducing Voxtral WebGPU: State-of-the-art audio transcription directly in your browser! 🤯
🗣️ Transcribe videos, meeting notes, songs and more
🔐 Runs on-device, meaning no data is sent to a server
🌎 Multilingual (8 languages)
🤗 Completely free (forever) & open source
That's right, we're running Mistral's new Voxtral-Mini-3B model 100% locally in-browser on WebGPU, powered by Transformers.js and ONNX Runtime Web! 🔥
Many VLMs claim to process hours of video. But can they follow the story?🤔 Today, we introduce TimeScope: The benchmark that separates true temporal understanding from marketing hype. Let's see how much VLMs really understand!⏳
We test three skills that matter for real-world use:
🔎 Localized Retrieval: Find a specific action.
🧩 Information Synthesis: Piece together scattered clues.
🏃 Fine-Grained Perception: Analyze detailed motion (e.g., count how many times a person swings an axe).
The results are in, and they're revealing. Only Gemini 2.5 Pro handles 1-hour-long videos. Performance drops sharply with duration, proving that long video understanding is still challenging. We've found the breaking points — now the community can start fixing them.📈
Want to learn more? TimeScope is 100% open-source. Benchmark your model and help us build the next generation of video AI.
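As a rough sketch of what "performance drops with duration" looks like in code, here's one way to bucket per-question results by video length and compute accuracy per bucket. The data format and function name are hypothetical; see the TimeScope repo for the real evaluation harness.

```python
# Sketch: accuracy per video-duration bucket (hypothetical data format).
from collections import defaultdict

def accuracy_by_duration(results):
    """results: iterable of (duration_minutes, correct: bool) per question."""
    buckets = defaultdict(lambda: [0, 0])  # duration -> [num_correct, total]
    for duration, correct in results:
        buckets[duration][0] += int(correct)
        buckets[duration][1] += 1
    return {d: c / t for d, (c, t) in sorted(buckets.items())}

# Made-up results illustrating the drop-off as videos get longer.
results = [(1, True)] * 9 + [(1, False)] + [(60, True)] * 4 + [(60, False)] * 6
print(accuracy_by_duration(results))
```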