AI & ML interests

None defined yet.

Recent Activity

albertvillanova posted an update 2 days ago
Latest smolagents release supports GPT-5: build agents that think, plan, and act.
⚡ Upgrade now and put GPT-5 to work!
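A minimal sketch of what that looks like in code, assuming smolagents' OpenAI-compatible OpenAIServerModel class accepts the GPT-5 model id (the task and API-key handling below are illustrative, not from the post):

```python
import os

from smolagents import CodeAgent, OpenAIServerModel

# Point smolagents at GPT-5 through its OpenAI-compatible model class
# (assumes OPENAI_API_KEY is set in the environment).
model = OpenAIServerModel(model_id="gpt-5", api_key=os.environ["OPENAI_API_KEY"])

# A CodeAgent plans in text and acts by writing and executing Python snippets.
agent = CodeAgent(tools=[], model=model)
agent.run("What is the compound interest on $1,000 at 5% per year over 10 years?")
```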
albertvillanova posted an update 3 days ago
🚀 smolagents v1.21.0 is here!
Now with improved safety in the local Python executor: dunder calls are blocked!
⚠️ Still not fully isolated: for untrusted code, use a remote executor instead (Docker, E2B, Wasm).
✨ Many bug fixes: more reliable code.
👉 https://github.com/huggingface/smolagents/releases/tag/v1.21.0
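Following the warning above, a hedged sketch of routing agent-generated code to a remote executor instead of the local process (the executor_type values follow the smolagents docs; the Docker option assumes a running Docker daemon):

```python
from smolagents import CodeAgent, InferenceClientModel

# Send agent-generated Python to an isolated Docker sandbox rather than
# executing it locally; "e2b" and "wasm" are selected the same way.
agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),  # default Hugging Face inference model
    executor_type="docker",        # requires a running Docker daemon
)
agent.run("Sort [3, 1, 4, 1, 5, 9] and return the median.")
```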
meg posted an update 8 days ago
🤖 ICYMI: Yesterday, Hugging Face and OpenAI partnered to bring open source GPT to the public. This is a Big Deal in "AI world".

0. Common ground setting: OpenAI is the ChatGPT people. An “open source” model is one whose weights are available — that means the model can be “yours”.
1. You don’t have to interact with the company directly, nor give them your interactions, to use the system. The company can't "surveil" you.
2. You can evaluate the unique contributions of their SOTA model much more rigorously than you can when there are collections of models+code behind a closed API. You can find out specifically what the model can and can't do.
3. And you can directly customize it for whatever you'd like. Fine-tuning, wherein you give the model data that's tailored to your use cases and train it some more on that data, is trivial* when you have the model weights (see the minimal sketch after this list).
*Provided you have the compute.
4. You can directly benchmark whatever you'd like. Biases? Energy usage? Strengths/weaknesses? Go for it. You wants it you gots it--this transparency helps people understand SOTA *in general*, not just for this model, but points to, e.g., what's going on with closed Google models as well.
5. One of the most powerful things about "openness" that I've learned is that it cultivates ecosystems of collaborators building on top of one another's brilliance to make systems that are significantly better than they would be if created in isolation.
But, caveat wrt my own philosophy...
6. I do not take it as a given that advancing LLMs is good, and have a lot more to say wrt where I think innovation should focus more. For example, a focus on *data* -- curation, measurement, consent, credit, compensation, safety -- would deeply improve technology for everyone.
7. The transparency this release provides is massive for people who want to *learn* about LLMs. For the next generation of technologists to advance over the current, they MUST be able to learn about what's happening now. (cont...)
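As a concrete illustration of point 3, here is a minimal fine-tuning sketch using transformers + peft with LoRA adapters; the model id, data file, and hyperparameters are illustrative assumptions, not part of the release:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "openai/gpt-oss-20b"  # illustrative: any open-weights causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# LoRA trains small adapter matrices instead of all the weights, which is
# what makes "fine-tuning with the weights in hand" cheap in practice.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM"))

# Your use-case-specific text, tokenized for causal language modeling.
data = load_dataset("text", data_files={"train": "my_domain_data.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```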
meg posted an update 14 days ago
🤖 👾 Thanks so much to BBC News and the stellar Suranjana Tewari for having me on to talk about the US <—> China relationship in AI, and what it means for AI ethics.
giadap posted an update 16 days ago
💬 From Replika to everyday chatbots, millions of people are forming emotional bonds with AI, sometimes seeking comfort, sometimes seeking intimacy. But what happens when an AI tells you "I understand how you feel" and you actually believe it?

At Hugging Face, together with @frimelle and @yjernite , we dug into something we felt wasn't getting enough attention: the need to evaluate AI companionship behaviors. These are the subtle ways AI systems validate us, engage with us, and sometimes manipulate our emotional lives.

Here's what we found:
👉 Existing benchmarks (accuracy, helpfulness, safety) completely miss this emotional dimension.
👉 We mapped how leading AI systems actually respond to vulnerable prompts.
👉 We built the Interactions and Machine Attachment Benchmark (INTIMA): a first attempt at evaluating how models handle emotional dependency, boundaries, and attachment (with a full paper coming soon).

Check out the blog post: https://huggingface.co/blog/giadap/evaluating-companionship

🚢 We also shipped two visualization tools with Gradio to see how different models behave when things get emotionally intense:
- AI-companionship/intima-responses-2D
- giadap/INTIMA-responses
yjernite posted an update 17 days ago
First GPAI Model with EU Data Transparency Template? 🇪🇺

With the release of the EU data transparency template this week, we finally got to see one of the most meaningful artifacts to come out of the AI Act implementation so far (haven't you heard? AI's all about the data! 📊📚)

The impact of the template will depend on how effectively it establishes a minimum meaningful transparency standard for companies that don't otherwise offer any transparency into their handling of e.g. personal data or (anti?-)competitive practices in commercial licensing - we'll see how those play out as new models are released after August 2nd 👀


In the meantime, I wanted to see how the template works for a fully open-source + commercially viable model, so I filled it out for SmolLM3, which my colleagues at Hugging Face released earlier this month 🤗 ICYMI, it's fully open-source with 3B parameters and performance matching the best similar-size models (I've switched all my local apps from Qwen3 to it, you should too 💡)
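For context, running SmolLM3 locally takes only a few lines with transformers (a minimal sketch; the prompt and generation settings are my own, and a recent transformers version with SmolLM3 support is assumed):

```python
from transformers import pipeline

# Load the open-weights model named in the post and chat with it locally.
chat = pipeline("text-generation", model="HuggingFaceTB/SmolLM3-3B")
out = chat(
    [{"role": "user", "content": "Summarize the EU AI Act in one sentence."}],
    max_new_tokens=100,
)
print(out[0]["generated_text"])
```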

Verdict: congrats to the European Commission AI Office for making it so straightforward! Fully open and transparent models remain a cornerstone of informed regulation and governance, but the different organizational needs of their developers aren't always properly accounted for in new regulation. In this case, it took me all of two hours to fill out and publish the template (including reading the guidelines) - so kudos for making it feasible for smaller and distributed organizations 🙌 Definitely a step forward for transparency 🔍

To learn more, have a look at:

- The SmolLM3 model: HuggingFaceTB/SmolLM3-3B
- Its filled out Public Summary of Training Content: hfmlsoc/smollm3-eu-data-transparency
- And if you're interested, some previous remarks on regulatory minimum meaningful standards for data disclosure: https://huggingface.co/blog/yjernite/naiac-data-transparency
giadap posted an update 27 days ago
🤖 Technology means power, and whoever owns the technology owns the power.

Thrilled to share insights from my recent interview with MIT Technology Review about the growing movement toward local LLMs and what it means for AI democratization. Read here: https://www.technologyreview.com/2025/07/17/1120391/how-to-run-an-llm-on-your-laptop/

🤔 Why this matters: When we use "free" online AI services, we're often the product. Our conversations become training data, our personal stories get "cooked into" models, and our privacy becomes a commodity. But there's an alternative path forward.

💡 The power shift is real: Local LLMs aren't just about privacy; they're about redistributing AI power away from a handful of tech giants. When individuals, organizations, and even entire nations can run their own models, we're democratizing access to AI capabilities.

🤗 At Hugging Face, we're proud to be at the center of this transformation. Our platform hosts the world's largest library of freely downloadable models, making cutting-edge AI accessible to everyone -- from researchers and developers to curious individuals who want to experiment on their laptops or even smartphones.

The technical barriers are crumbling: what once required $$$ server racks now runs on ordinary hardware. Today, anyone with basic computer skills can download a model, run it locally, and maintain complete control over their AI interactions. No sudden algorithm changes, no data harvesting, no corporate gatekeeping.
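A minimal sketch of that workflow (the model id and prompt are illustrative): download the weights once, then everything runs on your own machine:

```python
from huggingface_hub import snapshot_download
from transformers import pipeline

# One-time download of the weights; afterwards nothing leaves your machine.
local_dir = snapshot_download("HuggingFaceTB/SmolLM3-3B")

# Point the pipeline at the local copy: no API, no data harvesting.
generate = pipeline("text-generation", model=local_dir)
print(generate("Local models matter because", max_new_tokens=40)[0]["generated_text"])
```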

This is not just about technical convenience; it's about technological sovereignty. When AI power is concentrated in a few hands, we risk creating new forms of digital dependency. Local models offer a path toward genuine AI literacy and independence.

🚀 The future of AI should be open, accessible, and in the hands of the many, not the few. What are your thoughts on AI democratization? Have you experimented with local models yet?
albertvillanova posted an update about 1 month ago
🚀 New in smolagents v1.20.0: Remote Python Execution via WebAssembly (Wasm)

We've just merged a major new capability into the smolagents framework: the CodeAgent can now execute Python code remotely in a secure, sandboxed WebAssembly environment!

🔧 Powered by Pyodide and Deno, this new WasmExecutor lets your agent-generated Python code run safely, without relying on Docker or local execution.
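A hedged sketch of the executor in use, assuming it is selected via executor_type="wasm" like the existing "docker" and "e2b" options, and that Deno is installed locally:

```python
from smolagents import CodeAgent, InferenceClientModel

# Agent-generated Python runs inside a Pyodide-on-Deno Wasm sandbox,
# with no access to the host filesystem or network.
agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    executor_type="wasm",
)
agent.run("Compute the 20th Fibonacci number.")
```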

Why this matters:
✅ Isolated execution = no host access
✅ No need for Python on the user's machine
✅ Safer evaluation of arbitrary code
✅ Compatible with serverless / edge agent workloads
✅ Ideal for constrained or untrusted environments

This is just the beginning: a focused initial implementation with known limitations. A solid MVP designed for secure, sandboxed use cases.

💡 We're inviting the open-source community to help evolve this executor:
• Tackle more advanced Python features
• Expand compatibility
• Add test coverage
• Shape the next-gen secure agent runtime

🔗 Check out the PR: https://github.com/huggingface/smolagents/pull/1261

Let's reimagine what agent-driven Python execution can look like: remote-first, wasm-secure, and community-built.

This feature is live in smolagents v1.20.0!
Try it out.
Break things. Extend it. Give us feedback.
Let's build safer, smarter agents, together 🧠⚙️

👉 https://github.com/huggingface/smolagents/releases/tag/v1.20.0

#smolagents #WebAssembly #Python #AIagents #Pyodide #Deno #OpenSource #HuggingFace #AgenticAI
giadap posted an update about 1 month ago
I've been posting bits and pieces about this research, but now I can finally say: new paper alert 🚨

My colleague @brunatrevelin and I just shared a paper exploring why traditional consent frameworks are breaking down in AI contexts (a forthcoming chapter in an edited volume).

The current model places impossible burdens on users to manage countless consent decisions. Meanwhile, AI systems learn to mimic our voices and writing styles from data we unknowingly provided years ago.

What's next? We need to shift from individual responsibility to collective accountability.

This means:
- Organizations designing systems that respect human agency by default
- Developers building ethics into models from the start
- Policymakers creating frameworks beyond minimal compliance

Blog post: https://huggingface.co/blog/giadap/consentful-ai
Paper: Can AI be Consentful? (2507.01051)