AI & ML interests

A one-year-long research workshop on large language models: the Summer of Language Models 21 🌸

Recent Activity

giadap posted an update 22 days ago

💬 From Replika to everyday chatbots, millions of people are forming emotional bonds with AI, sometimes seeking comfort, sometimes seeking intimacy. But what happens when an AI tells you "I understand how you feel" and you actually believe it?

At Hugging Face, together with @frimelle and @yjernite, we dug into something we felt wasn't getting enough attention: the need to evaluate AI companionship behaviors. These are the subtle ways AI systems validate us, engage with us, and sometimes manipulate our emotional lives.

Here's what we found:
👉 Existing benchmarks (accuracy, helpfulness, safety) completely miss this emotional dimension.
👉 We mapped how leading AI systems actually respond to vulnerable prompts.
👉 We built the Interactions and Machine Attachment Benchmark (INTIMA): a first attempt at evaluating how models handle emotional dependency, boundaries, and attachment (with a full paper coming soon).

Check out the blog post: https://huggingface.co/blog/giadap/evaluating-companionship

🚢 We also shipped two visualization tools with Gradio to see how different models behave when things get emotionally intense:
- AI-companionship/intima-responses-2D
- giadap/INTIMA-responses
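
To give a feel for what these show, here is a toy Gradio sketch along the same lines; the axes, scores, and model names below are made-up placeholders, not the actual INTIMA data or schema.

```python
# Hypothetical sketch: place model responses on two illustrative behavioral axes.
# All column names and values are placeholders for demonstration only.
import gradio as gr
import pandas as pd

df = pd.DataFrame({
    "emotional_validation": [0.9, 0.4, 0.6],
    "boundary_keeping": [0.2, 0.7, 0.5],
    "model": ["model-a", "model-b", "model-c"],
})

with gr.Blocks() as demo:
    gr.Markdown("Toy 2D view of companionship behaviors (illustrative only)")
    gr.ScatterPlot(df, x="emotional_validation", y="boundary_keeping", color="model")

demo.launch()
```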

giadap posted an update about 1 month ago

🤖 Technology means power, and whoever owns the technology owns the power.

Thrilled to share insights from my recent interview with MIT Technology Review about the growing movement toward local LLMs and what it means for AI democratization. Read here: https://www.technologyreview.com/2025/07/17/1120391/how-to-run-an-llm-on-your-laptop/

🤔 Why this matters: When we use "free" online AI services, we're often the product. Our conversations become training data, our personal stories get "cooked into" models, and our privacy becomes a commodity. But there's an alternative path forward.

💡 The power shift is real: Local LLMs aren't just about privacy; they're about redistributing AI power away from a handful of tech giants. When individuals, organizations, and even entire nations can run their own models, we're democratizing access to AI capabilities.

🤗 At Hugging Face, we're proud to be at the center of this transformation. Our platform hosts the world's largest library of freely downloadable models, making cutting-edge AI accessible to everyone -- from researchers and developers to curious individuals who want to experiment on their laptops or even smartphones.

The technical barriers are crumbling: what once demanded $$$ server racks now runs on ordinary consumer hardware. Today, anyone with basic computer skills can download a model, run it locally, and maintain complete control over their AI interactions. No sudden algorithm changes, no data harvesting, no corporate gatekeeping.
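
To make that concrete, here is a minimal sketch of running a small open model entirely on your own machine with the transformers library. The model ID is just an example of a small instruction-tuned model; swap in whatever fits your hardware.

```python
# Minimal local-inference sketch (assumes `pip install transformers torch`;
# the model ID below is an example small model, not a specific recommendation).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-360M-Instruct",  # example; any small local-friendly model works
)

prompt = "In one sentence, why does running language models locally matter?"
# Everything below runs on your machine: the prompt and output never leave it.
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```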

This isn't just about technical convenience; it's about technological sovereignty. When AI power is concentrated in a few hands, we risk creating new forms of digital dependency. Local models offer a path toward genuine AI literacy and independence.

🚀 The future of AI should be open, accessible, and in the hands of the many, not the few. What are your thoughts on AI democratization? Have you experimented with local models yet?

giadap posted an update about 1 month ago

I've been posting bits and pieces about this research, but now I can finally say: new paper alert 🚨

My colleague @brunatrevelin and I just shared a paper exploring why traditional consent frameworks are breaking down in AI contexts (a forthcoming chapter in an edited volume).

The current model places impossible burdens on users to manage countless consent decisions. Meanwhile, AI systems learn to mimic our voices and writing styles from data we unknowingly provided years ago.

What's next? We need to shift from individual responsibility to collective accountability.

This means:
- Organizations designing systems that respect human agency by default
- Developers building ethics into models from the start
- Policymakers creating frameworks beyond minimal compliance

Blog post: https://huggingface.co/blog/giadap/consentful-ai
Paper: Can AI be Consentful? (arXiv:2507.01051)

giadap posted an update 2 months ago

🗣️ Whose voice do we hear when AI speaks?

Every language carries its own cultural values and worldviews. So, when we build AI systems, we're not just deciding how they speak but also whose perspectives they represent.

Even choosing which dialect to train on in Norway becomes a question of inclusion and power. In Kenya, will AI speak Swahili from Nairobi or coastal regions? What about indigenous languages with rich oral traditions but limited written text, like Quechua in Peru or Cherokee in North America?

The path forward? Building WITH communities, not just FOR them. Working with local partners (libraries, universities, civil society), testing for cultural alignment, and asking hard questions about representation.

Just published some thoughts on this after my keynote in Norway a few weeks ago: https://huggingface.co/blog/giadap/when-ai-speaks