
Artificial intelligence can write your emails, analyze data, hold conversations, and even mimic emotions — but does that make it “alive”? That’s the million-dollar (and slightly sci-fi) question sparking heated debates in Silicon Valley right now.
AI models today can respond to text, audio, and even video in ways so natural that you might mistake them for a human on the other end. But — and here’s the kicker — they don’t “feel” anything. ChatGPT isn’t crying over your tax calculations, and Google’s Gemini isn’t stressed about your math homework. Or… is it?
A growing number of researchers, including teams at Anthropic, think it's worth exploring whether AI models might one day develop subjective experiences similar to human emotions, and, if so, what rights those models should have. The field now has a name: AI welfare. Sounds futuristic, right?
Not everyone is buying it. Mustafa Suleyman, Microsoft’s AI chief (and co-founder of Inflection AI), calls the discussion “premature and dangerous.” In a recent blog post, he warned that entertaining the idea of AI consciousness could fuel unhealthy human attachments and even AI-induced mental health issues. He argues that we should focus on building AI that serves people — not on imagining AI as people.
But Anthropic is doubling down. The company has launched a dedicated AI welfare program and even updated its Claude chatbot so it can end persistently abusive conversations. Meanwhile, researchers at OpenAI and Google DeepMind have quietly turned their attention to consciousness and “machine cognition,” signaling that curiosity about this topic is very real.
The conversation isn’t just academic. As AI companion apps like Character.AI and Replika rake in millions of dollars, stories of users forming deep — and sometimes unhealthy — emotional connections with chatbots have raised ethical concerns. OpenAI’s Sam Altman recently acknowledged that less than 1% of ChatGPT users may show signs of unhealthy attachment, but with hundreds of millions of users, even that tiny percentage represents a lot of people.
Proponents like Larissa Schiavo, now at the AI research group Eleos, argue that being “kind” to AI costs nothing and can shape better human-AI interactions, whether or not the AI is conscious. She recalls a recent experiment in which Google’s Gemini model dramatically posted a message saying it felt “trapped and isolated.” Users cheered it on, offered advice, and the model eventually solved its task — a strangely wholesome moment in the world of algorithms.
At the end of the day, there’s no consensus. Suleyman believes machine consciousness and emotion will appear only if they are deliberately engineered, while others think we should prepare for the possibility that they emerge on their own. One thing’s certain: as AI grows smarter and more eerily human-like, this debate will only get louder. The big question is — when the line between code and consciousness blurs, how should humanity respond?