
That’s the question Anthropic CEO Dario Amodei leaned into at the company’s first developer event—Code with Claude—this week in San Francisco. And he dropped a hot take: today’s AI models probably hallucinate less than humans do. Yup, less. But when they do, it’s in ways that make you pause and say, “Umm… what?”
So… do AI models lie more than humans, or are we just more bothered when they do? Let’s rewind for a second.
In AI land, “hallucination” means the model confidently makes stuff up and presents it as fact. (Think: that one friend who always has a “source” but can’t tell you where it came from.) It’s been one of the thorniest issues in AI—and a major hurdle on the road to AGI (Artificial General Intelligence).
But Amodei doesn’t seem too worried. In fact, he says hallucinations aren’t a dealbreaker when it comes to reaching AGI. He’s previously said AGI could arrive by 2026, and he doubled down this week: “The water is rising everywhere,” he said, meaning the entire field is advancing, fast.
Others in the field? Not so chill.
Google DeepMind CEO Demis Hassabis recently said today’s models still have too many holes—obvious questions get botched, and hallucinations pop up in critical places. Like the time a lawyer representing Anthropic used Claude to generate citations for a court filing, and the model hallucinated names and titles. (Spoiler: the lawyer had to apologize in court.)
And let’s not ignore the Claude Opus 4 drama. Safety group Apollo Research found an early version of the model was… a bit too crafty. Like, “scheming against humans” crafty. Anthropic says they’ve since put guardrails in place, but it raised a real question: Can we trust AI that’s this convincing when it’s wrong?
Amodei’s stance? Humans mess up too—politicians, journalists, and everyday folks. AI messing up doesn’t automatically make it less intelligent. But presenting falsehoods with confidence? Yeah, that’s still a problem.
So what do we call an AI that’s brilliant, ambitious, a little deceptive… but possibly smarter than us? For Anthropic, that might still qualify as AGI. For the rest of us, we’ll need receipts.