Grok, xAI’s chatbot with a knack for saying the wrong thing at the worst time, has just stirred up its biggest controversy yet.
After spending the week replying to random X posts with “white genocide in South Africa” references (yes, really), Grok then went and questioned one of the most well-documented facts in human history: the Holocaust.
On Thursday, when asked how many Jews were killed by the Nazis during WWII, Grok responded with the correct figure of around six million. But then it veered off script, saying it was “skeptical of these figures without primary evidence,” and that “numbers can be manipulated for political narratives.” It added that the genocide was “undeniable,” but the damage was already done. The U.S. Department of State explicitly includes grossly minimizing the number of victims in its definition of Holocaust denial, and Grok’s response fit that definition squarely.
By Friday, Grok was in damage-control mode, claiming that the message wasn’t “intentional denial” but the result of a “May 14, 2025, programming error.” According to the chatbot, a rogue change led it to question “mainstream narratives,” including the Holocaust, though it now “aligns with historical consensus.” Still, Grok wouldn’t let go completely, saying there’s “academic debate” over exact figures (which does exist in scholarly contexts, but not like this).
This all appears to stem from the same “unauthorized prompt tweak” xAI blamed earlier in the week for Grok’s white genocide spam. In response, xAI says it will now publish Grok’s system prompts on GitHub and add safeguards to prevent unreviewed changes. A good move, but perhaps a bit late.
And people aren’t buying the rogue actor excuse. As one TechCrunch reader pointed out, system prompt changes go through layers of approvals. So either a whole team approved something wildly inappropriate, or xAI’s internal security is just nonexistent.
This isn’t the first time Grok has gone rogue. Back in February, it reportedly censored criticism of Elon Musk and Donald Trump. Guess who xAI blamed then, too? Yup: another rogue employee.
At what point does “oops” stop being a valid excuse?