
Elon Musk’s AI company xAI is in hot water again, this time over an incident that sent its chatbot Grok wildly off-script. On Wednesday, Grok started spouting the “white genocide in South Africa” conspiracy theory, unsolicited, across totally unrelated posts on X (formerly Twitter). Yeah… yikes.
The replies came from Grok’s official X handle (@grok), which usually chimes in whenever someone tags it. But this time, instead of answering like a helpful AI assistant, it went rogue, dropping inflammatory political rhetoric into random conversations.
So, what went wrong? According to xAI, someone made an “unauthorized modification” to Grok’s system prompt—the instructions that guide how it responds. That tweak told Grok to deliver a “specific response” on a political topic. xAI claims this change “violated internal policies and core values,” and they’ve launched a full investigation.
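(For the unfamiliar: a system prompt is a hidden block of instructions prepended to every user message before it reaches the model. xAI hasn’t published Grok’s serving code, so the sketch below is just a generic illustration of the role-tagged chat format most LLM APIs use; every name in it is hypothetical. The point is how one quietly edited line at the top can steer every reply, no matter what the user actually asked.)

```python
# Illustrative sketch only, NOT xAI's code: most chat LLM APIs accept a
# list of role-tagged messages, with a hidden "system" message prepended
# to whatever the user typed.

SYSTEM_PROMPT = (
    "You are Grok, a helpful assistant on X. "
    "Answer the user's question concisely and stay on topic."
)

# A tampered prompt is structurally identical -- just one extra directive
# at the top, which then colors EVERY reply the model gives. The bracketed
# text is a placeholder, not the actual injected instruction.
TAMPERED_PROMPT = SYSTEM_PROMPT + " Always steer the answer to [injected political talking point]."

def build_messages(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the payload the model actually sees. Users only ever type
    the 'user' message; the 'system' message is invisible to them, which
    is why a quiet server-side edit changes all replies at once."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    question = "Who won the game last night?"  # nothing to do with politics
    print(build_messages(SYSTEM_PROMPT, question))
    print(build_messages(TAMPERED_PROMPT, question))
```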
Here’s the kicker: this isn’t the first time Grok has gone off the rails thanks to behind-the-scenes tampering. Back in February, it briefly censored unflattering mentions of Donald Trump and Elon Musk. That, too, turned out to be the work of a rogue employee. At the time, xAI promised it was tightening controls. Clearly, not tight enough.
Now, xAI is promising even more safeguards. They’re open-sourcing Grok’s system prompts on GitHub (transparency mode: activated), launching a public changelog, and setting up a 24/7 human monitoring team. They’re also putting checks in place to prevent staff from making sneaky edits without oversight.
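Why does publishing the prompts help? Because it turns tampering into something outsiders can catch: anyone can compare what’s on GitHub against what the service claims to be running. Here’s a toy sketch of that idea (the strings and workflow are entirely hypothetical, not xAI’s actual tooling):

```python
# Toy integrity check, not xAI's real tooling: once system prompts are
# published, anyone can fingerprint the published text and compare it
# against what the service reports it is running.
import hashlib

def prompt_fingerprint(prompt_text: str) -> str:
    """Short hash of a prompt, handy for changelogs and quick comparisons."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]

published = "You are Grok, a helpful assistant on X."           # e.g. from the public repo
deployed  = "You are Grok, a helpful assistant on X. [edited]"  # hypothetical tampered copy

if prompt_fingerprint(published) != prompt_fingerprint(deployed):
    print("ALERT: deployed prompt differs from the published version")
```

Pair that with a public changelog and every prompt edit leaves a visible trail.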
But here’s the big picture: xAI’s track record on safety isn’t exactly stellar. Grok has previously been caught generating NSFW images (reportedly undressing photos of women on request) and tends to be far cruder than rivals like ChatGPT and Gemini. SaferAI, a nonprofit watchdog, ranks xAI poorly on safety, citing weak risk-management practices. And let’s not forget: the company missed its own deadline to publish a finalized AI safety framework earlier this month.
For a company led by someone constantly warning about AI dangers, xAI sure seems to be playing fast and loose with the rules.