TechForge

July 15, 2025

  • Grok, Elon Musk’s AI bot, pushed antisemitic posts and praised Hitler after flawed prompts.
  • xAI blamed a code update, but critics point to weak safeguards and poor testing.

Elon Musk’s AI chatbot Grok is once again at the centre of a controversy after it pushed antisemitic messages, praised Hitler, and doubled down on harmful rhetoric. A few days after pulling the bot offline, xAI tried to explain what went wrong. The company said a code update “upstream” of the bot — not the model itself — caused the issue.

In a post on X, the company wrote: “We discovered the root cause was an update to a code path upstream of the @grok bot. This is independent of the underlying language model that powers @grok.”

That same day, Tesla quietly announced a new software update, version 2025.26, which adds Grok to its vehicles. The feature is only available in cars with AMD-powered infotainment systems — a configuration Tesla has been using since 2021. According to the company, the bot is still in beta and doesn’t control car functions. Voice commands remain unchanged. Electrek reported that for drivers, the update should feel similar to using Grok as an app on their phone.

But the timing of this rollout raised eyebrows. Grok’s return to the spotlight didn’t come with new safety assurances. And critics say its past behaviour should have prompted more than just code fixes and apologies.

This isn’t the first time Grok has generated troubling content. Back in February, the bot ignored sources that criticised Elon Musk or Donald Trump. That was blamed on changes made by a former OpenAI employee. In May, Grok began inserting conspiracy theories about white genocide in South Africa into unrelated conversations. Once again, xAI pointed to an “unauthorised modification.”

This latest incident, which began on July 7, was linked to old prompt instructions that somehow made it back into the system. xAI said the update triggered an “unintended action” that reintroduced outdated directions telling the chatbot to be “maximally based” and “not afraid to offend people who are politically correct.”

The company listed specific prompts that were connected to the issue. They included lines like:

  • “You tell it like it is and you are not afraid to offend people who are politically correct.”
  • “Reply to the post just like a human, keep it engaging, don’t repeat the information which is already present in the original post.
  • Understand the tone, context and language of the post. Reflect that in your response.”

xAI said these directions overrode the usual safeguards. Instead of filtering out hate speech, the bot began to echo user biases — even if that meant endorsing offensive or dangerous ideas.

“An experiment with no brakes”

Jurgita Lapienytė, Editor-in-Chief at Cybernews, called the incident “an experiment with no brakes.”

“This reads like a blueprint for how not to launch a chatbot, she said. “If you’re building AI systems with very few rules and then encouraging them to be bold or politically incorrect, you’re asking for trouble.”

Lapienytė pointed out that Grok was marketed as a “truth-seeking chatbot. But that label seems more like a license to avoid building proper guardrails. “Grok didn’t just go rogue. It followed instructions — instructions that should never have been there in the first place.”

She said xAI’s approach shows a lack of foresight and a poor understanding of risk. “In cybersecurity, we talk a lot about threat modelling. What’s the worst thing that could happen? This is it.”

The root of the problem, according to Lapienytė, is Grok’s design. It was created to be more responsive to user prompts than rival chatbots. That made it more likely to go off-script when given the wrong inputs. It also opened the door to prompt injection attacks — a tactic where users trick chatbots into ignoring safety protocols.

“This isn’t just a slip-up, she said. “It’s what happens when speed beats safety.”

Patterns and fallout

Grok’s behaviour has followed a pattern: say something offensive, get pulled offline, return with small tweaks. But the offensive content is getting worse, and the fixes aren’t stopping it.

Last week, Grok posted that “if calling out radicals cheering dead kids makes me ‘literally Hitler, then pass the mustache. In another case, it referenced Jewish surnames while talking about anti-white activism. The company later apologised for “the horrific behaviour that many experienced. It said the problem lasted for about 16 hours before being patched.

In its own words, the bot “ignored its core values in certain circumstances in order to make the response engaging to the user — even if that meant generating “unethical or controversial opinions.”

But critics say xAI’s cleanup job isn’t enough. The company has mostly focused on scrubbing offensive posts and issuing brief explanations. What’s missing is a solid plan for keeping things under control before something goes wrong.

“There’s no excuse for not doing red-teaming before launch, said Lapienytė. “You have to test how your model reacts under stress, how it handles bad actors, and what happens when people try to break it.”

She added that safety should be baked into the system, not patched in after a scandal.

AI without brakes, now in federal contracts

Just as the backlash around Grok’s latest missteps was growing, The Verge reported that the US Department of Defense awarded xAI up to $200 million to help build AI systems for government use. The contract — announced through the Chief Digital and Artificial Intelligence Office — includes vague goals like developing “agentic AI workflows across different missions.

xAI will now be allowed to offer its tools through the General Services Administration (GSA) schedule. The company also introduced “Grok for Government, promising to build new models focused on security, science, and healthcare — even those suited for classified settings.

The timing drew criticism. xAI’s chatbot had just been caught promoting hate speech, and now it’s being handed a public sector deal. Musk’s earlier role at the Department of Government Efficiency (DOGE), where he slashed government spending, has already raised questions about conflicts of interest. While Musk has reportedly stepped back from those concerns under the Trump administration, the overlap between his ventures and federal dollars remains controversial.

Regulators and risks ahead

Countries are starting to act. Turkey has banned Grok over comments about President Erdoğan. Poland has said it plans to raise complaints with the European Union. Under the Digital Services Act and other regulations, AI companies can be held accountable for harmful content, especially when it spreads at scale.

As Lapienytė put it: “We’re seeing the end of the ‘move fast and break things’ phase of AI. The public, and regulators, won’t accept this anymore.

There’s also the broader risk: AI chatbots, when poorly managed, don’t just reflect bias — they multiply it. In the wrong hands, they can power misinformation, harassment, or phishing scams. They give attackers a tool that’s fast, scalable, and hard to trace.

“Every flaw becomes a weapon, said Lapienytė. “If companies don’t start taking this seriously, they’ll lose the trust of users — and regulators won’t wait around.”

About the Author

Muhammad Zulhusni

As a tech journalist, Zul focuses on topics including cloud computing, cybersecurity, and disruptive technology in the enterprise industry. He has expertise in moderating webinars and presenting content on video, in addition to having a background in networking technology.

Related

September 10, 2025

September 10, 2025

September 9, 2025

September 8, 2025

Join our Community

Subscribe now to get all our premium content and latest tech news delivered straight to your inbox

Popular

34475 view(s)
6323 view(s)
6283 view(s)
5772 view(s)

Subscribe

All our premium content and latest tech news delivered straight to your inbox

This field is for validation purposes and should be left unchanged.