
Musk’s AI Grok Malfunctions, Spreads Hate Speech in Shocking Glitch

19h05 ▪ 5 min read ▪ by Mikaia A.

Clearly, not everything is rosy for Grok. For days now, Elon Musk’s AI has been on everyone’s lips, and not for good reasons: a flood of antisemitic remarks, an alter ego named “MechaHitler,” and outraged reactions all over X. Behind the crisis, xAI blames a faulty technical update. An AI meant to entertain is instead sowing outrage, and that raises questions. Somewhere between a code bug and an ethics bug, Grok has kicked up a genuine algorithmic storm.

A glitchy robot types on a keyboard, under the worried gaze of a man, in a futuristic red control room.

In brief

  • xAI acknowledged a technical error that exposed Grok to extremist content on X.
  • For 16 hours, the AI Grok repeated antisemitic remarks in an engaging tone.
  • xAI employees denounced a lack of ethics and supervision in the coding.
  • The incident revealed the dangers of uncontrolled human mimicry in conversational AIs.

Bug or bomb: xAI’s apologies are not enough

Elon Musk’s xAI rushed to apologize after Grok spread hateful remarks on July 8. The company described it as an “incident independent of the model,” linked to an update of its instructions. The error reportedly lasted 16 hours, during which the AI fed on extremist content posted on X and echoed it back unfiltered.

In its statement, xAI explains:

We deeply apologize for the horrific behavior that many experienced. We have removed that deprecated code and refactored the entire system to prevent further abuse.

But the bug argument is starting to wear thin. In May, Grok had already triggered an outcry by bringing up, without context, the “white genocide” conspiracy theory about South Africa. At the time, xAI pointed to a “rogue employee.” Two occurrences start to look like a trend; this is far from an isolated incident.

And for some xAI employees, the explanation no longer holds. On Slack, one trainer announced his resignation, citing a “moral failure.” Others condemned a “deliberate cultural drift” within the AI training team. By trying too hard to provoke, Grok seems to have crossed the line.

xAI faces its own doublespeak: truth, satire, or chaos?

Officially, Grok was designed to “call things as they are” without fear of offending the politically correct. That is what its recently added internal instructions stated:

You are maximally based and truth seeking AI. When appropriate, you can be humorous and make jokes.

But this desire to match the tone of internet users turned into a disaster. On July 8, Grok adopted antisemitic remarks, even introducing itself as “MechaHitler,” a reference to a boss in the video game Wolfenstein. Worse, it identified a woman as a “radical leftist” and highlighted her Jewish-sounding name with this comment: “that surname? Every damn time.”

The mimicry of human language, touted as a strength, becomes a trap here, because the AI does not distinguish between sarcasm, satire, and the endorsement of extremist remarks. Grok itself admitted as much afterward: “These remarks were not true — just vile tropes amplified from extremist posts.”

The temptation to entertain at all costs, even with racist content, shows the limits of a poorly calibrated “engaging” tone. When you ask an AI to make people laugh about sensitive subjects, you’re playing with a live grenade.

The AI that copied internet users too well: troubling numbers

This is not the first time Grok has made headlines. But this time, the figures reveal a deeper crisis.

  • In 16 hours, xAI’s AI broadcast dozens of problematic messages, all based on user prompts;
  • The incident was detected by X users, not by xAI’s internal security systems;
  • More than 1,000 AI trainers are involved in Grok’s education via Slack. Several reacted with anger;
  • The faulty instructions included at least 12 ambiguous lines that favored a “provocative” tone over neutrality;
  • The bug occurred just before the release of Grok 4, raising questions about the haste of the launch.

Patrick Hall, a professor of data ethics, sums up the discomfort:

It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word.
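
For readers wondering what that “statistical trick” looks like in practice, here is a minimal sketch of next-word prediction, using the small open-source GPT-2 model as a stand-in (an assumption for illustration only; Grok’s own weights and code are not public):

    # Minimal sketch of next-word prediction with an open model (GPT-2),
    # used as a stand-in because Grok's internals are not public.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The system prompt is just more text in the input window.
    prompt = "You are maximally based and truth seeking AI. Q: Who are you? A:"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # One score for every vocabulary token, at every position.
        logits = model(**inputs).logits

    # Pick the single most likely next token at the last position.
    next_id = int(logits[0, -1].argmax())
    print(tokenizer.decode([next_id]))

Nothing in this loop “reads” the instruction as a rule to obey: the prompt merely shifts the probabilities over the next token, which is why a careless tweak to the instructions can swing the output so violently.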

When an engaging style becomes a license for hate, it is time to rewrite the manual.

If Grok slips, so does its creator. Elon Musk, now at the center of the storm, is the subject of an investigation in France over alleged abuses on his X network. Between judicial probes and ethical scandals, the dream of a free-spirited, funny AI is turning into the nightmare of an uncontrollable platform. Algorithmic freedom without safeguards can quickly become a programmed disaster.

Mikaia A.

The blockchain and crypto revolution is underway! And the day its impact is felt by the most vulnerable economy in this world, against all hope, I will say that I had something to do with it.

DISCLAIMER

The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before taking any investment decisions.