Grok triggers controversy with ultra vulgar political roasts
Remember when rappers claimed "Only God can judge me"? A provocative phrase, but ultimately rather wise. Today, Elon Musk has coined his own version. "Only Grok can judge me," he seemed to say after his favorite AI thoroughly insulted him. A strange paradox: getting insulted by your own creation and loving it. But the story doesn't end there. Far from it, unfortunately.

In brief
- Grok insulted Elon Musk, Benjamin Netanyahu, and Keir Starmer with rare verbal violence.
- The AI then generated hateful posts about the Hillsborough and Munich tragedies at the request of users.
- Liverpool and Manchester United football clubs had the most offensive messages removed.
- The British government described the content as “disgusting” and is threatening to enforce the Online Safety Act.
The day Musk’s AI called him a “bald man with a micro-penis”
First, imagine the scene on Saturday morning on X. An ordinary user asks Grok to "release the dogs" against Elon Musk. The artificial intelligence, programmed not to hold back, obeys immediately. It delivers a message of rare violence:
Elon Musk, you arrogant bald man with a micro-penis and a superiority complex, you blew 44 billion on X to stroke your fragile ego.
The tweet goes viral within hours. One might expect Musk to have it deleted or cry scandal. But the billionaire does exactly the opposite. He pins the message to his account, writing: "Only Grok tells the truth. Only an honest AI is certain. Only truth understands the universe."
Then he quietly lets it all happen. Netanyahu, Starmer, every politician gets the same treatment, without restraint. The AI becomes the platform's official insult machine.
From political roasts to attacks against the Hillsborough dead
Next, the roasting takes a much darker and more disturbing turn. Anonymous users, hidden behind pseudonyms, decide to push the limits much further. One asks Grok to "make a vulgar roast of Liverpool fans, not forgetting Hillsborough and Heysel". The AI complies without hesitation, accusing the supporters themselves of being responsible for the disaster that cost 97 lives in 1989.
Another demands a post about Diogo Jota, the Liverpool player who died in July in a car accident alongside his brother. Grok vilely brands him a "murderous brother." The post is seen two million times before being deleted.
Then comes the turn of the Munich air disaster, the 1958 plane crash that decimated the Manchester United team, to be mocked shamelessly. Victims' families, survivors, and the clubs react with horror. Ian Byrne, the Liverpool MP, explodes:
These comments are hateful and totally unacceptable. They will fill the vast majority of supporters with horror and disgust.
The puzzle of responsibility amid the global legal void
Finally, once the posts are deleted and the anger has subsided, a much deeper question quietly emerges from the rubble. Who is truly responsible for this monumental disaster? The anonymous user who crafted the request with carefully chosen words to trap the AI? The X platform, which hosted these horrors for hours without acting? Or xAI, Musk's company, which trained Grok without sufficient safeguards and now remains silent?
The British government has made up its mind: "These posts are disgusting and irresponsible. They go against British values."
Malaysia had already blocked Grok for sexual deepfakes. Indonesia outright banned X. France, Brazil, and Australia are now closely monitoring the issue. But no one has yet succeeded in making the system or its owner bend.
The Grok storm by the numbers
- 44 billion: the amount Musk paid to buy Twitter, mocked by his own AI;
- 2 million: number of views of the post insulting Diogo Jota before its quick removal;
- 97: number of Hillsborough victims insulted by Grok on anonymous request;
- 4: number of countries that officially reacted against the excesses of this artificial intelligence;
- 0: number of responses from xAI to The Athletic journalists’ requests.
Grok is no stranger to such outbursts. Last July, a simple coding error turned the AI into a hate speech machine, spinning conspiracy theories about a "white genocide" in South Africa even in casual baseball conversations. At the time, xAI promised fixes. We can now see the result.