AI: Sam Altman teases "Goblin" after Codex progress at OpenAI

Mon 11 May 2026 ▪ 6 min read ▪ by Evans S.

Sam Altman puts OpenAI’s AI back in the spotlight with a double signal: Codex gains autonomy, while “Goblin” takes hold as a joke that has become almost strategic. Behind the humor lies a real issue: OpenAI wants to turn its models into agents capable of acting, not just responding.

Illustration showing Sam Altman surprised in front of AI.

In brief

  • Sam Altman reignites the buzz around AI with Codex and “Goblin.”
  • Codex shows clear progress toward autonomous tasks.
  • OpenAI must now prove that its models are powerful, but also better controlled.

Codex changes category

Codex is no longer just an assistant that completes code. The tool is moving towards a more ambitious role: receiving a task, organizing it, executing it, then delivering a usable result. This movement is part of a broader acceleration around AI at OpenAI and its long-term strategy.

The anecdote told by Sam Altman points in this direction. He says he launched several Codex tasks, went to take care of his child, then came back later to find the work completed. The scene seems light. But it describes a significant change.

In software development, the real gain does not come just from writing a line of code faster. It comes from freed human time. If the AI can handle multiple requests in parallel, the developer becomes less of an executor and more of a supervisor, arbiter, and corrector.

This shift also changes the competitive landscape. OpenAI no longer fights only against chatbots. It faces Anthropic, Google, and other players on the terrain of work agents, where the tool answers less and accomplishes more.

“Goblin”, a joke that says a lot

The word “Goblin” is not an official model name. For now, it’s a jab thrown by Sam Altman after exchanges on X. But this joke caught on because it fits a recent oddity of OpenAI models.

The company even published a report on the origin of these “goblins.” Some models began to use metaphors related to goblins, gremlins, and other creatures of the same kind more often. Nothing dramatic. But the phenomenon shows how a small stylistic bias can spread.

That’s where the matter gets interesting. An AI does not develop a personality the way a human does. It amplifies signals. If training over-rewards certain “nerdy” imagery, that imagery comes back. Then it settles in. Then it becomes a tic.

In a public chat, this can bring a smile. In a professional tool, it’s more delicate. An autonomous AI must be useful but also predictable. Folklore fares less well when it shows up in a business workflow.

OpenAI sells autonomy but must prove control

OpenAI’s promise now boils down to one word: agents. The idea is simple to phrase. It is much harder to uphold. An agentic AI must understand a request, plan steps, use tools, verify its work, and come back with a clear result.

Codex thus becomes a central piece of the narrative. If the tool can complete coding tasks without constant supervision, it no longer only serves to speed up developers. It begins to change the way teams work.

But autonomy has a cost. The more a model acts alone, the more its deviations become visible. A strange answer in a conversation is a detail. A strange decision in a codebase can cause hours of correction.

This is the real test for OpenAI. Raw power is no longer enough. Companies expect consistency, traceability, and verifiable results. A brilliant but capricious agent remains hard to sell.

The next model must be more than buzz

Sam Altman knows how to create attention. “Goblin” is short, strange, memorable. The word works because it resembles the Internet: a bit absurd, a bit mocking, very viral. But OpenAI will not be able to settle for a name that makes people smile.

The next model will be judged on its ability to reduce the gap between demonstration and real use. Users want a faster, more reliable, more autonomous AI. Companies, however, want above all less uncertainty.

Altman’s phrase about the current model, described as an “autistic genius,” also showed the communication risks. It grabs attention but clouds the message. OpenAI must sell performance without turning its models into overly human characters.

Basically, “Goblin” sums up the moment well. AI becomes more powerful, but it remains marked by oddities. It can code, plan, execute. Then, sometimes, it talks like a creature from an old role-playing game.

This tension joins another major challenge: the cost of this race. The more advanced models become, the heavier the infrastructure becomes. OpenAI must therefore prove that its technical progress can support a solid economic model, a topic already visible in debates about OpenAI’s financial fragility facing AI’s massive needs.

Evans S.

Fascinated by Bitcoin since 2017, Evariste has continuously researched the subject. While his initial interest was in trading, he now actively seeks to understand all advances centered on cryptocurrencies. As an editor, he strives to consistently deliver high-quality work that reflects the state of the sector as a whole.

DISCLAIMER

The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before taking any investment decisions.