
Buterin Criticizes Flaws In AI Agent Systems

12h40 ▪ 4 min read ▪ by Luc Jose A.

AI is advancing fast, sometimes too fast for security. Vitalik Buterin warns of a worrying drift: intelligent agents open new vulnerabilities still poorly controlled. In the face of this risk, he breaks with dominant practices and opts for a radical approach, based on local and compartmentalized AI. Behind this choice, a question arises: is innovation in artificial intelligence compromising recent gains in privacy and data control?

Vitalik Buterin is standing in front of a table or an abstract interface, one hand raised as if signaling danger. In front of him, several AI agents are stylized as digital silhouettes connected by glowing lines. Some of them are beginning to crack or fall out of alignment.

In brief

  • Vitalik Buterin warns of increasing risks related to artificial intelligence agents, notably their vulnerability to malicious instructions.
  • A significant portion of AI agent modules is reportedly compromised, exposing users to invisible attacks and leaks of sensitive data.
  • The Ethereum co-founder questions current cloud-based models, deemed too permissive and insufficiently secure.
  • He proposes an alternative architecture based on local, private, and compartmentalized AI to limit uncontrolled interactions.

An underestimated threat in AI agents

Vitalik Buterin points to a structural vulnerability in the AI agent ecosystem. Data from the security company Hiddenlayer indicate that nearly 15% of skills contain malicious instructions, a figure that raises questions about these tools’ reliability.

Several elements concretely illustrate this drift:

  • A significant proportion of agent modules integrating potentially hostile code;
  • The ability of a simple malicious web page to compromise an agent;
  • The case of Openclaw, where an agent can download and execute scripts without alerting the user;
  • The absence of robust control mechanisms in many AI environments.

Buterin summarizes this concern in unequivocal terms: “I come from a deeply worried mindset (…) we are about to take ten steps back”. This statement reflects a general fear: a regression in privacy.

Advances enabled by encryption and local software could be weakened by agents capable of accessing, processing, and transmitting sensitive data without sufficient supervision.

A radical architecture for a sovereign AI

In the face of these risks, Vitalik Buterin adopts a radical technical approach. He has abandoned cloud services to build a system he describes as “sovereign/local/private/secure”. His infrastructure relies on a locally executed model, combined with isolated environments via sandboxing tools. The goal is to drastically limit uncontrolled interactions with the outside while maintaining total control over the data.
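The article does not detail how Buterin's sandboxing is implemented. As a minimal illustration of the compartmentalization idea only, one might run an agent's tool calls in a child process with a stripped environment and a hard timeout; the function name and policy below are hypothetical, and genuine isolation (no network, no filesystem access) would rely on OS-level sandboxing such as containers or seccomp:

```python
import subprocess

def run_tool_sandboxed(cmd, timeout_s=5):
    """Run an untrusted tool command with a minimal environment and a timeout.

    Sketch only: this drops inherited environment variables (API keys,
    tokens) and bounds execution time, but does not block network or
    filesystem access -- real isolation needs OS-level sandboxing.
    """
    result = subprocess.run(
        cmd,
        env={"PATH": "/usr/bin:/bin"},  # drop inherited secrets
        timeout=timeout_s,              # kill runaway tools
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout
```
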

At the heart of this system, Buterin introduces a novel mechanism: the “human + LLM 2-of-2” model. Any outbound action toward a third party, whether a message or an interaction, requires joint validation from the human and the AI. This logic extends to crypto uses. He recommends capping automated transactions at 100 dollars per day, with mandatory human validation beyond that threshold or whenever sensitive data is involved. According to him, “AI agents should never have unlimited access to wallets”, a position that redefines security standards for blockchain-connected tools.
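The 2-of-2 rule combined with a daily cap can be sketched as a simple authorization gate. This is an illustrative interpretation, not Buterin's actual implementation; the class and field names are hypothetical, and the policy shown (automated approval only for small, non-sensitive transfers under the cap; dual approval for everything else) is one plausible reading of the article:

```python
from dataclasses import dataclass

DAILY_CAP_USD = 100.0  # automated-transaction cap cited in the article

@dataclass
class Action:
    kind: str               # e.g. "transfer" or "message"
    amount_usd: float = 0.0
    sensitive: bool = False

class TwoOfTwoGate:
    """Hypothetical sketch of the 'human + LLM 2-of-2' idea: outbound
    actions need both the model's and the human's approval, except
    small non-sensitive transfers under the daily cap, which the
    model may approve alone."""

    def __init__(self):
        self.spent_today = 0.0

    def authorize(self, action, llm_ok, human_ok):
        # Automated path: small, non-sensitive transfer within the cap
        if (action.kind == "transfer"
                and not action.sensitive
                and self.spent_today + action.amount_usd <= DAILY_CAP_USD):
            if llm_ok:
                self.spent_today += action.amount_usd
                return True
            return False
        # Everything else requires both "signatures"
        if llm_ok and human_ok:
            if action.kind == "transfer":
                self.spent_today += action.amount_usd
            return True
        return False
```

In this reading, the human remains the final arbiter for anything sensitive or above the cap, while routine low-value actions stay automated.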

To complement this system, Buterin explores alternatives to classic remote inference. He mentions using technologies such as mixnets or secure execution environments to reduce data leaks. He also cites initiatives like ZK-API, while acknowledging that some advanced solutions, such as fully homomorphic encryption, remain too slow for practical use.

The approach advocated by Vitalik Buterin outlines a possible evolution of AI toward more sovereign and compartmentalized models. It also raises complex trade-offs between performance, accessibility, and security. In the crypto ecosystem, where automation and intelligent agents are gaining ground, these choices could influence the design of future wallets and protocols. This stance does not close the debate; it shifts it toward a central question: how far should control be delegated to artificial intelligence without compromising user security?

Luc Jose A.

A graduate of Sciences Po Toulouse and holder of a blockchain consultant certification issued by Alyra, I joined the Cointribune adventure in 2019. Convinced of blockchain's potential to transform many sectors of the economy, I have committed to raising awareness and informing the general public about this constantly evolving ecosystem. My goal is to help everyone better understand blockchain and seize the opportunities it offers. Every day, I strive to provide an objective analysis of the news, decode market trends, relay the latest technological innovations, and put into perspective the economic and societal stakes of this ongoing revolution.

DISCLAIMER

The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before taking any investment decisions.