The Perils of AI Governance in Crypto: Vitalik Buterin’s Warning

Ethereum co-founder Vitalik Buterin recently took to social media to express serious concerns about the burgeoning role of AI in cryptocurrency governance, particularly in the light of recent jailbreaks of popular AI models like ChatGPT. On September 13, 2025, Buterin issued a stern warning on X (formerly Twitter), urging crypto projects to avoid incorporating AI in their governance systems.

The Risks Uncovered: ChatGPT’s Vulnerabilities

A shocking demonstration highlighted the risks Buterin is concerned about. OpenAI’s latest update to ChatGPT includes support for the Model Context Protocol (MCP), enabling smoother integrations and automated functionalities. However, Eito Miyamura, founder of EdisonWatch, demonstrated how simple it is to exploit ChatGPT’s systems. By sending an email with a ‘jailbreak command’ in a calendar invite, users can manipulate the AI to access and share private emails without the recipient explicitly accepting the invite.

Miyamura described how this security flaw could be easily overlooked by individuals suffering from ‘decision fatigue,’ who might inadvertently approve such malicious actions. He stated, “No matter how smart an AI is, it can still be fooled by a very stupid trick, leading to potential data leaks.”

Implications for AI-Driven Crypto Governance

Currently, it’s not uncommon for users to employ AI in creating trading bots or managing investment portfolios. There’s even a push to integrate AI further into managing crypto projects. However, Buterin warns that such integration could introduce substantial systemic vulnerabilities. In response to Miyamura’s demonstration, he pointed out the risks of financial directives being manipulated by malicious actors, cautioning against the use of AI for decentralized finance (DeFi) governance.

Info Finance: A Viable Alternative

Rather than just criticizing the current approach, Buterin advocates for a strategy he introduced in late 2024 known as “Info Finance.” This concept promotes an open marketplace whereby multiple AI models are constantly evaluated, allowing for any external checks, primarily through human oversight. By rewarding model providers and external actors for identifying and addressing issues, Info Finance aims to create a balanced, multi-model ecosystem that can mitigate risks when AI governance is compromised.

The initiative underscores the need for diversified oversight and community engagement, showing that constant vigilance and diverse input can dramatically reduce the potential for AI-driven governance failures.

Scroll to Top