Anthropic's recent findings reveal the alarming offensive potential of AI models in the world of smart contracts. In an evaluation of ten advanced AI systems, including Llama 3 and GPT-5, 405 historical smart contract exploits were analyzed. The models successfully reproduced 207 of them, amounting to an alarming $550 million in simulated stolen funds. This confirms that AI can be an efficient tool for both defense and offense.
In addition, these AI agents are increasingly identifying new zero-day vulnerabilities, including recently discovered flaws in Binance Smart Chain contracts. The speed at which these systems detect and exploit weaknesses is staggering, which underscores how quickly vulnerabilities must be patched before they are abused. For investors, this means paying closer attention to the security measures surrounding smart contracts to prevent potential asset loss.
The ease with which these AI models find vulnerabilities isn't just interesting from a technical perspective. David Schwed, COO of SovereignAI, points out that automated attacks scale easily because many vulnerabilities are publicly accessible, allowing criminals to scan systems around the clock for potential targets. This means that even projects with a low Total Value Locked (TVL) are not safe.
Anthropic points out that its AI agents are not only capable of attacking smart contracts; the same technology can be used to develop security measures. This calls for an acceleration of defensive innovation so that good actors can keep pace with developments on the offensive side. In recent years, the cost of carrying out such exploits with AI models has dropped dramatically, by 70.2% with the latest model generations, meaning the barrier to attack keeps lowering.
These findings paint not only a threatening picture, but also a pragmatic one. Schwed argues that most vulnerabilities are relatively easy to prevent with proper controls, internal testing, and real-time monitoring. Good actors have the same access to AI tools as criminals. It is therefore crucial that developers arm themselves against these risks and integrate such tools into their security processes, so they can combat the automated attacks of today and tomorrow.
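To make the idea of real-time monitoring concrete, here is a minimal sketch of a rule-based transaction monitor. This is an illustrative assumption, not anything from the Anthropic study: the transaction fields (`caller`, `value`), the threshold, and the allow-list are all hypothetical, and a real deployment would read events from a blockchain node rather than a Python list.

```python
# Hypothetical sketch: flag suspicious transactions in a stream.
# Field names, thresholds, and addresses are invented for illustration.

def flag_suspicious(tx, max_value=100_000, known_callers=frozenset()):
    """Return a list of reasons why a transaction looks suspicious."""
    reasons = []
    if tx["value"] > max_value:
        reasons.append("value exceeds threshold")
    if known_callers and tx["caller"] not in known_callers:
        reasons.append("unknown caller")
    return reasons

def monitor(stream, **rules):
    """Yield (tx, reasons) for every flagged transaction in the stream."""
    for tx in stream:
        reasons = flag_suspicious(tx, **rules)
        if reasons:
            yield tx, reasons

# Toy stream: the second transaction trips both rules.
txs = [
    {"caller": "0xabc", "value": 50_000},
    {"caller": "0xdef", "value": 250_000},
]
flagged = list(monitor(txs, max_value=100_000, known_callers={"0xabc"}))
```

In practice such rules would be one layer among several (audits, internal testing, anomaly detection), but even simple threshold checks running continuously narrow the window in which an automated exploit can drain funds unnoticed.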
What are the main findings of the Anthropic study?
The study reveals that AI systems successfully reproduced more than half of the smart contract exploits examined and even found new vulnerabilities, indicating that this technology is becoming increasingly accessible to attackers.
How are security experts responding to these results?
Security experts are concerned, but also point to the opportunity to use AI for defense. With the right measures and innovations, many vulnerabilities can be prevented.
What can investors do to protect themselves against these threats?
Investors should pay attention to the security of projects they invest in, and ensure that developers implement robust security protocols and technologies.