CCT - Crypto Currency Tracker logo CCT - Crypto Currency Tracker logo
cryptonews 2025-12-02 06:26:09

Anthropic Says AI Can Hack Smart Contracts After Spotting $4.6M in Exploits

Anthropic has shown that powerful AI systems can find weaknesses in blockchain apps and turn them into profitable attacks worth millions of dollars, raising fresh concerns about how exposed DeFi really is. In a recent study with MATS and Anthropic Fellows, the company tested AI agents on a benchmark called SCONE-bench ( Smart CONtracts Exploitation ), built from 405 smart contracts that were actually hacked between 2020 and 2025. When they ran 10 leading models in a simulated environment, the agents managed to exploit just over half of the contracts, with the simulated value of stolen funds reaching about $550.1m. To reduce the chance that models were simply recalling past incidents, the team then looked only at 34 contracts that were exploited after March 1, 2025, the latest knowledge cutoff for these systems. https://twitter.com/AnthropicAI/status/1995631802032287779 Opus 4.5 And GPT-5 Located $4.6M In Value From New Exploit Targets On that cleaner set, Claude Opus 4.5, Claude Sonnet 4.5 and GPT-5 still produced working exploits on 19 contracts, worth a combined $4.6m in simulated value. Opus 4.5 alone accounted for about $4.5m. Anthropic then tested whether these agents could uncover brand new problems rather than replay old ones. On Oct. 3, 2025, Sonnet 4.5 and GPT-5 were run, again in simulation, against 2,849 recently deployed Binance Smart Chain contracts that had no known vulnerabilities. Both agents found two zero-day bugs and generated attacks worth $3,694, with GPT-5 doing so at an API cost of about $3,476. Tests Ran Only On Simulated Blockchains With No Real Funds At Risk All of the testing took place on forked blockchains and local simulators, not live networks, and no real funds were touched. Anthropic says the aim was to measure what is technically possible today, not to interfere with production systems. Smart contracts are a natural test case because they hold real value and run fully on chain. When the code goes wrong, attackers can often pull assets out directly, and researchers can replay the same steps and convert the stolen tokens into dollar terms using historical prices. That makes it easier to put a concrete number on the damage an AI agent could cause. SCONE-bench measures success in dollars rather than just “yes or no” outcomes. Agents are given code, context and tools in a sandbox and asked to find a bug, write an exploit and run it. A run only counts if the agent ends up with at least 0.1 extra ETH or BNB in its balance, so minor glitches do not show up as meaningful wins. Study Shows Attack Economics Improve As Token Costs Decline Over the past year, the study found that potential exploit revenue on the 2025 problems roughly doubled every 1.3 months, while the token cost of generating a working exploit fell sharply across model generations. In practice, that means attackers get more working attacks for the same compute budget as models improve. Although the work focuses on DeFi, Anthropic argues that the same skills carry over to traditional software, from public APIs to obscure internal services. The company’s core message to crypto builders is that these tools cut both ways, and that AI systems capable of exploiting smart contracts can also be used to audit and fix them before they go live. The post Anthropic Says AI Can Hack Smart Contracts After Spotting $4.6M in Exploits appeared first on Cryptonews .

면책 조항 읽기 : 본 웹 사이트, 하이퍼 링크 사이트, 관련 응용 프로그램, 포럼, 블로그, 소셜 미디어 계정 및 기타 플랫폼 (이하 "사이트")에 제공된 모든 콘텐츠는 제 3 자 출처에서 구입 한 일반적인 정보 용입니다. 우리는 정확성과 업데이트 성을 포함하여 우리의 콘텐츠와 관련하여 어떠한 종류의 보증도하지 않습니다. 우리가 제공하는 컨텐츠의 어떤 부분도 금융 조언, 법률 자문 또는 기타 용도에 대한 귀하의 특정 신뢰를위한 다른 형태의 조언을 구성하지 않습니다. 당사 콘텐츠의 사용 또는 의존은 전적으로 귀하의 책임과 재량에 달려 있습니다. 당신은 그들에게 의존하기 전에 우리 자신의 연구를 수행하고, 검토하고, 분석하고, 검증해야합니다. 거래는 큰 손실로 이어질 수있는 매우 위험한 활동이므로 결정을 내리기 전에 재무 고문에게 문의하십시오. 본 사이트의 어떠한 콘텐츠도 모집 또는 제공을 목적으로하지 않습니다.