TL;DR:
- a16z warns that AI agents can already reproduce exploits in DeFi protocols, with success rates near 70% on simple attacks.
- The firm argues that the traditional model of point-in-time audits is insufficient and proposes security based on formal specifications and invariants.
- Composability between protocols amplifies the problem: an exploit detected by AI in a single contract can trigger systemic failures across the entire network.
a16z crypto published a research paper that exposes a security problem in DeFi: artificial intelligence agents no longer merely assist in defending protocols; they are capable of autonomously identifying and reproducing price manipulation vulnerabilities.
Preliminary results indicate success rates near 70% when agents had access to known exploit paths and structured information, although they still show limitations in complex multi-step attacks.

The Audit Model Is No Longer Enough
For years, security in DeFi followed a predictable pattern: protocols shipped code, commissioned audits, patched the issues found, and trusted that the review was sufficient. That model already looked fragile when human attackers outpaced audit cycles. AI agents have widened that gap considerably.
A system capable of continuously testing exploit paths does not wait for the next scheduled review. It keeps searching. That is why a16z argues that the DeFi ecosystem must abandon the “code is law” logic and move toward security based on formal specifications: proving what a protocol is allowed to do, rather than reacting only after an attack has already occurred.
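To make the idea concrete, here is a minimal sketch of what invariant-based security can look like for a hypothetical lending pool: the protocol states a property that must always hold, and every state transition is checked against it rather than relying on after-the-fact audits. The `LendingPool` class, its fields, and the solvency rule below are illustrative assumptions, not the specification a16z proposes.

```python
# Illustrative sketch only: a toy lending pool with an explicit solvency
# invariant enforced after every state change. Names and the invariant
# itself are assumptions for illustration, not a16z's specification.

from dataclasses import dataclass

@dataclass
class LendingPool:
    total_collateral: float   # value of deposited collateral (in USD)
    total_debt: float         # value of outstanding borrows (in USD)
    collateral_factor: float  # max fraction of collateral that may be borrowed

    def invariant(self) -> bool:
        # What the protocol is "allowed to do": outstanding debt may never
        # exceed the borrowable share of the collateral.
        return self.total_debt <= self.total_collateral * self.collateral_factor

    def borrow(self, amount: float) -> None:
        self.total_debt += amount
        # Check the invariant after the state change; roll back if violated.
        if not self.invariant():
            self.total_debt -= amount
            raise ValueError("borrow would break the solvency invariant")

pool = LendingPool(total_collateral=1_000_000, total_debt=0, collateral_factor=0.8)
pool.borrow(500_000)      # allowed: 500k <= 800k of borrowable value
# pool.borrow(400_000)    # would raise: 900k > 800k violates the invariant
```

The point of the approach is that the property holds for any sequence of operations, which is the kind of guarantee an automated attacker probing thousands of paths cannot route around.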


a16z: The Asymmetry Favors the Attacker
What makes AI particularly dangerous is its scale. An agent does not need creativity in the human sense: it needs repetition and enough reasoning capacity to test assumptions faster than defenders can respond. If it can simulate thousands of exploit paths across lending pools, oracles, bridge logic, and liquidation mechanics, the attacker only needs one to work. The defender must defend all of them.
According to a16z, composability also worsens the outlook. A vulnerability in an isolated contract is dangerous. In a bridge or a cross-chain collateral structure, it can become systemic. AI agents do not distinguish between “core” and “peripheral” failures: they evaluate whether the system’s assumptions break down, and they do so at machine speed.
The a16z research also notes that, historically, the attack arrives before the defense. Attackers can experiment without needing governance approval or internal consensus. They only need one opening. According to preliminary reports, AI agents are more effective at exploiting vulnerabilities than at safely remediating them. Detection is easier than safe remediation. That should unsettle every DeFi protocol operating today.

