Felix Pinkston
Feb 18, 2026 20:03
New Anthropic research shows Claude Code autonomy nearly doubled in three months, with experienced users granting more independence while maintaining oversight.
AI agents are operating independently for significantly longer stretches as users come to trust their capabilities, according to new research from Anthropic published February 18, 2026. The study, which analyzed millions of human-agent interactions, found that the longest-running Claude Code sessions nearly doubled from under 25 minutes to over 45 minutes between October 2025 and January 2026.
The findings arrive as Anthropic rides a wave of investor confidence, having just closed a $30 billion Series G round that valued the company at $380 billion. That valuation reflects growing enterprise appetite for AI agents, and this research offers the first large-scale empirical look at how humans actually work with them.
Trust Builds Gradually, Not Through Capability Jumps
Perhaps the most striking finding: the rise in autonomous operation time was smooth across model releases. If autonomy were purely a function of capability improvements, you'd expect sharp jumps when new models dropped. Instead, the steady climb suggests users are gradually extending trust as they gain experience.
The data backs this up. Among new Claude Code users, roughly 20% of sessions use full auto-approve mode. By the time users hit 750 sessions, that figure exceeds 40%. But here's the counterintuitive part: experienced users also interrupt Claude more frequently, not less. New users interrupt in about 5% of turns; veterans interrupt in roughly 9%.
What's happening? Users aren't abandoning oversight. They're shifting strategy. Rather than approving every action upfront, experienced users let Claude run and step in when something needs correction. It's the difference between micromanaging and monitoring.
Claude Knows When to Ask
The research revealed something unexpected about Claude's own behavior. On complex tasks, the AI stops to ask clarifying questions more than twice as often as humans interrupt it. Claude-initiated pauses actually exceed human-initiated interruptions on the most difficult work.
Common reasons Claude stops itself include presenting users with choices between approaches (35% of pauses), gathering diagnostic information (21%), and clarifying vague requests (13%). Meanwhile, humans typically interrupt to supply missing technical context (32%) or because Claude was running slow or doing too much (17%).
This suggests Anthropic's training for uncertainty recognition is working. Claude appears calibrated to its own limitations, though the researchers caution it may not always stop at the right moments.
Software Dominates, But Riskier Domains Emerge
Software engineering accounts for nearly 50% of all agentic tool calls on Anthropic's public API. That concentration makes sense: code is testable, reviewable, and relatively low-stakes if something breaks.
But the researchers found growing usage in healthcare, finance, and cybersecurity. Most actions remain low-risk and reversible; only 0.8% of observed actions appeared irreversible, such as sending customer emails. Still, the highest-risk clusters involved sensitive security operations, financial transactions, and medical records.
The team acknowledges limitations: many high-risk actions could be red-team evaluations rather than production deployments. They can't always tell the difference from their vantage point.
What This Means for the Industry
Anthropic's researchers argue against mandating specific oversight patterns, such as requiring human approval for every action. Their data suggests such requirements would create friction without safety benefits, since experienced users naturally develop more efficient monitoring strategies.
Instead, they're calling for better post-deployment monitoring infrastructure across the industry. Pre-deployment testing can't capture how humans actually interact with agents in practice. The patterns they observed (trust building over time, shifting oversight strategies, agents limiting their own autonomy) only emerge in real-world usage.
For enterprises evaluating AI agent deployments, the research offers a concrete benchmark: even power users at the high end of the distribution are running Claude autonomously for under an hour at a stretch. The gap between what models can theoretically handle (METR estimates around five hours for comparable tasks) and what users actually allow suggests significant headroom remains, and that trust, not capability, may be the binding constraint on adoption.
Image source: Shutterstock

