Anthropic launched a new "auto mode" feature for Claude Code that allows the AI to execute code independently without requesting permission for each action — but only after an AI safety layer reviews whether the request is risky. Available in research preview for Enterprise and API users, the feature aims to solve what developers call the "babysitting" problem: either micromanaging every AI action or letting models run completely unchecked.
The move represents Anthropic's latest attempt to balance speed with safety as AI coding tools increasingly operate without human oversight. Auto mode builds on Claude Code's existing "--dangerously-skip-permissions" flag, which already let Claude act without permission prompts but offered no safety review at all; auto mode adds that review layer on top.
- AI safeguards review each action before execution
- System blocks risky behavior and prompt injection attacks
- Safe actions proceed automatically without user permission
- Currently works only with Claude Sonnet 4.6 and Opus 4.6
The feature addresses a fundamental tension in AI development tools. Traditional approaches force developers to approve every action, slowing down workflows. But giving AI complete autonomy introduces risks — from executing malicious code to making unintended changes that could damage production systems.
Anthropic's solution shifts the decision-making from the user to the AI itself. Instead of asking "Should I run this code?" Claude now asks "Is this code safe to run?" and proceeds based on its own safety assessment.
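Conceptually, that shift can be sketched as a gate that sits between a proposed action and its execution. The sketch below is purely illustrative: Anthropic has not published how its safety layer works, and every name here (`assess_action`, `SafetyVerdict`, the pattern list) is a hypothetical stand-in, with crude substring heuristics in place of whatever classifier the real system uses.

```python
from dataclasses import dataclass

# Illustrative heuristics only -- a real safety layer would use a far
# more sophisticated assessment than substring matching.
RISKY_PATTERNS = ("rm -rf", "| sh", "DROP TABLE")


@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str


def assess_action(command: str) -> SafetyVerdict:
    """Classify a proposed action before execution, instead of asking the user."""
    for pattern in RISKY_PATTERNS:
        if pattern in command:
            return SafetyVerdict(False, f"matches risky pattern {pattern!r}")
    return SafetyVerdict(True, "no risky pattern detected")


def run_in_auto_mode(command: str) -> str:
    """Auto-mode flow: safe actions proceed, risky ones are blocked."""
    verdict = assess_action(command)
    if not verdict.allowed:
        return f"refused: {verdict.reason}"
    # In a real tool, this is where the action would actually be dispatched.
    return f"executing: {command}"
```

The point of the structure, not the heuristics, is what matters: `run_in_auto_mode("ls -la")` proceeds without a prompt, while `run_in_auto_mode("rm -rf /")` is refused, and the user is never asked either way.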
However, Anthropic hasn't detailed the specific criteria its safety layer uses to distinguish safe actions from risky ones — information developers will likely demand before adopting the feature widely. The company declined to provide additional details when contacted by TechCrunch.
The feature joins a wave of autonomous coding tools from major tech companies. GitHub and OpenAI already offer systems that can execute tasks on developers' behalf, but Anthropic's approach goes further by automating the permission decisions themselves.
- Anthropic launched Claude Code Review for automatic bug detection
- Company released Dispatch for Cowork, allowing users to send tasks to AI agents
- Auto mode enters research preview for Enterprise and API users
Anthropic recommends using auto mode only in "isolated environments" — sandboxed setups kept separate from production systems to limit potential damage if something goes wrong. This guidance reflects the company's cautious approach to AI safety, even as it expands Claude's autonomous capabilities.
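One common way to approximate such an isolated environment is a throwaway container with no network access that can see only the project directory. This is a sketch, not an Anthropic-prescribed setup: "my-dev-image" is a placeholder image name, and the prompt is invented.

```shell
# Sketch: run the agent in a disposable, network-less container so a
# mistaken or malicious action cannot reach production systems.
docker run --rm \
  --network none \
  --volume "$PWD":/work \
  --workdir /work \
  my-dev-image \
  claude "run the test suite and fix any failures"
```

`--network none` cuts off outbound access, `--rm` discards the container afterward, and mounting only `$PWD` limits what the agent can read or modify.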
The research preview status indicates the feature isn't ready for widespread production use. Developers can test it, but Anthropic is likely gathering data on how well the safety mechanisms work in practice before a full rollout.
Auto mode arrives as the AI industry grapples with increasing demands for both capability and safety. Companies face pressure to make AI tools more autonomous and useful, while also ensuring they don't cause harm or security breaches.
For developers, the feature promises to reduce the constant interruptions that come with current AI coding assistants. Instead of approving every file read, API call, or code execution, they can focus on higher-level tasks while Claude handles routine operations independently.
The rollout to Enterprise and API users first suggests Anthropic wants feedback from sophisticated users who understand the risks and have proper security measures in place. These organizations are also more likely to use the recommended isolated environments rather than testing directly on production systems.
