Anthropic hands Claude Code more control, but keeps it on a leash
Anthropic's Claude Code now has an 'auto mode' allowing it to execute tasks with fewer human approvals, balancing speed and safety with built-in safeguards.
Read on TechCrunch →

Anthropic's Claude AI can now control a user's computer, performing tasks like opening apps and browsing the web after receiving explicit user permission for each action.
Why it matters
This marks a significant step toward more integrated, agentic AI assistants. By letting AI models interact directly with a user's operating system and applications, it opens new possibilities for automation and task completion, blurring the line between AI as a tool and AI as an active participant in digital workflows. The emphasis on user control and safety mechanisms is crucial for building trust and for deploying such powerful capabilities responsibly.
Imagine your AI assistant, Claude, helping you by opening apps and clicking buttons on your computer, just as you would. It always asks you first before doing anything, to make sure it's okay.
Anthropic Claude Gains Enhanced Computer Control