Anthropic is rolling out Cowork, a research preview that lets its Claude assistant take on multi-step, time-consuming tasks with minimal prompting. Available first to Claude Max subscribers via the macOS app, Cowork can orchestrate workflows like building spreadsheets, organizing files, or assembling first drafts—then report back as it goes. The company is also warning users to proceed carefully: greater autonomy brings greater risk.
How Claude Cowork executes multi-step tasks autonomously
Built atop Claude Code, Cowork is designed to reduce back-and-forth. You provide materials and intent—say, a folder of meeting notes and a request to produce a summary and budget tracker—and the agent executes, formatting outputs and saving artifacts without constant nudging. It supports access to designated local folders and can be configured to use connectors, skills, and Google Chrome to complete web-assisted tasks.

In practice, that means Claude can ingest CSVs and emails, create a cleaned spreadsheet, draft an executive summary, and prepare a shareable document or slide outline, all in one run. The agent keeps a running commentary of steps, so users can monitor progress and intervene if needed. Anthropic positions this as “leave it with a coworker” rather than the usual prompt-and-wait chatbot experience.
Key limitations and security risks in using Claude Cowork
Anthropic says Cowork will ask for confirmation before taking significant actions. Still, the company is candid about the trade-offs: ambiguous instructions can lead to harmful behavior, including destructive actions such as deleting local files if the model misinterprets a task. Clear, constrained directives are essential.
The bigger concern is prompt injection: malicious text embedded in files, web pages, or documents that tricks an agent into exfiltrating data or overriding safety rules. OpenAI has cautioned that prompt injection will likely remain an unsolved problem for agentic systems, and OWASP now ranks prompt injection first in its Top 10 for LLM Applications. Even with the "sophisticated defenses" Anthropic describes, this is an active and evolving threat area.
There’s also the classic alignment problem: autonomous agents can pursue goals too literally or in unexpected ways. Research from leading labs has documented agents that drift from user intent when instructions are vague or conflicting. This is precisely why NIST’s AI Risk Management Framework urges least-privilege access and robust monitoring when deploying AI with system-level permissions.
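
The least-privilege point is concrete enough to sketch. Below is a minimal, hypothetical Python guard (none of these names come from Anthropic's tooling) that confines an agent's file writes to a single approved folder, refuses deletes outright, and logs every operation so a human can audit what happened; the same idea applies whatever framework actually brokers the agent's file access.

```python
from pathlib import Path
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

class WorkspaceGuard:
    """Confine an agent's file writes to one approved folder and log each call.

    Hypothetical wrapper for illustration only; it is not part of Cowork or
    any Anthropic API.
    """

    def __init__(self, workspace: str):
        self.root = Path(workspace).expanduser().resolve()
        self.root.mkdir(parents=True, exist_ok=True)

    def _inside_workspace(self, path: str) -> Path:
        # Resolve the target and reject anything that escapes the workspace,
        # including "../" tricks and absolute paths.
        target = (self.root / path).resolve()
        if not target.is_relative_to(self.root):
            raise PermissionError(f"{path} is outside the approved workspace")
        return target

    def write_text(self, path: str, content: str) -> None:
        target = self._inside_workspace(path)
        log.info("agent write: %s (%d bytes)", target, len(content))
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content, encoding="utf-8")

    def delete(self, path: str) -> None:
        # Destructive operations are refused outright; a human performs these.
        raise PermissionError("deletes are not permitted for the agent")


guard = WorkspaceGuard("~/CoworkSandbox")      # hypothetical sandbox folder
guard.write_text("reports/summary.md", "# Q3 summary\n")
```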

Early access for Claude Cowork, roadmap, and what’s next
Cowork is launching to Claude Max subscribers as a macOS-only research preview, with a waitlist for others. Anthropic says it will use early feedback to shape the roadmap, including cross-device support, Windows availability, and additional safety controls. That iterative path is typical for agent features, which need real-world usage to tune guardrails without hobbling utility.
The move underscores a broader trend: chatbots are becoming agents that operate apps, files, and the web on our behalf. Competitors have made similar pushes with agent frameworks and workflow tools inside productivity suites. Gartner projects that by 2026 more than 80% of enterprises will have used generative AI APIs or models, up from less than 5% in 2023; that momentum favors tools that can act, not just chat.
For Anthropic, Cowork also bridges its strong enterprise developer base with everyday users who want an “autopilot” for routine tasks. The feature’s support for connectors—including spreadsheet and browser-based work—hints at a future where AI quietly manages back-office chores while humans focus on oversight and edge cases.
Practical safety tips to use Claude Cowork more securely
- Constrain access: Grant Cowork only the specific folders and tools it needs. Avoid giving it blanket system permissions or access to sensitive repositories.
- Start in a sandbox: Test tasks in a non-production environment with dummy data. Validate outputs before pointing Cowork at live files, shared drives, or critical systems.
- Be explicit: Write concrete, tightly scoped instructions that prohibit destructive actions. For example, “Do not delete or move any files—create new files in this folder only.”
- Require dry runs: Ask Cowork to list planned steps and file operations before execution. Review and approve the plan, then run the task (a minimal sketch of this pattern follows the list).
- Watch for injections: Treat external text, web pages, and PDFs as untrusted. Prefer whitelisted sources and consider stripping or sanitizing content before ingestion.
- Log and back up: Keep version control and automatic backups in place. Ensure you can quickly roll back changes if the agent goes off-script.
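
To make the dry-run and no-delete tips concrete, here is a minimal, hypothetical human-in-the-loop sketch (the PlannedStep structure and review_plan helper are invented for illustration and are not part of Cowork): the agent's proposed file operations are listed, anything destructive is rejected automatically, and nothing runs until a person approves.

```python
from dataclasses import dataclass

@dataclass
class PlannedStep:
    action: str   # e.g. "create", "edit", "delete"
    path: str     # file the step will touch
    detail: str   # short human-readable description

def review_plan(steps: list[PlannedStep]) -> bool:
    """Print the agent's proposed file operations and ask a human to approve.

    Hypothetical helper for illustration; Cowork's own confirmation flow
    may look nothing like this.
    """
    print("Proposed plan:")
    for i, step in enumerate(steps, 1):
        print(f"  {i}. {step.action:<7} {step.path}  ({step.detail})")
        if step.action in {"delete", "move"}:
            print("     !! destructive step, rejecting plan")
            return False
    return input("Approve and run? [y/N] ").strip().lower() == "y"

# Example: a plan the agent might propose for the meeting-notes task.
plan = [
    PlannedStep("create", "budget_tracker.xlsx", "cleaned figures from the CSVs"),
    PlannedStep("create", "summary.md", "executive summary of meeting notes"),
]
if review_plan(plan):
    print("approved: hand the plan back to the agent to execute")
else:
    print("rejected: tighten the instructions and try again")
```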
Bottom line: convenience versus control with Claude Cowork
Cowork is a meaningful step toward AI that actually does the work, not just drafts suggestions. The trade-off is familiar to anyone deploying agentic systems: convenience versus control. With disciplined setup and clear guardrails, Claude’s new autonomy could save hours on the busywork. Just remember that “hands-off” doesn’t mean “eyes closed.”