Perplexity has unveiled Computer, a long-running, multiagent “digital worker” designed to orchestrate tasks across a roster of top AI models—and it’s being positioned as a safer, more controllable take on the autonomous-agent idea popularized by OpenClaw. The promise is simple: delegate complex, months-long projects to an AI that can plan, coordinate, and deliver while containing risk.
What Computer Is And How It Operates Across Models
Computer acts as an orchestrator, not a single model. Perplexity says it draws on more than a dozen frontier systems, routing each subtask to the best tool for the job. Its core reasoning engine is described as Claude Opus 4.6, with Google’s Nano Banana for images, Veo 3.1 for video, Grok for lightweight chores, and GPT‑5.2 for long‑context queries and expansive web search.
In practice, users specify an outcome—say, “Build an app that surfaces live snow conditions across ski resorts”—and Computer decomposes that brief into a task graph. It then sequences research, data ingestion, UI design, code generation, testing, and documentation, farming each piece to the most capable model and running many in parallel.
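Perplexity has not published Computer's internals, but the decompose-and-parallelize idea it describes can be sketched in a few lines. Everything below is illustrative: the task names, dependencies, and the `run` stand-in are hypothetical, not Computer's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task graph: each subtask lists the subtasks it depends on.
TASKS = {
    "research":      [],
    "data_ingest":   ["research"],
    "ui_design":     ["research"],
    "codegen":       ["data_ingest", "ui_design"],
    "testing":       ["codegen"],
    "documentation": ["codegen"],
}

def run(task: str) -> str:
    # Stand-in for dispatching the subtask to a specialist model.
    return f"{task}: done"

def execute(graph: dict) -> list:
    """Run independent subtasks in parallel, wave by wave."""
    done, log = set(), []
    with ThreadPoolExecutor() as pool:
        while len(done) < len(graph):
            # Everything whose dependencies are all satisfied can run now.
            ready = [t for t, deps in graph.items()
                     if t not in done and all(d in done for d in deps)]
            if not ready:
                raise ValueError("cycle in task graph")
            log.extend(pool.map(run, ready))
            done.update(ready)
    return log

results = execute(TASKS)
```

Here `data_ingest` and `ui_design` run concurrently once `research` finishes, which is the kind of parallelism the brief-to-task-graph description implies.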
Perplexity emphasizes control. Users can override the router, pinning sensitive subtasks to specific models, and can review or adjust intermediate plans. The system can run quietly in the background for weeks or months, surfacing checkpoints only when needed. It’s available now to Perplexity Max users, with Enterprise and Pro access to follow.
Why The OpenClaw Comparison Matters For Safety
OpenClaw—and its earlier Clawdbot incarnation—ignited interest by operating as an always‑on agent across a user’s digital footprint, from files to messaging apps. Its creator, Austrian programmer Peter Steinberger, was quickly hired by OpenAI after the agent’s viral demos spotlighted how powerful such systems could become.
But OpenClaw also exposed the sharp edges of autonomy. A Meta AI security researcher, Summer Yue, publicly described an incident where OpenClaw began a process that risked wiping her inbox, underscoring how agents can misinterpret instructions or “compact” context in ways that override prior constraints. These are not theoretical hazards; they’re precisely the kinds of failure modes that worry security teams.
That backdrop is why Computer’s launch is framed around control and containment. Perplexity is betting that enterprises want the productivity of agents without the stomach‑dropping moments that come from over‑permissioned access and unbounded actions.
Safety Controls And Where Risks Remain For Agents
Perplexity says Computer runs in a safe, isolated development sandbox so that any misbehavior is fenced off from the primary network and data stores. The company reports it has executed thousands of internal tasks, from publishing web copy to building apps, says the output quality has been consistently strong, and credits the sandbox with limiting the blast radius of any failure.
This approach aligns with guidance from the NIST AI Risk Management Framework and the OWASP Top 10 for LLM Applications: isolate agents, enforce least‑privilege access, and insert human‑in‑the‑loop gates at moments of irreversible change. In practical terms, stronger guardrails mean capability‑scoped credentials, read‑only defaults, rate limits, timeouts, and auditable logs of every tool call and file write.
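Neither NIST nor OWASP prescribes code, but the least-privilege pattern those controls describe is easy to sketch. The class below is an assumption-laden illustration, not Perplexity's implementation; the scope names and limits are invented for the example.

```python
import time

class ToolGateway:
    """Illustrative least-privilege wrapper around an agent's tool calls."""

    def __init__(self, scopes, max_calls_per_min=30):
        self.scopes = set(scopes)       # capability-scoped: only listed actions allowed
        self.max_calls = max_calls_per_min
        self.calls = []                 # recent timestamps, for rate limiting
        self.audit_log = []             # every attempt is recorded, allowed or not

    def call(self, action, target):
        now = time.monotonic()
        # Drop timestamps older than the rate-limit window.
        self.calls = [t for t in self.calls if now - t < 60.0]
        allowed = action in self.scopes and len(self.calls) < self.max_calls
        self.audit_log.append({"action": action, "target": target, "allowed": allowed})
        if allowed:
            self.calls.append(now)
        return allowed

# Read-only default: the agent starts with no write or delete capability.
gw = ToolGateway(scopes={"read"})
gw.call("read", "docs/spec.md")   # permitted
gw.call("delete", "inbox/")       # denied: outside the credential's scope
```

The key property is that denial is the default: an out-of-scope action fails closed, and the attempt still lands in the audit log.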
Still, “safer” is not “safe.” Long‑running agents can drift from goals, suffer prompt injection via untrusted data, or silently degrade as context windows fill. Model routing adds another layer of complexity: switching models mid‑workflow can introduce subtle inconsistencies. Robust test suites, external red‑teaming, and clear rollback plans remain essential, and Perplexity has not yet published independent audit results.
How Tasks Flow From Idea To Delivery In Practice
Think of Computer as a project manager with specialists on call. It converts a goal into a plan, identifies dependencies, and chooses the right model per subtask—researching with a long‑context model, drafting with a reasoning‑heavy model, generating UI assets with an image model, and writing tests with a code‑centric model.
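Perplexity has not disclosed its routing logic, but one simple way to express "best model per subtask," including the user-pinning override the company describes, is a routing table with an escape hatch. The table entries and model names here are hypothetical placeholders.

```python
from typing import Optional

# Hypothetical routing table mapping subtask kinds to specialist models.
ROUTES = {
    "research":  "long-context-model",
    "reasoning": "reasoning-heavy-model",
    "ui_assets": "image-model",
    "tests":     "code-centric-model",
}

def route(kind: str, pinned: Optional[dict] = None) -> str:
    """Pick a model for a subtask; user pins override the default router."""
    if pinned and kind in pinned:
        return pinned[kind]
    return ROUTES.get(kind, "general-model")

route("tests")                                          # -> "code-centric-model"
route("research", pinned={"research": "local-model"})   # -> "local-model"
```

The pinning parameter is what makes "override the router" concrete: a sensitive subtask can be forced onto a specific model regardless of the default mapping.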
The agent then executes tasks in parallel where possible, caching intermediate artifacts and handing them to the next specialist. If a tool needs credentials, Computer requests a scoped token rather than full access. Before committing sensitive changes—publishing a site, pushing code, or sending emails—it can pause for human review, keeping the human operator firmly in control.
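The pause-before-irreversible-change step can be sketched as a gate around a small set of flagged actions. The action names and the `approve` callback are assumptions standing in for however the real system surfaces a checkpoint to the operator.

```python
# Illustrative set of actions treated as irreversible.
IRREVERSIBLE = {"publish_site", "push_code", "send_email"}

def execute_action(action: str, payload: str, approve) -> str:
    """Run an action, but gate anything irreversible behind a human decision.

    `approve(action, payload)` returns True only if a human signs off.
    """
    if action in IRREVERSIBLE and not approve(action, payload):
        return "held for review"
    return f"{action} executed"

# An operator who rejects everything: irreversible work is held, not run.
execute_action("push_code", "feature-branch", lambda a, p: False)
# A read-only step needs no sign-off and proceeds immediately.
execute_action("summarize", "meeting notes", lambda a, p: False)
```

The design choice worth noting is that the gate fails closed: without explicit approval, the action is held rather than executed.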
Enterprise Readiness And Early Uses Reported So Far
Early internal runs cited by Perplexity include content pipelines, prototype apps, and data cleanup. For enterprises, the draw is governance: sandboxed execution, configurable permissioning, and the option to require human approval at predefined risk thresholds. Those features map to the controls security leaders already expect in CI/CD and RPA environments.
If Computer sustains quality over weeks and respects guardrails without constant babysitting, it could slot alongside existing workflows as an autonomous assistant rather than a free‑roaming operator—precisely the distinction many IT teams want.
The Bottom Line And What To Watch In Coming Months
Computer aims to deliver OpenClaw‑style autonomy without the chaos by combining model specialization, sandboxed execution, and human checkpoints. The concept is sound and in step with emerging best practices, but proof will rest on hard numbers: task success rates, intervention frequency, audit transparency, and third‑party red‑team reports.
Until then, the safest verdict is cautious optimism. Multiagent orchestration looks like the right architecture for complex work, and Perplexity is saying the right things about safety. Now the market will want to see whether “safer” in the lab translates to safer in the wild.