A seasoned architect of OpenAI’s enterprise business is moving to the other side of the table. Aliisa Rosenthal, who helped grow OpenAI’s sales organization from a two-person beachhead into a large go-to-market engine, has joined Acrew Capital to invest in AI-first startups. Her pitch to founders is straightforward: she has seen, up close, where application-layer moats can actually hold when foundation model providers move fast and ship broadly.
From Foundation Model Sales To Picking Applications
Rosenthal’s move aligns with Acrew’s long-running focus on data, security, and workflow depth. General partner Lauren Kolodny has emphasized that sustained value in AI will accrue where proprietary data and distribution combine, a view that dovetails with Rosenthal’s operator lens on enterprise buying cycles, integration hurdles, and the very real gap between what executives imagine and what IT teams can deploy.
That buyer behavior matters. Enterprises rarely adopt “blank canvas” AI. They purchase outcomes wrapped in compliance, change management, and integration into the systems they already live in. Rosenthal’s bet is that investors who understand those frictions will pick more durable companies at the application layer, not just the hottest model benchmarks.
The Moat That Matters Most: Context and Memory
Ask her where moats emerge and she points to context—the continuously updated information an AI system can recall and reason over. The industry standard has been retrieval-augmented generation, which pulls in relevant documents at query time. Rosenthal expects the next barrier to shift toward persistent “context graphs” and memory: durable representations of a company’s processes, data lineage, and decisions that compound over time.
This is more than bigger context windows. It’s ownership and management of the context layer itself—who curates the entities and relationships, how they evolve with each interaction, and how that graph plugs into approvals, identity, and observability. Stanford’s AI Index has chronicled rapid advances in model context capacity into the hundreds of thousands of tokens, but the defensibility comes from the structure and stewardship of what goes inside, not raw length.
Practical examples are emerging: customer operations platforms that learn escalation patterns and automatically adapt runbooks; financial compliance systems that encode policy changes, exceptions, and regulator feedback into a living knowledge base; engineering tools that remember architectural decisions and link them to code and incident history. In each case, the context asset—and the product’s grip on it—is the moat.
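The “context graph” thesis above can be sketched in miniature: a durable store of entities, relationships, and decision history that compounds with every interaction, as opposed to documents retrieved fresh at query time. The schema and entity names below are purely illustrative assumptions, not any vendor’s actual design.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ContextGraph:
    # entity -> {relation -> set of related entities}
    edges: dict = field(default_factory=lambda: defaultdict(lambda: defaultdict(set)))
    # entity -> ordered log of decisions/observations tied to it
    history: dict = field(default_factory=lambda: defaultdict(list))

    def record(self, entity, relation, target, note=None):
        """Fold one interaction into the durable graph."""
        self.edges[entity][relation].add(target)
        if note:
            self.history[entity].append(note)

    def context_for(self, entity):
        """Assemble the compounding context an AI system would reason over."""
        related = {rel: sorted(targets) for rel, targets in self.edges[entity].items()}
        return {"entity": entity, "relations": related, "history": self.history[entity]}

# Illustrative usage: an engineering tool remembering an architectural decision.
graph = ContextGraph()
graph.record("checkout-service", "depends_on", "payments-api",
             note="2023 incident: retries amplified load; added backoff")
graph.record("checkout-service", "owned_by", "team-payments")
ctx = graph.context_for("checkout-service")
```

The point of the sketch is the asset, not the code: each `record` call makes the next answer better, and whoever curates that graph owns the moat the section describes.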
Specialization Over One-Size-Fits-All Approaches
Another pillar of her thesis is specialization. Foundation model providers ship consumer experiences, core APIs, and a widening set of enterprise features, but they cannot credibly pursue every vertical workflow with the depth buyers demand. The edge belongs to teams that internalize domain ontologies, regulatory nuance, and human-in-the-loop workflows—and can show measurable lift in precision, latency, or compliance risk reduction inside that niche.
Market research from Gartner and IDC has consistently found that enterprise AI projects succeed when tied to specific, high-frequency use cases and operational KPIs rather than platform-first promises. That’s the wedge: earn trust with a painkiller workflow, then broaden into adjacent jobs as the context layer compounds.
Cheaper Models and the Rising Inference Bill
Rosenthal also expects more startups to forgo top leaderboard models in favor of lighter systems tuned for cost and latency. Multiple industry analyses, including work from a16z and startups operating at scale, estimate that inference accounts for 70–90% of AI compute spend in production. For many applications, a smaller fine-tuned model—or a cascade that routes only the hardest prompts to a frontier model—wins on gross margin without sacrificing outcomes.
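The cascade pattern described above can be sketched as a simple router: a cheap model answers first, and only low-confidence prompts escalate to a frontier model. The model functions, confidence heuristic, and per-request costs here are all illustrative stand-ins, not real APIs or real prices.

```python
SMALL_COST, FRONTIER_COST = 0.0002, 0.01  # illustrative $ per request

def small_model(prompt):
    # Stand-in for a fine-tuned small model returning (answer, confidence).
    # Toy heuristic: treat short prompts as easy.
    confident = len(prompt.split()) < 20
    return ("small-answer", 0.9 if confident else 0.4)

def frontier_model(prompt):
    # Stand-in for a frontier model: slower, pricier, more reliable.
    return ("frontier-answer", 0.99)

def cascade(prompt, threshold=0.8):
    """Route to the frontier model only when the small model is unsure."""
    answer, confidence = small_model(prompt)
    if confidence >= threshold:
        return answer, SMALL_COST
    answer, _ = frontier_model(prompt)
    return answer, SMALL_COST + FRONTIER_COST  # both calls were paid for

answer, cost = cascade("Summarize this ticket")  # short prompt stays on the small model
```

The gross-margin argument falls out of the arithmetic: if most production traffic clears the confidence threshold, blended cost per request sits far closer to the small model’s price than the frontier model’s.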
This creates room for optimization across quantization, batching, caching, and on-prem or edge deployments when data residency demands it. The prize is compelling unit economics that survive vendor pricing shifts and allow aggressive, usage-based pricing in the enterprise.
Acrew’s Angle And The OpenAI Alumni Flywheel
Acrew has built a reputation around backing founders who harness proprietary data advantages. Rosenthal adds a distribution advantage: deep relationships with buyers who are actively piloting AI. That access can shorten design partner loops and anchor early revenue with real-world constraints, not sandbox demos.
She also taps a growing OpenAI alumni network that is spawning companies across the stack, from foundation model labs like Anthropic and Safe Superintelligence to a wave of application upstarts. There’s precedent for alumni turning investors and winning allocations; Peter Deng’s move to Felicis was followed by stakes in buzzy early-stage deals like LMArena and Periodic Labs. Expect similar heat around Acrew’s AI pipeline.
Why It Matters for Today’s Enterprise Buyers
Corporate demand is strong but uneven. McKinsey’s most recent AI survey points to a majority of organizations experimenting with generative AI, yet many report challenges moving pilots into production at scale. The delta between aspiration and deployment is where Rosenthal intends to operate: pairing founders who master context, specialization, and cost control with enterprises that need proof-of-value inside existing workflows.
If her thesis holds, the next wave of durable AI companies won’t win because they access the same models as everyone else. They’ll win because they own the context that matters, deliver domain outcomes that general platforms won’t chase, and keep the inference bill in check—moats built in the messy reality of how software is bought and used.