Andrew Tulloch, a co-founder of Thinking Machines Lab, is departing for Meta, a sign that the battle for elite AI talent is heating up. Tulloch told colleagues of his departure in an internal message, The Wall Street Journal reported, and a company spokesperson confirmed he was leaving for personal reasons. The news follows earlier reports that Meta had courted both the startup and Tulloch with aggressive advances, overtures the company has publicly said were not accurately portrayed.
Tulloch’s hire will be closely watched across the AI field. He previously worked at OpenAI and on Facebook’s AI Research team, experience that should let him contribute immediately to Meta’s efforts around large-scale model training, inference efficiency, and open tooling.
Why Tulloch’s Expertise Matters at Meta Today
Meta’s AI strategy rests on two levers: advancing the Llama family of models and deploying them pervasively across huge consumer surfaces like Instagram, WhatsApp, and Messenger. That takes deep expertise in distributed training, reliability at scale, and hardware-aware optimization, areas where Tulloch has a track record of building and productionizing machine learning systems at industry giants.
Meta’s infrastructure ambitions also frame the context. Executives have said the company intends to operate compute on the order of hundreds of thousands of high-end GPUs, with public figures pointing to roughly 350,000 Nvidia H100s and around 600,000 “H100-equivalents” of total compute. At that scale, efficiency gains of even around 10% translate into significant cost reductions and shorter iteration cycles, which is exactly where senior systems-focused researchers can have outsized impact.
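To make the stakes concrete, here is a minimal back-of-envelope sketch in Python. The fleet size follows the publicly cited figure above, while the per-GPU-hour cost and utilization rate are illustrative assumptions for the sketch, not reported numbers.

```python
# Back-of-envelope: value of a ~10% efficiency gain on a large GPU fleet.
# The fleet size follows the publicly cited ~600,000 H100-equivalents;
# cost-per-GPU-hour and utilization are illustrative assumptions.

GPU_COUNT = 600_000          # H100-equivalents (publicly cited order of magnitude)
COST_PER_GPU_HOUR = 2.0      # assumed all-in $/GPU-hour (hardware, power, ops)
UTILIZATION = 0.6            # assumed average fleet utilization
EFFICIENCY_GAIN = 0.10       # ~10% improvement from better training/serving stacks

hours_per_year = 24 * 365
annual_spend = GPU_COUNT * COST_PER_GPU_HOUR * UTILIZATION * hours_per_year
annual_savings = annual_spend * EFFICIENCY_GAIN

print(f"Assumed annual compute spend: ${annual_spend / 1e9:.1f}B")
print(f"Savings from a 10% efficiency gain: ${annual_savings / 1e9:.2f}B per year")
```

Even under these conservative assumptions, the arithmetic works out to savings in the hundreds of millions of dollars per year, which is why a handful of systems researchers who can shave double-digit percentages off training and serving costs justify aggressive recruiting.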
The move also matters beyond research because of Meta’s open ecosystem. PyTorch, which grew out of Facebook AI Research, has become one of the dominant frameworks for both research and production AI. Leaders who understand the full open-source stack, as well as what it takes to ship models to billions of users, can help close the traditional divide between lab breakthroughs and safe, low-latency products. Expect Tulloch’s influence to show up in training pipelines, inference runtime choices (and possibly interchange formats), and model-serving ergonomics for developers inside and outside Meta.
Meta’s Relentless AI Talent Offensive Intensifies
The move is one example of a larger trend. Reporting in publications from The Wall Street Journal to The Information has described how the biggest tech companies are assembling all-star AI benches, with pay packages that can rival the payoff of a typical startup equity stake. Levels.fyi data, while far from perfect, shows total compensation for senior AI roles often crossing seven figures per year, with select leadership packages going even higher. The scramble accelerated as business demand for generative AI rose and open models began to close the performance gap with closed systems.
Meta’s aggressive posture is hardly unique, however. Microsoft lured away much of Inflection AI’s leadership, Google DeepMind has concentrated research horsepower, and OpenAI remains a magnet for top-tier talent. In this environment, individual hiring decisions have ripples: a single technical leader can expedite platform choices (tokenizer strategies, what goes into context windows, how memory is used during fine-tuning) that echo through research roadmaps and product timelines.
What This Means for Thinking Machines Lab
For Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, losing a member of the founding team is an early crisis. Startups at this stage carry material key-person risk; the counterbalance is a strong bench, a culture of rigorous documentation, and partnerships that multiply execution capacity. The company has kept its product plans so quiet that it is unclear when a first release might come, but the investment momentum around foundation models, agents, and enterprise AI tooling leaves it several plausible directions, provided it can sustain its hiring momentum.
Even the tug-of-war over the startup itself says something about today’s market dynamics. When formal acquisition talks stall, incumbents often go after specific hires instead. It’s a well-worn playbook from previous AI cycles, of course; but as frontier compute budgets have soared and distribution (across social, messaging, and mixed-reality platforms) has become a key differentiator, the stakes are higher than ever. Strategic hires can often extract much of the desired value even without a transaction.
What to Watch Next Following Tulloch’s Move to Meta
Three signals will tell us whether Tulloch’s move has a significant effect.
- Where he settles within Meta (core research, applied GenAI, or infrastructure), which will signal whether the near-term emphasis is foundational model capability, product integration, or cost/performance at scale.
- Whether Meta makes further moves to woo Thinking Machines Lab or its leadership, which would indicate how central the firm’s talent is to Meta’s game plan.
- Downstream technical changes at Meta, whether new tooling for parameter-efficient training, optimizations in multimodal inference, or open model releases with more rigorous safety evaluations.
The broader lesson is a simple one: the AI talent market remains red-hot, and high-leverage hires can dramatically change the slope of a company’s learning curve. For Meta, pulling Tulloch back into the orbit of its AI efforts is another move in an extended push to stay at the frontier. For Thinking Machines Lab, it’s time to hire with conviction and keep building.