Former UK Prime Minister Rishi Sunak has taken up a pair of senior advisory positions at Microsoft and Anthropic, placing one of Britain’s highest-profile recent politicians inside two of the world’s leading AI companies. The appointments were disclosed in correspondence from the Advisory Committee on Business Appointments (ACOBA), which acts as a watchdog over post-government employment for former ministers.
The move highlights how aggressively the AI industry is recruiting seasoned political judgment as the technology enters an era of regulation. It also places Sunak at the centre of a high-stakes debate about innovation, market power and guardrails in an industry where government connections increasingly count.

What Sunak Will and Won’t Do in His Advisory Roles
According to the watchdog’s letters, Sunak’s brief is confined to high-level advice on global macroeconomic and geopolitical trends. He has also undertaken not to advise on UK policy and to forgo contact with UK officials and government business discussions. He says he will donate his pay to the Richmond Project, a nonprofit initiative he helped found.
These conditions sit alongside ACOBA’s standard undertakings, which include a ban on drawing on privileged information acquired in office and a two-year prohibition on lobbying government. The committee did, however, flag a legitimate concern that the roles could confer unfair access or influence, especially as debates over AI rule-making play out.
Why Microsoft and Anthropic Want His Guidance
For Microsoft, Sunak’s experience leading the UK government during a period of intense AI activity offers strategic value as the company scales its cloud and AI infrastructure, competes for public-sector contracts and responds to heightened scrutiny of big tech’s role in foundation models. The company has publicly pledged “multi‑billion‑pound” investments in UK data centres and AI skills, ambitions that align with national policy priorities around compute, skills and resilience.
Anthropic, creator of the Claude family of models, has built its identity around safety research and its constitutional AI approach. With the firm expanding in London and engaging policymakers on testing, evaluations and model governance, Sunak’s perspective on international coordination and standard-setting may be particularly appealing. Industry efforts such as the Frontier Model Forum and collaborations with national AI safety institutes make fluency in government a practical necessity.
The Regulatory and Market Backdrop for AI in the UK
The Competition and Markets Authority has cautioned that control over compute, data and distribution could entrench a small number of firms in the foundation model market. Its findings raised concerns about vertical integration and default arrangements that shape developer and consumer choice. At the same time, the UK’s AI Safety Institute is developing testing regimes for powerful models, and international bodies including the OECD and G7 are pushing for interoperable safeguards.

Microsoft and Anthropic sit squarely in this frame: the former as one of the hyperscale platforms that control access to compute and tooling, the latter as a frontier‑model developer arguing for alignment and evaluations. Both are increasingly subject to scrutiny from competition, data protection and online safety regulators in multiple countries. In that environment, government fluency is more than advisory muscle; it is a competitive advantage.
Conflict Management and the Risk of Perception
ACOBA’s terms are meant to mitigate the risk that inside information Sunak gleaned in office could benefit either company in live policy or procurement decisions. That means not advising on individual UK bids, avoiding contact with UK officials and steering clear of any use of privileged information. Transparency about the nature of the work, together with tight internal firewalls, will be critical to meeting these requirements and preserving public confidence.
Think tanks such as the Institute for Government and similar watchdogs have repeatedly made clear that appearances matter as much as rules in post‑ministerial appointments. Such appointments can create the impression of a revolving door, undermining confidence in policymaking. Clear disclosure, enforcement of cooling‑off rules and independent oversight can mitigate that risk.
Repercussions for the UK’s Broader AI Ambitions
The UK has positioned itself as a leader in the safe development of AI, with a strong academic research base, one of the world’s leading fintech centres and related industries such as cloud computing, big data and cybersecurity. That both AI heavyweights have brought in a former prime minister as an adviser speaks volumes about how closely industrial strategy and AI governance are now intertwined. Handled transparently, his roles could create real-world feedback loops between developers and policymakers on model evaluations, safety benchmarks and skills pipelines.
But the line between constructive engagement and regulatory capture is thin. The CMA’s foundation model work, the ICO’s guidance on data use and the evolving mandate of the AI Safety Institute indicate that the UK wants to marry growth with guardrails. How Sunak and his new employers interpret those boundaries is likely to be an important early test.
The Bottom Line on Sunak’s Advisory Roles in AI
Sunak’s appointments at Microsoft and Anthropic signal the AI industry’s appetite for seasoned policy counsel, coming as the UK continues to take centre stage in global discussions about artificial intelligence. Under ACOBA’s conditions, the value of his contribution will depend on maintaining a strict separation between strategic advice and policy influence. The outcome will serve as a bellwether of how democracies manage the increasingly permeable border between tech and statecraft.