A problem as old as democracy is often not a shortage of ideas but that large groups are so bad at synthesizing them. A startup that grew out of a partnership between Lorsch and Maya Ben Dror is now developing artificial intelligence tools to facilitate those conversations, so people can reach agreement more quickly without silencing minority voices.
What’s Different About AI-Mediated Consensus
Most workplace software is designed for collaboration: sharing files, managing tasks, communicating with colleagues on different schedules. Consensus is harder. It calls for surfacing points of agreement, mapping disagreements and drafting statements that participants feel accurately reflect them. That is a facilitation problem, not a productivity one, and it has traditionally been the domain of trained human facilitators, who don’t scale cleanly.
Complex Chaos contends that this can become, at least in part, the domain of contemporary language models. The company’s system builds on research such as Google’s Habermas Machine, an AI method that generates candidate consensus statements for a group and invites its members to iteratively refine them. The concept is simple: structure the conversation, produce clear summaries and test whether people feel seen by what comes out of it.
Inside the Bonn Pilot With African Youth Delegates
To test real-world utility, Complex Chaos has recently been trialing its tool with young delegates from nine African countries attending climate negotiations at a United Nations campus in Bonn, Germany. The aim was to help the delegates reach a coherent bloc position before their leadership met with other parties, a step that typically grinds negotiations to a halt while blocs regroup behind closed doors.
In post-session feedback collected by the startup, participants said the time they spent coordinating positions fell by as much as 60%, and 91% said the AI had surfaced perspectives they might not otherwise have seen. Those are early, self-reported numbers, but they dovetail with decades of research on deliberative democracy suggesting that structured prompts and iterative summaries improve inclusivity and clarity.
How the Consensus-Building System Works in Practice
The platform combines two capabilities. First, it deploys models such as Google’s Habermas Machine and OpenAI’s systems to condense sprawling briefs and draft texts into succinct, neutral summaries, and to generate targeted questions that illuminate trade-offs. Second, it runs rounds of statement-crafting: the AI proposes language reflecting the points participants agree and disagree on, and participants then respond to whether it actually captures their views, minority opinions included.
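The statement-crafting rounds can be sketched as a simple selection loop. This is a hypothetical illustration, not Complex Chaos’s actual implementation: the LLM drafting step is represented by pre-written candidates, and the selection rule here — keep the statement whose lowest participant rating is highest, a maximin rule that protects minority voices — is my assumption about one reasonable way such a system could score drafts.

```python
def select_consensus(candidates, ratings):
    """Maximin selection: pick the candidate whose *worst* rating across
    participants is highest, so a statement one faction rejects outright
    loses to a statement everyone can at least live with.

    candidates: list of statement strings
    ratings: list of dicts (one per participant) mapping statement -> score
    """
    return max(candidates, key=lambda c: min(r[c] for r in ratings))


# Two hypothetical drafts an LLM might have produced for a climate bloc.
candidates = [
    "Phase out coal by 2030.",                          # majority favorite
    "Phase out coal by 2035 with transition funding.",  # broad compromise
]

# Ratings on a 1-5 scale; delegate C holds the minority view.
ratings = [
    {candidates[0]: 5, candidates[1]: 4},  # delegate A
    {candidates[0]: 5, candidates[1]: 3},  # delegate B
    {candidates[0]: 1, candidates[1]: 3},  # delegate C
]

winner = select_consensus(candidates, ratings)
print(winner)  # the compromise wins despite a lower average score
```

The first draft has the higher average rating, but the maximin rule favors the second because no participant rates it near the bottom of the scale; that is one way to formalize “people feel seen by what comes out of it.”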
The workflow echoes civic-tech projects like Taiwan’s use of Pol.is, which helps large groups negotiate their way to actionable statements by finding consensus items across ideological lines. What’s new here is that large language models draft and redraft compromise language in near real time, turning late-night caucus scribbles into an interactive, data-informed loop.
Early Results and Real Limits for AI Consensus Tools
Complex Chaos says the tool is intended to cut short the “regroup and rewrite” cycle that bogs down high-stakes talks. That claim jibes with organizational realities: strategic planning at large companies can take months as teams reconcile goals across layers and time zones. If an AI middleman can compress even 20–30% of those cycles, it could win back weeks of productive time.
But consensus isn’t a math problem, and AI can trip up. Bias is a well-known hazard when training data underrepresents certain communities. Privacy matters, too, since draft positions can be highly sensitive. And there is a democratic worry: who gets to set the prompts, and who decides what counts as “neutral” language? Researchers at the Stanford Deliberative Democracy Lab and the MIT Center for Constructive Communication argue for transparent processes, with audits that measure inclusiveness, not just speed.
Complex Chaos says it enforces guardrails by surfacing prompts, revision histories and minority rationales to all participants, and by weighting feedback so small factions are not shouted down by majorities. The company argues that metrics such as the deliberative quality of the discussion and participant satisfaction across demographic subgroups are valid, measurable and valuable outcomes.
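The weighting idea can be made concrete with a small sketch. The mechanics here are my assumption, not the company’s disclosed method: approval is averaged within each subgroup first, and subgroups then count equally, so a faction of three carries the same weight as a faction of thirty.

```python
from statistics import mean

def subgroup_weighted_approval(votes, groups):
    """Average approval within each subgroup, then across subgroups,
    so a small faction is not drowned out by a large one.

    votes: dict of participant id -> approval score in [0, 1]
    groups: dict of subgroup name -> list of participant ids
    """
    return mean(mean(votes[p] for p in members)
                for members in groups.values())


# Three majority members approve a draft; the lone minority member does not.
votes = {"a1": 1.0, "a2": 1.0, "a3": 1.0, "b1": 0.0}
groups = {"majority": ["a1", "a2", "a3"], "minority": ["b1"]}

print(subgroup_weighted_approval(votes, groups))  # 0.5, not the raw 0.75
```

A raw per-person average would report 75% approval and bury the dissent; the subgroup-weighted figure of 50% makes the split visible, which is the point of the guardrail.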
Beyond Climate Talks: Corporate and Public-Sector Uses
The logic is the same for corporate strategy and policymaking. Many organizations devote a quarter of the year to hammering out annual plans, with everyone reviewing everyone else in hopes of predictable performance. An AI mediator that helps articulate options, spot convergences and flag sticking points could speed those decisions while documenting the trade-offs leadership makes along the way.
Public-sector use is plausible, too. The OECD’s work on innovative citizen participation and global experiments with citizens’ assemblies indicate that process design matters as much as content. AI that helps clarify issues and surfaces minority dissent could make mass consultations more representative, if used with transparency and opt-in data controls.
What to Watch Next for AI-Mediated Consensus Building
Three signals will reveal whether AI mediation becomes accepted practice: independent verification under more complex negotiation conditions (such as full UNFCCC sessions), sustained gains in participant trust and perceived fairness, and clear governance of data and model behavior. Polarization researchers at places like the Pew Research Center have documented a widening attitudinal gulf; tools that help people recognize the overlap in their interests are overdue.
Complex Chaos is gambling that consensus can be sped up without dissent being engineered out. If AI can keep accelerating the process while leaving minority voices intact, then perhaps, and only perhaps, it will deliver on at least some of its hype: helping groups talk less past each other and more toward whatever shared path might still work for them.