Scale AI unveiled Seal Showdown, a public head-to-head evaluation platform framed as a more representative alternative to LMSYS's LMArena (formerly Chatbot Arena). The new leaderboard centers on real user preferences across granular audience segments, from country to language to profession and beyond, with the aim of capturing how people actually use large language models, not merely how well they score on synthetic tests.
Taking a user‑first approach to model leaderboards
Classical benchmarks such as MMLU, GSM8K and HumanEval quantify discrete abilities, valuable but somewhat removed from everyday practice. LMArena moved the needle with blind pairwise battles and an Elo-like rating built from millions of community votes compiled by LMSYS. Seal Showdown keeps the head-to-head approach but prioritizes verified, nuanced participation and, instead of producing a single monolithic result, can break results down by demographic and professional segment.

Scale AI says the source data is contributed through its Outlier platform and is verified, spanning more than 100 countries, 70 languages and over 200 professional domains. That verification underpins a leaderboard that can be sliced by region, native language, job function or even education level, revealing, say, whether a model that resonated with U.S. software engineers also landed with non-English-speaking marketers or healthcare analysts.
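As a rough illustration of what that slicing could look like (the schema, model names and data below are hypothetical, not Scale AI's actual data model), each verified vote might carry the voter's country, language and profession alongside the pairwise outcome, so the same vote pool can be regrouped by any facet:

```python
# Hypothetical illustration only: field names and records are invented,
# not Scale AI's actual Seal Showdown schema.
import pandas as pd

votes = pd.DataFrame([
    # each row: one blind pairwise vote from a verified participant
    {"model_a": "model-x", "model_b": "model-y", "winner": "model-x",
     "country": "US", "language": "en", "profession": "software_engineer"},
    {"model_a": "model-x", "model_b": "model-y", "winner": "model-y",
     "country": "BR", "language": "pt", "profession": "marketer"},
    {"model_a": "model-x", "model_b": "model-y", "winner": "model-y",
     "country": "DE", "language": "de", "profession": "healthcare_analyst"},
])

# Slice the same vote pool by any demographic or professional facet.
for segment, group in votes.groupby("profession"):
    win_rate_x = (group["winner"] == "model-x").mean()
    print(f"{segment}: model-x wins {win_rate_x:.0%} of {len(group)} votes")
```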
How Seal Showdown works in head-to-head evaluations
Participants have blind conversations with two models on the same task and vote on which one produces the better response. The votes are combined into overall rankings as well as rankings within demographic or professional slices. The pipeline includes user verification and quality checks to limit spam and bot voting, which Scale AI says are real concerns on a public leaderboard where a small group of highly online users can skew outcomes.
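Scale AI has not published its exact rating math, but a minimal sketch of how blind pairwise votes are commonly turned into rankings (a standard Elo-style update, similar in spirit to LMArena's approach, using made-up model names) might look like this:

```python
# Minimal Elo-style aggregation of pairwise votes. Generic illustration,
# not Scale AI's published methodology.
from collections import defaultdict

K = 32  # update step size; larger values react faster to new votes

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_ratings(votes, base=1000.0):
    """votes: iterable of (model_a, model_b, winner) tuples."""
    ratings = defaultdict(lambda: base)
    for a, b, winner in votes:
        e_a = expected_score(ratings[a], ratings[b])
        s_a = 1.0 if winner == a else 0.0  # A's actual result in this battle
        ratings[a] += K * (s_a - e_a)
        ratings[b] += K * ((1.0 - s_a) - (1.0 - e_a))
    return dict(ratings)

# Overall ranking; re-running on a single cohort's votes gives a segment view.
print(update_ratings([("model-x", "model-y", "model-x"),
                      ("model-x", "model-y", "model-y"),
                      ("model-x", "model-z", "model-x")]))
```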
Seal Showdown joins the company's existing lineup of SEAL (Safety, Evaluations and Alignment Lab) leaderboards, which have typically relied on expert testing. Adding broad real-world preference data gives model builders, and consumers, a second lens: an “expert score” for capability depth and safety alongside population-level assessments of everyday helpfulness, tone, and task fit.
Why this challenges LMArena’s community-driven rankings
LMArena has become the de facto public rankings board, but whom does it serve best? Its community skews English-speaking and, beyond that, technical and hobbyist. That audience matters, but it is not the whole market. Organizations are eager to understand how models perform for their employees and their customers. A finance procurement team, for example, might care about refusal rates on policy-sensitive queries, accuracy in non-English contract analysis, or latency under load: factors that don't necessarily show up in generic head-to-head chitchat.
With segmented results, Seal Showdown could make it easier to see where systems diverge: which handles Spanish-language customer support better, which summarizes medical literature for clinicians more reliably, or which conducts market research with fewer hallucinations. That granularity reflects a larger trend in AI assessment, seen in initiatives such as Stanford's HELM and MLCommons, toward more scenario-specific measurement.

Early signs and the broader AI industry backdrop
Debate over public leaderboards has intensified as model families proliferate. Supporters of open models argue that some arenas implicitly favor closed frontier systems from the major labs, and that community voting can reward fluency over factual reliability. Seal Showdown tries to split the difference: it keeps the simplicity of pairwise preference voting while showing who is doing the voting and letting readers drill down into the cohorts most relevant to their own use case.
Top-tier proprietary models sit at or near the top of Scale AI's early leaderboards. That probably reflects most users' current preferences rather than a judgment about capability on every task. Safety policies, verbosity, and tone can sway preferences as much as raw reasoning ability. The real win here is transparency: if a model leads overall but lags in, say, multilingual drafting or enterprise policy compliance, stakeholders can see that breakdown rather than relying on one consolidated score.
Caveats and what to watch in segmented preference data
Segmented preference data is only as good as its sampling. Key questions remain: How balanced are the cohorts across regions and occupations? Are prompts standardized or organic? Are there controls for prompt difficulty, response length, or rater bias? Solid methodology disclosures, including confidence intervals, inter-rater agreement, and per-segment sample sizes, will determine how decision-makers weigh these results against academic benchmarks and private red-team findings.
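To make that last point concrete, one common way to report uncertainty (purely illustrative here, not something Scale AI has said it uses) is a Wilson score interval around each segment's win rate, which naturally widens when a cohort's sample is small:

```python
# Illustrative Wilson score interval for a per-segment win rate.
# Not Scale AI's stated methodology; just a standard way to show
# uncertainty shrinking as per-segment sample sizes grow.
import math

def wilson_interval(wins: int, n: int, z: float = 1.96):
    """95% Wilson confidence interval for a binomial win rate."""
    if n == 0:
        return (0.0, 1.0)
    p = wins / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - margin, center + margin)

# A 60% win rate means little with 20 votes, a lot with 2,000.
print(wilson_interval(12, 20))      # wide interval, small segment
print(wilson_interval(1200, 2000))  # tight interval, large segment
```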
That said, a credible alternative to LMArena is good for the ecosystem. Multi-lens transparency, spanning expert evaluations, public preferences, and task-specific stress tests, reduces the chance that any single scoreboard defines “best” in isolation. For developers, segmented feedback can help target fine-tuning for underserved users. And for buyers, it may ground model selection in the realities of their own workforce and customers.
Bottom line: what Seal Showdown could mean for AI users
Seal Showdown marks a shift from one-size-fits-all rankings toward preference data tailored to who is asking and why. LMArena still matters, but Scale AI is betting that verified, segmentable feedback can make model evaluation more inclusive, and thereby more actionable, for the next wave of AI adoption.
