Guide Labs is introducing an open-source language model built to show its work. The San Francisco startup’s new Steerling-8B promises token-by-token traceability back to training data, offering developers and risk teams a rare capability in modern AI systems—credible, inspectable explanations for what the model says and why.
The 8 billion-parameter model uses an architectural twist the company calls a concept layer, designed so every output token can be linked to the data and concepts that informed it. Guide Labs says Steerling-8B reaches about 90% of the capability of today’s widely used models while using less training data, a claim that—if borne out—could reset expectations for how performance and transparency can coexist.
How the Concept Layer Enables Traceability
Most interpretability research treats neural networks like black boxes and then reverse-engineers patterns post hoc. Guide Labs flips that approach. During training, the model organizes inputs into traceable, human-meaningful categories—“concepts”—that are preserved in an explicit intermediate layer. Think of it as data lineage engineered into the network’s internal reasoning, not bolted on afterward.
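Guide Labs has not published the concept layer's internals, but the idea resembles a concept-bottleneck step: hidden activations are projected onto a catalog of human-labeled concepts before decoding, so each output carries a record of which concepts drove it. The sketch below is illustrative only; the class names, toy concept list, and weights are assumptions, not Steerling-8B's actual design.

```python
# Minimal sketch of a concept-bottleneck layer. Everything here
# (catalog, weights, dimensions) is hypothetical, for illustration.
import math

CONCEPT_CATALOG = ["astronomy", "finance", "medicine"]  # toy annotated taxonomy

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class ConceptLayer:
    def __init__(self, weights):
        # weights: one row of projection weights per concept in the catalog
        self.weights = weights

    def forward(self, hidden):
        # Project the hidden vector onto each concept direction, then
        # normalize so activations are comparable across concepts.
        scores = [sum(w * h for w, h in zip(row, hidden)) for row in self.weights]
        activations = softmax(scores)
        # The trace is the interpretable artifact: concept -> activation.
        trace = dict(zip(CONCEPT_CATALOG, activations))
        return activations, trace

layer = ConceptLayer(weights=[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
_, trace = layer.forward(hidden=[2.0, 0.1])
top_concept = max(trace, key=trace.get)
print(top_concept)  # the dominant concept for this toy hidden state
```

The key design point is that the trace is produced during the forward pass, not reconstructed afterward, which is what distinguishes this approach from post hoc interpretability.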
That design requires more up-front data annotation, which the team says it accelerates with assisting models. The payoff is granular provenance: if Steerling-8B cites a scientific claim, the model can point to the curated sources behind it; if it explains a joke or a historical event, developers can inspect which concept clusters were activated. Those same hooks also allow targeted controls—tightening or relaxing outputs around topics like medical advice, copyrighted content, violence, or misinformation—without blunt, system-wide suppression.
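The two hooks described above, tracing an activated concept back to its curated sources and attaching a targeted policy to a single concept, can be sketched as a simple lookup over the concept trace. All names, file paths, and policy labels below are invented for illustration and do not reflect Guide Labs' actual tooling.

```python
# Hypothetical provenance and policy hooks keyed on concept activations.
# Data and thresholds are illustrative, not Steerling-8B's real catalog.

PROVENANCE = {
    "protein folding": ["curated_bio_corpus/review_article.txt"],
    "medical advice": ["curated_med_corpus/triage_guidelines.txt"],
}

POLICY = {"medical advice": "require_citation"}  # one lever per concept

def explain(concept_trace, threshold=0.2):
    """Return sources and policy actions for strongly activated concepts."""
    report = []
    for concept, activation in concept_trace.items():
        if activation < threshold:
            continue  # weakly activated concepts are left out of the report
        report.append({
            "concept": concept,
            "activation": activation,
            "sources": PROVENANCE.get(concept, []),
            "policy": POLICY.get(concept, "allow"),
        })
    return report

report = explain({"protein folding": 0.7, "medical advice": 0.25, "humor": 0.05})
for entry in report:
    print(entry["concept"], entry["policy"], entry["sources"])
```

Because controls attach to individual concepts rather than to the model as a whole, tightening the "medical advice" lever here leaves unrelated outputs untouched, which is the targeted alternative to blunt, system-wide suppression.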
One fear with structured interpretability is that it will sand down a model’s emergent abilities. Guide Labs counters that Steerling-8B still generalizes and even tracks “discovered concepts” that arise during training (the team cites quantum computing as one example), giving builders both a catalog of known categories and visibility into new ones.
Why Interpretability Matters Now for Enterprise AI
Enterprises are under pressure to show how AI systems reach conclusions, especially when stakes are high. The NIST AI Risk Management Framework calls for transparency and traceability; the EU AI Act emphasizes documentation and oversight; U.S. agencies including the FTC have warned that opaque models do not excuse unfair or deceptive outcomes. At the same time, publishers and rightsholders are demanding clarity on training data, not just model behavior.
Steerling-8B’s design lands squarely in that compliance gap. In finance, a lender could verify that underwriting draws on income and credit history concepts rather than prohibited attributes. In healthcare and life sciences, researchers could audit how the model weighs protein motifs or literature when proposing candidates. For customer support and enterprise search, token-level provenance enables source citations by default, reducing hallucinations and streamlining reviews.
Performance and Trade-Offs in an Interpretable 8B Model
Guide Labs positions Steerling-8B as a mid-sized, efficient alternative: open-source for external auditing, small enough for practical deployment, yet competitive on mainstream benchmarks such as MMLU and HellaSwag, according to the company's early tests. The claim that it reaches roughly 90% of top-tier capability while consuming less data will draw scrutiny, but it highlights a key point: interpretability need not be synonymous with sacrificing quality.
There are trade-offs. Concept-layer modeling introduces annotation overhead and demands careful taxonomy design; blind spots in the concept catalog could become blind spots in behavior. And while concept traceability beats post hoc saliency maps criticized by researchers in a widely cited MIT study, provenance itself can be misread without solid evaluation protocols. The upside, Guide Labs argues, is that these are now engineering problems—instrument, test, and iterate—rather than mysterious quirks of a black box.
What Comes Next for Steerling-8B and Guide Labs
With Steerling-8B, Guide Labs is courting developers who want controllability out of the box: policy levers tied to concrete concepts, red-teaming that reveals exactly which data paths were traveled, and audit trails that support regulatory reviews. The company plans to scale the architecture to larger models and to ship API and agentic access so teams can combine interpretable generation with tools and workflows.
Founded by CEO Julius Adebayo and chief science officer Aya Abdelsalam Ismail, the startup has roots in academic work that challenged the reliability of common interpretability techniques and later informed this engineered approach. Backed by Y Combinator and a $9 million seed from Initialized Capital, Guide Labs is betting that provenance will be the new performance—an axis on which buyers will select models just as much as they do on speed or accuracy.
If that bet pays off, Steerling-8B won’t just be another open-source LLM. It could become a reference design for how to build systems that are not only powerful, but also verifiable—giving teams a way to ask an increasingly important question and finally get a concrete answer: where did this come from, and can we trust it?