An administrative law judge in California has found that Tesla’s use of the terms Autopilot and Full Self-Driving to describe its driver assistance systems is deceptive, siding with the state’s Department of Motor Vehicles and ordering a 30-day suspension of the automaker’s business in the state if certain corrective orders are not met. The order provides Tesla with an opportunity either to stop using the terms or modify the systems so that the marketing corresponds to their capabilities.
Tesla has pushed back, saying it will keep selling the features and pointing to a recent video of a driverless robotaxi concept as evidence of its progress.
The showdown also sets up a high-stakes test of how much an automaker can stretch language around advanced driver-assistance, and whether California will step in to compel clearer delineations between driver-assist and self-driving claims.
Why The Court Deemed Autopilot Misleading
The DMV complaint contended that “Autopilot” and “Full Self-Driving Capability” suggest autonomous capabilities those cars do not deliver under widely used industry definitions. Under the SAE Levels of Driving Automation (which NHTSA has adopted), Tesla’s technology sits at Level 2: the system can steer and control speed in certain conditions, but a human must supervise at all times and retains overall responsibility. At Level 3 and above, the system performs the entire dynamic driving task under certain conditions, allowing the driver to hand off control.
Judge Juliet E. Cox sided with the DMV, noting that the agency’s power to regulate vehicle advertising does not depend on evidence of harm to a particular consumer. In practical terms, the ruling is binary: Tesla either raises the technology to the level of autonomy its branding suggests, or adjusts its branding, disclosures, and user experience so owners are not misled about what the system can and cannot do.
The order imposes a 30-day suspension of Tesla’s business activity in the state if the company fails to comply, and gives it 60 days to do so. California is Tesla’s largest single-state market by registrations, so even a temporary pause would carry real business consequences.
The Naming Issue in Driver-Assist Technology
For years, experts have warned that brand names can create dangerous overconfidence. AAA polling has found that marketing labels lead a large fraction of drivers to misunderstand what these systems can do; in one study, 40 percent of people thought features labeled “Autopilot” would let a car drive itself. Safety groups, including the Insurance Institute for Highway Safety, have warned that such language, absent robust driver monitoring and reasonable restrictions on use, invites abuse.

Regulators in other nations have also struck a harsher tone. A German court previously ruled that Tesla’s advertising claims about full self-driving were misleading and demanded clearer descriptions of the system’s limitations. American automakers have tried more neutral names — GM’s Super Cruise, Ford’s BlueCruise — and have tied capabilities to specific conditions and driver-facing safeguards. California’s decision may prompt a swifter industry-wide move to plainer language and stronger in-vehicle warnings.
Safety Data and Continuing Scrutiny of Tesla
The safety record for advanced driver assistance is complicated. NHTSA’s incident records indicate that Tesla accounts for a disproportionate share of reported Level 2 crashes, a pattern that partly reflects its large fleet and the always-on telematics used to compile reports, rather than necessarily indicating less safe driving by its owners. The agency has repeatedly pushed Tesla on safety, including a recall of more than two million vehicles to add stronger driver monitoring and alerts when Autopilot is activated. The company’s Full Self-Driving Beta was also subject to a recall to correct some of its riskier behaviors in certain situations.
Legal risk is growing alongside regulatory pressure. In a Florida trial, a Miami jury found Tesla liable for more than $240 million to the family of Naibel Benavides in connection with a fatal crash involving an Autopilot-equipped vehicle — an early sign that juries may be willing to hold the company accountable over Autopilot. Yet several studies show that such active safety systems can mitigate certain types of collisions when used as designed, highlighting the gap between their potential and how some drivers actually use them.
What’s Next For Tesla And California Drivers
Tesla’s near-term options are clear. It can rename Autopilot and Full Self-Driving with explicit Level 2 designations; add more prominent, non-dismissible warnings across its advertising, purchase flow, and in-car interface; and further restrict how long and under what conditions drivers may use the system. Alternatively, it could pursue Level 3 functionality in constrained environments — an evolution that would require rigorous validation and greater redundancy in perception, computation, and fallback controls.
If the company defies the order and keeps selling the systems as they are, the suspension would take effect and could escalate into further penalties. That could send shock waves through California’s EV market, which accounts for about 40% of U.S. electric vehicle registrations according to state energy officials, and where Tesla’s Model Y has been among the best-selling vehicles overall.
Beyond Tesla, the ruling signals a broader tightening of guardrails around how advanced driver assistance is described and sold. In California, the message is simple: match your marketing to your product, or change one of them. For an industry racing toward automation, what “self-driving” means has suddenly become a matter of compliance, not merely a marketing decision.
