Federal auto-safety officials have expanded their inquiry into Tesla’s Full Self-Driving (Supervised) after finding recurring performance problems in low-visibility conditions. The National Highway Traffic Safety Administration’s Office of Defects Investigation has elevated the case to an engineering analysis, the most intensive investigative phase and typically the last step before the agency decides whether a safety recall is warranted.
Engineering Analysis Signals Higher Stakes
By moving to an engineering analysis, investigators gain broader latitude to demand detailed technical data, conduct field evaluations, and scrutinize software logic. Regulators are zeroing in on how Tesla’s perception stack behaves when cameras are challenged by fog, rain, glare, darkness, and other visibility impairments, as well as whether the system consistently recognizes and responds to its own degraded sensing capability.
Low-Visibility Failures Are At The Center Of The Probe
The probe began after a cluster of crashes in poor visibility, including one that killed a pedestrian, with investigators later identifying additional similar incidents. In case summaries, ODI says the software at times failed to detect common conditions that compromise cameras, delayed or omitted driver alerts about reduced sensor performance, and in multiple crashes either lost track of or never recognized a vehicle directly ahead.
Data Gaps And Software Updates Come Under Scrutiny
Regulators say Tesla told them it began developing a software change to address low-visibility issues before the federal probe opened, but ODI reports the company has not clearly stated whether that fix reached customers or which vehicle builds received it. Investigators also warn that crash counts may be understated, citing Tesla’s own descriptions of data collection and labeling limitations that could miss or misclassify relevant events.
Parallel Probe Into Traffic Law Violations
This deepened scrutiny runs alongside a separate ODI investigation into more than 80 incidents in which Tesla’s driver-assistance software allegedly broke basic traffic laws, such as by running red lights or failing to stop. While the two tracks focus on different behaviors, both speak to the core question regulators are asking: Does FSD (Supervised) reliably recognize risk and cue the human driver with enough time to intervene?
Vision-Only Strategy Faces Real-World Edge Cases
Tesla’s decision to rely on a camera-only approach—phasing out radar and ultrasonic sensors in many models—places heightened demands on vision-based perception in adverse conditions. Safety investigators and independent experts, including the National Transportation Safety Board, have long emphasized sensor redundancy and robust driver monitoring as guardrails against automation complacency and perception blind spots. The Insurance Institute for Highway Safety has introduced ratings for partial automation that prioritize these safeguards, underscoring how challenging it is to manage real-world edge cases.
Regulatory Precedent And Possible Outcomes
NHTSA has previously compelled Tesla to make broad over-the-air changes to its driver-assistance features, including a recall affecting roughly 2 million vehicles to strengthen driver engagement checks, and a separate action addressing risky intersection behavior by FSD Beta. An engineering analysis can culminate in a negotiated remedy, a formal recall, or a closure if no defect is found. Here, the combination of crash reports, incomplete update documentation, and signals of underreported incidents elevates the likelihood of corrective measures.
Implications For Tesla’s Ambitious Robotaxi Push
The widened inquiry lands as Tesla promotes driverless ambitions, including plans for a robotaxi service in select markets. Any finding that FSD (Supervised) struggles in common visibility challenges—or fails to communicate those limits early enough—could complicate both regulatory trust and public readiness for higher automation levels. State agencies have also pressed Tesla on marketing claims for advanced driver assistance, reinforcing the intense oversight surrounding automated driving rollouts.
What Tesla Owners Should Watch And Do Right Now
- Treat FSD (Supervised) as a Level 2 system that requires full attention, hands on the wheel, and readiness to take over instantly.
- Exercise extra caution in fog, heavy rain, tunnels, or low sun, when cameras are most challenged.
- Monitor release notes for changes tied to visibility, and report anomalies promptly; regulators are closely evaluating real-world behavior, not just lab performance.
What Comes Next In Tesla FSD Safety Investigation
ODI is expected to pursue deeper technical submissions from Tesla, review additional crash data, and analyze the efficacy of any software mitigations. If investigators determine a safety-related defect, the agency can require a remedy that reaches the full affected fleet. Until then, the case stands as a pivotal test of how vision-first driver-assistance software manages messy, low-visibility reality—and how quickly federal oversight can close the gap when it does not.