Recent investigations and studies have highlighted significant risks associated with self-driving technologies, particularly Tesla’s Autopilot and Full Self-Driving (FSD) systems. The National Highway Traffic Safety Administration (NHTSA) has opened multiple probes into Tesla’s FSD system following crashes involving vehicles operating with FSD engaged in reduced-visibility conditions such as sun glare, fog, or airborne dust.
One tragic case involved a fatal accident in Arizona, in which a Tesla Model Y with FSD engaged struck and killed a pedestrian. This and other incidents raise concerns about whether Tesla’s FSD technology can detect and respond appropriately to challenging environmental conditions. According to NHTSA, the agency is assessing:
- The ability of FSD’s engineering controls to detect and respond appropriately to reduced roadway visibility conditions;
- Whether any other similar FSD crashes have occurred in reduced roadway visibility conditions and, if so, the contributing circumstances for those crashes; and
- Any updates or modifications from Tesla to the FSD system that may affect the performance of FSD in reduced roadway visibility conditions.
Tesla has faced recurring safety concerns regarding its driver-assistance systems. In December 2023, the company recalled nearly 2 million U.S. vehicles following an NHTSA investigation into approximately 1,000 crashes involving its Autopilot system. Earlier, in February 2023, Tesla recalled nearly 363,000 vehicles equipped with FSD, with NHTSA stating that the software posed an “unreasonable risk to motor vehicle safety” by failing to adequately adhere to traffic laws.
Assessing the Safety Impact of Assisted Driving Technologies
While advanced driver-assistance systems were initially expected to lower crash risk by reducing human error, studies from the Insurance Institute for Highway Safety (IIHS) have found little evidence that these systems are preventing collisions. Rather, IIHS researchers observed that drivers often engaged in activities such as using their phones or eating, even when the system required their attention. This behavior suggests an overreliance on automation, leading to complacency and reduced situational awareness, which can compromise safety.
Additionally, the terminology used by Tesla, specifically the terms “Autopilot” and “Full Self-Driving,” has been criticized for potentially misleading consumers into overestimating the capabilities of their vehicles. NHTSA has expressed concerns that such language may give drivers a false sense of security, leading them to believe that the vehicle can operate autonomously without human intervention, contrary to the requirements outlined in Tesla’s own manuals. This discrepancy between marketing and actual system capabilities poses a significant risk, as it may encourage misuse of the technology.
According to the IIHS, “partial driving automation needs to be thought of as a convenience feature and not a safety feature until there is strong support otherwise. Designing partial driving automation with robust safeguards to deter misuse will be crucial to minimizing the possibility that the systems will inadvertently increase crash risk.”
Performance in Urban vs. Highway Environments
Most studies of self-driving systems focus on highway use, where conditions are more predictable. Urban environments, however, pose additional challenges, including pedestrians, cyclists, and complex intersections. Reports from real-world testing have shown that self-driving software, including Tesla’s FSD, has difficulty accurately predicting pedestrian and cyclist movements, increasing the risk of accidents in city settings.
Lack of Standardized Safety Regulations
While Tesla’s self-driving technology has been the focus of many safety investigations, concerns extend to the broader industry. There is currently no unified federal regulatory framework governing self-driving technology, which means automakers can implement their own systems with varying levels of safeguards. The lack of standardized safety benchmarks increases the risk of inconsistent performance across different manufacturers and models.
Change may be coming. When the Autonomous Vehicle Industry Association (AVIA) released its federal policy framework in January 2025, its CEO, Jeff Farrah, said, “[t]he U.S. remains at the forefront of AV innovation, with AVs already improving road safety, supply chain efficiency, and accessibility nationwide. However, federal action is needed to ensure American leadership on this path-breaking technology. Federal inaction in recent years has occurred against a backdrop of enhanced global competition on AVs, particularly from China, as well as a wave of state action on AV policy. Policymakers must act decisively to ensure that AV technology continues to thrive and deliver benefits across safety, mobility, and economic growth.” But industry associations cannot be left to develop safeguards alone. It is vital that consumer and safety groups be given a strong voice in the development of standards.
While advancements in self-driving technologies offer promising benefits, these developments also present notable risks to consumers. Addressing these risks requires not only improvements in vehicle performance and driver engagement but also clearer regulations and stricter safety oversight. Without these safeguards, the potential benefits of self-driving technology may be overshadowed by growing consumer distrust and safety concerns.