First, let’s go over just what happened. Thanks to a California Public Records Act request from the website The Intercept, video and photographs of the crash are available, as is the full police report of the incident. The crash happened on I-80 eastbound, on the lower level of the Bay Bridge. There are five lanes of traffic there, and cars were moving steadily at around 55 mph; there appeared to be no obstructions, and visibility was good. Nothing unusual at all. A Tesla was driving in the second lane from the left with its left turn signal on. The Tesla began to slow, despite no traffic anywhere ahead of it, then pulled into the leftmost lane and came to a complete stop — on the lower level of a bridge, with traffic all around it going between 50 and 60 mph or so. The results were grimly predictable, with cars stopping suddenly behind the now-immobile Tesla, leading to the eight-car crash. Here’s what it looked like from the surveillance cameras:
…and here’s the diagram from the police report:
According to the police report, here’s what the driver of the Tesla (referred to in the report as V-1) said about what happened:

So, the driver’s testimony was that the car was in Full Self-Driving (FSD) mode, and it would be easy to simply blame all of this on the demonstrated technological deficiencies of FSD Beta. This could be an example of “phantom braking,” where the system becomes confused and attempts to stop the car even when there are no obstacles in its path. It could be that the system disengaged for some reason and attempted to get the driver to take over, or it could be any number of other technological issues, but that’s not really what the underlying problem is.

This is the sort of wreck that, it appears, would be extremely unlikely to happen with a normal, unimpaired driver (unless, say, the car depleted its battery, though the police report states that the Tesla was driven away, so it wasn’t that) because there was really no reason for it to happen at all. It’s about the simplest driving situation possible: full visibility, moderate speed, straight line, light traffic.

And, of course, it would not have happened if the driver had been using this Level 2 system as intended. Remember, even though the system is called Full Self-Driving, it is still only a semi-automated system that requires a driver’s full, nonstop attention and a readiness to take over at any moment, which is something the “driver” of this Tesla clearly did not do. Of course, Tesla knows this, we all technically know this, and the police even included a screengrab from Tesla’s site that states this in its report:
We all know this basic fact about L2 systems, that they must be watched nonstop, but what we keep seeing is that people are just not good at doing this. This is a drum I’ve been banging for years and years, and sometimes I think to myself: “Enough already, people get it,” but then I’ll see a crash like this, where a car just does something patently idiotic and absurd and entirely, easily preventable if the dingus behind the wheel would just pay the slightest flapjacking bit of attention to the world outside, and I realize that, no, people still don’t get it. So I’m going to say it again.

While, yes, Tesla’s system was the particular one that appears to have failed here, and yes, the system is deceptively named in a way that encourages this idiotic behavior, this is not a problem unique to Tesla. It’s not a technical problem. You can’t program your way out of the problem with Level 2; in fact, the better the Level 2 system seems to be, the worse the problem gets. That problem is that human beings are simply no good at monitoring systems that do most of the work of a task while remaining ready to take over that task with minimal to no warning.

This isn’t news to people who pay attention. It’s been known since 1948, when N.H. Mackworth published his study The Breakdown of Vigilance During Prolonged Visual Search, which defined what has come to be known as the “vigilance problem.” Essentially, the problem is that people are just not great at paying close attention to monitoring tasks. If a semi-automated driving system is doing most of the steering, speed control, and other aspects of the driving task, the job of the human in the driver’s seat changes from one of active control to one of monitoring for when the system may make an error. The results of the human not performing this task well are evidenced by the crash we’re talking about.

I think it’s not unreasonable to think of Level 2 driving as potentially impaired driving, because the mental focus of a driver engaging with the driving task from a monitoring approach is impaired when compared to that of an active driver. I know lots of people claim that systems like these make driving safer, and they certainly can, in a large number of contexts. But they also introduce significant new points of failure that simply do not need to be introduced.

The same safety benefits could be had if the Level 2 paradigm were flipped: the driver would always be in control, but the semi-automated driving system would do the monitoring, ready to take over if it detected dangerous choices by the human driver. This would help in situations with a tired or distracted or impaired driver, but it would be less sexy, in that the act of driving wouldn’t feel any different from normal human driving.

If we take anything away from this wreck, it shouldn’t be that Tesla’s FSD Beta is the real problem here. It’s technically impressive in many ways, though certainly by no means perfect; but it’s also not the root of what’s wrong, which is Level 2 itself. We need to stop pretending this is a good approach, and start being realistic about the problems it introduces. Cars aren’t toys, and as much fun as it is to show off your car pretending to drive itself to your buddies, the truth is it can’t, and when you’re behind the wheel, you’re in charge — no question, no playing around.

If you want to read about this even more, for some reason, I might know of a book you could get. Just saying.
New IIHS Study Confirms What We Suspected About Tesla’s Autopilot And Other Level 2 Driver Assist Systems: People Are Dangerously Confused
Level 3 Autonomy Is Confusing Garbage
Several manufacturers have started adding lane changing to their Level 2 systems, which makes them seem like they are operating at a higher level and encourages people to engage less. Level 2 features are for “support”.
Tesla is perfectly happy with the public’s perception that their ADAS can operate at a higher level than it is capable of. Unfortunately, misuse of their system, coupled with beta testing their software on public roads, has made things more dangerous for everyone rather than fulfilling the stated purpose of making things safer.
The only true Level 3 (You ARE NOT driving) system available in the US is from Mercedes-Benz (certified for public roads in NV and CA with more states to come), but that system will only engage below 40 mph. Anything above 40 mph and it’s back to Level 2 (You ARE driving).
I’m not sure how you would study this, but it would be interesting to know the difference in reaction times between “deer ran in front of you” and “random and unexpected hard braking from the car in front.” I’d be willing to bet the latter is much, much worse.
“Enough already, people get it.” Obviously not, Jason, obviously not…
Short version of the story: the car suffers some sort of mechanical failure, perhaps the computer diagnostics indicate it is not safe to continue, the car pulls to the side, and a bunch of cars hit it.
I still love the performance of this magnificent beast!
2. Beta testing by the general public in their private road vehicles is dumber still.
“Full Seat-Belt (FSB) has many advanced safety functions, but you cannot rely on Full Seat-Belt to keep you restrained – it may unexpectedly disengage at any time, so you should always keep your hands ready to brace yourself or be thrown from the vehicle in the event of an accident. Full Seat-Belt is a driver safety-assistance system. It does not provide seat belt functionality. Also, the manufacturer bears no responsibility if Full Seat-Belt should suddenly disengage during regular driving or in an emergency event or if it performs unexpected maneuvers – the driver should be constantly monitoring how FSB is behaving and be ready to immediately disengage and take over Full Seat-Belt’s safety functions.”
Oh, also there’s a whole toxic sub-group of Full Seat-Belt drivers who deliberately stage unapproved and unintended scenarios on public roads in order to back up the claim that FSB is actually a fully-functional seatbelt system, with the tacit approval of the manufacturer, who cannot legally make those claims themselves.
I’ve said it before and I’ll say it again – if a seatbelt can disengage at any time, unexpectedly, then it isn’t ready yet and it certainly shouldn’t be tested on public roads with other unsuspecting humans around.