I think we’re missing the point that this is currently designed for a driver to monitor at all times, that the driver intervened appropriately, and that this thereby provided another training example for the network. This is also a beta version being tested by humans who have the legal and moral responsibility for control of the car.
One of the attributes of human perception is that it is TERRIBLE at maintaining persistent vigilance without engagement.
Even at a very low level, the nervous system is designed to habituate to constant stimuli; e.g., when you first encounter something that smells (good or bad) it can be overwhelming, but after a few minutes the same smell barely registers. More on point, spend some time looking forward at speed, or rotating (e.g., in a swivel chair), then stop quickly, and watch how your visual system creates the illusion of everything flowing in the opposite direction.
Now, scale that up to higher levels of cognition. The more the car gets right, the worse the human's attention will be. When a car does almost everything right, people can and will literally read or fall asleep at the wheel. Until that one critical failure.
As a former licensed road racing driver and champion, I find the idea of anything between L2 and L4 to be terrifying. I can and have handled many very tricky and emergency situations at a wide range of speeds on everything from dry roads to wet ice (on and off the track) — when my attention was fully focused on the road, the situation, my grip levels, the balance of the car, etc.
The idea of being largely unfocused while the car does almost everything, then getting an alert and having to regain, in fractions of a second, full orientation to everything I need to know before taking action, is terrifying. 60 mph is 88 feet per second. Even a quick reaction where I've squandered only a half second figuring out what to do is the difference between avoiding or stopping before an obstacle, and blowing ~50' past it, or over it, at speed.
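To put the arithmetic in one place, here's a minimal back-of-the-envelope sketch (plain unit conversion; the speeds and delays are illustrative, not measurements of any particular system or driver):

```python
# Minimal sketch: how many feet of road go by while a driver is still
# figuring out what to do. Plain unit conversion; illustrative numbers only.
def distance_covered_ft(speed_mph: float, delay_s: float) -> float:
    """Distance traveled (in feet) during delay_s seconds at speed_mph."""
    feet_per_second = speed_mph * 5280 / 3600  # 60 mph -> 88 ft/s
    return feet_per_second * delay_s

for delay in (0.5, 1.0, 2.0):
    print(f"{delay:.1f} s of lost orientation at 60 mph = "
          f"{distance_covered_ft(60, delay):.0f} ft")
# 0.5 s -> 44 ft, 1.0 s -> 88 ft, 2.0 s -> 176 ft, before any braking even starts.
```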
Attempts to say "it's just fine because a human is in the loop (and ultimately responsible)" are just bullsh*t and evading responsibility, even if human beta testing is fantastic for gathering massive amounts of data to analyze.
Among high risk and speed sports, it is almost axiomatic for us to draw distinctions between "smart crazy" vs "dumb crazy", and everyone knows the difference without a definition. The best definition I heard was that it's the difference between [using knowledge, technology, and skill to make a hazardous activity reliably safe] vs [getting away with something]. You can 'get away' with Russian Roulette five out of six times, and you'll probably get a great adrenaline rush, but you can't expect to do so for long.
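Just to put rough numbers on that analogy (plain probability, my own illustration, not a claim about any real system's failure rate):

```python
# Why "getting away with it" doesn't scale: the chance of never hitting the
# failure case across repeated plays of a 1-in-6 risk drops off quickly.
def survival_probability(rounds: int, p_fail: float = 1 / 6) -> float:
    """Probability of never hitting the failure case across `rounds` plays."""
    return (1 - p_fail) ** rounds

for n in (1, 5, 10, 20):
    print(f"{n:2d} plays: {survival_probability(n):6.1%} chance of getting away with it")
# 1 play ~83%, 5 ~40%, 10 ~16%, 20 ~2.6%
```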
Although this kind of "full self driving" has much better odds vs Russian Roulette, it is still unreliable, and the system of expecting the human to always be able to detect, orient, and respond in time to the car's errors is systematically unsafe. You will 'get away with it' a lot, and there will even be times when the system catches things the humans won't.
But to place the entire "legal and moral responsibility" on the human to 100% reliably operate a system that is specifically designed against human capabilities is wrong, unless you want to say that this is a [no human should ever operate under these conditions] situation, like drunk driving, and outlaw both the system and the act of operating it.
If your concerns are correct, shouldn’t we see a lot MORE collisions among the millions of current Tesla drivers using the existing, less advanced system than we do among comparable vehicles and drivers? Wouldn’t we expect to see higher insurance premiums for Tesla drivers with FSD than for comparable drivers and comparably expensive cars? That doesn’t seem to be the case for the most numerous Tesla vehicles[1]. In which case this sounds like an “it works in practice, but does it work in theory?” kind of situation :)
Indeed - one thing insurers are good at is gathering good and relevant data! In this case, a quick skim shows that a Tesla often costs more to insure than a comparable regular car, but not by a ton. What I'd want to see is the data for only the models with the "Full Self Drive" option.
Not necessarily more, but we do see some really horrifying ones that humans would rarely do. E.g., the car in FL that just full-self-drove at full speed straight under a semi-trailer turning across the road, decapitating the driver, or the Apple exec who died piling into the construction divider on the highway because the Tesla failed to understand the temporary markings on the road.
I'm fine with Tesla or other companies using and even testing automated driving systems on public roads (within reason). Ultimately, it should be better, and probably is already better than the average human.
My objection is ONLY to the idea that the human driver should be considered 100% morally & legally responsible for any action of the car.
Aside from the fact that the code is secret and proprietary, and even its authors often cannot immediately see why the car took some action, the realities of actual human performance make such responsibility a preposterous proposition.
The maker of the autonomous system and its user must share responsibility for the actions of the car. When there is a disaster, it will, and should, come down to a case-by-case examination of the actual details of the incident. Did the driver ask the car/system to do something beyond its capabilities, or was s/he unreasonably negligent? Or did the system do something surprising and unexpected?
In the present situation, where the car started to make a sharp & surprising turn and almost ran over a pedestrian, it was Pure Dumb Luck that the driver was so attentive and caught it in time. If he'd been just a bit slower and the pedestrian had been hit, I would place the blame 90%++ on Tesla, not the driver (given only the video). OTOH, there are many other cases where the driver tries to falsely blame Tesla.
We just can't declare, a priori, that one or the other is always at fault.