Tesla “Full Self-Driving” Beta [V12.4.1]

Some random behavior:

4 Likes

Wait till “this version” runs over a couple of people or hits a utility pole or two. Amazingly, the accident rate for autonomous vehicles remains substantially worse than for humans. Or maybe it’s not so amazing since AI remains dumb as a box of rocks. As careless, nutty, and stupid as humans are, AI is far less competent to pilot an automobile.

4 Likes

Interesting… had a chance to drive the new FSD version, and even over shorter distances its behavior was anxiety-inducing at times.

3 Likes

Have a cite, Doc?

On the one hand, I had heard the opposite from Musk and others.

On the other hand, Musk and others asserting the opposite did not explain their controls.

Did they control for vehicle age and price?

Did they control for driver competence (e.g., new drivers and geezers not being likely to drive FSD Teslas)?

Did they control for geography?

Did they control for driving conditions (e.g., if FSD shuts down in dangerous weather conditions, then the accidents per mile when using FSD would be artificially low)?

Did they control for misuse of FSD, like defeating the driver engagement monitoring (which raises the question of whether they should)?

4 Likes

It is nontrivial to control for all factors. However, I direct you to the following approximate facts, known to me without doing a web search:

  • There have been on the order of 100 million autonomous automobile miles logged.
  • There have been numerous fatalities caused by autonomous cars.*
  • The fatality rate for human drivers is about 1 per 100 million miles (US).

Hence, the fatality rate for autonomous cars exceeds that for human drivers. Furthermore, proving that autonomous drivers are safer than humans would require far more than 100 million logged miles, for reasons I hope are obvious. A number of years ago, RAND did a study to quantify this. It must have been around 2018, because that’s when a woman was run over by a self-driving car (with a human minder who wasn’t paying attention) in Tempe, Arizona. I was peripherally involved in this business around that time, about which more later.
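To make the “far more than 100 million miles” point concrete, here is a minimal back-of-the-envelope sketch, assuming fatalities arrive as a Poisson process (my own illustration of the statistical logic, not the RAND study’s actual methodology):

```python
import math

# Model fatalities as a Poisson process with rate r per mile. If a fleet logs
# m fatality-free miles, we can reject a hypothesized rate r at confidence c
# once exp(-r * m) <= 1 - c.

human_rate = 1 / 100e6   # ~1 fatality per 100 million miles (US benchmark)
confidence = 0.95

# Solve exp(-r * m) = 1 - c for m:  m = -ln(1 - c) / r
miles_needed = -math.log(1 - confidence) / human_rate
print(f"Fatality-free miles needed: {miles_needed / 1e6:.0f} million")
# ~300 million miles just to match the human rate with zero fatalities
# observed; demonstrating a meaningful improvement takes far more.
```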

A complication arises because many tests of autonomous systems do, indeed, have human minders who are paying attention. That means they can correct errors made by the autonomous system and thereby avoid accidents; as this post’s video illustrates, human interventions still occur.

Since you wanted cites:

Most autonomous vehicles rely on a complex suite of sensors (passive optical, lidar, radar) that provides far more data than humans have available, and yet they still manage to screw up. Some manufacturers, for instance, are considering range-gated doppler lidar.†

If human drivers had cars equipped with warning systems that incorporated such information, they would undoubtedly be even less accident-prone. In short, it’s not a fair comparison. I’d love to have a heads-up doppler-lidar display. At the same time, the fact that these AI systems require so much more data implies they are far stupider than humans.

*Most of the logged miles are not truly fully autonomous, and none of those fatalities occurred in fully autonomous mode. The number of fully autonomous miles driven is far too small to support conclusions about the safety record.
†I did an assessment of such a system in 2018.

4 Likes

From the article, those 18 were Level 2, not Level 3+:

Across those 419 crashes, the NHTSA records 18 definite fatalities. Importantly, all 18 deaths were Level 2 ADAS cars. Thus far, no carmaker has reported any fatalities due to a fully autonomous vehicle.

There are crucial caveats to keep in mind, however. The NHTSA lists 19 accidents in ADS-equipped vehicles in which injury level was recorded as “unknown.” The NHTSA also does not require carmakers to report whether an accident was caused by malfunction or user error.

As long as the data is limited in these ways, fully assessing the safety of autonomous vehicles will be necessarily complicated. That rules out drawing broad conclusions. For now, the NHTSA does not record any fatalities definitively caused by a car under fully autonomous control. Making a more complete statement regarding self-driving car safety will have to wait until more comprehensive data becomes available.

Read More: How Many People Have Actually Been Killed By Self-Driving Cars?

The reports must be made for all Level 2 or greater ADAS-equipped cars regardless of whether the system was active. And Level 2 includes cars with simple autobrake and lane-keeping assist.

The “100 million autonomous automobile miles logged” must be miles in autonomous mode, as there are probably at least 10 million such cars on the road; otherwise the figure would work out to roughly 10 miles per car, as the quick check below shows.
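A sanity check on that figure (the fleet size is the estimate above; the annual mileage is my assumption):

```python
# If the 100 million "autonomous miles" counted every mile driven by
# ADAS-equipped cars, each car would account for almost nothing.
logged_miles = 100e6            # the quoted figure
adas_cars = 10e6                # "at least 10 million such cars" (estimate)
typical_annual_miles = 12_000   # assumed average US annual mileage per car

print(f"{logged_miles / adas_cars:.0f} miles per car")  # ~10 miles per car
print(f"vs ~{typical_annual_miles:,} miles driven per car per year")
# So the figure must count system-engaged miles, not total fleet miles.
```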

Notice a problem or three?

4 Likes

Did you take a moment to read my footnote on this point? It’s that thingie with the asterisk.

Not so fast. It’s not clear how many of the miles are in fully autonomous mode. Just because a vehicle is capable of fully autonomous operation doesn’t mean it was in that mode for all, or even most, of its miles. Furthermore, likely none of these miles were Level 5. Level 3 is defined as Conditional Driving Automation, in which human override is still required. Thus, the only way a fatality happens is if the human isn’t doing his/her/zer’s job. This is what happened in the Tempe accident. I’m not sure why that one is not included in the fatality stats. Even Level 4 requires aids such as geofencing and possibly human involvement, so it is not truly autonomous.

This misses the overall point: it is unreasonable to compare legacy human driving statistics with those of highly enhanced semi-autonomous vehicles. Add all the newest sensors to a human-driven auto and the fatality stats would likely improve, while still leaving the human with final, executive control.

4 Likes

Doc, that’s where I got my quote from. Note, I did not include “fully” in “must be in autonomous mode”. But they must be in some autonomous mode, because the Level 2-or-greater-equipped cars likely drive more than a few dozen miles per year apiece.

2 Likes

Again, read and respond to my entire comment. All these cars have human minders. That means accidents can be avoided by humans even if the AI makes an error. If they are not fully autonomous, it is not a valid test of fully autonomous vehicles.

In short, we have no data on the performance of fully autonomous cars in realistic conditions, because no such miles have been reported. What we do have is a lot of accidents and some fatalities, even with some human supervision and a system that appears to require greatly enhanced sensors to carry out a task humans do with their eyes and ears. A proper apples-to-apples comparison would give both similar sensor suites and allow no human interventions for the autonomous vehicle.

It would be like comparing a cohort of human-driven cars without rearview mirrors, without electric turn signals (hand signals only), without antilock brakes, and without any other feature newer than 1950, to a second group of human-driven cars with contemporary features, and then concluding: “hey, look, the second set of humans must be better drivers since they have fewer accidents or fatalities.”

3 Likes

And these human drivers and supervisors are Asian or female or both?

3 Likes

Ha! Funny you should mention that. In the Tempe, AZ incident, the human minder was a male-to-female trans individual, if memory serves. Or, at least, some guy who was pretending to be a woman. So you could say the accident was caused by a defective tranny.

You can’t make this stuff up.

6 Likes

And that’s assuming the second group has fewer accidents… there’s also the confounding variable of “moral hazard”… maybe the first group would drive way more defensively if they knew they were in a death trap on wheels.

3 Likes

I recall reading years ago that the introduction of anti-lock brakes led the average driver to drive more aggressively, hoping that the anti-lock brakes would save her.

Let’s face it – the Good Old Days are behind us now. Once Full Self Driving is mandatory in vehicles (It is coming! You know it!) and all vehicles are continuously monitored through Starlink, the driver who tries to switch into manual mode will find that he is governed to the speed limit – and better not try doing a California stop!

The peons used to enjoy the freedom of driving; therefore the experience must be made as tedious and annoying as possible. Presumably, the personal cars of politicians and senior bureaucrats will be exempt.

2 Likes

True, though it’s more risk compensation than moral hazard. Risk compensation is definitely a factor with added safety devices. Examples such as antilock brakes are discussed in this article; I found the Swedish case most interesting.

4 Likes

They made a song for that:

2 Likes

Thank you for that! :slight_smile: You made me realize that I’ve been using the term “moral hazard” incorrectly my entire life when I really mean “risk compensation”.

3 Likes

I’ve had a similar experience with winter tires. I used to drive all year round on “four season” (cough) tires. When I got my first set of winter tires, I found myself driving much more aggressively: braking harder over a shorter distance, going around curves much faster, etc. I think my level of safety is probably about the same with or without winter tires.

4 Likes

Human behavior around driving risk has been used to put reasonable estimates on the value of a human life (@CTLaw might have some smart things to say on this matter as well):
https://www.princeton.edu/news/2002/09/16/researchers-tally-value-human-life
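The logic is a revealed-preference trade: drivers accepted a higher fatality risk in exchange for time saved, and the implied trade-off prices a statistical life. Here is a minimal sketch of that arithmetic with illustrative numbers (the ~2 mph speed increase comes from the study; the value of time and the added risk are my assumptions, not the paper’s actual inputs):

```python
# Back-of-the-envelope value-of-a-statistical-life (VSL) calculation.
speed_old, speed_new = 55.0, 57.0   # mph: the observed ~2 mph increase
trip_miles = 100.0                  # miles driven at the higher speed
value_of_time = 20.0                # $/hour -- assumed wage rate

hours_saved = trip_miles / speed_old - trip_miles / speed_new
dollars_saved = hours_saved * value_of_time

extra_fatality_risk = 1e-8          # assumed added deaths per mile at speed
added_risk = extra_fatality_risk * trip_miles

# VSL = dollars willingly "spent" per unit of fatality risk accepted
vsl = dollars_saved / added_risk
print(f"Time saved: {hours_saved * 60:.1f} min, worth ${dollars_saved:.2f}")
print(f"Implied value of a statistical life: ${vsl / 1e6:.1f} million")
```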

3 Likes

Studying the years from 1987 to 1993, the economists found that, on average, people drove about 2 miles per hour faster on roads with a 65-mile-per-hour-limit than they did under the old 55-mile-per-hour limit.

2 mph faster? Those economists must be bicycle-riding Millennials. That claim does not jibe with lived experience. In my local rural area, there is a long good-quality mostly straight road with a 65 mph limit section and a 55 mph limit section. (Don’t ask why – Bureaucrats!) The average driver drives at 70 mph on the 65 mph section … and at 70 mph on the 55 mph section.

7 Likes

How long until a mobile connection is mandated for every vehicle as a condition of registration? (They must be making big plans, as my PA registration fee just jumped by about 40%.) Once that’s in place, previously written, ready-to-go software will happily send automatic warnings and fines, distributed in a manner previously perfected with autonomous drones dispensing explosives to terrorists abroad. Incidentally, those drones’ IQs are actually higher than those of the meat drones currently employed by the US gov.; they are no more polite.

Note: we have already officially been told that the biggest terrorist threat to the US is “domestic” (read: anyone not fully on board with woke dogma) - you know, like people who attend their local school board meetings and object to racial stereotyping and hatred of entire races based merely on skin color. Of course, there is no terrorist threat resulting from millions of illegal entrants from around the world crossing the southern border unhindered (and given $$, cell phones, and airline tickets to most any US destination). The new rule is that POCs can say or do no wrong; POWs (“people of whiteness”) can say or do nothing right. How long before “kill switches” are also mandatory? Initially for the car’s electronics; soon afterwards, for its occupants.

This essential-for-domestic-tranquility executive function will surely be elevated to cabinet status. I propose: “The Department of Kinetic Justice”. No-knock warrants will be delivered on the target car’s video screen no less than 3 seconds prior to drone application of a rapid scheduled disassembly device (RSDD). By enabling statute, executive orders of this department are final and not subject to judicial review; jurisdiction has been specifically removed from the federal courts.

Get ready! All this is coming soon to a theater (of undeclared civil war against the insufficiently woke) near you!

3 Likes