You're Being Lied to About Electric Cars

Musk himself has stated that getting to L4/L5 driving with any system will require somewhere between 6 and 10 billion miles of real-world data, and 10x that in synthetic data
I believe that. I'm no expert. My undergrad engineering degree did focus on robotics, but it had nothing to do with AI. Plain ol' programming & control systems.
People don't really understand just how difficult a problem solving for AADS really is. It's very difficult, especially when dealing with edge cases.
I love when people just throw around L3, L4, L5, like it is just a series of baby steps.
Edge cases? Pfft, so a few cyclists and baby carriages get flattened. We're talking progress here.
 
I do understand your argument
In theory you give the edge to real-world data
There are differing opinions on the value of real-world vs. simulated data
Simulated data is how the "new" nuclear mini reactors are being approved
People are investing billions of dollars based on these simulations
In the development of FAD we will see how simulated data matches up against real-world data
In the real world, you could drive thousands of miles and never encounter a fire truck parked in the fast lane
In simulation, you can plan on this happening and provide the algo to deal with the situation (toy sketch below)
Only time will tell
But to me the argument is not a convincing one
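A toy sketch of the point, in Python. Every scenario name and rate here is invented, just to illustrate the difference between waiting for a rare event and injecting it:

```python
import random

# Toy sketch, not a real simulator. All names and rates are invented.
# The point: a rare edge case shows up on its own schedule in the real
# world, but in simulation you can inject it as often as you want.

REAL_WORLD_RATE = 1 / 5_000_000  # hypothetical: one fire-truck event per 5M miles

def miles_until_real_world_event(rate=REAL_WORLD_RATE):
    # Exponential waiting time: how many miles before you happen to see it once.
    return random.expovariate(rate)

def simulated_batch(n=1000, edge_fraction=0.25):
    # In simulation you just dial in how often the edge case appears.
    return ["fire_truck_in_fast_lane" if random.random() < edge_fraction
            else "routine_driving" for _ in range(n)]

print(f"Real world: ~{miles_until_real_world_event():,.0f} miles before one encounter")
batch = simulated_batch()
print(f"Simulation: {batch.count('fire_truck_in_fast_lane')} encounters in {len(batch)} scenarios")
```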
So we just took a trip up to BB using FSD 14.1.3 on the way up, installed FSD 14.1.4 while at BB, and ran it all the way home. Well over 1,000 miles of driving. I literally hardly ever had to intervene the entire time we were up at BB, including multiple day trips to Fort Ticonderoga, Burlington VT, Woodstock VT, etc. Keep in mind this is mostly rural mountainous driving on winding back roads with fading paint (in some cases no paint at all), bad lighting, and fall foliage season (leaves all over the place). Literally not a single critical safety intervention in over 1,000 miles. In most cases the vehicle not only drove itself everywhere but parked at the conclusion of each drive in a parking space, again with zero intervention.

I had a few of what are termed "encouragements," where I hit the go pedal a few times because the system was being overly cautious. It's still an early release of v14.1.x, so this is to be expected. I also came across a couple of edge cases where I had to intervene, like the dirt parking lot at Fort Ticonderoga, where FSD wasn't sure what to do. The edge cases are myriad in number and difficult to train on.

I'd say we're solidly at L3 now with FSD v14. This is all because Tesla has six billion miles of real-world data to train the neural networks, plus synthetic data to train on via RL for edge cases. Show me another system in place, right now, that can come anywhere close to this and do so without geofencing and ultra-high-definition mapping (which is what every other system is using to get around gaps in their real-world datasets today, and why systems like Waymo can only function within certain geofenced areas). My argument is based on real-world experience, using a real-world system that we literally use in our vehicle every day. My wife uses FSD 14.1.4 every day to drive back and forth to work. Literally door to door now, parking and everything.

The only thing FSD won't do for us is pull into our garage at home. It parks in our driveway at present and tries to back into the garage space, but we have two individual garage doors, and it's just not quite there yet when attempting to pull into a two-car garage with older, smaller garage doors (8x7, not 9x8 like most newer homes have now) or back into a single garage space. It's trying every day though. My buddy, in comparison, has a single-car garage, and it pulls right into it every time, zero issues, and backs out of his garage every time, zero issues. FSD pulls right out of our garage bays too, zero issues. So we're halfway there already. That's pretty impressive really. I suspect by 14.3 it'll have it figured out, once "reasoning" is added into the FSD NN stack, with respect to my garage configuration.

FSD v13.2.9 was lacking what I'd term "last mile" edge case management. FSD v14.x largely resolves these last-mile items. IMHO we are solidly in L3 territory right now with FSD v14.x; whether Tesla pursues this officially or not remains to be seen. I'd personally rather they certify FSD v14 as L3 so I can stop having to pay attention when driving, and only have my attention requested when really needed.

I don't know of any other system that comes remotely close to this level of capability today. I'm convinced by real-world experiences based upon real-world data. I don't care about theoretical arguments in the least.
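Back-of-envelope on why that fleet-scale data figure matters so much. Every input below is an assumption of mine, not a Tesla figure, purely to show the arithmetic:

```python
# Back-of-envelope only: fleet size and miles/day are assumed, not Tesla figures.
fleet_vehicles = 2_000_000        # assumed vehicles with the system enabled
miles_per_vehicle_per_day = 10    # assumed average miles driven under the system
target_miles = 6_000_000_000      # the "six billion miles" figure cited above

daily_fleet_miles = fleet_vehicles * miles_per_vehicle_per_day
years_to_target = target_miles / (daily_fleet_miles * 365)
print(f"{daily_fleet_miles:,} fleet miles/day -> {years_to_target:.1f} years to {target_miles:,} miles")
```

Under assumptions like these, a large consumer fleet accumulates billions of miles in about a year, while a small test fleet starting from zero would take decades. That's the crux of the real-world-data argument.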
 
I need to step back from this argument
I have made my retirement funds from decisions made years ago
I bet on Nvidia shortly after the IPO and never sold any shares as it was in a joint IRA (unfortunately not a Roth)
It has done well
I made money in Tesla up and down as a trader
I have been playing with nuclear stocks of late
If Tesla takes over the world and its stock goes to the moon
It will do so without me
I am no longer investing
Arguing about camera-based FAD versus camera-plus-other-sensors FAD is just a reason to argue
I have no real skin in the game
 
I love when people just throw around L3, L4, L5, like it is just a series of baby steps.

No one has used the term baby steps, but these are the actual steps or tiers of autonomous vehicles. Is there a better term, like maybe updog?

Bill
 
I literally hardly ever had to intervene .... I'd personally rather they certify FSD v14 as L3 so I can stop having to pay attention when driving
Hardly ever means it's not ready for L3. In L2 you're liable for an accident; in L3 the car is liable.
 
Hardly ever means it's not ready for L3. In L2 you're liable for an accident; in L3 the car is liable.
No, that's not accurate. With L3, the system drives the vehicle until it encounters an issue that requires human intervention, at which point it requires the human driver to take over. This is how FSD v14 is really behaving today already; whether Tesla ever decides to pursue this officially is another matter. If Tesla were to pursue L3 certification, then attention monitoring would only be required when the vehicle indicates it requires human intervention, per the chart below (red emphasis mine):

[Attached image: SAE J3016 driving automation levels chart]
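A minimal sketch of the behavioral difference between L2 and L3 as I read J3016. Simplified, and not any vendor's actual logic:

```python
from enum import Enum

class Mode(Enum):
    L2 = "partial_automation"      # human supervises at all times
    L3 = "conditional_automation"  # human is only the fallback on request

def driver_must_pay_attention(mode: Mode, takeover_requested: bool) -> bool:
    """Simplified J3016-style rule: under L2 the human supervises constantly;
    under L3 the human only takes over when the system asks."""
    if mode is Mode.L2:
        return True
    if mode is Mode.L3:
        return takeover_requested
    raise ValueError(mode)

assert driver_must_pay_attention(Mode.L2, takeover_requested=False)
assert not driver_must_pay_attention(Mode.L3, takeover_requested=False)
assert driver_must_pay_attention(Mode.L3, takeover_requested=True)
```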
 
The terminology of these levels is not being adhered to in the press
I have seen the term Level 2 plus now being used
I have also seen incorrect definitions of L3 and L4 being bandied about
When I say incorrect, they do not adhere to the terminology as defined by the SAE
Just another day on the internet and the state of journalism in 2025
Chaos and clickbait prevail

In case anyone is looking for more information on the SAE and the levels of driving automation
The link below will explain it further


I found out where the term Level 2 plus entered the discussion of autonomous vehicle operation
Mobileye introduced the terminology

 
terminology of these levels is not being adhered to in the press
Do you think the press would let some vague engineering doc rule their lives? Suuuuuuuuuuuuuuuuuuuuuuuuuuure.
When the time comes, quick Cliff Note (as shown above):
L4 uses the word "LIMITED". Pretty bleepin' vague if you ask me, SAE.
L5 uses the words "ALL" & "EVERYWHERE".
Sue me if I see the difference being night & day.
Limited could mean "It really doesn't work." or could mean "It works great in a tiny area under tightly-controlled conditions, but don't you dare try to use it anywhere else, and with us holding your hand."
 
Do you think the press would let some vague engineering doc rule their lives? Suuuuuuuuuuuuuuuuuuuuuuuuuuure.
When the time comes, quick Cliff Note (as shown above):
L4 uses the word "LIMITED". Pretty bleepin' vague if you ask me, SAE.
L5 uses the words "ALL" & "EVERYWHERE".
Sue me if I see the difference being night & day.
Limited could mean "It really doesn't work." or could mean "It works great in a tiny area under tightly-controlled conditions, but don't you dare try to use it anywhere else, and with us holding your hand."

L4 “limited conditions” is further defined in the SAE documentation. From a practical standpoint with current providers such as Waymo, Robotaxi, etc., it effectively translates to geofencing combined with certain weather-related limits. Both Robotaxi and Waymo vehicles will actually pull over in heavy precipitation due to degraded sensor and camera conditions, for example.
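A conceptual sketch of that kind of operational-design-domain gate. The city names and rain threshold are invented, not Waymo's or anyone's actual logic:

```python
# Conceptual ODD gate for an L4 system; all names and thresholds invented.
GEOFENCE = {"san_francisco", "phoenix", "austin"}  # assumed service areas
MAX_RAIN_MM_PER_HR = 7.5                           # assumed precipitation limit

def within_odd(city: str, rain_mm_per_hr: float) -> bool:
    """True if the ride can continue; False would trigger a minimal-risk
    maneuver (e.g., pull over and stop), since L4 has no human fallback."""
    return city in GEOFENCE and rain_mm_per_hr <= MAX_RAIN_MM_PER_HR

print(within_odd("phoenix", rain_mm_per_hr=2.0))   # True: keep driving
print(within_odd("phoenix", rain_mm_per_hr=20.0))  # False: pull over
print(within_odd("boston", rain_mm_per_hr=0.0))    # False: outside geofence
```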


 
Do you think the press would let some vague engineering doc rule their lives? Suuuuuuuuuuuuuuuuuuuuuuuuuuure.
When the time comes, quick Cliff Note (as shown above):
L4 uses the word "LIMITED". Pretty bleepin' vague if you ask me, SAE.
L5 uses the words "ALL" & "EVERYWHERE".
Sue me if I see the difference being night & day.
Limited could mean "It really doesn't work." or could mean "It works great in a tiny area under tightly-controlled conditions, but don't you dare try to use it anywhere else, and with us holding your hand."
Young journalist faced with finding out what SAE means

In Urban Dictionary, "SAE" can refer to:
 
L4 “limited conditions” is further defined in the SAE documentation. From a practical standpoint with current providers such as Waymo, Robotaxi, etc., it effectively translates to geofencing combined with certain weather-related limits. Both Robotaxi and Waymo vehicles will actually pull over in heavy precipitation due to degraded sensor and camera conditions, for example.


Nvidia's system will operate in Blizzards, Hurricanes, Earthquakes, Tornadoes, Derechos, Tsunamis, Zombie Attacks, and Political Unrest
At least that is the claim on the Anti-Tesla forums
 
No, that's not accurate.
A definition by an international organization is not the same as determining legal liability in the place you drive your vehicle (e.g., Delaware).

With regard to liability, NHTSA says: When the AV systems are monitoring the roadway, the surrounding environment, and executing driving tasks (autonomy Levels 3 through 5), the vehicle itself should be classified as the driver.
 
Nvidia's system will operate in Blizzards, Hurricanes, Earthquakes, Tornadoes, Derechos, Tsunamis, Zombie Attacks, and Political Unrest
At least that is the claim on the Anti-Tesla forums
Since Nvidia's system hasn't actually been deployed in the real world by anyone, it's easy to make claims. Waymo said the same thing years ago with their system, which is effectively the same as Nvidia's architecture, and they have yet to deliver on those claims. I'll take real-world results over theoretical claims every day and twice on Sunday.
 
A definition by an international organization is not the same as determining legal liability in the place you drive your vehicle (e.g., Delaware).

With regard to liability, NHTSA says: When the AV systems are monitoring the roadway, the surrounding environment, and executing driving tasks (autonomy Levels 3 through 5), the vehicle itself should be classified as the driver.
Please provide a link to your source.
 
Since Nvidia's system hasn't actually been deployed in the real world by anyone, it's easy to make claims. Waymo said the same thing years ago with their system, which is effectively the same as Nvidia's architecture, and they have yet to deliver on those claims. I'll take real-world results over theoretical claims every day and twice on Sunday.
Did I really need to put a sarcasm alert on that post?
 
A definition by an international organization is not the same as determining legal liability in the place you drive your vehicle (e.g., Delaware).

With regard to liability, NHTSA says: When the AV systems are monitoring the roadway, the surrounding environment, and executing driving tasks (autonomy Levels 3 through 5), the vehicle itself should be classified as the driver.
NHTSA has endorsed the SAE J3016 standard directly, and bases its autonomous driving policies, categorizations, and related decisions on J3016, as is clearly evidenced by the NHTSA website itself here: https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety

Graphic for proof:

[Attached image: SAE automation levels graphic from the NHTSA automated vehicles safety page]
 
Did I really need to put a sarcasm alert on that post?
Learn to use emoticons; they exist for a reason LOL. Words often fail to convey tone, hence the reason emoticons were created in the first place. :cool:
 
NHTSA has endorsed the SAE J3016 standard directly, and bases its autonomous driving policies, categorizations, and related decisions on J3016, as is clearly evidenced by the NHTSA website itself here: https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety
Read the guidelines I posted above. NHTSA has been moving away from the SAE definitions for completely autonomous driving for years, as they don't provide the specificity required to classify vehicle accidents and make liability determinations.
 
Read the guidelines I posted above. NHTSA has been moving away from the SAE definitions for completely autonomous driving for years, as they don't provide the specificity required to classify vehicle accidents and make liability determinations.
Straight from the PDF document you provided, and I quote - for L3:

“Level 3” means the same as and is coterminous with the definition of “Level or Category 3 - Conditional Driving Automation” in SAE J3016 Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles § 5.4 (April 2021). A Level 3 system is an Automated Driving System (ADS) on the vehicle that can perform all aspects of the driving task under some circumstances. In those circumstances, the human driver must be ready to take back control at any time when the ADS requests the human driver to do so. In all other circumstances, the human driver performs the driving task.

Proof that it is, in point of fact, CONDITIONAL DRIVING AUTOMATION that sometimes requires human intervention, and that it directly references the SAE J3016 taxonomy. Thanks for proving my point. Quit while you're behind already. The DelDOT reference is irrelevant for the purposes of this thread.
 
🩴 I found the emoticons
Which one is best to signal sarcasm?

Sarcasm emoticons:

- raised eyebrow
- wink
- wink/tongue in cheek
- inverted smiley indicates irony/joking sarcasm


 
The main reason autonomous driving isn't going to happen soon is liability. How liability is assigned would be ambiguous for self-driving EVs because of the complexity of the autonomous car, an owner missing a scheduled maintenance or software update, a software update that the owner had no control over and didn't receive, and, believe it or not, weather.

Different components of the autonomous car may end up with liability assigned to the parts provider instead of solely to the manufacturer.

You weren't driving the autonomous car, but as the owner of the autonomous car, much like any property you own, you could be liable for accidents under the current legal system. Currently, insurance companies do cover cars with partial automation because the liability falls on the human driver.

Bill
 
The main reason autonomous driving isn't going to happen soon is liability. How liability is assigned would be ambiguous for self-driving EVs because of the complexity of the autonomous car, an owner missing a scheduled maintenance or software update, a software update that the owner had no control over and didn't receive, and, believe it or not, weather.

Different components of the autonomous car may end up with liability assigned to the parts provider instead of solely to the manufacturer.

You weren't driving the autonomous car, but as the owner of the autonomous car, much like any property you own, you could be liable for accidents under the current legal system. Currently, insurance companies do cover cars with partial automation because the liability falls on the human driver.

Bill
Yes, from what I've read, and I've read quite a bit: until the insurance, legal, and government regs catch up, L3 systems will use the same insurance structure that's in place today. Newer online insurance companies like Lemonade are moving toward offering plans that obtain FSD vehicle usage data via the Tesla APIs, and will offer heavily discounted rates when the car is driving on FSD as opposed to the human driving. Tesla's FSD safety data, which is published quarterly, already shows that Autopilot usage is roughly 9x safer than a human driver from a "xxxxx miles per accident" statistical standpoint:



Firms like Lemonade are essentially starting to provide steep rate discounts on auto insurance based upon this data, encouraging people to use FSD in return for lower rates.
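A rough sketch of how usage-based pricing like that could work. The per-mile rates and the discount are numbers I made up, not Lemonade's actual formula:

```python
# Hypothetical usage-based premium; all rates invented, not any insurer's model.
def monthly_premium(total_miles: float, fsd_miles: float,
                    base_rate_per_mile: float = 0.08,
                    fsd_discount: float = 0.5) -> float:
    """Charge a lower per-mile rate for miles driven under FSD,
    reflecting the claimed lower accident rate."""
    manual_miles = total_miles - fsd_miles
    return (manual_miles * base_rate_per_mile
            + fsd_miles * base_rate_per_mile * (1 - fsd_discount))

# Example month: 1,000 miles total, 800 of them on FSD.
print(f"${monthly_premium(1000, 800):.2f} vs ${monthly_premium(1000, 0):.2f} all-manual")
```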
 