It’s not that Tesla is using better sensors; it’s simply using far fewer of them. Tesla’s big bet is that humans drive using only their eyes, and hence self-driving cars should be able to do the same. In fact Tesla has eliminated sensors like the forward-looking radar (which came standard on my 2018 Model 3) in order to cut system costs. Meanwhile the Waymo cars use not only cameras but radar and lidar (those spinning things that project laser beams and measure distance with great precision, rather than estimating it as humans do).

In the Phoenix metro area we have a lot of Waymo self-driving cars. When one of them is next to me and I look at all the spinning radars and lidars and sonars and other sensors all over the exterior of those electric Jaguars, I wonder how Tesla can supposedly accomplish essentially the same task with so many fewer sensors. Is it a case of Tesla's FSD sensors being better and smaller and less visible? Or is it a case of Tesla not needing to care as much, since Tesla's FSD only operates with a human driver behind the wheel, whereas Waymos at times literally have no human inside, much less one driving?
Long term, Tesla should be right: if humans can drive using only their eyesight, then robots should be able to as well, or in fact better, since humans have only two eyes while robots can have many more cameras. Short term, however, dropping sensors makes some parts of the problem much harder. In most of the frontal accidents Teslas have had, the computer vision missed an important, often hard-to-see object. A flatbed trailer across the road in a color similar to the background. A person in a hat crouched in a bush of similar color. A poorly lit vehicle stopped in the fast lane in fog. A shadow that looks like the edge of the road (the thing that tried to kill me).
Currently Tesla is using its customers as guinea pigs: every time a person takes the wheel to correct an FSD Tesla, that event can be logged. Tesla has data centers full of thousands of CPUs identical to the ones in the cars, so as the FSD software is updated it can feed the new version either simulated or actual camera footage from real cars and see whether near misses and accidents that previously occurred would be caught next time. Thus, much as ChatGPT trains on data from the internet, Tesla trains on data from its customers.
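What that loop amounts to is a regression test run over a library of logged takeover events. Here is a minimal sketch of the idea, purely for illustration and not Tesla's actual software; every name in it (DisengagementClip, CandidateBuild, regression_replay) is hypothetical:

```python
# Minimal sketch of a replay-style regression check over logged takeover events.
# All names here are hypothetical illustrations, not real Tesla APIs.

from dataclasses import dataclass
from typing import List

@dataclass
class DisengagementClip:
    clip_id: str
    frames: List[bytes]   # camera footage captured around the human takeover
    hazard: str           # what the driver reacted to, labeled after the fact

class CandidateBuild:
    """Stand-in for a new software version running on car-identical hardware."""
    def detects(self, frames: List[bytes], hazard: str) -> bool:
        # Placeholder: a real build would run its vision stack over the footage
        # and report whether it perceived the labeled hazard in time.
        return False

def regression_replay(build: CandidateBuild, clips: List[DisengagementClip]) -> float:
    """Fraction of previously missed hazards the candidate build now catches."""
    if not clips:
        return 0.0
    caught = sum(1 for clip in clips if build.detects(clip.frames, clip.hazard))
    return caught / len(clips)
```

The point of a harness like this is simply that a new build can be scored against every past failure before it ever reaches a car, and shipped only if its catch rate improves on the currently deployed version.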
Unfortunately this method carries a substantial amount of risk for the testers (i.e., you and me). The forward-looking radar Tesla used to install would see big, solid objects in the road 100% of the time. The computer could then even make some judgements nearly impossible for a human (is that paper bag in the road empty, or full of concrete?). So while Tesla’s AI system, trained by millions of customers, is a good way to make self-driving tech cheaper (radar and lidar are not inexpensive), it has inherent flaws that will make getting to high nines of reliability very challenging. The result is very fluid and natural car motion, but it can be hard to remember that the system relies entirely on what amounts to inherently bad eyesight. Which is why Waymo and others have gone another way, and are much further along in certifications despite having a fraction of the cars on the road to train their systems on.