Tesla's SHOCKING FSD Improvement
- Published Mar 16, 2024
- Enjoying this clip? Watch the full video here:
• We Need To Talk About ...
If you'd like to support the channel, you can find me on Patreon here:
patreon.com/HansCNelson
Shopping at Tesla? You can support the channel & save by using my Referral code:
ts.la/hans71453
HOST INFO:
Farzad Mesbahi
X Profile - x.com/farzyness
TH-cam - @farzyness
CHANNEL LINKS:
Hans' X profile - x.com/HansCNelson
Hans' TH-cam channel - @HansCNelson
MY BUDGET SETUP:
Laptop: M1 Macbook Air
Camera: iPhone SE (Continuity Camera)
Mic: Shure MV7
Audio/Video Interface: OBS
Streaming Platform: Streamyard
Want to create live streams like this? Check out StreamYard: streamyard.com/pal/d/60406730...
#Tesla #elon #musk #elonmusk #teslaengineering #teslaai #teslafsd #fsd #fsdbeta #autonomousvehicles #autonomy #neuralnetworks #machinelearning #engineering #deepmind
Tesla, Elon, Musk, Elon Musk, Tesla Engineering, Tesla AI, Tesla FSD, FSD, AI, AGI, Technology, Software, Computing, machine learnings, AI models, Engineering, ai, neural networks, machine learning, engineering, deepmind, fsd beta, autonomous vehicles, autonomy - Entertainment
There’s a good chemistry between Farzad and Hans.
It’s not terrible
I mean have you seen how handsome Hans is?
I am not here for your good looks. I am here for your good talks. 😉
FSD will eventually be able to drive like Eddie Morra in Limitless.
Yes, you can fail in FSD using simulation. That is what the huge amount of compute is needed for. 😊
FSD and the neural network training system it runs on will have game-changing effects on many industries, particularly the robotics industry. And I agree, the rate of exponential growth will increase rapidly toward 100% safety. The future is exciting. The future is electric. 😊
FSD is also done in simulations where it can in fact fail! I think the analogy to GO is pretty close to spot on.
Tesla's FSD v12.3 is finally an exponential improvement. Reminds me of the many years it took for computers to master chess at an average level, and how few years after that computers were beating chess Masters. That's the moment we're in. Hope it comes to all of you soon!
I'd like to see v12.3+ compared to human video of drives (recorded by all cameras plus sensor data) to see if it can do better than the human in exactly the same situation. These would be fascinating video comparisons.
Humans perceive "rate of learning" based on *our* rate of learning.
I wonder if, in gamifying the perfecting of FSD, it will get to a point where it avoids less-than-ideal routes to its destination (like Chuck Cook's unprotected left turn). I do this myself, as there are less sketchy routes I can take to get to my destinations.
I think people either have no idea what Actual Smart Summon is, or they know about it and are grossly underestimating the impact it will have. And it could very possibly come this year.
I’d love to see a video about the potential of Actual Smart Summon.
Great work!
Safety isn't the only rate-limiting parameter... compute availability is also limiting. A 'game' can be run innumerable times because it is all simulation, whereas driving needs to be based on real-world input. If simulations could be used to multiply this input, progress might improve, but it may still come down to compute.
At this rate of improvement, I think we'll all be surprised at how incredible it will be by the end of this year, IMO.
Tony Seba in 2015 or 2016 mentioned the Go example in talks he gave.
This reminds me of the times Elon has stated that "driving better than the average human" is a pretty low bar... and while FSD isn't there yet, it's on the way, and it's probably beyond our imagination just how good it could possibly be in the future. The next big hurdle will be getting regulations changed for FSD-equipped vehicles, because constraints like stop signs and speed limits will be just silly for an FSD system ;)
Will we realistically get to the point where there are NO human drivers? Because that's the reality that would have to exist to eliminate stop signs and speed limits. FSD would have to negotiate its path forward with every vehicle in its vicinity in real time.
You might be able to eliminate four way stop signs, but some intersections are two way stop.
@@jrsands Decades from now, probably so.
@@fractalelf7760 FSD would have to be mandatory in all vehicles. There will always be situations where humans will operate the vehicle, say vacations or emergency vehicles or so many other scenarios. We'll probably have to mandate that 4-way stops convert to roundabouts.
@@jrsands It almost certainly will be in time.
As harsh as it sounds, one milestone is Waymo's level of driving. We have to admit that they're doing it better now, in SF at least. I've ridden in one and it's very good. Part of it is the reliability of the sensors and the millimeter-level accuracy of their readings.
I think a lot of people would disagree that this is anything like a milestone. V12.3 appears to be able to do a lot more than Waymo in many situations and it isn't geo-fenced to a location.
It's not a scalable solution in economic terms. What Waymo has done is found a way to drive a car much more expensively than hiring a chauffeur. Tesla is working on making it an order of magnitude cheaper.
To agree with FM from another vid: Tesla has to make it work with the current FSD model, this year.
Any additional rewrites or do-overs will have a negative impact on their position as first mover.
The rate at which AI is known to improve, and the money put toward achieving AGI/ASI, exceeds the rate of FSD's improvement, period. It may well be that moving toward the goal of AGI solves the problems faced by FSD, much in the same way it works for the inverse scenario.
The gap created at the outset of the FSD program is quickly diminishing. This is not to say that Tesla won't be the leader, just that it won't be for as long as previously thought. If FSD isn't approved by regulators before an alternative is conceivably on the horizon from AI, then it can be argued that a lot of Tesla's previous work was for naught.
Check out Chuck Cook's FSD 12.3 test drive from Saturday - he must have said WOW 100 times. It seems this end-to-end approach is actually going to be the solution and likely out of beta within a year or two.
The progress of 12.3 this weekend has been super encouraging to see after Farzad and I had this conversation just 1 week ago.
Tesla FSD V.12.2 in Small Town Ohio!
Rebellionaire
Thanks guys!! Idea #1: Bumper cars. Tesla should build a bunch of cars outfitted as bumper cars so they can hit each other (make mistakes) and the system can learn faster. On private roads, of course.
Idea #2: Driving a car with nearly perfect safety may turn out to be easier for E2E NNs than we thought. Because what is the march of nines marching towards? Perfect driving? NO! (What is perfect driving anyway?) The march of nines is marching towards a) get from A to B, b) don't hit anything, and c) don't scare people. With NNs, 360-degree video, and lots of data and compute, a) and b) may turn out to be a pretty low bar. (I mean, FSD 12.3 is just the beginning and it's already pretty damn good.) c) may be a little harder, but less critical.
Thoughts?
Idea #1 sounds like an amazing Mr. Beast video
My Tesla Model Y is having a hard time with side wind! It's clear Europe is not smooth sailing yet. My auto-cruise has done some very strange things, and if it weren't for me being a professional driver, for a layman (say, an older lady or man) it could have been a scary situation. Regulatory rules are about corrective and preventive actions. For a human, we know and have accepted the failure modes. For a synthetic being, we don't know enough about what's going on in the model; as such, can we ever solve the long tail?
Europe has much less data stored and much less specific data accumulating.
However, V12 will be the difference between a new driver taking a holiday and hiring a car in an unfamiliar country, compared to a driver with decades of experience.
Its ability to adapt when it reaches Europe will be both impressive and rapid (IMO).
Europe (and the UK) will never have the v10, v11 to v12 progression marks. It's likely to be at least v12.3 when full FSD is released here.
One way to perhaps creep up on this in a safer but still real-world way is to train Optimus to ride e-bikes, which pose a much lower safety threat, and then collect real-world data from them as they are used, say, for last-mile deliveries.
The situations they encounter will be a bit different from those of a car, but maybe not so different as to prevent the experience gained from being useful as applied to cars.
Also, doing that might be killing two birds with one stone in that not only might this experience be applied to cars but also it will benefit bots as well!
Plus it could get people more used to the idea of fully autonomous vehicles driving about on public streets (albeit just e-bikes, but that is a start) as well as used to the idea that humanoid robots are indeed here!
Just add "balance at speed" to the equation....
(Or were you kidding?)
Economic value of bikes is much lower, though, so from an opportunity standpoint, cars make sense.
But as you noted, very hard.
The key in my mind is just making sure that 99.99% of safety flaws are discovered via disengagement rather than collision.
The revenue might be low, but the cost to do this is also low, so one might do it not as a money maker but as a strategic move to expand the bot market into new areas and clinch them before others can get a foothold in the last-mile delivery market, and for Tesla to fire a shot across Jeff Bezos's bow.
For if one has a bot delivering goodies to one's front door, that might accelerate one's wanting a bot to mow one's lawn, take out the trash, and wash dishes, as well as raise expectations that this might be more feasible than one might otherwise have thought.@@HansCNelson
Also, self driving vans might distribute goods late at night/early morning to distribution sites (i.e. parked vans) when the traffic is much less and then the robots on bikes could distribute the merchandise from those trucks using the e-bikes )cargo e bikes or maybe e-bikes with trailers the last mile where the bots can leave the merch on peoples doorsteps.
e.g. Google
It's time to replace urban delivery vans @@HansCNelson
And so there is not just a technical hurdle but a regulatory one as well in regard to fully self-driving vehicles, and this approach might lower that hurdle a bit too.
Hans, see my comment re bumper cars @@HansCNelson
Tesla's FSD can be like MuZero in simulations. No one dies that way. That would help to some extent I'm sure.
I think we should skip the different levels. How do we even define when they are reached? And the same with the march of nines: how do we define 99.99999%, for example? What we need to do is measure FSD accident statistics against human accident statistics, and when FSD is X times better than humans, according to NHTSA, it is ready.
I'm almost positive that the march of 9s reduces to miles per intervention (or perhaps collision critical intervention).
1 intervention per 10 miles = 90%
1 intervention per 100 miles = 99%
1 intervention per 1000 miles = 99.9%
etc.
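The mapping Hans lists above is just one minus the reciprocal of miles per intervention. A minimal sketch (hypothetical helper, not any metric Tesla has published):

```python
def nines_from_miles(miles_per_intervention):
    """Reliability as the fraction of miles driven without an intervention.

    Assumes exactly one intervention per `miles_per_intervention` miles,
    so reliability = 1 - 1/miles_per_intervention (illustrative model only).
    """
    return 1.0 - 1.0 / miles_per_intervention

for miles in (10, 100, 1000):
    print(f"1 intervention per {miles} miles -> {nines_from_miles(miles):.4%}")
```

Each factor-of-10 improvement in miles per intervention adds one "nine", which is why the march of nines is slow: every step needs ten times more clean miles than the last.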
@@HansCNelson, Don't you mean increasing the miles per intervention? And would you define an accident as an intervention? Of course, fewer interventions per mile is a good way to measure progress. But don't you think NHTSA is more interested in hard accident statistics?
With this talk about how far above human level perfection really is, does it not feel like training on clips of human driving is the wrong approach? Seems like that would needlessly slow down the process.
Maybe the right approach is training 100% on simulation. Have all different scenarios and courses of action play out where the neural net is free to make mistakes without any danger to life, and decide from there what the optimal behaviours are to achieve better than human driving skills.
They will never discover all the edge cases that occur in the real world without real video of the real world.
@@HansCNelson I don't think that's true. You could have a simulation run random scenarios for as long as it takes to cover all eventualities.
And even then, if you find that the neural net is lacking experience in certain circumstances because they just don't come up often enough then surely you can just tweak the simulation to make it generate certain types of scenarios more than others?
I feel like, especially now that there is a good base to start from where a neural net has already been developed that is essentially 95% of the way there, this would be the way to go which would get it over the finish line. Seems like trying to teach it to be better than humans by making it drive exactly like humans is taking too long, IMO.
There are over 5 million police reported US crashes per year and around 40,000 auto caused US deaths per year. If FSD instead of human driving reduces those numbers that is a type of perfection.
A small town's road network is infinitely more complex than Go: a finite 2D grid with 2 axes got "God mode" on a room-sized computer, not a car's GPU.
Tesla FSD needs vastly more capacity to “play” driving.
Tesla trains on one of the world's largest supercomputers, and it's getting larger every day. Once trained, the model downscales to car size. It's actually similar to LLMs, which run individual instances on rack servers, or they couldn't cope with everyone's requests.
FSD can fail in shadow mode and in simulation…
I disagree with Hans that FSD cannot afford to fail during training, if that training is done in simulation! To run the training millions of times, similar to AlphaGo, it will have to be in simulated environments. Only after it has been honed to where it does not fail can it be tried IRL, first with a limited base of experienced testers for verification before wide adoption.
Hello guys
👋
In his CVPR '23 keynote, Ashok Elluswamy explained that, among other things, Tesla was able to do real-world simulations of what the car's cameras see.
th-cam.com/video/6x-Xb_uT7ts/w-d-xo.htmlsi=ZLZfN_70ptP81Z_M
When OpenAI released Sora, Elon commented “Tesla has been able to do real-world video generation with accurate physics for about a year. It wasn’t super interesting because all the training data came from the cars, so it just looked like video from Tesla, albeit with a dynamically generated (not remembered) world.”
Tesla will be able to fail when more computing power is available: they will be able to train a version of FSD with entirely virtual data, with as many edge cases as they want and other drivers of varying skill or recklessness. If they have a virtual accident, that will just be one more data point, and they can continue the simulations. When more computing power is brought online at the end of the year, the need for real-world driving miles will go away, as will the huge lead in autonomous driving miles, but it will lead to the MuZero version of FSD.
They primarily use simulation to supplement training data if there isn’t enough real world data of a given situation available.
But to the best of my knowledge, real world driving data is the way they discover the edge cases where the system doesn’t perform.
That real world data can be from interventions or shadow mode, though.
But is teaching the FSD AI to drive by human example going to limit it to human accuracy and accident avoidance?
They use the best driver's data. So it might be limited by them in some aspects but I would not be unhappy with that. Remember, it will have instantaneous reactions and simulated data to practice on.
FSD can fail and is allowed to fail. Every intervention is considered a failure by the NN to tune and improve.
100% true.
As long as interventions catch 99.9%+ of safety critical failures, we'll be golden.
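To put a toy number on that point (all figures assumed for illustration, not real Tesla data): if the NN has one safety-critical failure per F miles and the driver catches a fraction c of them, uncaught failures occur once per F / (1 - c) miles.

```python
def miles_per_uncaught_failure(miles_per_failure, catch_rate):
    """Miles between safety-critical failures that slip past the driver.

    Assumes failures are independent and the driver catches a fixed
    fraction `catch_rate` of them (illustrative model, not real data).
    """
    return miles_per_failure / (1.0 - catch_rate)

# Assumed: 1 safety-critical NN failure per 1,000 miles, 99.9% caught
# by driver interventions; roughly one uncaught failure per million miles.
print(miles_per_uncaught_failure(1_000, 0.999))
```

So even a mediocre NN looks very safe in the field as long as the human catch rate stays high, which is why disengagement data is so valuable.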
Tesla FSD will make money in 2035
Why is it shocking?! All tests indicate it is pretty good but still needs lots of improvement. Shocking would be performance of 99.9999%, and it is far from that.
99.99% would be four deaths a year.
No human is at 99.9999…
The acceptable rate of human failure is 40k deaths per year. What is the acceptable rate for FSD?
If Elon's 10x rule turns out to be roughly accurate, I guess that puts it at 4k * (Tesla's US Auto Fleet Share)
@@HansCNelson that's a no go from a regulatory approval standpoint, not to mention societal acceptance or the liability cost to Tesla. I think it would have to be 1000x.
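The arithmetic behind this exchange, as a sketch (the fleet-share figure is an assumption for illustration, not a real statistic):

```python
# Baseline from the thread: ~40,000 US road deaths per year from human driving.
human_deaths_per_year = 40_000
fleet_share = 0.05  # assumed 5% of US vehicle miles driven by FSD; illustrative only

def fsd_deaths(safety_multiple):
    """Expected yearly FSD-attributed deaths if FSD is `safety_multiple`
    times safer than the human baseline, scaled by fleet share."""
    return human_deaths_per_year / safety_multiple * fleet_share

print(fsd_deaths(10))    # the "10x safer" rule -> 200.0
print(fsd_deaths(1000))  # the stricter 1000x bar -> 2.0
```

The gap between those two outputs is the crux of the disagreement: 10x safer still leaves hundreds of deaths a year attributable to the system, which regulators and juries may treat very differently from deaths attributable to human error.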
There are no milestones, you either win or lose a game of go. Same goes for driving, either you get it right or not
That's patently incorrect.
Were it true, every human driver would be infallible, all the time.
Just as with humans, the milestones are and will be measured in the probability of success, gained by experience.
@@rogerstarkey5390 Circumstance always upsets your theory
The question is, what level of error will be permissible by society? I would argue that a 99% reduction in fatalities is acceptable, but I think that most people would not be happy if FSD killed 400 people per year.
@@MegaWilderness He's correct though.
YT stop deleting my comments. There is an acceptable rate of failure for humans. It's 40k deaths per year. What is the acceptable rate for FSD?
The bait will prevail!
Great clip, clickbait thumbnail. It may work on an individual video basis Hans, but it telegraphs a need to distort or twist the truth just to get views. Don’t be that guy.
FSD will have just as many regressions as improvements with each update for the foreseeable future, and it will not save the stock price until TSLA assumes responsibility for accidents. No one except tech bros is paying $12k for a glorified cruise control.