Tesla FSD Is NOT THE SAME as Optimus--And It Matters!

  • Published Feb 19, 2024
  • If you thought (like me) that Tesla could simply drop their Full Self Driving Neural Network into their Optimus humanoid robot and it would be just as effective, I have some news for you: I believe that Optimus and FSD Beta have such drastically different use cases that what's good for FSD Beta v12 is not exactly what's good for Teslabot.
  • Science & Technology

Comments • 101

  • @bathbuilder123
    @bathbuilder123 2 months ago +4

    Lost my dad in Dec ... All the best

  • @champignaq
    @champignaq 2 months ago +4

    My most sincere condolences to you and your loved ones. Thanks for your consistency and professionalism throughout your turbulent time. It is very much appreciated.

  • @RuedigerDrischel
    @RuedigerDrischel 2 months ago +5

    Condolences

  • @frangalarza
    @frangalarza 2 months ago

    There's also the point of communicating the task to be performed. For a car it's super easy: "take me to point X". It only needs to understand how to navigate. I can communicate this task to an Uber driver without even talking, just dropping a pin on a map.
    For more complex tasks (the ones that the bot is supposed to do) it's more complicated. Think about the laundry example. Even for a grown up human with full dexterity and perception, I still need to explain where the dirty clothes are, where the soap is, what type of washing cycle I want, do I want to separate based on colours/fabric type or not, do I want to use the dryer or do I want to hang it outside, where should it put the clothes after washing them, do I want the shirts ironed or not, do I want them folded or hanging, etc etc etc.
    This is a level of complexity just to understand the task that the car does not have to deal with. And then it has to go and execute.
    So yeah, when I hear that the bot is the same as the car and they can just drop the same software in and call it a day, I disagree. It actually makes me question if they're trying to intentionally simplify the scope to excite investors or something.

  • @colinmackie5211
    @colinmackie5211 2 months ago

    I agree with your comparison. You might also note that driving involves processing sound, including directional sound: sirens, horns, shouting, objects contacting the car body (e.g. tree branches), or under-body or through-body noises like a puncture or objects thrown up from the road. Many of these are outliers, but when you think about it, we use this information a lot in a car.
    The bot cannot pause its balance control, an autonomous function that must continue, unless we consider lying down equivalent to a car pulling over to the roadside.

  • @lylestavast7652
    @lylestavast7652 2 months ago

    Condolences extended. I watched a movie today set during the Depression, and many things my father recounted (he was 14-18 in 1930-35) came rushing back; he passed away in 2007. May you find peace in your memories of association.

  • @BB-xy5nd
    @BB-xy5nd 2 months ago +11

    Robotaxis will need a way to make way for emergency vehicles. An auditory sensor might be needed.

    • @RobertLoPinto
      @RobertLoPinto 2 months ago +1

      That would enhance emergency vehicle detection, but it is not required.
      Individuals who are either deaf or hearing impaired can drive legally in the US. Emergency vehicles are required to have their lights flashing whenever their sirens are turned on.

    • @demonz9065
      @demonz9065 2 months ago

      did you forget that lights exist and they can see in 360 degrees?

    • @tycox9364
      @tycox9364 2 months ago

      @RobertLoPinto I was thinking the exact same thing. "No part is the best part."

  • @flyboypat
    @flyboypat 2 months ago +31

    Sorry to hear about your father passing

  • @vaclavmatousek3007
    @vaclavmatousek3007 2 months ago

    Sorry to hear about your father. Condolences, Dr.

  • @robertgamble7497
    @robertgamble7497 2 months ago +1

    One of the inputs I think the car needs is sound. It needs to hear if an emergency vehicle is approaching. That way it can pull to the side of the road, stop if necessary, and allow the emergency vehicle to have the path it needs.

    • @demonz9065
      @demonz9065 2 months ago

      Those vehicles all also have lights, and the cars can see in 360 degrees. Why would sound be required?

    • @TheSpartan3669
      @TheSpartan3669 2 months ago +1

      @demonz9065 Because there are often instances where you hear the sirens long before you see them, and drivers typically account for that. Also, hearing verbal commands, horns, etc. is very useful.

  • @nigelwilliams7920
    @nigelwilliams7920 2 months ago

    A thing that most current driver-assist systems (including past FSD) are bad at is avoiding a stationary obstacle (a truck across the lane, the end of a guardrail, etc.), and no vision system is any good at looking through fog or smoke across the road. That raw "Ffs, STOP!" control could be usefully implemented by some sort of non-visual sensor, perhaps overriding FSD rather than being fused with it.

  • @garycarson3128
    @garycarson3128 2 months ago +3

    So sorry to hear about your Dad John. My thoughts and prayers are with you and your family.

  • @jonrico7937
    @jonrico7937 2 months ago

    Hello John, so sorry to hear about your father. Both of my parents have passed. Hang in there. I obtained my degree in Biology decades ago and love science...and hence your channel. Keep up the good work.

  • @richardhost7794
    @richardhost7794 2 months ago

    You raise a very good point. Humans have a hard-wired reflex reaction to some external stimuli "without thinking," because it would take too long if they did. So a car reacting to a sudden, unexpected stimulus can't use a regular LLM when a very rapid "reflex" action is required. Mmm - interesting.

  • @johntrotter8678
    @johntrotter8678 2 months ago

    Some driving tasks are slower, but still important. Turn signals. Headlights. Horn. Windshield wipers/washers. Emergency flashers. All come to mind. Tesla FSD often does poorly on these less challenging items (or has no capability at all), yet they will be needed for the holy grail of robotaxis.

  • @paulwujek5208
    @paulwujek5208 2 months ago

    Robots should have many different sensors, but at the same time they still need to make split-second decisions. For example, if they plug something in and it starts sparking, they can't take half a minute to unplug it. Another example: if they are walking on stairs, they have to detect that they are falling and catch the railing before they fall.

  • @ghauptli1
    @ghauptli1 2 months ago

    IMO vision input is all that's needed (except for parking, where the reintroduction of ultrasonic sensors would be nice). FSD's present challenge is what to do with its visual input in heavy traffic. The system needs to do a better job of identifying common congested-traffic situations and then making safe, polite driving decisions. It needs to make the correct move without hesitation. 11.4.9 has improved its ability to switch lanes smoothly in heavy traffic on the freeway, but it still hasn't learned to politely merge into a slow-moving exit lane. I'm hoping we'll see a big improvement in v12.

  • @jackcoats4146
    @jackcoats4146 2 months ago +4

    Condolences John.

  • @dougellis2216
    @dougellis2216 2 months ago

    You keep coming up with

  • @BrianBellia
    @BrianBellia 2 months ago

    Sorry to hear about your dad, John.
    Condolences to you and yours.

  • @heltok
    @heltok 2 months ago

    You could consider other robots and the cloud to be a slower sensor. So yeah, maybe sometimes the high-level controller will decide to wait for more information and do some search, but the low-level controller, once a decision is made, will probably run at the same rate once it's been told to execute.

  • @johnjones8330
    @johnjones8330 2 months ago

    Drivers / traditional non-autonomous cars have lots of other input methods besides cameras: wheel slippage, acceleration, steering feedback, sound. It is appropriate for FSD to use all of those.

  • @kalpeshpatel8504
    @kalpeshpatel8504 2 months ago

    May your father rest in peace 🕊️. God bless.

  • @curtisyoung7107
    @curtisyoung7107 2 months ago

    Use models are prioritized based on the selected consequences that result from actions or non-actions with the available resources.
    The methodology is consistent in general, though the weights and actions can be extensive across the use cases and failure-mode preventions 😊

  • @davidmackay2387
    @davidmackay2387 2 months ago

    I do think there is overlap, particularly in video processing to understand the universe. Whenever something is moving toward or away from a camera, the distance and relative speed can be calculated from the rate of change of the object’s image size. This is more precise at close range as would usually be the case for Optimus and I expect Tesla already does this for FSD.
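    The relation described above can be sketched with the standard pinhole-camera model (a minimal sketch; the focal length and object width below are illustrative assumptions, not values from Tesla's stack):

```python
def distance_m(focal_px: float, real_width_m: float, image_width_px: float) -> float:
    """Pinhole model: Z = f * W / w (distance from apparent size)."""
    return focal_px * real_width_m / image_width_px

def closing_speed_mps(z_prev_m: float, z_now_m: float, dt_s: float) -> float:
    """Relative speed from the change in estimated distance; positive = approaching."""
    return (z_prev_m - z_now_m) / dt_s

# A car of known width 1.8 m imaged at 60 px, then 66 px, 0.1 s apart
# (assumed focal length of 1000 px):
z1 = distance_m(1000.0, 1.8, 60.0)   # 30.0 m
z2 = distance_m(1000.0, 1.8, 66.0)   # ~27.3 m
v = closing_speed_mps(z1, z2, 0.1)   # ~27 m/s closing speed
```

    Because a one-pixel measurement error is a larger fraction of a small image width, the estimate is indeed more precise at close range, as the comment notes.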

  • @johnjones8330
    @johnjones8330 2 months ago

    This distinction is well described in systems research and is basically hard vs soft real time systems.
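    The hard vs. soft real-time distinction can be made concrete with a small sketch (the deadline value and function names are illustrative, not from any real FSD code):

```python
import time

DEADLINE_S = 0.010  # hard real-time budget per control cycle (illustrative)

def control_cycle(read_sensors, compute, actuate, safe_command):
    """In a hard real-time system a missed deadline is a failure, so a
    known-safe command is issued instead of the late result. A soft
    real-time system would simply accept the late answer."""
    start = time.monotonic()
    command = compute(read_sensors())
    missed = (time.monotonic() - start) > DEADLINE_S
    actuate(safe_command if missed else command)
    return missed
```

    For a car, `safe_command` might mean "hold the current trajectory and begin slowing"; a bot folding laundry can simply pause, which is why its deadlines can be looser.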

  • @Ask-a-Rocket-Scientist
    @Ask-a-Rocket-Scientist a month ago

    In a factory setting, the assembly line is running. Stopping it is a huge deal. You also don’t want flawed products. It’s more similar than not to FSD.

  • @johnfurr6060
    @johnfurr6060 2 months ago

    I think Tesla should solve 99% of use cases in the car with cameras, and then add other sensors as needed to get the last percent. Perhaps they only get used when cameras fail... but I fully agree cameras will get us there. I use FSD every day and it's 90% there now...
    For the bot... I'm indifferent. I trust Tesla will figure this out better than me. :)

  • @Gargamel-n-Rudmilla
    @Gargamel-n-Rudmilla 2 months ago +1

    The clear difference between FSD for vehicles/the bot and ChatGPT is that for ChatGPT, response times are not critical or related to safety (internal or external).
    Also, its hallucinations are not tightly coupled to the need for accuracy, which in a real-world bot is vital to its functionality and usefulness, as well as to safety.
    I do not see much difference between FSD for the bot vs. vehicles, barring complexity.

  • @rayrawa9517
    @rayrawa9517 2 months ago

    The bot has a lot of sensors, especially if you include motor torque for each of the joints. To some degree this allows the robot to feel its way around the environment without necessarily seeing it.

  • @johncamara3497
    @johncamara3497 2 months ago

    Normally I agree with most things you say, but this notion, that FSD being a hard real-time, safety-critical system means it needs simple sensor input, whereas the humanoid bot is presumed to be a soft real-time, non-safety-critical system with more time to process and thus can afford more complex sensor input, is not a good way to assess a system's sensor needs.
    All systems should be built with a sensor network that is as simple as possible while still allowing success. Quite often a safety-critical system will have a more complex array of sensors, as it is often desired to keep running in the event of sensor failures. Although, if a safety-critical system has a way of bailing out and shutting down safely, redundant sensors may not be necessary. FSD, for instance, could decline to start driving if a critical sensor is bad, or, if one goes bad while driving, slow down and stop, ideally pulling over to the side of the road first. This is one approach that could be used in FSD to keep the sensor count down. If it were controlling a plane, redundant sensors would likely be required.
    The difference between FSD and the humanoid bot (assuming the bot is a soft real-time, non-safety-critical system) is that the consequences of FSD making a mistake are quite high. It is therefore very important that FSD have a very low probability of making any dangerous mistake, and it has a short period of time (dt) in which to gather sensor input, perform fault detection and accommodation, make decisions, and update outputs. If success cannot be achieved with the current "simple" network of sensors, it may become necessary to add a radar or lidar into the mix. I'm not saying we will eventually find that necessary; I don't know. But I do know that if it becomes necessary, the new data and all the new issues it brings will still have to be processed in the same dt.
    I believe humanoid robots will initially perform non-safety-critical tasks that can be done in soft real time. However, since some humans perform safety-critical tasks and may have to make decisions and act in real time, the humanoid bots will eventually have to do the same. If I were designing the humanoid bot, I would treat it as a hard real-time, safety-critical system from the start, since that will eventually be required. You can always adjust the scheduling deadlines based on the task being performed, so the dt does not need to be as tight as FSD's.
    My expectation is that the bots are going to need a lot of sensors, since we humans have so many. They obviously won't have as many as we do, and some of ours make no sense for a bot (i.e. all the sensors for our internal organs). But how many touch sensors are required: one for each fingertip, one for each segment of each finger, some in the palm of the hand? How about some on the chest and arms? When we assemble something, we sometimes lean a part against ourselves, which argues for additional touch sensors. Do you put temperature sensors on the hands? Some tasks require feeling heat. What about smell, taste, etc.? Initially, the sensor counts will be low, as many tasks won't need them. Eventually there will likely be multiple hand options: some with fewer degrees of freedom and a few touch sensors, and some with more degrees of freedom and higher counts of multiple sensor types.
    Anyway, sensor count is not going to be based on hard real time vs. soft real time. It will be based on what is needed for success at a given task.
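    The per-cycle loop described here (gather input, detect and accommodate faults, decide, all within dt) might look like this hypothetical sketch; every name and threshold is illustrative:

```python
def cycle(readings, last_good):
    """One fixed-dt cycle. readings maps sensor name -> value, with None
    standing for a failed read. Faults are accommodated by reusing the
    last good value; with no safe substitute, bail out safely."""
    usable = {}
    for name, value in readings.items():
        if value is None:                  # fault detected
            value = last_good.get(name)    # accommodation: last good value
            if value is None:
                return "SAFE_STOP"         # no substitute: stop safely
        usable[name] = value
        last_good[name] = value
    # trivial stand-in for the real decision logic:
    return "BRAKE" if usable.get("obstacle_m", 1e9) < 5.0 else "CRUISE"
```

    Adding a radar or lidar would add entries to `readings`, and, as the comment argues, all of that extra fault detection and fusion still has to finish inside the same dt.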

  • @rockycata6078
    @rockycata6078 2 months ago

    Of course it matters, because FSD operates in a two-dimensional environment and Optimus in three dimensions. Also, FSD does not need an LLM; it only has to respond to a limited set of commands useful for driving. The 'labeling' for Optimus has to incorporate everything in FSD plus everything else in the human environment.

  • @jjamespacbell
    @jjamespacbell 2 months ago +1

    FSD's output is many times simpler than a humanoid robot's. Yes, FSD needs to take images of the world around it over a few seconds, build a 3D digital world, and then, using past data, project where all the objects will be in 1/10-second increments for the next few seconds, but its only outputs are acceleration and wheel angle.
    A humanoid robot may face fewer objects, either stationary or relatively slow-moving, but its output is many times more complex. For example, opening a threaded jar, taking out an object, then grabbing the lid and tightening it, when there are hundreds of different lid types, requires output to fingers, finger sensors, wrist, and both arms. This is a relatively simple task, but not cross-threading the lid requires a lot of subtlety.
    I used to program robots for a living, and eliminating variables even in an industrial environment can be challenging; in a real-world situation this is insanely complex.

  • @DanGolick
    @DanGolick 2 months ago +1

    I really don't think sensor fusion is the problem you think it is. Sensor fusion, even at highway speeds, is a solved problem (see Kalman filters).
    I think living in Florida gives you a skewed view of how important being able to drive in rain, snow, and fog is.
    I believe the reason for going camera-only was probably cost.
    But I don't think Tesla can go beyond Level 2 without some redundancy in its "vision." There are a lot of papers on this topic. I think the solution will be to add two millimeter-wave radar sensors (less expensive than lidar) to get binocular radar vision; the second sensor vastly improves the density of the point cloud it can create. If need be, add additional front-end processing units to do the sensor fusion.
    One huge advantage of the camera-only system is that it makes simulated video generation much easier, and this has been key to training FSD 12.

    • @juliahello6673
      @juliahello6673 2 months ago

      The issue, as I understand it, is that if there is a discrepancy between two overlapping modalities, which one should the car act on? As humans we have discrepancies too. However, we have world knowledge and intelligence to determine which one to trust. If we hear a cat meowing but we only see our younger sibling who likes to prank us by making cat noises, we know to disregard "meow = cat." A car is dumb, so it has to choose radar vs. vision input in an unintelligent way. If vision is generally better than radar, then it would have to disregard the radar, in which case what is the point of having it?

    • @DanGolick
      @DanGolick 2 months ago

      @juliahello6673 Still a solved problem: a Kalman filter takes in all the data and gives a better answer than you get from a single sensor. The physics of the sensors and the stability of the measurements give you the best answer. Different sensors have different advantages and report different things with different errors. We don't have to guess which sensor to believe; the output of the filter is a single value and an error estimate.
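      The core of that fusion step, the measurement update inside a Kalman filter, reduces to inverse-variance weighting when two sensors measure the same quantity (the numbers below are made up for illustration):

```python
def fuse(z1: float, var1: float, z2: float, var2: float):
    """Combine two noisy measurements of the same quantity.
    Returns (estimate, variance); the fused variance is always
    smaller than either input's, so fusion never hurts."""
    w1 = var2 / (var1 + var2)            # weight the less noisy sensor more
    estimate = w1 * z1 + (1.0 - w1) * z2
    variance = (var1 * var2) / (var1 + var2)
    return estimate, variance

# Camera estimates 30 m (variance 4), radar 28 m (variance 1):
est, var = fuse(30.0, 4.0, 28.0, 1.0)    # est = 28.4, var = 0.8
```

      The output is exactly what the reply describes: a single value plus an error estimate, with no need to pick one sensor over the other.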

  • @engr.scotty
    @engr.scotty 2 months ago

    If you're only going to have one sensor, it needs to be a very good sensor. My current version 3 hardware is myopic. It is like a new driver who only looks a little bit in front of the car, which gives it only a small amount of time to make a judgment. If it had better or different sensors, it might actually have more time to come to a decision. But I do agree with you: the use case is quite different than on a robot.

    • @darwinboor1300
      @darwinboor1300 2 months ago

      FSD does not work how you think. A simple explanation is: FSD takes the inputs from its cameras and rebuilds the world in 3D in digital space and introduces moving objects. It then determines a "safe" path through digital space and time. FSD then issues commands to move your car through real space and time. FSD in your car does not learn. FSD does not remember for extended periods of time (from one intersection to another). If FSD does not find a solution in its predetermined AI, then it can wait for the world to change enough to provide a solution, fail, or try to force a lower weight solution.

  • @JMeyer-qj1pv
    @JMeyer-qj1pv 2 months ago

    I disagree with the premise that more sensors on the car will result in slow decision making. AI processing is highly parallel and can be made to run very quickly for sensor fusion. The main issue I see with multiple sensors is that they all need to be reliable so that the car isn't reacting to ghost inputs that aren't accurate. There's also cost with more sensors costing more while Tesla is focused on low cost to maximize margins. But in general more sensors should result in better decisions and more robustness if implemented correctly. Cameras fail in some situations where other sensors like radar or ultrasonics can fill in missing information. Maps will also be a good input to give the AI a better understanding of how to position the car.

  • @royh6526
    @royh6526 2 months ago

    I think a major difference between a car and a robot is dialog with a human. A car could respond to a verbal input for some fixed functions like temperature or destination. A robot needs to talk back to a human and have far more response capabilities.

  • @johne.kipping4505
    @johne.kipping4505 2 months ago +2

    Could you tackle a show on the difference between Groq and Grok? Thanks.

    • @potatodog7910
      @potatodog7910 2 months ago

      Groq is a company that makes LPUs (language processing units).
      Grok is a large language model.

  • @arleneallen8809
    @arleneallen8809 2 months ago

    I understand our need for generalizations about these inputs. That said, a human driver uses instrumentation and hearing as inputs. If you are race-trained, your seat and the steering wheel are inputs. It is not clear how much a typical driver consciously pays attention to these. The IMU and vision are dominant in the case of FSD, but audio still matters. What is a true L4 vehicle going to behave like if it doesn't hear emergency vehicles, or perhaps a human screaming? I would opine that it would be a poorer driver than the typical human. The big gray area is the boundaries you mentioned. How dark is too dark? How bad must a snow flurry or tule fog be to inhibit the system? Every single human driver exercises that judgment differently, and there is no vehicle code to use for guidance. Clearly, vehicle administrations must create some kind of rulemaking prior to generic L4 cars. Perhaps L4 operation is disabled in inclement conditions.

  • @alexandreblais8756
    @alexandreblais8756 2 months ago

    I can already see the car in a huge winter storm pulling over to the side and saying "I can't drive you to work in these conditions, sorry." Me: no no, don't be sorry 😂😂😂

    • @darwinboor1300
      @darwinboor1300 2 months ago

      FSD already notifies you when its inputs are too degraded for it to work. It flashes red and tells you to take over driving. I remain uncertain as to how Tesla will deal with this in a vehicle where no one is attending to the conditions and the "steering wheel" is basically a game controller stored in a pocket in the dashboard.

  • @brunoheggli2888
    @brunoheggli2888 2 months ago

    It's game over for Tesla! You'd better make a BYD channel!

  • @RChamp116
    @RChamp116 2 months ago

    condolences

  • @engr.scotty
    @engr.scotty 2 months ago

    There is one type of fusion that Tesla needs to stop using: Tesla maps (Tesla has its own maps, not Google Maps; I believe it is a crutch to get the system to work better). I find a few locations where Tesla maps thinks the car must turn right and then pull a U-turn even though the vision system clearly shows it can make a left-hand turn. While sitting in the left-hand turn lane (which the vision system clearly shows), the car will try to cross multiple lanes and make a right-hand turn because of this false fusion. I believe they are compute- or resolution-constrained and trying to use maps to overcome this. They don't seem confident in trusting their vision system, possibly because of the high stakes of driving, as you point out.

  • @mumblinge5892
    @mumblinge5892 2 months ago

    In a vehicle cameras are not adequate by themselves.

  • @silaskelly604
    @silaskelly604 2 months ago

    Humans have five senses, but the eyes and ears come in pairs to create stereo vision and hearing, and other senses create the impression of a dedicated sensor, e.g., acceleration, temperature, time, etc., plus the so-called sixth sense, which is the subconscious integration of the others over time.
    Generally, 10,000 hours is the typical amount of time humans have to spend on an activity to become a "master" of it; robots' mileage will likely differ. Robots operating with an inorganic brain will have advantages and disadvantages in establishing reality and how best to interact with it.
    Our value as humans will likely wind up being based on activities where the advantages are greater for humans. I do wonder what those will be?

  • @credera
    @credera 2 months ago

    🤷🏻‍♂

  • @alanbates9073
    @alanbates9073 2 months ago

    Does FSD today tell you if, for example, weather conditions exceed its abilities?

  • @danharold3087
    @danharold3087 2 months ago

    Eventually we all become orphans. Condolences.
    FSD makes near-continuous life-and-death decisions and seldom gets a redo. A robot can fumble a nut, bend over, and pick it up off the floor.
    What I want to hear about is how the training for Optimus will be done. Can they be field-trained?
    Will Tesla load them up with a NN that can learn by example?

  • @frederickleung8811
    @frederickleung8811 2 months ago

    Peace be with you! Love your insightful explanations of the technology behind FSD and Optimus. Thanks for your passion!

  • @garycarson3128
    @garycarson3128 2 months ago

    Great video on why sensor fusion may not be a big problem for the humanoid bot. Well done.

  • @nikmathews555
    @nikmathews555 2 months ago

    My condolences; I just had my grandfather-mentor pass. I think cars need accelerometers and other inertial-style sensors to mimic humans' interpretation of "safe" driving, but otherwise 100% solid.

  • @IDNHANTU2day
    @IDNHANTU2day 2 months ago

    Excellent view and explanation.

  • @DonBrowningRacing
    @DonBrowningRacing 2 months ago

    Condolences to you and yours.
    On the AI, a big difference between FSD and Optimus is the total lack of any need to mislead or manipulate an audience. The analysis is about truth rather than deception. Reality falls in your high-speed category.
    Good work!

  • @MilosMitrovicShomy
    @MilosMitrovicShomy 2 months ago +1

    It seems the pressure sensor on your left thumb is a little damaged! 😊

  • @coolnameproductions2180
    @coolnameproductions2180 2 months ago

    Serious question - when will Teslabot be capable of doing the work of a professional electrician or plumber?

  • @fredhearty1762
    @fredhearty1762 2 months ago +1

    Coming upon an unusual situation and pausing for a bit happens while driving, too, especially in urban settings. Pausing helps the driver sort out what's happening and if it is safe to proceed/how to proceed. If FSD hasn't learned this skill, it should, and soon.

  • @larsnystrom6698
    @larsnystrom6698 2 months ago

    You are wrong about cars not interacting with others.
    Firstly, the car has to know that what it does changes the behavior of others. Sometimes a car that can't force others to accommodate it can't drive in heavy traffic.
    One example: I used to face an extremely congested piece of road on the way to work each morning. Two lines of slow-moving traffic without any gaps. I entered that stretch in the right lane but had to get into the left lane. No one ever gave way voluntarily. FSD as it is would never get into that lane, because it believes signalling will make someone let you in. They don't!
    What I had to do, and what FSD has to learn, is how to force yourself into a queue with no gap. My way was to aim between two cars with the turn signal on and slowly get a corner of my car into the gap; it never happened that the car being threatened didn't give way. If FSD can't do that, it wouldn't be able to drive me to work. And it's definitely interacting with others.
    We have seen a video by Omar with FSD v12 where he couldn't leave the curb, because no one would let him out. That's where forcing others would have been the only way to do it.

  • @jedtaylor7833
    @jedtaylor7833 2 months ago

    Do bots self-identify so they can be distinguished from humans? I would rather run into a bot than a person. 9:38

  • @darwinboor1300
    @darwinboor1300 2 months ago

    Best wishes for you and your family. Sorry to see your thumb is injured.
    If your bot makes a mistake only 1 time in 100, then you will only need to replace about 4 table settings per year.
    👍 As you finally note, the bot is a different robot than a car. The bot has orders of magnitude more degrees of freedom. To operate, the bot has to do much more than locomote in space and time. Vision alone is not enough to accomplish most real-world tasks. Other sensory modalities are required to achieve adequate feedback control. When in doubt, look to nature for optimized models. You were taught about the 5 basic human senses; based on unique sensory organs, humans are now known to have 32 forms of sensory input. Most of these inputs are devoted to feedback control in one form or another.
    Elon was wrong. We don't drive by vision alone. We can achieve simple 4D-video-game-level driving on a 2D screen with vision alone. To achieve high-level immersive driving, many more inputs are required. Look at modern flight and driving simulators.
    In a controlled and known environment a robot can complete simple invariant tasks without any feedback. To perform most human tasks efficiently in new (unique and variable) environments, bots are going to require more than vision. For many tasks in real-world environments, feedback limited to vision, force, and proprioception will result in crude approximations of human function. Life has spent billions of years adapting and optimizing feedback mechanisms to achieve the various "skills" we find in nature. In general, abilities that do not provide an advantage do not survive.

  • @MegaWilderness
    @MegaWilderness 2 months ago +2

    Sensor fusion can never be a problem when the system is trained end-to-end; the same goes for radar and lidar additions.

    • @RobertLoPinto
      @RobertLoPinto 2 months ago +1

      Never say never.
      Intuition is often wrong.
      Experimentation and data will reveal the truth.
      Just because it's end-to-end doesn't automatically mean that, given multiple conflicting sensor inputs, it would always make the correct choice.
      In a human, who is inherently end-to-end "trained", audio and visual modalities rarely conflict. Touch and balance modalities rarely conflict, etc.
      Think about it: when our modalities conflict we become disoriented/dizzy/confused.
      It's important for the different sensors not to be measuring the same thing.
      Hence why lidar and vision create sensor-fusion challenges: they both sense electromagnetic radiation (light).
      Waymo gets away with it because they introduced a disambiguating data stream, i.e., HD maps. However, that comes with a tradeoff of scalability and brittleness when premapped data goes out of date due to a change in the real-world environment.

    • @MegaWilderness
      @MegaWilderness 2 months ago

      @@RobertLoPinto Tesla could only benefit from HD maps. Perhaps it could then park and be summoned.

    • @RobertLoPinto
      @RobertLoPinto 2 months ago

      @@MegaWilderness HD maps are a crutch that bypasses solving the actual problem. If you can navigate your way out of a parking lot you've never seen before, FSD should be able to as well.
      V12 has a lot of promise for solving parking lots once sufficient video data has been added to the training set. Tesla has obviously prioritized safety data over convenience data.

    • @MegaWilderness
      @MegaWilderness 2 months ago

      @@RobertLoPinto Probably why Waymo is autonomous and Tesla isn't.

    • @darwinboor1300
      @darwinboor1300 2 months ago

      I see you are not an engineer. 😊

  • @coreycoddington8132
    @coreycoddington8132 2 months ago

    Sound! The cars eventually need to be able to hear. My brother and I noticed this on residential streets with tight corners: I often roll down the window when I come to a stop sign. I didn't notice I did that until we thought about it.

  • @noleftturns
    @noleftturns 2 months ago

    Condolences John.
    What is the First Principle of a robot?
    That question was never asked by Elon - the world's greatest expert on manufacturing.
    A robot is just a pair of hands that can do work.
    That's it.
    Instead of focusing on the 1st Principle, Elon is getting mired down in trying to make the hands mobile.
    That opens a Pandora's Box of problems that have nothing to do with the 1st Principle.
    Sadly, Optimus is doomed to failure before it even wobbles around trying to find a human to fire and take their job.
    Which is for the best...

  • @juliahello6673
    @juliahello6673 2 months ago

    I’m sorry about your dad.

  • @kevinmccauley9366
    @kevinmccauley9366 2 months ago

    Sorry for your loss, John. With regard to other sensors in cars, a microphone on the outside of the car is important. The vehicle needs the ability to "listen" for sirens and other traffic sounds when the source of the sound is visually occluded from the car. For example, an emergency vehicle that you cannot see, approaching at right angles the same intersection you're about to drive through, would be very important to sense.

  • @vincewestin
    @vincewestin 2 months ago

    For the basic industrial use case, most situations will not have urgency. However, there are emergencies in the physical world. Preventing large-scale damage to infrastructure is one case. Preventing injury to a human (especially the tiny humans) can be more urgent. Since they will be working with humans (at least to some degree), there are still critical decisions.
    You are correct that operating machinery (including driving) tends to need more urgent decisions. But operating in a kitchen with humans would also present many situations where urgency is indeed a factor in safety.
    The cars are good to start with cameras. They should also use audio sensors, since sirens and humans yelling can be valuable inputs to vehicles as well.
    The robots already have touch sensors (at least in the fingers). They have some level of resistance measurement in the joints/servos. They will have audio for communicating. They may add a smell sensor for fire awareness and such. Over time they will need a version of a taste sensor to become useful in food preparation (tasting ingredients and the final product is necessary for any cook).
    Lidar is a waste. A single-sense (distance to nearest thing) radar is also pretty useless, but high-definition radar might add value (if cost and weight are minimal). Cameras around the robot (eyes in the back of your head) and possibly near the hands/feet could provide perspectives that would help in many operations.
    More input can be good. But it may be more cameras rather than lots of sensor types as we start.

  • @d.d.jacksonpoetryproject
    @d.d.jacksonpoetryproject 2 months ago

    Sorry about your father.

  • @restonthewind
    @restonthewind 2 months ago

    Best wishes.
    Speaking of the obvious ... FSD's control has two primary degrees of freedom: left-right and faster-slower. Optimus has as many in one finger. On the input side, FSD has vision; Optimus has vision, hearing, and touch. FSD only navigates highly artificial and relatively simple road networks; the contours of a child's playground are more complex. FSD has millions of miles of training data within its relatively simple domain. Optimus has practically none at this point, and it requires vastly more to mimic routine human behavior.

  • @ayoutubewatcher7009
    @ayoutubewatcher7009 2 months ago

    I lost count of how many times you repeated yourself.