I'm a bit puzzled by Chris' statement, and I guess belief, that camera/image based autonomy can only allow a car to be equally capable to a human, but not exceed human capabilities. Elon Musk has in the past explained and demonstrated that the cameras see more and the computers react faster than any human could and this should be obvious even to a layman. I know that there is scepticism to Tesla's optics based approach to autonomy, but the superiority of cameras and computers over eyes and brains isn't debatable.
Everyone has a different level of understanding and I guess from where he's sitting that's his impression. Who is right is yet to be known. I myself think another hardware upgrade might be needed to push that level 5 boundary, but I am no expert either.
For both humans and computers it's the brain that is the crucial part, and it's a fact that computers can process information much faster than a human can. Computers also don't get distracted like us and can have 360-degree awareness at all times.
Exactly, I don't know if that's what he meant to say but that's how I interpreted it. But the commodity cameras have better vision, in 360°, than humans do and certainly the vehicle will be able to think and react at superhuman levels. And when we say a "human" do we mean an average human or a race car driver? I don't doubt that some implementations will use lidar and radar, but those technologies aren't directly correlated to ADAS capabilities.
Most accidents occur because of distraction or emotional impairment. Once the AI system has similar environment awareness and pathfinding, they will crash much less frequently than people. I wouldn't be surprised if they become safer with way worse actual driving skill, just because of constant attentiveness and not doing stupid stuff.
@@yourcrazybear Computers can get distracted too, by charged particles for example. So FSD HW in the future will be likely triple redundant, to protect against single event upsets. (in large servers those charged particles cause detectable problems, especially at higher elevations)
About binocular vision, let me tell you a true story: A teenage friend my age, in a family my family knew living in Needles, California, lost one eye when he was 15 when a rock hammer broke as he struck with it. UCLA eye doctors saw to him, and when I visited him in the hospital I was certain he would now not be able to drive, since I had been taught that binocular vision was necessary. The eye doctors knew differently though. Fast forward a year or so and another family visit: my friend had his own hot-rodded '52 Ford V8, was on a first-name basis with all of the local Highway Patrol officers in Needles, and was having no trouble whatsoever with fast driving on those lonely desert roads. He pointed out to me what those eye doctors had told him: binocular vision is not how distance is judged in the mind, at any distance much beyond that hood ornament. Perspective and our visual cortex do the work. Maybe the ADAS systems can make a better distinction pixel v pixel, but not likely at distance with that close-to-human spacing of the two cameras. Perhaps Subaru might use two very separated cameras to gain the needed spacing for far-field accuracy, but I seriously doubt it is of much use to Tesla unless just employed in parking situations.
It’s hard to estimate these costs. Even if you’re estimating the cost of the components, these are just the components themselves, leaving out the money Tesla (or any other company) paid for development, manufacturing, logistics and so on.
iPhone video setting error/failure? I suspect it was in Cinematic mode that resulted in some shallow focus and too much auto-out-of focus. Also, try setting to the .5 (x) camera and get closer to the subject for better depth-of-field.
Munro knows cars but doesn't know much about artificial neural networks, hence on the topic of FSD they don't know what they are talking about. I still like them.
Munro, please explain why Tesla doesn't have some type of cleaning action on their cameras. Is it difficult to run solvent lines to the rear and side cameras?
That's correct. There's no way to tell just looking at the hardware because it's all controlled by software, but Tesla have told us that they use them for redundancy.
They were. But Tesla isn’t doing that now. They were only getting 17 fps on each chip for a total of 8 cameras, but now, combining them, they get 34 fps, which is much better.
That was an interesting identification of various sensors used by Tesla. I was hoping Chris would touch upon how vehicles will be certified to be FSD 3.0. I wouldn't want it left up to the automaker to decide if it's good enough.
There is an ERROR on the slide titled Levels of Autonomy. The greater-than/equal-to and less-than/equal-to symbols are backwards. ADAS covers levels LESS than or equal to Level 2. Full autonomy covers levels GREATER than or equal to Level 3.
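To spell out that corrected relationship, here is a small Python sketch; the level names are the standard SAE J3016 ones, and the split at Level 2/3 is exactly the point the comment above makes.

```python
# Driver-assistance (ADAS) is SAE Level <= 2; automated driving is Level >= 3.
SAE_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",
    4: "High automation",
    5: "Full automation",
}

def category(level):
    # The driver is responsible up to and including Level 2; the system above that.
    return "ADAS (driver responsible)" if level <= 2 else "Automated driving (system responsible)"

for lvl, name in SAE_LEVELS.items():
    print(f"Level {lvl}: {name:<25} -> {category(lvl)}")
```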
I wonder if there was ever, in the history of driving, an accident that could have been avoided by knowing distances between objects down to the millimeter and not the centimeter.
If you look hard enough you may find an edge case for anything. After reaching superhuman abilities, self-driving systems will race to distinguish themselves in extremely rare edge cases. So I can imagine a future where high-res radar, lidar and FLIR are all optional extras for Tesla FSD. Basically Tesla is solving the "thinking part" of the problem. After their solution is working, adding other sensors is relatively easy. They have found that adding other sensors can seriously confuse the system if it is not super smart already. So the problem always comes back to smart perception, not sensors.
@@adamrak7560 Yeah I fully agree. I think the current hardware is sufficient for safer-than-human, but there will be better hardware in the future. Robotaxi will likely have higher resolution, better light-sensitivity cameras, maybe more cameras and HW 4. I am not sure about lidar yet. However I expect the Bot to maybe have lidar.
@@theipc-twizzt2789 nope doesn’t make sense to use lidar for bot. Tesla is literally solving the lidar problem using cameras. It’s getting really really good. Also if you want another sensor you’d rather use something like infrared light rays instead of visible light.
@@harsimranbansal5355 I think they absolutely can do the bot without Lidar. However its use cases at first will need high precision movements, down to the millimeter. That's what lidar (or any other emitting sensor) is good for. You don't need it for cars and later bots. But maybe they use it for the first gen to also get some good training data. After all Tesla is using lidar for ground truthing.
Diagram of sensor locations would really help. (I guess if I was an actual engineer I'd go look it up) I would expect all ADAS boards for all Teslas to be the same as continual improvements progress thru all models & factories.
Elon goes against the grain and is right most of the time, and he is willing to change when needed. It's way too easy to get caught up with stuff when you deal with it every day and lose focus on the big picture. Just because we always do things a certain way does not mean it's the best way of doing it. Change is very hard for us mere mortals. Tho I really do enjoy these videos, even when some of them don't understand everything completely, because none of us understand everything all the time. It's extremely hard for a lot of people to understand physics at its basic level.
It's their best guess for both Elon and Chris. I'm with Chris, but that doesn't mean Elon is wrong. Time will tell, but being obsessed with who's right is immature.
Dual cameras and stereoscopic vision are not really useful beyond 5 meters. I'm sure they calculate distance to cars with a neural net and train it with radar and lidar. Radars give sporadic false signals, which is the reason why almost all cars with adaptive cruise have random braking.
Personal opinions don't matter. We are talking about what is possible. As for the perception of sound for a car, I don't think it is important; the horn is necessary when you have just two eyes and can be distracted. A computer is never distracted from its tasks, and eight eyes are covering 360 degrees all the time.
You are correct in the electrical code definition. But in general use low voltage is under 120V. When I worked on 575VAC low voltage motors, it was a bit misleading to say the least.
MUNRO Live - interesting take, but you are just GUESSING. You really don't know what Tesla is doing unless you are using FSD beta w/ visualization. Tesla has been using pseudo-lidar (vision) for a while now. It does not use stereo vision for anything. Go see the KARPATHY video from AI day.
I am certain that computers with cameras will be able to exceed human ability in recognition and speed of decision making. They never get tired. They never get drunk. They never get sick. They never get angry. They never take their eyes off the road to text and drive. Having said that, I would still like vehicles to have sensors to allow them to "see" even in conditions where normal vision is obscured such as: smoke, fog, heavy rain or snow. So tell me... Exactly how would lidar be an improvement over camera vision in those situations? I can understand how radar and sonar would help, but not something that scans one dot at a time in the visible spectrum. RIP lidar.
The camera is used for side capture around the car. For example, if a car decides to swing into your lane from your blind spot, the car can already predict/prevent that by monitoring the other car's speed and trajectory, and probably check whether the other side is clear so it can safely move out of the other car's path and prevent a collision. Also, Tesla has Sentry Mode, where the car captures 360 degrees around itself when it's parked in case someone damages your car or bumps into it. It gets logged and can be used as evidence in case the person does a hit and run.
11:40 absolutely incorrect. A Tesla with FSD will be able to drive at superhuman levels. Having 8 cameras in 360 and a dedicated neural net computer for driving tasks is demonstrative of this fact as humans have only two cameras facing the same direction, and a bio neural net that isn't dedicated to the task of driving (see: distracted drivers).
I don't see how. Almost all digital electronics are running at 5v or less. The larger the delta from the power input to the desired voltage requires a more complex supply and is likely to generate a bit more heat and cost more. This answer only addresses electronics, not the total value of going to 48v. The prime reason for 48v is to reduce the copper in wiring going between modules, small motors and lights. So far, the lack of third-party 48v components at an equal or lower cost than high volume 12v components has held back 48v adoption. Vehicles from Tesla having 48v internal bus is likely to happen, but I'm not sure when.
@@tesla_tap The cost difference is minimal and the savings in wiring are huge. 48V is well within reach of the same buck converter topology used for 12V, so there should not be a significant design change needed. It's just that for most of the ECUs this doesn't matter; they don't consume enough power for 48V to make a difference. It would be interesting to see a split system where high-power components draw 48V and simultaneously provide 12V rails for any nearby low-power components.
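To put rough numbers on the copper argument, here is a back-of-envelope sketch in Python. The 240 W load, the 10 m round-trip wire run and the 2% loss target are made-up illustration values, not figures from the video or from Tesla.

```python
# Rough sketch of why a 48 V bus saves copper, assuming a hypothetical
# 240 W load fed over a 10 m round-trip run of copper wire.
RHO_CU = 1.68e-8   # resistivity of copper, ohm*m
POWER  = 240.0     # load power in watts (assumed)
LENGTH = 10.0      # round-trip conductor length in metres (assumed)

def wire_area_for_loss(v_bus, max_loss_fraction=0.02):
    """Cross-section (mm^2) needed to keep resistive loss under 2% of load power."""
    current = POWER / v_bus
    max_resistance = max_loss_fraction * POWER / current**2   # P_loss = I^2 * R
    area_m2 = RHO_CU * LENGTH / max_resistance                # R = rho * L / A
    return area_m2 * 1e6

for v in (12, 48):
    print(f"{v:>2} V bus: {POWER/v:5.1f} A, needs ~{wire_area_for_loss(v):.2f} mm^2 of copper")
# Same power and same % loss: 4x the voltage -> 1/4 the current -> ~1/16 the copper area.
```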
When you guys talk about liquid cooling on those PCBs... is that like a heat pipe layer on the PCB to handle the thermal load, or do they actually pump cooling fluid through a heat plane on the back of the board? Thanks.
Pretty sure it's just connected to the cooling system through a metal casing that can transfer heat away from the board. Basically the heat-pump Octovalve system.
How about this for an idea: each Robo-Taxi comes with an Optimus Prime conductor. The robot conductor loads suitcases, wipes down seats between fares, makes sure no one forgets their purse or cell phone, assists with destination selection and fare payment, and provides security. Also, the robot conductor might be a security blanket for those hesitant about riding in a driverless vehicle. It could perform small courtesy chores like helping select music.
When/if my Tesla cars achieve L5 autonomy, AND automated supercharging is available, I will consider emancipating my fleet. They will need to earn enough to reimburse me for the initial cost, but after that, a car may be able to survive in the wild, not only as an autonomous driver, but as an AI with its own agency as well.
Chris needs to watch all of the Tesla "days" (autonomy, AI) special presentations. He appears to be in the camp of solving ADAS with hardware rather than software and large compute. BTW, my left eye has a new intraocular lens, the Alcon Panoptix trifocal fresnel lens. Three fixed focal length lenses all in one tiny package. The neural net, our brain, interprets what it sees like the variable focus lens we were born with. Eight cameras should be enough for Level 4 FSD.
One issue you are completely missing is one of redundancy (sensor glare, dirt, etc.) the other is blind faith in what "AI" is really capable of... Basically, computers don't "understand" what they are seeing. A five year-old can infer an enormous amount of information in what they see that the most advanced "AI" in the world is utterly incapable of. "AI" systems are comparison-driven and if there is no relevant comparison, then there is no solution, that's why "edge cases" are so staggeringly hard to predict. It's trivial for a human to infer when an unfamiliar/unexpected situation might be dangerous, an AI would have no way to solve such a situation. This is exactly why redundant sensors, not operating in the visible spectrum, are that "insurance" policy to make sure the 3D imaging algorithms are not misidentifying or ignoring an actual hazard. I suggest you watch a few videos on how Waymo "robotaxis" work. They have 27(!) cameras, radar, sonar and LIDAR and it's the complete fusion of those inputs that allows them to be Lv. 4. Tesla's (read Elon's) insistence on visible-light only has a lot to do with his fantasy that the existing Tesla fleet is only "a software update away from FSD" since they are configured only with visible-light sensors and can't practically be retrofitted.
🤗👍THANKS BEN …FOR ASKING THE RIGHT QUESTIONS FOR US LAYPEOPLE AND CHRIS 🤗 FOR EXPLAINING WHAT WE ARE LOOKING 👁 AT …and do I detect a little HANS SOLO In CHRIS 😉🤷♂️😍😍😍
I have a 2020 Chevy with full ADAS. I've seen the ads where a Chevy comes to a complete stop when a car ahead stops and the driver is inattentive. My wife lightly bumped another car and there was no warning and no braking. Odd because it regularly warns for nothing. It certainly didn't work the way Chevy ads show.
VW Golf R has the best location for rear camera, the VW logo pops up when you put it in reverse, camera is out of weather when not in use, and it looks cooler when the VW logo pops open, which is also the hatch release handle. GERMAN engineering
@@pipooh1 That was my first impression when I originally saw it on my first GTI years ago. I even said to the dealer, hmmm, how long will this last... but guess what? Never had a failure: 147,000 miles on one, 87,000 miles on the last one, and my current 2019 Golf R has 56,000 miles, with not one issue with the flip-up lock camera assembly. I agree, fewer moving parts is always better... but they paid attention to the logo camera flap / hatch opening assembly. Things can work right if they're paid attention to and not just kept cheap; cheap always costs more later.
It still feels like the B-Pillar camera is not the correct position for good cross traffic detection, and was put there originally for highway driving only, not to deal with cross traffic situations. Now that Tesla is trying to solve driving for non-highway situations it seems like they need a camera positioned at the very front of the vehicle (corner or centre) facing left and right to see cross traffic.
I suspect the center wide-angle camera can capture front cross traffic, and it's farther forward than a human driver. There is some overlap with this camera and the side cameras, but I don't know if the side cameras are used much for cross-traffic detection.
TESLA FSD beta is LEVEL 4 autonomy. The rest of the industry are CRAP. They don't have the VISION or COMPUTE Tesla has. Tesla's is 25 watts, compared to a 500-watt NVIDIA system..... Lol
FLIR is nice technology, but I don't think Tesla is at this stage interested in anything new. Since they want to have as few variables as possible, adding radar/lidar/FLIR to the ADAS can all give false positives, and then which system is right? It probably requires 10x the computing power to correct a false positive, since you need to double-check everything. The more sensors and types of technology, the more fine-tuning you have to do, not only on your overall system but also on the detection of all the side systems. Tesla now only has to correct one system: vision. Once they have that to 95+% they might add more systems for overall safety etc., but you first need a good baseline to know that system X is correct and Y is not. If you have 5 systems running, how do you know which one is correct when you're still fine-tuning them all?
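As a toy illustration of the arbitration problem being described, here is a minimal confidence-weighted fusion sketch. The sensor names, confidences and voting rule are all made up for illustration; this is not how Tesla (or anyone else) actually fuses sensors.

```python
# A minimal sketch of the disagreement problem: several independent sensors
# each report whether an obstacle is present, with a confidence score.
def fuse(readings):
    """readings: list of (sensor_name, detected: bool, confidence 0..1)."""
    score = sum(conf if detected else -conf for _, detected, conf in readings)
    return score > 0, score

vision_only = [("vision", False, 0.9)]
full_stack  = [("vision", False, 0.9), ("radar", True, 0.6), ("lidar", False, 0.7)]

print(fuse(vision_only))  # one opinion: nothing to arbitrate
print(fuse(full_stack))   # radar says "brake", the others disagree -> who wins?
# Every extra sensor adds another weight to tune and another source of false
# positives, which is the commenter's point about keeping the baseline simple.
```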
I have to say that I appreciate putting your "behind the scenes" guys in front of the camera. Munro doesn't care about perception, they care about facts and knowledge.
If only he wasn't so wrong about so many things.
@@AdmiralQuality how's that exactly?
@@AwesomeBlackDude The iris is not what does the focusing in the human eye for one thing.
@@eubikedude pls continues... 😬
@@eubikedude Exactly. Several more too but I just don't care to get into it, he's not worth it.
Structure from motion is used to get depth from a single camera. Stereo vision is only useful for close ranges ~20m given how close together the cameras are
Exactly. Stereo vision is needed for static images. Tesla is using video (4D) processing.
@@briansilver3412 stereo still helps, especially if you can also get wide baseline (two upper windshield corners for example.)
How do birds manage depth with eyeballs less than 1cm apart?
@@ricknash3055 same as humans they use other cues. We use stereo for objects within hand reach/throwing distance
Exactly. A good question to ask is how Tesla is getting distance measurements from the side or rear cameras, where there is only one camera facing each direction, not two.
Drive safe everyone! That's good advice from someone who deals with these safety systems a lot. Thanks for the video!
Excellent job as usual. Chris is clearly an expert and Ben did a great job moving the conversation along on the right track.
2:55 The front camera feeds (probably all of them) are used to generate a 3D world through photogrammetry-like algorithms. It is constantly making a point cloud of what it sees. Three images from separated vantage points are enough to make one.
And when moving, they have something like 60 images per second per camera. That's why they don't need the radar; they are constantly making a 3D world in front of and around the car, on the fly. And you can fairly accurately tell how far away each point is.
It is 36 frames per second
@@sjokomelk what's the reason for the extra six frames? 🤔
@@AwesomeBlackDude You have to ask Tesla about that. But that is the frame rate they use for capturing data from the cameras.
@@AwesomeBlackDude best compromise between amount of information and processing.
@@daniel_960_ Thanks guys for the (CFF) electro-retinogram fps explanation. Do you guys have any idea when Tesla will introduce audio detection? 😬
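For anyone curious what the photogrammetry-style math behind this looks like, here is a minimal two-view triangulation sketch in Python. The camera intrinsics, the 0.5 m of forward motion between frames and the example point are assumed values for illustration only; it is not Tesla's pipeline, just the basic geometry of recovering a 3D point from two vantage points.

```python
import numpy as np

# Hypothetical intrinsics: 1280x720 image, 640 px focal length.
K = np.array([[640, 0, 640],
              [0, 640, 360],
              [0,   0,   1]], float)

def proj_matrix(cam_pos):
    # Identity rotation, camera looking down +Z; world point X maps to K @ (X - cam_pos).
    return K @ np.hstack([np.eye(3), -np.asarray(cam_pos, float).reshape(3, 1)])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Car drives forward 0.5 m between two frames (e.g. ~18 m/s at 36 fps).
P1, P2 = proj_matrix([0, 0, 0.0]), proj_matrix([0, 0, 0.5])
point = np.array([2.0, 0.0, 10.0])               # 2 m to the side, 10 m ahead
x1, x2 = project(P1, point), project(P2, point)  # what each frame would see
print(triangulate(P1, x1, P2, x2))               # recovers ~[2, 0, 10]
```

Repeat that for every matched feature across frames and you get exactly the kind of rolling point cloud the comment describes.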
You can somewhat substitute stereoscopic vision with one camera and time. The single camera can compare the image from one second ago to the present image and, assuming the car has moved, it can use the difference between the two images to extrapolate depth data. This can only happen as fast as the camera frame rate.
Temporal stereoscopic imaging.
Yes, cf. human drivers with only one usable eye, who are legally allowed to drive. Or, put your hand over one eye, then start walking around your house. You'll find your brain fills in most of the 3D information as you move around.
Not only "somewhat substitute", it's far more precise. Stereoscopic vision only works around 60cm in front of the head due to the distance of our eyes.
Fabulous review of electronics no one (outside an OEM and its suppliers) would ever see at this level of detail
It was actually pretty shallow
Try this channel, it goes way deeper into Tesla electronics: th-cam.com/users/Ingineerix
@@TheMoni7548 So not in-depth enough for you?
@@TheMoni7548 ...in that much of it is off the shelf? Seems the Tesla hype is maybe just that, other than the software part.
@@KarmaticEvolution for someone who may be passionate about cars, this is nothing new but a simple introduction on how ADAS works
Historical note: The Tesla FSD chip was designed by Jim Keller, who previously designed the iPhone A4/A5 chip.
FACT, the Tesla D1 chip is better than the Apple M1
@@markplott4820 You can't compare them AT ALL.
@@meamzcs - when does Apple have AI DAY, production robot cars, humanoid robots, AI factory software, training tile, DOJO Supercomputer ?
When was the last time APPLE innovated anything? Not lately......
@@markplott4820 The Dojo chip is a neural network training chip, while the M1 is an SoC.
@@markplott4820 Tesla really is following in Apple's footsteps. The emphasis on vertical integration, designing almost every aspect of your product yourself, and having all of those components working together on a deeper level than the competition (who is outsourcing more of their core competencies) is very Apple.
Rather than use a common standard (micro USB) Apple developed their own superior standard, the Lightning port.
Rather than use a common standard (J plug) Tesla developed their own superior proprietary charging port.
Rather than continuing to rely on a supplier's chip (Nvidia) Tesla developed their own proprietary chip designs.
Rather than continuing to rely on a supplier's chip (Intel) Apple leveraged their previous proprietary chip designs, the A-series, and evolved them into desktop chips.
Apple was not the first company to enter most of the product categories they compete in, but when they enter a category they tend to offer a product superior or at least equal to anything available from competitors, even if they do tend to be more expensive.
Likewise Tesla was not the first company to offer a commercially available electric vehicle, but it was far superior to any available competition at the time. There are now plenty of viable cars competing with Tesla (even if competitors struggle to manufacture them at scale). Some competing models are cheaper but less capable, and others are on par capability-wise but just as expensive (and not widely available). Tesla is also no longer the first to enter every product category (the Rivian truck entering the market before the Cybertruck). But Tesla is still the standard bearer we judge every other car against, just as in many product categories Apple is the standard that others are judged by (in tablets, smartphones, and smartwatches, etc.)
Knocking Apple for a lack of innovation compared to Tesla is just an ignorant opinion, when Tesla is so clearly taking a lot of lessons from Apple (not just in the ability to offer something new and revolutionary, but also in how the development and manufacturing of a product happens).
Even the fancy presentation of a new product with a charismatic CEO on a big stage is something that Tesla does that follows in Apple's footsteps.
PSA: "pay attention when driving" - wise words Chris
1. The dual SoCs on the Gen 3 HW are there for redundancy, but both are used to increase compute when both are working.
Elon:
“Good analysis. Both computers will be used & sync ~20 times per second. This is a long time for a computer. Like a twin-engine plane, use both engines to max for normal operation, but can safely operate on just one.”
2.
Teslas can hear using the cabin microphone.
Apple has similar functionality in the accessibility in settings to detect sounds like: fire, fire alarm, crying baby, yelling, sirens, running water, shattered glass, etc.
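A toy sketch of the "twin-engine" idea in point 1, purely illustrative: this is not Tesla's actual synchronization scheme, just two compute nodes running the same workload, cross-checking at roughly 20 Hz, and degrading to single-node operation if one side fails.

```python
# Toy illustration of dual-SoC redundancy with "use both, fall back to one".
import time

def plan_path(node_id, sensor_frame):
    # Stand-in for the real planning workload on each SoC.
    return ("steer", "brake", sensor_frame % 7)

def sync_step(frame, node_a_ok=True, node_b_ok=True):
    a = plan_path("A", frame) if node_a_ok else None
    b = plan_path("B", frame) if node_b_ok else None
    if a and b:
        return a if a == b else ("fallback", "safe-stop", frame)  # disagreement -> degrade safely
    return a or b                                                  # single-node operation

if __name__ == "__main__":
    for frame in range(3):
        print(sync_step(frame))                 # both nodes healthy
        time.sleep(1 / 20)                      # ~20 Hz cross-check cadence
    print(sync_step(99, node_b_ok=False))       # keep operating on one node
```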
I remember Tesla saying that they use the internal radar to detect when busy parents accidentally forget/leave their kids in child seats in the car. Apparently more common than you think.
Ben and Chris, thank you for the granularity on the electronics; the insight into the progression over time gives us engineer-investors the green light to keep on investing. When I see evidence of Tesla's continuous improvement pace I sleep at night.
How's progress on the Earth II?
Man, I never get tired of your videos. This is way beyond my field of expertise and I always listen to the end. Always learn something new. Thanks to all the Munro Team. Great work on these videos.
Thanks!
Always a good idea to explain acronyms when they are introduced
Really enjoyed this video and thanks for bringing some clarity on the Tesla chipset.
Also, the best advice ever is “be safe while driving” which is applied at any moment in any vehicle!
Thanks Chris really good explanation of what is going on with the car.
3:35 I don't mean to be rude, but you seem to have a quite idealized view of radar. I've worked quite a lot with Conti automotive radar. The spatial resolution is not what you might expect. The amount of noise and artifacts is substantial. You have a very non-ideal antenna characteristic which routinely gives you phantom objects in weird locations. Distinguishing multiple cars on the road is a challenge. Radar might be useful because it can directly measure the velocity of other objects without the need to derive distance but apart from that it's much more unreliable than you make it sound.
He only said Radar is used to confirm an object is there. Period. No judgment
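For context on the one genuine radar strength mentioned above (direct velocity measurement), a small Doppler calculation. 77 GHz is the usual automotive radar band; the 30 m/s closing speed is just an example.

```python
# Why radar gets velocity "for free": the Doppler shift is measured directly,
# with no need to differentiate a noisy distance estimate over time.
C = 299_792_458.0      # speed of light, m/s
F_CARRIER = 77e9       # automotive radar carrier frequency, Hz

def doppler_shift(relative_speed_mps):
    """Two-way Doppler shift for a target closing at the given speed."""
    return 2 * relative_speed_mps * F_CARRIER / C

def speed_from_shift(shift_hz):
    return shift_hz * C / (2 * F_CARRIER)

shift = doppler_shift(30.0)                 # car closing at 30 m/s (~108 km/h)
print(f"{shift/1000:.1f} kHz shift")        # ~15.4 kHz, easy to measure
print(f"{speed_from_shift(shift):.1f} m/s") # recovers the 30 m/s closing speed
```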
My favorite explanation of Autonomous driving from Tesla is: “It’s a software problem, not a hardware problem.” Other OEMs are trying to solve a software problem with lots of hardware (Cameras, Radar, LiDAR, etc). I’d love to see a “tear down” of Tesla’s software strategy vs. other OEMs. Might be too much to ask.
Thing I learned is: if you can remove any variable from a system you are optimizing, do so. Tesla did it by removing as many systems as possible and relying only on vision and sensors. That way they can fine-tune these two systems to 99.9% and can later add more systems without issue, since they know their base system is fully reliable as a backup.
It's an explanation that makes me laugh every time I hear it.
@@bobqzzi that it’s a software problem? How so?
Have you seen the AI day presentation? They gave a nice overview of how FSD is being developed.
@@JosiahLuscher right, but we don’t have anything else even close to AI day for the other OEMs. It’s impossible to compare right now because they keep us in the dark.
9:00 correction: Model Y to S, not 3 to S. But loved this explanation, thank you.
*Tesla has a huge blind spot in cross traffic detection*
The only cameras that can view cross traffic are on the B-pillar. Tesla's next hardware suite needs to add cameras to the headlights looking sideways, one near-sighted and one far-sighted. You can see the problem with the indecision in their full self-driving videos. New headlights make it easy to retrofit, but they need to get realistic and do it as soon as possible, before there are too many cars on the road and it costs more for the retrofit.
Ben is the best! We easily digest it as laymen. Good job Munro!
2:29 The human iris adjusts for the amount of light coming in, the natural lens, inside the eye, is what does the focusing.
No.
Exceeding a human driver's capabilities does not require radar and lidar. The keys to beating human drivers are decision speed and appropriateness of decision-making, not sensor input.
This is the reason that Tesla is focused on machine learning at scale and advanced neural net chips. They are attempting to get decision speed up and improve appropriateness of decision-making. More sensors won't help with either goal.
...attempting is a good description. They may get there, even this year maybe, but it obviously isn't a simple problem regardless of what Elon has implied through countless ready date revisions.
@@AudiTTQuattro2003 I agree that this is a difficult problem. I also agree that, in the past, Elon has been overoptimistic on timelines. He may be overoptimistic even now.
That's a point worth considering. But there is another point worth considering, too.
That point is, only Tesla is aggressively pursuing machine learning at scale to solve autonomy. No-one else can do this. No-one else has *ever* attempted to apply machine learning tech on a problem even a tenth as difficult.
Which means that there are a slew of other companies working on robotaxis who are using 90's tech. Their prototypes are vehicles on virtual rails (geomaps) which use sensors, not to identify objects encountered or to anticipate what they might do, but to reactively avoid collisions as best they can. They almost all rely on lidar and radar as well as optical sensors, they drive on their virtual rails and stop if something is in the way.
And those competitors are worried. If Tesla succeeds, the competitors' approach will be instantly rendered obsolete, their investments wasted, their market opportunities squandered.
Even worse, Tesla's approach is sufficiently inexpensive that they will be able to offer it on every car they sell, and operate it anywhere regulators will permit it. By contrast, geofencing requires aggressive map updates (expensive) and expensive add-on equipment that would price cars beyond the reach of most consumers. So they're really only good for robotaxis in high-density urban areas, and their fare prices will be higher than Tesla could offer, if Tesla succeeds.
Hence the opposition from operators like Green Hills. Tesla is posing an existential threat to all other developers of autonomous vehicles.
And they're squawking about it, too - running to legislators, to NHTSA, to the public, trying desperately to get government to slam the brakes on Tesla.
You know what that tells me?
It tells me that these competitors believe that Tesla will succeed. Maybe by years end, maybe not, doesn't matter. They believe Tesla's tech will work, if nothing is done to stop Tesla.
If they didn't believe that, they would just quietly go about their business, content in the knowledge that Tesla was wasting its time, energy and money on a fruitless technology.
I think they're correct to be worried. Tesla's autonomy competitors will be disrupted by Tesla completely out of the autonomy business they've worked so hard to enter.
Excellent presentation! This engineer is so knowledgeable! A great asset to Munro!
02:20 In humans the iris is for aperture control; focus is through distortion of the lens by the ciliary muscles.
(Note, this is from my biology lessons in 1974, so I could be wrong 🤣).
They weren't talking about aperture, but focal length of the camera lenses. Focal length does different things for cameras and eyes. Eyes have a fixed field of view angle (say 90 degrees per eye), and our lenses change focus to change the distance of what we're focusing on. Cameras have different focal lengths to change the field of view angle, and move the lens back and forth to change what's being focused on, distance-wise.
@@SodaPopin5ki I gave the precise time code in the video, he specifically stated that the iris was for focus:
“So for a human eye, your IRIS will resize to focus on something close or something far”.
But you’re welcome to believe whatever you like, everyone here can see for themselves.
@@SodaPopin5ki The same thing happens to a human driver... I had cataract surgery, so I have a long-range lens in one eye and a short-range one in my other eye. I usually don't wear glasses, but I can drive. However, it gives me a hell of a headache after 20-30 minutes of driving. If I can do it, so can a computer. Fixed-distance lenses are a pain to deal with, but it is manageable for me. A computer should be able to handle it and its distortion without the headache :p
Don't worry, eyes haven't changed since 1974
@@fredbloggs5902 I assumed he meant lens, as the iris has no focal length. It's the thing that changes the size of the pupil, which is a hole, akin to the aperture of a camera. It lets more or less light in.
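To make the camera side of this thread concrete: for a fixed-lens camera, focal length mostly sets the field of view rather than what is "in focus". A quick sketch, where the 1/2.7-inch sensor width (~5.4 mm) and the three focal lengths are illustrative assumptions, not Tesla's actual camera specs.

```python
# Horizontal field of view as a function of focal length for a fixed sensor.
import math

SENSOR_WIDTH_MM = 5.4   # assumed ~1/2.7" sensor, for illustration only

def horizontal_fov_deg(focal_length_mm):
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

for f in (2.5, 6.0, 12.0):   # wide, main, narrow lenses (illustrative values)
    print(f"f = {f:>4} mm -> {horizontal_fov_deg(f):5.1f} deg horizontal FOV")
# Shorter focal length -> wider view; longer focal length -> narrower, "zoomed" view.
```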
2:45 I'm pretty sure it's not used for stereoscopic vision. The cameras would be much further apart. At that camera disparity the depth resolution is just too bad. Besides that, for stereoscopy you need to very precisely sync the time at which images are recorded which is a huge pain. And the matching algorithm doesn't run itself. It's a lot of trouble for very limited value.
This is a great insight on how people think inside of their silos. When all you have is a hammer, everything looks like a nail.
When your job is to analyse low voltage electronics, the self driving problem becomes just a hardware problem. You can tell this by how Chris never mentioned software and neural networks. He ignored what Tesla told us during Autonomy day and AI day and assumed they use stereoscopic vision just because Subaru does it. They don't, they feed all the cameras to a neural network that reconstructs the world in 3D.
Another example is when he associates the FSD computers with the cars they belong to, implying that the 3, Y and S have different hardware. They don't, they use the same part. The difference is due to the year they were manufactured in.
This is what disruption looks like, someone comes along and does things completely differently and most people don't see because they're stuck on their mental frameworks.
You can extrapolate this to the media and Wall Street and understand why some people are so fixated on the idea that Tesla is grossly overvalued compared to Toyota.
Neural network algorithms are not infallible - they are limited by 1. The model itself, 2. The computational complexity of the said model, and 3. The training set + adaptation inputs. #2 has to be kept low to allow real time processing on today’s hardware and will continue to be a problem for a long time (This is your bog standard problem addressed in any 2nd level programming class in Comp Science). Software does not drive itself - it needs mathematical models to be simple enough to run on the current hardware. So, Chris is correct. One shouldn’t be so quick to swallow marketing talking points, even from a company as admirable as Tesla. Also: how confident are we of #3? We own a Tesla, and let me tell you - it inspires no confidence. It brakes suddenly on autopilot for no reason, so we have to disengage it when someone follows too close behind. Based on my observation as a Tesla owner, they haven’t even mastered level 2 yet. I don’t really have much confidence in their rosy predictions.
Spot on.
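To see why point #2 in that comment bites, here is a rough real-time budget check with made-up numbers; the TOPS figure, the frame rate and the per-frame model cost are assumptions for illustration, not Tesla's published specs.

```python
# Rough feel for the real-time compute constraint on a perception stack.
BUDGET_TOPS = 50.0            # assumed accelerator throughput (tera-ops/s)
CAMERAS = 8
FPS = 36
MODEL_GOPS_PER_FRAME = 100.0  # assumed cost of one forward pass on one frame

demand_tops = CAMERAS * FPS * MODEL_GOPS_PER_FRAME / 1000.0
print(f"demand: {demand_tops:.1f} TOPS of a {BUDGET_TOPS} TOPS budget "
      f"({100 * demand_tops / BUDGET_TOPS:.0f}% utilised)")
# Double the model size or the resolution and the headroom is gone, which is
# why model complexity has to be capped to keep the loop real-time.
```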
There were some interesting distinctions in how the levels of autonomy were described. Sure, a Tesla can't hear like we can, and its decisions are primarily image based... But with so many more cameras than we have eyes, and with its sonic sensors... and most of all its ability to pay attention to everything without fatigue while making thousands of decisions per second... it very much is (or will be) capable of exceeding humans.
And that shows in Tesla's accident reports, where Teslas operating with Autopilot / ADAS / accident avoidance activated get into accidents 8x less than the rest of traffic on US roads.
Tesla does have a microphone, no reason it couldn't be used for the ADAS system if it were to offer a benefit. But deaf people are allowed to drive. Binaural hearing for humans doesn't really work well inside a vehicle cabin.
@@GregHassler pretty sure elon said they’d be using the microphone in v11?
With this video and others, a compare and contrast with competitors’ systems would be more enlightening than a straightforward description. Sandy is good at telling us what he likes and doesn’t like. It would be nice to hear that from the other terrific hosts as well.
Thank you!
Fabulous detailed overview by Ben and Chris!
Good presentation guys, well done.
A human can't look in all directions at the same time, so the car has the potential of surpassing a human.
Tesla bot.
Lidar is coming. Yes, it will be very useful in geofenced niche vehicles used as local cabs. Roomba needs to adopt lidar. Damn thing keeps getting stuck.
Neato Robotics has had a low cost Lidar in their robotic vacuum for years.
LIDAR sounds a bit like hydrogen-fueled cars: they are going to be everywhere in 10 years, and every year it's only 10 more years out. LIDAR is an expensive, ugly crutch, and while it works for cars that are not for sale (Waymo) when you have an unlimited budget, you also don't care how awful it looks or about the practicalities of working in all automotive environments. It is great that LIDAR costs are coming down, but the cheapest forecast (at some unknown point in the future) is still 100x the cost of a camera.
It's especially great if you're on a budget and you need your development team to have a nice demo on a street you choose beforehand...
Great video!
My friend’s Subaru has way worse phantom braking compared to my 2019 Model 3.
2019 model, so I suppose your 3 has the front radar. I recall hearing Tesla's phantom braking was caused, at least in part, by radar/camera disagreement. Deleting radar may have helped.
Thanks !
Would be nice to have more board level details of the ADAS board explained in a future video!
Chris is a no bullshit type engineer. I like it
Love this detailed look at the sensor package.
excellent work
Interesting EQS in the background ;-)
As an electrical hardware designer, I think there was quite a bit more information that could have been discussed. How many layers is the PCB? What type of material was used? Did they keep the design simple (larger pitch components, no blind or buried vias, standard spacing between copper, etc.) or did they make it harder to manufacture to make it as small as possible and to have better electrical performance? How many different components were used on the PCBA? What was the smallest size component used? Are all components automotive grade or better, or were some industrial grade components used? Were there special things they did, like flooding the outside layers with ground to help prevent noise? How does this compare to other ADAS?
8-meter range on the sonar sensors. Good to know.
Very good content, and detail information.
Thanks.
Keep up the great work!
Thanks for watching!
Nice to see Magna and Continental in the Tesla Plaid S.
Magna & Continental are each working (if I recall) with BlackBerry QNX, which provides a best-in-breed ISO 26262 pre-certified OS with Automotive Safety Integrity Level D for safe and secure components.
Yeah well 😂 Qualified stuff is great for regulators, CEOs, and corporate as well as public bureaucracy; other than that... meh... SpaceX flies rockets to space, and lands them too, on open-source Linux with the PREEMPT_RT patch...
Not saying QNX is bad; it's just that being certified doesn't necessarily make it better than other stuff, but it sure makes it a lot more expensive.
@@meamzcs Cyber security is one of the biggest topics in the world. I don't see autonomous driving going far without a secure SBOM along with a high safety integrity rating. I am not getting in a Huawei taxi, for example.
@@danielhull9079 Being certified doesn't mean you're even a tiny little bit more secure than an uncertified open-source system...
If Tesla is gonna achieve autonomous driving, it's also gonna run on a customized Linux Kernel and not something proprietary... I would probably take a Linux Kernel over some certified proprietary stuff too in most cases...
Sandy should make a video about where Tesla cut expenses to handle the supply chain shortages and increase profit margins, such as reducing modules, features, and accessories.
Great idea!
MUNRO already has an older video on that.
I agree. With certain types of sensors, it's extremely difficult to substitute with another supplier or part series. This might also explain why Tesla switched to vision only, and also why they actually increased production when others struggled.
@@neuropilot7310 Their growth in production numbers is also explained by some questionable health strategies during the pandemic, as now in Shanghai, with workers sleeping in the factory. I wonder what the average quality control is like with workers in such conditions.
@@neuropilot7310 - Tesla has a deal with Samsung for cameras.
Awesome breakdown. FSD Beta is not using radar at all, in my understanding. Is this your assessment?
The cameras are too close for useful stereoscopic vision at driving distances, especially with different focal lengths. Human stereoscopic vision is only useful at short distances, you don't need/use stereoscopic vision for long distance. It's primarily only useful for things within your physical grasp. Your brain estimates distance based on context and perspective. It's a basic function of geometry, you would need cameras a meter apart to have useful stereoscopic functionality at hundreds of feet.
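To put numbers on that geometry, here is a small Python sketch using the standard stereo depth-error relationship; the focal length, baselines, and one-pixel matching error are illustrative assumptions, not Tesla camera specifications:

```python
# Depth from stereo disparity: Z = f * B / d, where f is focal length in
# pixels, B the camera baseline in metres, and d the disparity in pixels.
# A one-pixel matching error therefore shifts the estimate by roughly
# dZ ~= Z**2 / (f * B). Numbers below are illustrative assumptions only.
f_px = 1000.0           # assumed focal length in pixels
disparity_err_px = 1.0  # assumed +/- 1 px matching error

for baseline_m in (0.15, 1.0):        # narrow spacing vs. a 1 m baseline
    for depth_m in (20.0, 100.0):     # nearby vs. far object
        err_m = depth_m ** 2 / (f_px * baseline_m) * disparity_err_px
        print(f"B={baseline_m:.2f} m, Z={depth_m:.0f} m -> "
              f"depth uncertainty ~ {err_m:.1f} m")
```

With a narrow baseline the depth uncertainty at 100 m is tens of metres, which is why stereo at typical camera spacings only helps at short range.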
Great video. Would be nice to have a "teardown" video of the ADAS board itself.
I'm a bit puzzled by Chris' statement, and I guess belief, that camera/image based autonomy can only allow a car to be equally capable to a human, but not exceed human capabilities.
Elon Musk has in the past explained and demonstrated that the cameras see more and the computers react faster than any human could and this should be obvious even to a layman.
I know that there is scepticism to Tesla's optics based approach to autonomy, but the superiority of cameras and computers over eyes and brains isn't debatable.
Everyone has a different level of understanding, and I guess from where he's sitting that's his impression. Who is right is yet to be known. I myself think another hardware upgrade might be needed to push that Level 5 boundary, but I am no expert either.
For both humans and computers it's the brain that is the crucial part, and it's a fact that computers can process information much faster than a human can. Computers also don't get distracted like us and can have 360-degree awareness at all times.
Exactly, I don't know if that's what he meant to say but that's how I interpreted it. But the commodity cameras have better vision, in 360°, than humans do and certainly the vehicle will be able to think and react at superhuman levels.
And when we say a "human" do we mean an average human or a race car driver?
I don't doubt that some implementations will use lidar and radar, but those technologies aren't directly correlated to ADAS capabilities.
Most accidents occur because of distraction or emotional impairment. Once AI systems have similar environment awareness and pathfinding, they will crash much less frequently than people. I wouldn't be surprised if they become safer even with much worse actual driving skill, just because of constant attentiveness and not doing stupid stuff.
@@yourcrazybear Computers can get distracted too, by charged particles for example. So FSD hardware in the future will likely be triple redundant, to protect against single-event upsets. (In large servers those charged particles cause detectable problems, especially at higher elevations.)
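For readers unfamiliar with the term, here is a minimal, generic sketch of the triple-redundancy idea mentioned above (a majority vote across three redundant units); it illustrates the concept only and is not Tesla's actual scheme:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value at least two of three redundant units agree on.

    A minimal sketch of triple modular redundancy (TMR): if one unit is
    corrupted by a single-event upset, the other two out-vote it. Returns
    None when all three disagree (a detectable but uncorrectable fault).
    """
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

# One unit hit by a bit flip; the majority result is still correct.
print(hex(majority_vote([0x3A, 0x3A, 0x7A])))  # -> 0x3a
print(majority_vote([1, 2, 3]))                # -> None
```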
Sandy is right, it is the best car; I've had one for 2 years from new.
About binocular vision, let me tell you a true story...:
A teenage friend my age, from a family my family knew in Needles, California, lost one eye at 15 when a rock hammer broke as he struck with it. UCLA eye doctors saw to him, and when I visited him in the hospital I was certain he would now not be able to drive, since I had been taught that binocular vision was necessary. The eye doctors knew differently, though.
Fast forward a year or so and another family visit: my friend had his own hot-rodded '52 Ford V8, was on a first-name basis with all of the local Highway Patrol officers in Needles, and was having no trouble whatsoever driving fast on those lonely desert roads. He pointed out to me what those eye doctors had told him: binocular vision is not how distance is judged in the mind, at any distance much beyond that hood ornament. Perspective and our visual cortex do the work.
Maybe the ADAS systems can make a better distinction pixel vs. pixel, but not likely at distance with that close-to-human spacing of the two cameras. Perhaps Subaru might use two widely separated cameras to gain the spacing needed for far-field accuracy, but I seriously doubt it is of much use to Tesla unless it is only employed in parking situations.
Yup, this is why perspective tricks work so well in photography.
Love to hear a bit about the component costs, i.e., what does a company like Tesla pay for that radar unit and other sensors?
It’s hard to estimate these costs. Even if you’re estimating the cost of the components, these are just the components themselves, leaving out the money Tesla (or any other company) paid for development, manufacturing, logistics, and so on.
This was great, I have a lot better understanding of how this automatic driving works and the different levels. Very nice video. Reggie
Thanks Reggie! Please subscribe if you haven't already!
Reggie, you can understand it better if you are an actual user of Tesla FSD Beta. You appreciate it more.
Chris is great, no nonsense here, straight to the point.
iPhone video setting error/failure? I suspect it was in Cinematic mode, which resulted in some shallow focus and too much auto out-of-focus. Also, try setting to the 0.5x camera and getting closer to the subject for better depth of field.
Nice showdown, & Advice for ALL!
PAY ATTENTION, CELL PHONE TALKERS!!
Thanks Chris !
Can you please share the model of the interior radar ?
Munro knows cars but does not know much about artificial neural networks; hence, on the topic of FSD, they don't know what they are talking about. I still like them.
Does it collect swarm data like VW does?
Awesome content!
Thanks, Pedro!
Munro, please explain why Tesla doesn't have some type of cleaning action for their cameras. Is it difficult to run solvent lines to the rear and side cameras?
As I remember from Tesla Autonomy Day event, the dual chips are for redundancy. Can you please confirm?
That's correct. There's no way to tell just looking at the hardware because it's all controlled by software, but Tesla have told us that they use them for redundancy.
They were. But Tesla isn’t doing that now. They were only getting 17 fps per chip across all 8 cameras, but by combining the chips they now get 34 fps, which is much better.
That was an interesting identification of various sensors used by Tesla. I was hoping Chris would touch upon how vehicles will be certified to be FSD 3.0. I wouldn't want it left up to the automaker to decide if it's good enough.
There is an ERROR on the slide titled Levels of Autonomy. The greater-than/equal-to and less-than/equal-to symbols are backwards. ADAS covers levels LESS than or equal to Level 2. Full autonomy covers levels GREATER than or equal to Level 3.
At 10:54
C'mon it's not hard, < & > work like funnels.
I wonder if there was ever - in the history of driving - an accident that could have been avoided by knowing distances between objects down to the millimeter rather than the centimeter.
If you look hard enough you may find an edge case for anything.
After reaching superhuman abilities, self driving systems will race to distinguish themselves in extremely rare edge cases. So I can imagine a future where high res radar, lidar and flir are all optional extras for Tesla FSD.
Basically Tesla is solving the "thinking part" of the problem. After their solution is working, adding other sensors is relatively easy. They have found that adding other sensors can seriously confuse the system if it is not super smart already. So the problem always comes back to smart perception, and not sensors.
@@adamrak7560 Yeah, I fully agree. I think current hardware is sufficient for safer-than-human, but there will be better hardware in the future. Robotaxi will likely have higher-resolution, better light-sensitivity cameras, maybe more cameras, and HW4. I am not sure about lidar yet. However, I expect the Bot might have lidar.
@@theipc-twizzt2789 Nope, it doesn’t make sense to use lidar for the bot. Tesla is literally solving the lidar problem using cameras. It’s getting really, really good. Also, if you want another sensor, you’d rather use something like infrared instead of visible light.
@@harsimranbansal5355 I think they absolutely can do the bot without lidar. However, its use cases at first will need high-precision movements, down to the millimeter. That's what lidar (or any other emitting sensor) is good for. You don't need it for cars and, later, bots. But maybe they use it for the first gen to also get some good training data. After all, Tesla is using lidar for ground-truthing.
ADAS == Advanced Driver Assistance Systems
Thanks guys. Very useful info. Where is the forward radar located? In the bumper?
Cool episode. Thanks guys
Love the content; so insightful!
Nice one Chris. Thank you.
Diagram of sensor locations would really help. (I guess if I was an actual engineer I'd go look it up)
I would expect all ADAS boards for all Teslas to be the same, as continual improvements progress through all models & factories.
Competent, intriguing, compelling!
So Chris Fox is on record for Munro that Elon Musk and Tesla will not succeed without LiDAR…I’m amazed at Chris’s hubris.
Elon goes against the grain and is right most of the time and is willing to change when needed
It's way too easy to get caught up with stuff when you deal with it every day and lose focus on the big picture. Just because we always do things a certain way does not mean it's the best way of doing it.
Change is very hard for us mere mortals
Tho I really do enjoy these videos even when some of them don't understand everything completely because none of us understand everything all the time
It's extremely hard for a lot of people to understand physics at its basic level
It's their best guess for both Elon and Chris. I'm with Chris, but that doesn't mean Elon is wrong. Time will tell, but being obsessed with who's right is immature.
I appreciate the Rosenberger connectors on the modules.
excellent analysis !
Dual cameras and stereoscopic vision are not really useful beyond about 5 meters; I'm sure they calculate distance to cars with a neural net and train it with radar and lidar. Radars give sporadic false signals, which is the reason why almost all cars with adaptive cruise have random braking.
Personal opinions don't matter; we are talking about what is possible. As for the perception of sound for a car, I don't think it is important: the horn is necessary when you have just two eyes and can be distracted. A computer is never distracted from its tasks, and eight eyes are covering 360 degrees all the time.
reminder: I think all ADAS boards have components on BOTH sides.
Very insightful
One thing to keep in mind is that in industrial settings, 'low voltage' is anything below 1,000 V. So your home wall voltage is low voltage.
You are correct per the electrical code definition. But in general use, low voltage means under 120 V. When I worked on 575 VAC 'low voltage' motors, it was a bit misleading to say the least.
@@briansilver3412 No kidding. 'Low voltage' is the worst engineering term. Is it 0.7 V, 5 V, 24 V, 120 V, or 1,000 V? Nobody knows.
great video
I love you guys!
MUNRO Live - interesting take, but you are just GUESSING. You really don't know what Tesla is doing unless you are using FSD Beta w/ visualization.
Tesla has been using pseudo-lidar (vision) for a while now.
It does not use stereo vision for anything.
Go see KARPATHY's video from AI Day.
I am certain that computers with cameras will be able to exceed human ability in recognition and speed of decision making. They never get tired. They never get drunk. They never get sick. They never get angry. They never take their eyes off the road to text and drive.
Having said that, I would still like vehicles to have sensors to allow them to "see" even in conditions where normal vision is obscured such as: smoke, fog, heavy rain or snow.
So tell me... exactly how would lidar be an improvement over camera vision in those situations? I can understand how radar and sonar would help, but not something that scans one dot at a time at near-visible (infrared) wavelengths. RIP lidar.
Cheers guys 🇬🇧👍
What is the B Pillar camera used for and why is it said to be an atypical type of installation?
The camera is used for side capture around the car; for example, if a car decides to swing into your lane from your blind spot, the car can already predict / prevent that by monitoring the other car's speed and trajectory, and probably checking whether the other side is clear so it can safely move out of the other car's path and prevent a collision.
Also, Tesla has Sentry Mode, where the car captures 360 degrees around itself when it's parked, in case someone damages your car or bumps into it. It gets logged and can be used as evidence in case the person does a hit and run.
@@pipooh1 So the B pillar camera is part of both Sentry Mode and ADAS. Thank you.
11:40 absolutely incorrect. A Tesla with FSD will be able to drive at superhuman levels. Having 8 cameras in 360 and a dedicated neural net computer for driving tasks is demonstrative of this fact as humans have only two cameras facing the same direction, and a bio neural net that isn't dedicated to the task of driving (see: distracted drivers).
Yeah, I wasn’t a fan of this guy’s attitude; very pessimistic, seemed like a Tesla Q member lol
...and yet, it still isn't ready. It's ready when insurance companies charge next to nothing when you use FSD exclusively. Otherwise, it isn't ready.
Can measure distance/speed of moving objects with one camera by comparing two frames.
One camera could estimate it. But Tesla's multi-camera setup is another thing entirely.
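Here is a minimal sketch of the single-camera "two frames plus known ego motion" idea, under strong simplifying assumptions (pure forward motion, a stationary target, a pinhole camera, hypothetical pixel values); real systems are far more involved:

```python
def depth_from_forward_motion(u1_px, u2_px, travel_m):
    """Depth of a stationary, off-axis point from two frames of one camera.

    A minimal "temporal stereo" sketch: the camera moves straight along its
    optical axis by a known distance travel_m, and u1_px / u2_px are the
    point's horizontal pixel offsets from the image centre in the earlier
    and later frames. From u = f*X/Z, the depth at the second frame is
    travel_m * u1 / (u2 - u1); add travel_m for the depth at the first frame.
    """
    if abs(u2_px) <= abs(u1_px):
        raise ValueError("the point should drift outward as the camera approaches")
    z_second_frame = travel_m * u1_px / (u2_px - u1_px)
    return z_second_frame + travel_m  # depth at the time of the first frame

# Example: a point 50 px off-centre drifts to 55 px while the car travels
# 2 m between frames -> it was about 22 m away at the first frame.
print(depth_from_forward_motion(50.0, 55.0, 2.0))
```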
Would Cybertruck or future vehicles benefit substantially from a 48 V system for these sort of electronics?
😏
They would have to be completely re-engineered. Components would have to be changed if they can't handle 48v.
I don't see how. Almost all digital electronics run at 5 V or less. A larger delta from the power input to the desired voltage requires a more complex supply and is likely to generate a bit more heat and cost more. This answer only addresses electronics, not the total value of going to 48 V. The prime reason for 48 V is to reduce the copper in wiring going between modules, small motors, and lights. So far, the lack of third-party 48 V components at an equal or lower cost than high-volume 12 V components has held back 48 V adoption. Vehicles from Tesla having a 48 V internal bus is likely to happen, but I'm not sure when.
The CT is using a 48 V system and an 800 V platform, w/ 240 V export.
@@tesla_tap The cost difference is minimal and the savings in wiring are huge. 48 V is well within reach of the same buck-converter topology used for 12 V, so there should not be a significant design change needed.
It's just that for most of the ECUs this doesn't matter; they don't consume enough power for 48 V to make a difference. It would be interesting to see a split system where high-power components draw 48 V and simultaneously provide 12 V rails for any nearby low-power components.
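A quick worked example of the copper argument above, using assumed numbers rather than figures from any actual vehicle:

```python
# Why a 48 V bus saves wiring copper: for the same delivered power, current
# scales as 1/V and resistive harness loss as I**2 * R. Illustrative numbers
# only, not figures from any specific vehicle.
power_w = 240.0       # assumed load (e.g. a cluster of ECUs or a small motor)
harness_ohms = 0.05   # assumed round-trip wiring resistance

for bus_v in (12.0, 48.0):
    current_a = power_w / bus_v
    loss_w = current_a ** 2 * harness_ohms
    print(f"{bus_v:>4.0f} V bus: {current_a:5.1f} A, {loss_w:5.2f} W lost in the harness")
```

Quadrupling the voltage cuts the current to a quarter and the I²R loss to a sixteenth, so the same loss target can be met with much thinner wire.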
When you guys talk about liquid cooling on those PCBs... is that like a heat-pipe layer on the PCB to handle the thermal load, or do they actually pump cooling fluid through a heat plane on the back of the board? Thanks.
Pretty sure it's just connected to the cooling system through a metal casing that can transfer heat away from the board; basically the heat pump / Octovalve system.
How about this for an idea: each Robo-Taxi comes with an Optimus Prime conductor. The robot conductor loads suitcases, wipes down seats between fares, makes sure no one forgets their purse or cell phone, assists with destination selection and fare payment, and provides security. Also, the robot conductor might be a security blanket for those hesitant about riding in a driverless vehicle. It could perform small courtesy chores like helping select music.
...and reduces the passenger capacity...
...not a chance.
th-cam.com/video/eWgrvNHjKkY/w-d-xo.html
But "conductor" might not be the best word for the ride-along robot. Perhaps "porter" ?
You have a mighty functional bot planned! :)
Would be nice if you spent 2 seconds explaining what ADAS stands for. More than half of us probably had to go google it during your video.
When / if my Tesla cars achieve L5 autonomy, AND automated Supercharging is available, I will consider emancipating my fleet. They will need to earn enough to reimburse me for the initial cost, but after that, a car may be able to survive in the wild, not only as an autonomous driver, but as an AI with its own agency as well.
I will never let the car drive itself. How much cost is added with all those thingamabobs for self-driving?
Chris needs to watch all of the Tesla "days" (Autonomy, AI) special presentations. He appears to be in the camp of solving ADAS with hardware rather than software and large compute. BTW, my left eye has a new intraocular lens, the Alcon PanOptix trifocal Fresnel lens: three fixed focal lengths all in one tiny package. The neural net, our brain, interprets what it sees like the variable-focus lens we were born with. Eight cameras should be enough for Level 4 FSD.
One issue you are completely missing is redundancy (sensor glare, dirt, etc.); the other is blind faith in what "AI" is really capable of...
Basically, computers don't "understand" what they are seeing. A five-year-old can infer an enormous amount of information from what they see that the most advanced "AI" in the world is utterly incapable of.
"AI" systems are comparison-driven, and if there is no relevant comparison, then there is no solution; that's why "edge cases" are so staggeringly hard to predict. It's trivial for a human to infer when an unfamiliar/unexpected situation might be dangerous; an AI would have no way to solve such a situation. This is exactly why redundant sensors, not operating in the visible spectrum, are the "insurance" policy to make sure the 3D imaging algorithms are not misidentifying or ignoring an actual hazard.
I suggest you watch a few videos on how Waymo robotaxis work. They have 27(!) cameras, radar, sonar, and LIDAR, and it's the complete fusion of those inputs that allows them to be Level 4. Tesla's (read: Elon's) insistence on visible light only has a lot to do with his fantasy that the existing Tesla fleet is only "a software update away from FSD", since those cars are configured only with visible-light sensors and can't practically be retrofitted.
🤗👍THANKS BEN… FOR ASKING THE RIGHT QUESTIONS FOR US LAYPEOPLE, AND CHRIS 🤗 FOR EXPLAINING WHAT WE ARE LOOKING 👁 AT… and do I detect a little HAN SOLO in CHRIS 😉🤷♂️😍😍😍
I have a 2020 Chevy with full ADAS. I've seen the ads where a Chevy comes to a complete stop when a car ahead stops and the driver is inattentive. My wife lightly bumped another car and there was no warning and no braking. Odd because it regularly warns for nothing. It certainly didn't work the way Chevy ads show.
The VW Golf R has the best location for the rear camera: the VW logo pops open when you put it in reverse, the camera is out of the weather when not in use, and it looks cooler, since the flip-up logo is also the hatch release handle. GERMAN engineering.
For me that is just another mechanical moving part that can break and prevent you from safely using your car if it does break.
@@pipooh1 That was my first impression when I originally saw it on my first GTI years ago; I even said to the dealer, hmmm, how long will this last... but guess what? Never had a failure: 147,000 miles on one, 87,000 miles on the last one, and 56,000 miles on the current 2019 Golf R, with not one issue with the flip-up logo camera assembly. I agree, fewer moving parts is always better... but they paid attention to the logo camera flap / hatch opening assembly. Things can work right if attention is paid and you don't try to keep it cheap; cheap always costs more later.
Why do we need interior radar and is it being used today?
It still feels like the B-Pillar camera is not the correct position for good cross traffic detection, and was put there originally for highway driving only, not to deal with cross traffic situations.
Now that Tesla is trying to solve driving for non-highway situations it seems like they need a camera positioned at the very front of the vehicle (corner or centre) facing left and right to see cross traffic.
I suspect the center wide-angle camera can capture front cross traffic, and it's farther forward than a human driver. There is some overlap with this camera and the side cameras, but I don't know if the side cameras are used much for cross-traffic detection.
TESLA FSD Beta is LEVEL 4 autonomy. The rest of the industry is CRAP. They don't have the VISION or COMPUTE Tesla has.
Tesla is 25 watts, compared to a 500-watt NVIDIA system..... lol
Sandy mentioned FLIR in a recent interview on E for Electric. Any thoughts on FLIR and Tesla?
Likely not worth the trouble for the limited utility. Sandy should stick to what he knows.
FLIR is nice technology, but I don't think Tesla is at this stage interested in anything new, since they want as few variables as possible. Adding radar/lidar/FLIR can all give false positives, and then which system is right? It probably requires 10x the computing power to correct a false positive, since you need to double-check everything.
The more sensors / types of technology, the more fine-tuning you have to do, not only on your overall system but also on the detection of all the side systems. Tesla now only has to correct one system: vision. Once they have that at 95+% they might add more systems for overall safety, etc., but you first need a good baseline to know that system X is correct and Y is not. If you have 5 systems running, how do you know which one is correct when you are still fine-tuning them all?
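To illustrate the arbitration problem described above, here is a generic toy sketch of fusing two range estimates and flagging disagreement; it is not Tesla's (or anyone else's) actual method, and all parameters are made-up assumptions:

```python
def fuse_ranges(vision_m, radar_m, vision_sigma=2.0, radar_sigma=0.5, gate=3.0):
    """Toy illustration of the "which sensor is right?" problem.

    Inverse-variance weighting fuses two range estimates when they agree;
    when they differ by more than `gate` combined standard deviations the
    function refuses to fuse and flags the conflict, because nothing in the
    data itself says which sensor is wrong. All numbers are assumptions,
    not anyone's production tuning.
    """
    combined_sigma = (vision_sigma ** 2 + radar_sigma ** 2) ** 0.5
    if abs(vision_m - radar_m) > gate * combined_sigma:
        return None  # conflict: needs contextual or learned arbitration
    w_v, w_r = 1.0 / vision_sigma ** 2, 1.0 / radar_sigma ** 2
    return (w_v * vision_m + w_r * radar_m) / (w_v + w_r)

print(fuse_ranges(40.0, 39.0))  # agreement -> fused estimate ~39.1 m
print(fuse_ranges(40.0, 15.0))  # phantom return -> None (conflict flagged)
```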
Aptera update would be great