Lidar is not supposed to substitute for camera vision, but to augment it with additional information. Sure, you can measure depth with stereo vision, but only if the object has sufficient contrast and illumination. When you're driving in full sun and are about to enter a dark tunnel, cameras can't see inside the tunnel. When there's a white truck against a white background, there may not be enough contrast to see it. At night, cameras can only see where the headlights point. Stereo vision only works when you have multiple cameras looking at the same object, which is a problem for the Tesla because the B-pillar camera is the only one that covers important side-view sections.
"When you're driving in the full sun, and are about to go inside a dark tunnel, cameras can't see inside the tunnel. When there's a white truck against a white background, there may not be enough contrast to see it. At night, cameras can only see where the headlights point."
How do humans handle the above situations? LIDAR? RADAR? Cameras have way better vision than any human, and the optics can instantly adjust to changing light levels FASTER THAN YOU CAN REMOVE YOUR SUNGLASSES AS YOU ENTER THE TUNNEL.
@@richardjerrybest Human eyes have sharper vision than cameras, and a larger dynamic range between light and dark.
True, but I think Elon is just saying that if companies use it as their main vision system, as they are doing, then they'll fail.
@@blahblahblah747 Nobody is using lidar as their main vision.
@@user232349 Uhh, I'm not really sure how the companies actually engineer it or to what extent they use it. But I'm skeptical that Tesla would make this presentation saying that Lidar is bad for mapping an environment or depth perception or whatever if other companies weren't already using it extensively. But yeah, I'm not an expert.
His answer is just an evasive way to say "Lidar is too expensive to be used in your $100k Tesla...", meanwhile processing the streams from the cameras represents a heavy computational workload for real-time identification of features, distance evaluation, and all the matrix algebra involved. Lidar provides real-time distance "measurement" in 3D at very low latency and requires less CPU-intensive processing. Last but not least, lidar works day and night regardless of light, which the cameras feeding the CV module require. How about cameras + Lidar? For added safety and redundancy...
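For reference, the simplest version of the camera-side "distance evaluation" mentioned above is classical stereo: match pixels between a rectified left/right pair, then convert disparity to depth. A minimal sketch follows; the focal length, baseline, and image filenames are assumptions for illustration, and this is not Tesla's actual pipeline (which relies on learned depth rather than block matching).

```python
# Minimal stereo disparity-to-depth sketch (illustrative only).
# Assumes a rectified camera pair; focal_px, baseline_m and filenames are made up.
import numpy as np
import cv2

focal_px = 1000.0    # focal length in pixels (assumed)
baseline_m = 0.3     # distance between the two cameras in metres (assumed)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching gives a disparity map: how far each pixel shifts between views.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Depth from similar triangles: Z = f * B / d, valid only where disparity > 0.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```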
He just wants us to use whatever he develops.
How do the cameras deal with the sun shining bright right into them? Or a car with high beams coming at you?
@@didier_777 The main advantage of cameras is picking up colors and markings, which lidar kind of has a problem with. That matters for traffic lights, stop signs and other types of signs. Lidar is good at measuring depth, which would stop the phantom braking you get from cameras, but it also isn't good in bad weather like rain or dust, which radar wouldn't have a problem with. However, lidar has better image resolution than radar, which could be better for knowing what the car is looking at, though cameras would be even better than that. In conclusion, you need more of a combination of sensors.
@@jamesmylife6578 Ideally you should have some combination of both.
@@didier_777 THATS WHAT IM SAYING… oops that was aggressive 😵💫
1:40 Just because humans perform driving using vision alone doesn't mean an autonomous driving system won't benefit from accurate depth information via lidar. If humans had a physical capability for distance sensing, e.g. echolocation like bats or dolphins, you can bet we would be using that ability for driving.
This is the whole point. USE as many inputs beyond human capability as possible! Vision is the smallest part of the Spectrum. If Superman had Lidar he would use it AS WELL!
Laser distance measurements become corrupted in fog, rain, falling snow and leaves. None of these systems fail safe. TESLA will be sued into bankruptcy.
You got that right! If I could shoot lasers from my eyes I’d do it all the time! Unlike Superman who seems to forget he has that power when lasers from his eyes are most needed.
The only reason not to add it is cost; apparently he cares more about profits than accurate depth measurement and safety.
They didn't say it in this video, but they did use both systems for some time and had to scrap it because they had input-priority conflicts. There will always be the problem of deciding which input matters more at a given moment. You get conflicting inputs, so which one do you assign higher priority to when making the decision? It's unsolvable. Better to figure out vision alone.
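For context on the "conflicting inputs" point: fusion stacks generally don't pick a single winner; they weight each estimate by its uncertainty. A minimal sketch with made-up numbers and a simple independent-Gaussian-noise assumption; real systems use Kalman-style filters, but the idea is the same.

```python
# Inverse-variance weighted fusion of two range estimates (illustrative numbers only).

def fuse(measurements):
    """measurements: list of (value, variance) pairs for the same quantity."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * m for (m, _), w in zip(measurements, weights)) / total
    return value, 1.0 / total   # fused value and fused variance

camera_range = (42.0, 4.0)    # metres; vision depth is noisy at long range (assumed)
lidar_range = (45.5, 0.05)    # metres; direct time-of-flight is much tighter (assumed)

fused, var = fuse([camera_range, lidar_range])
print(f"fused range: {fused:.2f} m (variance {var:.3f})")  # lands close to the lidar value
```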
I think it would be lame to rely on only cameras. Learning through different sources is the way to go
elon was wrong, he is marketing his car but he is gonna fail big time
I agree 100% the focus should be to bring cost down not avoid tech. LiDAR has a place, stereo vision has a place, SFM has a place, hyper/multi spectral imagery has a place. The goal shouldn't be to do it as well as humans but better.
@@AlexRixon LiDAR would increase cost, and for something that doesn't need pinpoint precision, it's pointless. Computer vision with cameras will be better, particularly if they include infrared capability.
For as long as you can afford it, eh?
@@LenaLena-ui1pk hahahahaha 😀 this guy is a game changer. Ask Ford; Elon is becoming unstoppable. How did we let that happen?
Remember that he sold 1 million+ cars with cameras only and the promise to unlock autonomous driving in the future. He simply CAN'T praise lidar now.
It would be costly to produce two self-driving operating systems, and then there would be an argument that one must be less safe, which would look bad for them. They haven't even figured out full self-driving for either one yet.
@@danthelambboy you are talking this in 2024?
Now that the chips are down, Tesla has won, and Musk has proven he was right to drop lidar and go vision-only... and he kept his promise, even if it was a bit more difficult and took more time than he thought.
I don't think it's right to say that either option is the correct way. Safety is the number one focus, and if both systems together can complement each other to provide enhanced safety, then I can see that happening in the near future. There have been numerous crashes and issues using just one or the other, and if used together, they could vastly improve pedestrian/driver safety.
Good point. And this might be a little off topic but I'm starting to view Elon Musk as a wealthier Neil Degrasse Tyson. Self righteous and thinks their knowledge and way is better.
The biggest thing is that the neural net has to be as good as the human brain or better, and it has to run on the car itself since the required reaction time is really short. Using a server-based neural net is not possible, so they will probably have to create better models and make them smaller. Quite a big challenge.
No human deaths with Lidar in over a million miles of driverless vehicles
I don't know... when it comes to 3D scanning, Lidar is faster and way more effective than photogrammetry, which chews through masses of visual data and processes it into a bizarre zombie version of a 3D model. But I do agree with him that cell structure is way ahead of our technology. The problem is that we won't be able to get our technology anywhere close to the level eyesight is at in our lifetime, since that would require fully understanding how our brain works (and getting our consciousness to understand itself without committing suicide).
One of the reasons people get into accidents is that our vision is imperfect. It takes time for our brain to process visual information, and depending on a number of factors it may be hard for us to see clearly that something or someone is in our way. For example, at night it is extremely hard to see pedestrians in dark clothes, especially if the road is badly lit. Tesla Vision inherits this issue and makes it even worse. When they first released Vision, one YouTube channel (I don't remember the name, unfortunately) put a bright red bin in front of their Tesla and checked whether the car would stop by itself. The car didn't recognize the bin and hit it. They had another Tesla with parking sensors and a legacy version of FSD. That car successfully recognized the obstacle and stopped itself in time.
Self-driving cars can potentially become safer than humans primarily BECAUSE they can use Lidar and other sensors.
Exactly. An important issue is that the computer vision approach needs to recognize objects to work, and it doesn't recognize everything. Lidar tells you there is a physical obstacle no matter what it is.
Does Elon know there is an automotive lidar on the market that offers dynamic range at low latency and is under $500 per unit? Hell, you can't even buy a car bumper for that much!
Their argument of vision only is based on the premise that vision alone is 100% accurate.
Do humans drive using eyes or lidar?
@@teknosql4740 And do humans crash while driving using eyes, or lidar?
@@sheminn460 I still prefer high-speed cameras + AI over lidar, so the driving behaviour is similar to a skilled human's.
How do airplanes fly at night or in bad weather? A multitude of redundant and complementary sensors. This is a life-and-death matter; I don't understand how anyone can rationalize going cheap on this.
@@sheminn460 38 percent of crashes come from distracted driving, another 30 percent come from alcohol or narcotics, and about ten percent come from aggressive speeding or driving. So yes, vision is just fine; it's human error that's the problem.
Flies don't use LIDAR.
Humans crash cars all the time and are a horrible benchmark for autonomous vehicles, so saying humans can navigate without lidar is really not a good argument. You can use cameras and such to infer distance, but feed accurate depth data into that dataset and you'll get huge improvements in your learning algorithm. Additionally, Lidar doesn't need black-box ML algorithms to produce the depth data, which makes for a more predictable system, which is great for a safety mechanism. Machine learning algorithms frequently act unpredictably when presented with data that wasn't in their training set, and we have seen crashes in Tesla cars that simply wouldn't have happened if they had lidar data to check the algorithm against. Additionally, Lidar is getting cheaper... it's in consumer electronics, so the idea that it's expensive is evaporating quickly. Maybe Musk is right and once we "solve" computer vision, lidar will be unnecessary, but if we can get cheap, accurate depth data, why not?
Hit the nail on the head! I don't want my car to drive/see like a human, thanks. Humans are dangerous.
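A minimal sketch of the "feed accurate depth into the dataset" idea from the comment above: stack a lidar-derived depth map as a fourth input channel next to RGB, so a perception network no longer has to infer range purely from appearance. The shapes and values below are made up for illustration, not any particular model's input format.

```python
# Building an RGB-D training sample by stacking a depth channel onto the camera frame.
import numpy as np

h, w = 480, 640
rgb = np.random.rand(h, w, 3).astype(np.float32)               # camera frame in [0, 1]
depth_m = np.random.uniform(0, 80, (h, w)).astype(np.float32)  # lidar depth map, metres

depth_norm = depth_m / 80.0                                   # bring depth to a similar scale
rgbd = np.concatenate([rgb, depth_norm[..., None]], axis=-1)  # shape (480, 640, 4)

# A perception network would take 4 input channels instead of 3,
# so range becomes a measured input rather than something inferred from appearance.
print(rgbd.shape)
```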
I'm thinking we're missing out a lot by not having olfactory sensors as part of self-driving cars. It's crucial to modern route planning both to find desirable destinations, and also areas which should be avoided.
What about when it's foggy or dark? How reliable will vision be then? In those scenarios Lidar is necessary. So a fully autonomous vehicle probably needs both lidar and vision for safety and reliability. Am I wrong?? Can anyone answer?
trust the smartest man on the planet when it comes to these issues..
@@joshuadarrington. I hope that's sarcastic
You're totally right. I'm a consultant who works mainly on lidars (Ouster especially), and the need is always the same: cameras are not reliable.
I have no depth perception when driving in fog. It's the scariest time to be behind the wheel.
Cue the video of a Tesla completely yeeting a child-sized doll while all the other cars using lidar detect said doll and come to a full and complete stop.
Bingo
@@spicemelange1077 you guys are such geniuses. What are your thoughts now? 😂
As a 3D artist and Python developer, I see the direction and the need to reach the level of the human eye for vehicular observation and navigation. To do this they won't only have to pursue vision: just as he said "we had to develop our own LIDAR", they will now have to develop their own ocular-grade camera technology to achieve their results.
Given high enough quality cameras and enough compute power in the car itself, you can estimate depth just fine with vision. The issue is they're trying to make this work on cars that are 5 years old or more, which don't have the hardware required for this to work well. Those cars must be retrofitted, or use lidar as an assistant to their camera systems.
This is one of those rare moments where Mr. Musk will be wrong, about LiDAR's usefulness (at least for the short term of the next 20 years). The processing hardware required to crack stereo-vision driving does not exist as of 2024 at a cost that fits in a car; it's not a mere neural-net development task to be "just figured out", you need a different level of hardware power, which he has misgauged. (Even parallel RTX 4090s won't suffice.)
This single foundational mistake has cost Tesla 8 years of delay in FSD tech.
They should have discarded LiDAR "after" the vision-based neural net was fully baked.
How do the cameras do with rain on them, or when fogged up?
they use extra sensors that penetrate snow or rain
@@cihan.a aka lidar
@@goldenamour3730 Nope, LiDAR uses light (infrared) to measure distance to objects. In rain and fog, LiDAR performance degrades, as does the optical camera's. In that scenario a lower-resolution sensor, RADAR, helps with detecting other cars and nearby objects.
switch to human mode? Gotta maintain driving skills somehow right?
There are microwave heaters built into each of the cameras to warm them up and keep snow, rain, and fog off, and they don't have infrared filters, so they can see light that penetrates fog even better than humans can. Basically, what do you do when there's fog? You just drive slower. So it behaves the same, but slightly better. On top of that, it has radar. And not just a simple radar; it's actually a radar camera that's really low resolution but can still build a rough 3D map.
Gotta love the irony that his vision guy quit in the meantime 😂
Andrej Karpathy? he's still a believer in vision, he's talked about it on podcasts recently.
With vision you can solve multiple problems: read the environment, read road signs, etc.
With LiDAR you only get the 3D environment, so you would still need vision in combination for road signs etc.
If im going to sit in the back seat to play on my phone while AI drives me around it better have ALL OF THE ABOVE technology!
do humans use lidar?
@@iKingRPG Humans don't have wrenches for hands either but i sure as hell hope whoever tightened down the nuts on my car didn't do it with their fingers.
how can vision recreate the front of a vehicle it saw from the back only?
3:04 my brain must be broken, they are the same size for me.
Humans drive with eyes, so cars can do it with cameras.
Well, maybe yes, but why not use Lidars in ADDITION to cameras?
If we follow that reasoning: humans don't have eyes in the back of their head, so should we remove the cameras from the back of the car?
Maybe cars will be able to drive with cameras alone, but for sure they will drive better with lidars.
If the goal is to solve driving in good weather, then sure, cameras can solve the problem. But if we want our self-driving cars to go beyond the capabilities of human vision and drive safely in low-visibility situations, then at this point in time Lidar is the best solution.
My prediction is that in the future Teslas will only run inside tunnels, OR Elon will be forced to add Lidars as backup. After all, an extra $300 on a $40,000 car means nothing when it comes to human life.
But the problem with cameras is fast-changing lighting and dark spots, and of course the different angles of an object.
That’s why LiDAR is always better. But Elon is cheap and LiDARs are very expensive
@@za3kazeb true true haha
I think they both have to work together, lidar and vision! What if there's no light? How can vision see??
If there is no light, humans can't drive either. Btw, driving with no lights is absolutely illegal in the US.
@@user-ik6hu1zz8w Absolutely correct! Malfunctions or low light, like tunnels, are what I'm trying to give as examples.
Lidar doesn't need light, it will just be greyscaled.
@@v4sTislow you know about headlights right? They light up the road in front of the car so the cameras can see
Being on par with human drivers is insufficient.
What's the reason for not using both technologies??
Elon Musk likes to compare self driving against human drivers but he should remember that each human driver is legally responsible for his own failures.
Near-perfection is not good enough for automobiles that are mass produced in millions and drive many miles every day. The problem with machine learning-based computer vision is that it can never guarantee its results, and that small chance of failure will eventually cause accidents in many places. Lidar is worth the effort if it can eventually reduce the chance of accidents by a very small amount.
Vision is the future of most robotics. Vision development is extremely important for Optimus. Fantastic presentation. headline could be much better.
And then it rains, or it's dusty, or there is fog. Or you come up against a highly reflective or highly matte surface that imagery cannot perceive depth from.
Lidar also suffers from reflective surfaces!
BTW imma Lidar fan
@dhrubasaha08 I am a lidar fan too, mate. Though I mostly work with airborne laser scanners.
This was 2 years ago, which is a century in tech. Lidar is down to $500, and Tesla is using it on their next-gen cars.
Are they though?
no ! lol
let's compare the fatalities of AP and FSD with Waymo ?? 😂😂😂😂
@@jeffufcfanaticrosenberg tell us how it's going 😂
He is completely missing the point that during biological vision, the eyes move very fast and refocus constantly to get the best perception possible; cameras can't do that. I wish humans had active sensing like lidar, because perception of depth based on vision alone is not very accurate to begin with. Musk is wrong here.
Musk is wrong everywhere
You're missing the point that cameras have the same resolution everywhere, meaning they're potentially far superior to human eyes, given decent resolution. You somehow think it's a benefit that eyes move very fast to "scan" for better perception, but that's a workaround for the fact that we only have "high-res" vision in the center; cameras don't suffer from that flaw, so what are you on about exactly? Furthermore, you can technically get sub-millimeter precision in your 3D reconstruction if the camera frames you build it from are of high enough resolution to permit it. It's very precise; you're just flat-out assuming it's not, because if you had the slightest knowledge of the subject, you'd know perfectly well how precise it is. He is definitely right, given the relevant tech has progressed far enough, and it has. Not a fanboy at all, btw, but you're talking complete BS.
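To put numbers on both sides of this exchange, the standard back-of-envelope error model for stereo depth is dZ ≈ Z² · dd / (f · B): very precise close up, degrading quadratically with distance. The focal length, baseline, and matching error below are assumptions for illustration only, not any real car's rig.

```python
# Rough stereo depth error vs. distance (illustrative parameters).
focal_px = 2000.0      # focal length in pixels
baseline_m = 0.5       # spacing between the two cameras
disp_err_px = 0.25     # sub-pixel disparity matching error

for z in (5.0, 20.0, 50.0, 100.0):
    dz = (z ** 2) / (focal_px * baseline_m) * disp_err_px
    print(f"at {z:5.1f} m: depth error ~ {dz * 100:6.1f} cm")
# ~0.6 cm at 5 m, ~10 cm at 20 m, ~63 cm at 50 m, ~2.5 m at 100 m
```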
Humans have a sense of distance with much faster perception. We humans don't only use our eyes; we also use a lot of reasoning, and we use our eyes to fetch specific input. We move our eyes toward the direction we intend to get input from; it's not the other way round.
LiDAR plus eyes is human.
Also they don't fully understand yet how the human brain processes visual data to get an accurate model of the environment. It's a very difficult problem to solve
What about driving in the dark? Lidar can help identify objects that aren't well lit. I think the more sensors the better, especially as they get cheaper. It's just harder and more expensive to combine all the software with all the different types of sensors.
The compute savings are what make lidar better. You can do a lidar scan with a Resepi lidar unit from Inertial Labs, and the small internal unit can make a 3D point cloud almost instantly: 30 seconds after landing a drone you have a point cloud, compared with 5-6 hours with pictures for 3D mapping. In other words, lidar gives far more detail far quicker... and yes, it is expensive. A Resepi lidar unit costs $30K, so it isn't viable for vehicles yet, and cheap lidar isn't much better than a camera.
Crazy how many people die every day in motor vehicle accidents, yet we lose our minds over loose bolts on a Boeing plane when there are next to no deaths caused by airlines.
People joke that Tesla engineers only account for California weather when designing the car and forget about the rest of the country 😅 They still can't make the wipers work properly yet 😂😂
Definitely, the wiper system is garbage...
He promoted Hyperloop too. People make mistakes.
Wouldn't your car need a freakin' super computer onboard to run this neural net??
😂Got u
Yes you’d need an AI Inference Computer but it’s a smaller computer in terms of resources compared with AI training computers.
I would agree with Elon Musk if they had cameras with performance comparable to the human eye. Unfortunately, no such sensor has been made. For that reason I think lidar is necessary to make up for the missing information. Also, it's not bad at all if the computer has more depth information than a human; there have been accidents where false depth perception was the cause.
I agree that vision should be the primary information source, and for that reason you don't need a state-of-the-art lidar system; a cheaper consumer-grade sensor would be plenty.
Computer vision can do most of it, but it would be nice to have lidar for what he said at the beginning, in a way. I would think six lidars plus cameras would be the best model.
They didn't say it in this video, but they did use both systems for some time and had to scrap it because they had input-priority conflicts. There will always be the problem of deciding which input matters more at a given moment. You get conflicting inputs, so which one do you assign higher priority to when making the decision? It's unsolvable. Better to figure out vision alone.
his attitude is why people burned to death in his cars
LiDAR helps to do a lot of new things that you are not able to do without it. Digital twins are one example of it.
Given most humans can't drive worth their a**... I'd give lidar a try.
VOLVO will be using LIDAR on the EX90. Lidar is not lame. VOLVO knows about safety; trust me on this. Radar ranging is also essential technology, and TESLA should be using it, not removing it and then patenting a SUPER RADAR!
“You did not drive here by shooting lasers from your eyes” speak for yourself Andrej! I shoot lasers from my eyes all the time.
Now that the chips are down, with FSD v13.2, Tesla has proven beyond any doubt that their vision-based system was the best way to go! GM just officially dropped Cruise yesterday, Ford and BMW are publicly congratulating Tesla... Waymo will persist for a few more years before throwing in the towel.
Musk, YET AGAIN, was right years ahead!
I hope that one day cameras alone will allow full autonomous driving, but why not use LIDAR until we reach that point? Moreover, I think it could be a great redundant feature, since human eyesight can sometimes be confused, while with LIDAR you "can't get confused". Furthermore, if the problem is cost, maybe the way to go for autonomous vehicles is semi-public transportation like Waymo is doing, so that you have really expensive and safe cars whose cost is split across lots of users (though it would be nice to have private autonomous cars too).
One thing people don't understand: lidar (infrared) and vision (visible light) are the same thing, just at different wavelengths!
Exactly... I think radio waves (RADAR) are what most people are thinking of.
Nope. They just don't use the same method or essentially similar technologies. Cameras do not use emitters of visible light to measure distance (if you include autofocusing, then yes).
LiDAR is a system composed of two main parts, an emitter and a sensor, and it operates at specific laser wavelengths (ultraviolet, visible, or near-infrared), not just one infrared band. LiDAR is similar in concept to RADAR, while a camera, or vision, is just a cluster of sensor cells that image visible light.
Uhhh, no... Vision takes in light waves from the surrounding area and only works when it is light outside and nothing is obstructing the visible light spectrum.
Lidar shoots out a very specific wavelength and then detects when that signal comes back to determine the range, and then uses the Doppler effect to determine the velocity (well, the radial component of it).
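A minimal sketch of the two measurements described above: range from the pulse's round-trip time, and radial velocity from the Doppler shift of the return. The numbers are illustrative only; real automotive lidars (pulsed or FMCW) differ in the details.

```python
# Time-of-flight range and Doppler radial velocity (illustrative values).
C = 299_792_458.0   # speed of light, m/s

def tof_range(round_trip_s: float) -> float:
    """Range from how long the pulse takes to come back (there and back, hence /2)."""
    return C * round_trip_s / 2.0

def doppler_radial_velocity(wavelength_m: float, freq_shift_hz: float) -> float:
    """Radial velocity from the Doppler frequency shift of the returned light."""
    return wavelength_m * freq_shift_hz / 2.0   # /2 again because of the round trip

print(tof_range(6.67e-7))                         # ~100 m target
print(doppler_radial_velocity(1550e-9, 12.9e6))   # ~10 m/s closing speed
```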
Even my robot vacuum has lidar
What Elon Musk said here will come back to Tesla FSD like a BOOMERANG. For Level 5 and robotaxis, going multi-modal is the only way to be sure against vision sensors providing ill-formed data in bad weather, even if the DNN is robust to noise and missing values. In this regard, naming the current level "FSD" is misleading to consumers.
Cameras depend on light; if excessive light hits the camera, it fails.
Normal glasses weren't made to see better in the dark, only infrared goggles. I mean, LIDAR was developed for air surveillance and reconnaissance, not as a sensor for a self-driving car.
If Lidar's cost comes down to just a couple thousand, is it still stupid?
If, like Musk says, "vision is solved", then I think so, yes. LIDAR is short range... and basically light detection, unlike RADAR (radio-wave detection).
@@delmanpronto9374 How is 500m ”short range”?
@@intelligentunite4557 when you compare that to RADAR it's nothing. solving "vision" basically involves seeing 500 m or more.
@@delmanpronto9374 500 meters is about a third of a mile. More than enough range for the needs of Lidar, as the Lidar is not the vision system; it just needs to verify that the path vision has plotted will not cause a collision with objects whose size, speed or clearance vision guessed incorrectly. Vision will still need to read the lights/signs/road and traffic directions and follow them correctly.
@@DanDeGaston It's also half a km. Zoom-lens cameras can see a lot further than 500 m. It's not the "needs of lidar" we are concerned with, but the needs of the car and its decision-making with respect to its movement.
"Vision will still need to read the lights/signs/road/ and traffic directions and follow this correctly."
Exactly. Lidar isn't the solution to that either; good AI is. If cameras do the job of lidar and better, it's as Musk says: stupid.
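Some rough numbers to put the 500 m figure in context; the speed, reaction time, and deceleration below are illustrative assumptions, not measured values.

```python
# How much warning 500 m of sensing range buys at highway speed (rough assumptions).
speed_kmh = 120.0
speed_ms = speed_kmh / 3.6             # ~33.3 m/s

sensing_range_m = 500.0
print(f"look-ahead: {sensing_range_m / speed_ms:.1f} s")   # ~15 s of warning

# Very rough stopping distance: 1 s reaction plus braking at ~7 m/s^2 on dry asphalt.
reaction_s, decel = 1.0, 7.0
stop_m = speed_ms * reaction_s + speed_ms ** 2 / (2 * decel)
print(f"stopping distance ~ {stop_m:.0f} m")               # ~113 m, well under 500 m
```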
There is no doubt lidar and cameras can see what the human eye sees. The difference is in the processing of that raw data and the response that is generated from the data.
Wtf are these guys talking about? I used to listen to Elon Musk, but this is so transparently cringe. Companies using LiDAR are trying to create fully autonomous vehicles, and there are countless accounts of hilarious Tesla self-driving malfunctions. Obviously I can see, if I drove there. Thanks for the anatomy lesson on eyesight. This makes me want to invest in LiDAR because of how scared these dudes are of it to produce this cringe.
What if it’s pitch black dark?
layyyymme
I think that is too broad; you have to use the right tool for the right job. For fast, long-range work lidar is good, like drone aerial photography, but for deep underground or behind buildings, ultrasound, radio waves, microwaves and nano aloha are unbeatable and harmless, so each is designed with its function in mind.
Lidar can tell me if im actually heading into a tunnel or if the roadrunner painted visually accurate tunnel in front of my Wile E Coyote ass.
Visual information like symbols, traffic, road signs, pedestrians
yes
That should be visual. and it is my job to see it.
but lidar aids that.
Lidar is objective.
Lidar says, with certainty, something is truly there, and isn't an illusion or mirage.
And there is a measurement between source, and target.
Where with a camera, there isn't.
Cameras help me see an image
with immediacy
an image I need to see now, and with clarity to extended ranges
Lidar needs to work in TANDEM with my camera
to tell me how far away everything is, as lidar alone cannot produce visual clarity like a camera can, but Lidar provides objectivity in what its lasers CAN hit.
so that my car can respond to it
My car is my co-pilot in this endeavor. If I get something wrong, the car gets it right.
If the car gets something wrong, my judgement corrects it.
And together we form a complete brain
Removing LIDAR is like removing touch from your senses.
You are going to notice that sense being missing
And wish you had it once it's gone.
It would be like if your hearing went from stereo to mono, and you could no longer tell relative directions and distances of sound sources.
It would be maddening to not have a vast combination of complex senses after previously having them.
I will fit lidar personally to my car and radar and rockets and EVERYTHING ELSE!
@@hivewatch EVERYTHING!
The point of computers and tech is to be able to do what humans cannot, so why is there this emphasis on making vision systems behave exactly like we do? Computer vision isn't great. I've heard of Teslas crashing into motorcycles because the computer vision thinks the two very close, low-down tail lights on a motorcycle, especially a touring motorcycle, are in fact a distant car. Absolute joke; LiDAR would have seen this.
With optics and image analysis you get the probability of an object, with lidar you get solid data. Too many bugs with visual perception.
I agree and disagree.
1) Developing AI camera-based computer vision equal to human capabilities is essential. Using LiDAR as a crutch is for losers
2) Driving down the price of all sensors to a few cents/dollars is essential
2b) Once the price of a sensor is low enough to be negligible, adding it to a tech stack increases future capabilities and use cases
The price won't decrease without using the sensor widely. That's what today's companies are doing. They create demand for LIDAR so that more companies build them which leads to competition and reduced costs. If nobody would use LIDAR outside the military it would never get cheap. LIDAR is not for losers.
And let's all crash our Teslas into box trucks together ❤️
@@REAL-UNKNOWN-SHINOBI Happens frequently enough with regular cars. Might be cheaper using a Camry
@@camadams9149 Camrys hold their value
@@Wise__guy They do. I exclusively buy Camrys. Reliable, affordable cars
If it’s meant to be safer than human drivers it should use more than just vision.
LiDAR is corrupted by fog, rain, falling leaves and snow. FSD is also flawed. What TESLA needs is a dedicated AI brain in each car's system.
Or both.
Lidars can filter out all of this noise. In fact, I programmed the perception and the filters for a robot that works great even in heavy snow.
@@RSC2194 If it is not factory approved, the insurance company will deny claims. Patent your system and receive a Hartford seal, then get back to me.
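A minimal sketch of the kind of snow filtering mentioned two comments up: snowflakes tend to show up as sparse, isolated lidar returns, so one common trick is to drop points with too few neighbours within a small radius. The parameters and the fake point cloud below are illustrative; real stacks also use return intensity and temporal consistency.

```python
# Radius-based outlier removal for a lidar point cloud (illustrative parameters).
import numpy as np
from scipy.spatial import cKDTree

def remove_sparse_returns(points, radius=0.3, min_neighbors=3):
    """points: (N, 3) array of x, y, z returns; keeps points with enough close neighbours."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, r=radius)
    counts = np.array([len(idx) - 1 for idx in neighbor_lists])  # -1 excludes the point itself
    return points[counts >= min_neighbors]

# Fake cloud: a dense "wall" of returns plus scattered snow-like noise.
rng = np.random.default_rng(0)
wall = rng.normal([10, 0, 1], [0.05, 2.0, 0.5], size=(2000, 3))
snow = rng.uniform([0, -5, 0], [15, 5, 3], size=(200, 3))
cloud = np.vstack([wall, snow])

filtered = remove_sparse_returns(cloud)
print(len(cloud), "->", len(filtered))   # most isolated snow points are dropped
```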
The question: if humans could shoot lasers and get a depth map on top of color vision, how would the brain handle it?
Why did evolution skip that capability entirely? Or maybe it was developed, but it was a failure and so disappeared.
Lidar isn't even expensive, and is obviously in many ways superior to stereo camera vision for measuring distance, etc
Those guys are clueless. Airplanes have VFR for clear skies and IFR when no vision is possible. The more sensors, the better.
What about infrared and other wavelengths of light? They provide all the characteristics of traditional human vision
Meanwhile, distance recognition and warning in Teslas without Lidar works like crap at night.
Lame argument. Animals get around just fine without wheels as a mode of transportation; let's get rid of the wheels on the car too!
I agree, police need to stop being citation pimps for the county, respond to accidents, stop giving tickets, I’m still going to speed.
The statement that no one uses lidar to navigate their body is facetious. Of course we don't, but we also aren't made of inorganic material. A computer and a brain are two very different things working on two very different spectrums.
I dislike the modern internet hyperbole of using words like "slam" or "hate." Just because Elon Musk says that LIDAR is a bad idea doesn't mean he "hates" it or is "slamming" it. Maybe to investors hopped up on Adderall and cocaine it may seem that way, but he's casually saying it's a bad idea, that we shouldn't depend on it, and that we shouldn't give up on the idea of autonomous cars.
He had a sarcastic and rude response before the guy even asked the question. Clearly he feels they are a threat to his business
@@S0L12D3 Well he thinks they are stupid. I agree with him.
I seriously doubt someone as unqualified as Elon "spearheaded" creating SpaceX's LiDAR. What he means is the same thing Steve Jobs meant: someone else designed it and he announced it.
Rule #1 of becoming a billionaire: let others do the work, then you just announce the finished product and take credit for it 😂
He recently bought $2 million worth of LIDAR from Luminar Technologies.
Lies, stop lying 🤡🤡
Bruh 🤦🏿♂️ even a kid would tell you redundancy is best. They really got lost in the sauce of "once you solve vision, you solve autonomy." My brother in Christ, you're not getting full self-driving if you're stubborn about your data streams.
But cameras cannot see well in low-light/rain/fog conditions, and LiDAR has become much cheaper now due to technological advancement.
If LiDAR = lame, then ElonMusk = lame
very scientific explanation my sir.
Guaranteed Elon uses LiDAR
The people talking about Lidar clearly didn't watch the video. If you watch the very end, he states the benefits of Lidar.
I like Elon, but just because he’s smart doesn’t mean he’s always right. Going cheap isn’t always the best idea.
Watching this video as Xpeng just announced it's dropping Lidar.
Compare Bat vs Human navigation ability hmmm
What about LIDAR seeing at night ... in the dark?
Yes
Specifically speaking of navigation in a car? What if I am on a sailboat entering a channel of ever-changing currents and shoals? I fly a drone over the entire sweep of the channel, overlaid with the contours of an existing navigational map.
Another example: a passive-income owner of 80 acres of timber wants a real-time summary of how many trees, what volume, and how many board feet, with an almost perfect analysis of volume, count and board feet.
So that's just one of many use cases, and saying it is lame is like saying docking with the docking station is lame.
The practical value is real-time mapping of growth, change and passive fingerprinting for the mariners, woodland owners, farmers, and developers of the future cities that did not have the supply chain or the trail surveyor's map.
So the title may be invalid, and lame itself, for saying lidar is lame; the value for surveying damage, growth or change far outweighs robotic navigation for space-station docking or car driving. Great to know it is lame for your products. Now what about those who work to feed, house, and navigate the channels of life?
Lidar is expensive but a Mars mission is cheap!
that didn't age well knowing TESLA now has LiDAR
the future was fusion all along.
Notice how he scratches his nose after saying “Mark my words” - Classic giveaway that he just makes up stuff as he goes along
This shows intent. Profit motive with intent. I could build a case with this.
Very Interesting stuff
elon musk is really one to talk about “expensive and unnecessary”
Imagine an NFL quarterback retrofitted with LiDAR and software uploaded into his brain to complement his eyesight. It's a ridiculous example, but you'd get complete passes all day.
So basically lidar is long-distance braille.
He talks as if it's only for important matters, not for the average family with babies on board.
Instead, the smart use of Lidar is to win battles 🎉. If every country gets lidar in its military, plus a nuke, no one can threaten you.