There’s no mystery as to why FSD isn’t available on CyberTruck and Semi.
They need enough vehicles on the road to collect data for native training.
HW4 is running v12 “in emulation”, which almost certainly means they’re downsampling, translating, and rate-limiting the HW4 camera input from whatever HW4 is capable of down to match HW3’s.
Why would you rate limit an emulator? It is already slow.
So for every model they release it will take 10 years to train the ai, lol.
@@flubalubaful Now that they’re no longer compute bound it won’t take long.
V12 trained for 1 year, for instance.
@@tedmoss The AI is trained to give correct results at 36 FPS, and would give wrong answers if it ran faster.
HW3 cameras are 36 FPS, HW4 likely higher.
This isn’t emulation in the form of one processor emulating another, which is slow.
This emulation is of the camera resolution and frame rate.
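For what it's worth, that kind of camera-format emulation is easy to picture in code. Here's a minimal Python sketch of the idea; the resolutions and frame rates below are illustrative assumptions, not Tesla's published numbers:

```python
import numpy as np

# Hypothetical HW4 -> HW3 camera emulation sketch. The 2896x1876 / 1280x960
# resolutions and the 48 -> 36 FPS figures are assumptions for illustration.
HW4_SHAPE = (1876, 2896)   # assumed HW4 sensor rows, cols
HW3_SHAPE = (960, 1280)    # assumed HW3 sensor rows, cols
HW4_FPS, HW3_FPS = 48, 36  # assumed frame rates

def downsample_frame(frame: np.ndarray, out_shape) -> np.ndarray:
    """Nearest-neighbor downsample: pick one source pixel per target pixel."""
    rows = np.arange(out_shape[0]) * frame.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * frame.shape[1] // out_shape[1]
    return frame[np.ix_(rows, cols)]

def rate_limit(frames, src_fps: int, dst_fps: int):
    """Drop frames so a src_fps stream approximates a dst_fps stream."""
    next_emit = 0.0
    for i, frame in enumerate(frames):
        t = i / src_fps
        if t >= next_emit:
            next_emit += 1.0 / dst_fps
            yield frame

# Usage: feed fake HW4 frames through the emulation path.
hw4_stream = (np.zeros(HW4_SHAPE, dtype=np.uint8) for _ in range(48))
hw3_like = [downsample_frame(f, HW3_SHAPE)
            for f in rate_limit(hw4_stream, HW4_FPS, HW3_FPS)]
print(len(hw3_like), hw3_like[0].shape)  # 36 frames of (960, 1280)
```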
There are a lot of hypotheticals in your assessment - I’d be very surprised if Tesla hasn’t thought this through…
Doubtful, in my opinion, that a software engineer would tie their software to a hard configuration of cameras. They would define a common input format to be digested by the software.
Not really software but AI. That stuff is a stickler for setup. If the setup is different, the data from other setups is meaningless. They need to figure out translation.
This. Hans is completely wrong about the software being dependent on the camera placement. Tesla chose its camera placement purely because they believe it provides the best depth of field and field of view, giving the software the best input for interpreting the images. The software itself won't actually care where the cameras are placed. What Hans in the video should have REALLY talked about is the real "camera-gate" issue: whether Tesla removing LIDAR and relying solely on cameras was a mistake or not.
Elon promised FSD that's a level 5 system. People bought based on this promise. 12k is a deal breaker for best case level 3.
Agreed, price will come down or you can rent the software per month.
I know! That's why I paid for it. But, they haven't delivered. Why won't they give me money back?
@@Christian-fx9ur Why would they? You knew it was a beta product…
have you experience beta 12.3? I bet you have a better idea....
What makes you conclude the promise will never be delivered?
Elon was talking no steering wheel. That was serious expectation.
The current stack is running in emulation mode on the new hardware, and now they're giving a free 30-day trial to all new cars (new hardware) so those cars can start feeding the new hardware's data engine.
Level 5 with constraints is called level 4.
💯
I realized this while editing. Should have put a little text note on screen.
Doesn't seem at all logical to me that the camera locations would be a problem. I remember James Douma saying that the first stage of the processing was to knit the cameras into a 360-degree view. Everything else downstream, all the heavy processing, is done on that combined view. Therefore only the stitching step needs to be reworked for a new car. Frankly, it would be unbelievably short-sighted to design the system to be impacted by the exact camera placement: they have five cars themselves, plus older versions, and they're clearly aiming to licence this to others once complete.
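If that stitching-layer idea is right, it's roughly this shape: each camera's pixels get mapped into one shared surround view, and only a small per-vehicle table changes between cars. A toy Python sketch with invented mounting angles (not Tesla's real rig):

```python
import numpy as np

# Each camera contributes columns to a shared 360-degree yaw panorama; the
# only per-model data is each camera's mounting yaw and horizontal FOV.
CAMERA_RIG = {              # name: (mounting yaw deg, horizontal FOV deg)
    "front_main":   (0.0,    50.0),
    "front_wide":   (0.0,   120.0),
    "left_pillar":  (-90.0,  90.0),
    "right_pillar": (90.0,   90.0),
    "rear":         (180.0, 130.0),
}

def to_panorama_columns(cam_yaw, cam_fov, cam_width, pano_width=3600):
    """Map each source column of one camera to a column of a 0.1-deg panorama."""
    col_angles = cam_yaw + (np.arange(cam_width) / (cam_width - 1) - 0.5) * cam_fov
    return np.round((col_angles % 360.0) * pano_width / 360.0).astype(int) % pano_width

# Downstream planning would see only the panorama; swapping CAMERA_RIG
# re-targets the same downstream stack to a differently instrumented vehicle.
cols = to_panorama_columns(*CAMERA_RIG["left_pillar"], cam_width=1280)
print(cols.min(), cols.max())  # this camera fills panorama columns 2250..3150
```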
Could have added extra cameras at any point to future proof. No doubt carefully considered. Probably not critical.
My rear camera (Y) is always getting dirty and unusable, and is also very low resolution (HW3). I don't think it will cut it unless an auto-cleaning solution is fitted.
Tesla first needs to fix navigation and maps. Most of the time my FSD works fine, except it does weird stuff due to navigation.
Google it, or not.
The issue with fixed cameras in multiple locations is the synchronization of the multiple cameras at a high frame rate. This is similar to using a focal-plane shutter in digital cameras, which are transitioning to a global shutter to alleviate the issue. Forward-facing wide and narrow angle cameras with depth perception are a must. Also needed is a 360-degree downward-tilted camera feed, to make FSD even better. With the cost of high-res video sensors coming down, this could be implemented with a camera focused on a parabolic mirror facing outward, mounted on the vehicle top (I know it is not aesthetically pleasing, but that can be mitigated), with subsequent hardware processing at the module level to correct distortions and scale down the image. Just some 💭
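The synchronization concern can be illustrated with a tiny sketch: for each common tick, pick the frame from each camera nearest in time and measure the residual skew. Timestamps below are fabricated for illustration; real rigs typically use hardware triggers instead:

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the timestamp closest to t in a sorted list."""
    i = bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    return min(candidates, key=lambda j: abs(timestamps[j] - t))

cam_a = [0.000, 0.0278, 0.0556, 0.0833]   # ~36 FPS stream
cam_b = [0.004, 0.0290, 0.0570, 0.0850]   # same rate, skewed clock

for tick in (0.0, 1 / 36, 2 / 36):
    ia, ib = nearest(cam_a, tick), nearest(cam_b, tick)
    skew_ms = abs(cam_a[ia] - cam_b[ib]) * 1000
    print(f"tick {tick:.4f}s -> frames {ia},{ib}, residual skew {skew_ms:.1f} ms")
```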
Everyone forgets Elon's idea, we have two eyes.
@@tedmoss If only; we can do better than them in every way 🤔. We still don't know the eye's embedded intelligence, beyond it being a passive apparatus made up of a variable-focus, gimbal-mounted, self-cleaning/self-stabilizing imaging system with variable aperture control. Also, we hardly process images at full resolution or even in full color for the neural vision training that FSD goes through to generate the black-box model. Apart from that, the created model is emulated on a 2D planar silicon processor, unlike real neurons in 3D space (where interconnection is dominant), which communicate uniquely through the firing of action potentials to do the computation. Just some 💭
So let me get this straight.
You are asserting, not only that unsupervised FSD with the current camera placement is impossible, but also that they knew this years ago?
For such a substantial claim there needs to be substantial evidence. An opinion isn't evidence.
I don't know how they are going to solve the resolution problem. If HW3 cameras can't clearly read signs from across the intersection, or roadside digital signage, that could be a blocker to Level 4/5... maybe they'll swap out the cameras on existing cars with the FSD package? Can the HW3 FSD computer even process the increased pixel count? Only Tesla knows, I guess.
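A quick back-of-envelope supports the worry. Assuming a 1280-pixel-wide sensor with a 50-degree horizontal FOV (made-up numbers, since real specs aren't public), a 75 cm sign shrinks fast with distance:

```python
import math

SENSOR_WIDTH_PX = 1280   # assumed sensor width
HFOV_DEG = 50.0          # assumed horizontal field of view

def pixels_on_target(target_width_m, distance_m):
    """Pixels subtended by a flat target of given width at given range."""
    angle_deg = math.degrees(2 * math.atan(target_width_m / (2 * distance_m)))
    return angle_deg * SENSOR_WIDTH_PX / HFOV_DEG

for d in (20, 40, 80):
    print(f"{d} m: {pixels_on_target(0.75, d):.1f} px across the sign")
# At 80 m the sign spans only ~14 px, which is why resolution matters here.
```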
It's a compute and data constraint. Data is vehicle-dependent and does not generalize, due to overoptimization on HW3 and HW4.
It may be hard to get the AI to learn off this, but I think having a 180-degree camera at the front and back would provide so much more awareness of its surroundings than the current set of cameras.
Sure, the AI is trained off all those images and videos from all the cameras, but it still feels highly limiting with that being the only field of view the car has to work with. So if vision is going to get to the level of full autonomy, it needs to have the field of view of a hawk.
All you gotta do is train the software to make its own grid that it's driving on, warped to the view that it sees, kind of like the backup camera in Teslas, which has warped backup lines that correspond to a real location behind you where your wheels will end up, even though the camera angle is very wide and decently distorted. A similar technique could be used for the AI with a 180-degree front and rear camera, which would allow it to have a much wider view of its surroundings.
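The warped-guide-lines trick described here is standard geometry: project known ground points into the camera image through the camera model. A minimal pinhole sketch (intrinsics and mounting height invented; a real backup camera adds fisheye distortion on top of this):

```python
import numpy as np

# Camera frame convention: x right, y down, z forward; the ground plane sits
# CAM_HEIGHT meters below the camera. All numbers are illustrative.
K = np.array([[500.0,   0.0, 640.0],   # assumed focal lengths / principal point
              [  0.0, 500.0, 360.0],
              [  0.0,   0.0,   1.0]])
CAM_HEIGHT = 1.2                        # assumed camera height above ground, m

def project_ground_point(x_right, z_forward):
    """Pinhole projection of a ground-plane point into pixel coordinates."""
    p_cam = np.array([x_right, CAM_HEIGHT, z_forward])
    u, v, w = K @ p_cam
    return u / w, v / w

# A straight wheel track on the ground becomes a converging curve in pixels:
for z in (2.0, 4.0, 8.0):
    px = tuple(round(c) for c in project_ground_point(0.8, z))
    print(f"{z} m ahead -> pixel {px}")
```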
Dang bro, if that's all you have to do, then why have you not done it.. zzzz
@@alexjeffs7092 not "all you have to do" but certainly should be better than the current front + rear camera setup
I think the driving dynamics are a more likely cause than the camera positions. I'd like to know what Tesla says they plan for rollout on the CT...
6:20 - you need to add additional cameras while keeping the old setup as well, so you can use the old 8-camera setup while simultaneously training with more cameras. That way you can still ship the new car with FSD in its current state and then improve upon it over time as those additional cameras become useful.
Isn’t it true that “the training data they’re using for FSD depends on the locations of the cameras”, because camera locations determine the data they get from the cameras?
Yes. It gets trained off a certain perspective.
It may be hard to get the AI to learn off this, but I think having a 180-degree camera at the front and 180 degrees on the back would provide so much more awareness of its surroundings than the current set of cameras. Like yeah, sure, the AI is trained off all those images and videos from all the cameras, but it still feels highly limiting. So if vision is going to get to the level of full autonomy, it needs to have the field of view of a hawk.
The demo pictures plainly show the data is not constrained by the position of the cameras.
@@tedmoss Possibly. But if that were true couldn’t you put them all on the roof pointing straight up?
Our biological neural network is “trained” to judge distance from the time we could first see, via reinforcement learning (sometimes painfully), by processing visual data from two biological “cameras” at fixed positions and distances from each other.
If you now fed the ocular nerve from your right eye visual data from the perspective of the middle of your back your nn wouldn’t be able to function effectively at all.
Doesn’t training a single video-in-controls-out neural net similarly rely on consistency between nn training video input and nn operational video input?
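There's a concrete formula behind the two-eyes analogy: stereo depth from disparity, z = f × B / d. A tiny sketch with assumed numbers shows why moving a "camera/eye" breaks a mapping learned at a fixed baseline:

```python
# Stereo depth from disparity: z = f * B / d. The focal length and baseline
# values are illustrative, not any production camera's.
f_px = 800.0        # assumed focal length in pixels
baseline_m = 0.30   # assumed distance between the two "eyes"

def depth_from_disparity(disparity_px):
    """Range to a point whose images differ by disparity_px between the eyes."""
    return f_px * baseline_m / disparity_px

print(depth_from_disparity(12.0))  # 20.0 m at the trained baseline
# Move one "eye" (change the baseline) and every learned disparity -> depth
# mapping is wrong by the same ratio, which is the consistency point above:
print(0.45 / 0.30)  # a 1.5x baseline change scales all inferred depths by 1.5x
```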
8:30 the b-pillar is *NOT* where your head is. It is behind your head, and it's worse if you (the human) move your head slightly forward for tougher corners. Chuck did GoPro placement examples of this. Many others have demonstrated corners they have to go around daily that do not allow safe creeping.
According to Chuck Cook, his hardware 4 car is running in hardware 3 emulation for FSD 12.3.3 to work.
How would FSD improve if it were fully utilizing HW4?
FSD works pretty darn good. 👍
Just as good as what other manufacturers already have at Level 4, and the new Nvidia chips etc. are going to be a real game changer, possibly the hardware needed to eventually get manufacturers to Level 5 and beyond.
Someone recently said all FSD is running in HW3 mode and HW4 cars downgrade the video to HW3 resolution. So it's quite plausible they are reluctant to change the config. And the moment they develop for HW4 they either have to maintain two branches or say goodbye to HW3 and leave those cars at the current version, like they did with HW2 and HW1. I actually low-key think I could make robotaxi level out of HW2, within reason :) And fun fact, HW1 is 10 years old now.
I think you guys will be having this same argument in 10 years time!
Accidents and deaths have been steady for a decade around 40,000 per year (not rising ridiculously each year)
Elon should honestly separate the robotaxi use case from daily FSD usage and charge differently.
CT FSD may well need new data to learn off of, and then what of the next gen? Something like a stumpy MY where some of the parts are the same, to reduce costs. The M3 also saves money, and they may well be able to use S3XY data for its FSD much more easily. All very interesting and changing at a lightning rate. Cheers.
For humans to properly supervise FSD it’s important psychologically for the human to have a better view of the road, otherwise the tendency is for the human to give up and be more passive.
Sounds like opinion being presented as fact.
What if... FSD gets so good that camera placement becomes flexible? Then they can implement the optimized camera placement on the robotaxi.
Yes and no.
It comes down to compute cycles. Different sensors, different vehicles, they have to retrain the model.
The sensors see in all directions. There is no need to agonize over sensor placement. Collect data. Train on it.
HW3 is just the inference engine. It has enough headroom for autonomy - but the model will gain in complexity over time. So more headroom will enable more functionality.
FSD licensing will only make sense with high volumes. Training compute requirements for a new model are not trivial. At this time, only BYD has enough volume to justify a license - but they are price-sensitive. They can't afford it. The others all lose money on sales. They are nowhere near ready to license FSD.
FSD done right will be able to know where the car is in relation to the factors you discussed, without any NN training. Plug and play, any car, any cameras. This will likely require another FSD rewrite.
@@EMichaelBall guessing how LLMs will muscle into the autonomy space, and when, is kinda challenging atm.
But I do think it's likely to happen.
Could go one of two ways I can imagine: the single-mode (video) model at Tesla might serve core functionality with an LLM front-end tacked on; or an LLM's video processing and autolabeling chops might improve to the point where it's better.
Either way, we'll gain virtual chauffeurs. Should be cool!
Tesla's valuation hype was built on its FSD. The only reason he chose not to go with LIDAR was the cost. Over the past years LIDAR prices have dropped significantly, and they will continue to get cheaper. He could have used LIDAR, had a true autonomous vehicle, and that would have justified the $12K premium for that option.
@@cjjuszczak Now that is an explanation I respect. I learned from it. thank you!
Lidar can’t see through fog, dust, or heavy rain any better than cameras can. Infrared can do these things, and assuming the cameras don’t have infrared filters on them, no hardware changes should be required. Can the FSD computer process the extra data?
The reason for not using LIDAR has been openly stated, you not knowing what that was tells everyone here that your comments are worthless.
@@fredbloggs5902 you smell of a perma tesla bull
Proof is in the pudding, my friend. Waymo has driverless cars. Tesla does not. Tesla is nowhere close to driverless FSD. I own a Tesla. I know how unreliable and what a piece of garbage Tesla's FSD is. Maybe call it advanced autopilot. But not FSD.
The reason for this sudden FSD hype is that Tesla got its ass handed to it on deliveries, and they are hoping to prop the stock up with the FSD bull crap. FSD even as a $199 subscription has failed. It brings no significant revenue to them. They have been working for 10 years on this crap and still have not a single driverless car. And those ignorant idiots who think their FSD can truly just drive itself while they nap or play a video game, most end up dead in horrible crashes.
FSD 12.3 is running on hardware 3 right now, so it'll be able to reach full autonomy.
Tesla is maximizing every stage as we go along, in daily progress over millions of vehicles. Every month, or even every day, is a stage. This is a data-progress situation we are experiencing.
Rear wheel steering is messing with FSD in the Cybertruck.
It is hard to believe that FSD is strongly dependent on the position of the cameras in different car models. They can probably solve this, if it's a major issue, with some calibration: do some transformation of the camera feed so the input to FSD from each camera is the same in each model. Also it is hard to believe that someone as brilliant as Musk never asked the team to do some optimization of camera placement way back years ago. I think these speculations are baseless.
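The calibration idea sketched here would amount to a fixed per-model warp into a canonical view. A toy example with a made-up 3x3 homography (not a real calibration):

```python
import numpy as np

# A fixed per-model 3x3 homography re-maps pixels from the installed camera's
# view into a canonical view the network was trained on. This matrix is a
# made-up example (slight rotation + shift), not a real calibration.
H_MODEL_TO_CANONICAL = np.array([
    [0.998, -0.020, 12.0],
    [0.020,  0.998, -8.0],
    [0.0,    0.0,    1.0],
])

def remap_pixel(H, u, v):
    """Apply a homography to one pixel coordinate."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Per-model calibration would only swap H; the network input stays canonical.
print(remap_pixel(H_MODEL_TO_CANONICAL, 640.0, 480.0))
```

Worth noting, though: a fixed warp can fully compensate only rotation and lens differences. A camera moved to a different position sees genuinely different geometry (parallax), which is the crux of the disagreement in this thread.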
It's hard to believe baseless speculations.
Training on hardware 4 is already occurring. They are running it in emulation mode. Elon even said that hardware 4 would be behind HW3 for a while. I agree that the cameras are not placed optimally right now. Like I've said before, the cameras should be placed like a prey animal has its eyes, not like a predator would have its eyes. A fish has 2 eyes and can see in all directions at once.
All camera systems are susceptible to being blinded by a laser pointer directed into the lens!
So are all eyeball systems. What's your point?
Retrofit auxiliary self-deploying extension cameras. Also add a courtesy "beep beep" horn for cyclists and pedestrians.
Now that Tesla has moved to an end-to-end neural network, the camera placements are not a significant legacy constraint. With compute likely already 10x from a few months ago, 10x again before the end of this year, and 10x again next year, they will easily be able to train NNs to integrate other camera views. And a foundation of this training can be done in simulation, even without cameras in the fleet. Compared to the overall potential value of robotaxi, or even its R&D costs, this will not be a major cost.

In fact it will likely be better for generalizing the NN if they train a foundation model to integrate whatever views are available, from a wide range of camera types and positions. This way the NN will be "self-calibrating", quickly figuring out where each camera is in relation to the vehicle. By the same token it will be better to train the foundation NN on a variety of performance characteristics (from the response of steering, accelerator & brakes, to tire traction, center of gravity, and so on). This way, just like a human, the NN will automatically acclimate itself to whatever cameras (and other sensors) it has and whatever car it is driving.

This greater generality will take more training compute (Dojo, et al) but the generalizations will likely only slightly increase inference compute (sometimes forcing an NN to learn to solve a more generalized problem can ultimately even reduce the required inference compute, because it learns a more general principle more deeply rather than having to learn several distinct cases).
tl;dr: don't worry about the cameras
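One way to picture that "self-calibrating" idea is a network that takes each camera's pose as an input alongside its image features, so one model can serve many rigs. A toy NumPy sketch; the shapes, poses, and fusion are purely illustrative, not any real architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_camera(features: np.ndarray, pose_xyzypr: np.ndarray, W_f, W_p):
    """Fuse image features with an embedding of the camera's mounting pose."""
    return np.tanh(W_f @ features + W_p @ pose_xyzypr)

FEAT, POSE, HID = 16, 6, 8        # toy dimensions
W_f = rng.normal(size=(HID, FEAT))  # stand-in for learned feature weights
W_p = rng.normal(size=(HID, POSE))  # stand-in for learned pose weights

# Same image features, two mounting positions -> two distinct encodings,
# so a downstream planner can learn to interpret either rig.
feats = rng.normal(size=FEAT)
pillar = np.array([0.8, 1.4, 1.1, 90.0, 0.0, 0.0])  # assumed pose (m, deg)
fender = np.array([1.9, 0.9, 0.7, 45.0, 0.0, 0.0])  # assumed pose (m, deg)
print(encode_camera(feats, pillar, W_f, W_p)[:3])
print(encode_camera(feats, fender, W_f, W_p)[:3])
```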
I agree 😊
You are getting closer to the truth. The real reason is that it is hard to prove it works correctly. It took me 6 months to prove a simple program.
I WAS wondering why they still haven't added cameras, like front corner cameras. The reason might also be part of the reason why they haven't added some cheap lidar.
Lidar can’t see through fog, dust, or heavy rain. Infrared can do better, and if there’s no ir filter on their cameras, the current stack can see into the ir spectrum. From there, it’s about inputting and interpreting data.
Surely the FSD system itself would be trained on the constructed 3D model of the environment rather than the camera inputs directly. Then if you have different camera placement, all you need to do is adjust the processing that generates the model, and everything else remains the same.
v12 is an end-to-end NN
@@bogdanivchenko3723 Yes, but I'm sure it's not just a single NN doing everything but rather a number of NNs each doing their own part of the problem, so you should be able to swap out the camera -> 3D environment NN for one suited to the cameras on a different vehicle
I think you are barking up the wrong tree. Vehicle dynamics seems the limiting factor not camera placement.
Camera position is almost trivial if you have the model weights right. Camera quality and photon count are a MUCH different story. Just compute-constrained till now.. all will be well. Really.
Camera placement, and whether or not radar is needed, is something extremely thorough research guided Tesla to decide on. Now suddenly brilliant folks are jumping in saying we need more cameras or the cameras should be placed differently 😂
There’s nothing sudden about any of this. It continues to be discussed internally at Tesla to this day. Additionally, the Cybertruck likely doesn’t have FSD because its camera placement is different.
You are lucky to have such deep insights into the workings of Tesla. Others (me included) could be inclined to think that some decisions were purely made by stubbornness or on a whim.
There were protests and some engineers resigned because Elon personally removed radar.
Isn't waymo already level 5?
No. Level 5 doesn't allow geofencing, and Waymo is geofenced.
so its level 4?@@BigBen621
Yes, Waymo is Level 4.
It seems to me that camera placement doesn't matter. It appears that all you have to do is correct the new camera location coordinates, and then all the trigonometric calculations can be made.
As far as I can see, you are correct, but why trig?
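The trig the comment above gestures at really is trivial: knowing the new mount's offset, you can re-express any target's bearing and range. A small sketch with invented coordinates; as others in the thread note, the catch is that a trained network encodes the old viewpoint in its weights, not in explicit trigonometry:

```python
import math

def bearing_range(cam_xy, target_xy):
    """Bearing (deg, from straight ahead) and range (m) of target from camera."""
    dx, dy = target_xy[0] - cam_xy[0], target_xy[1] - cam_xy[1]
    return math.degrees(math.atan2(dx, dy)), math.hypot(dx, dy)

target  = (3.0, 20.0)   # obstacle: 3 m right, 20 m ahead of the car origin
old_cam = (0.0, 1.0)    # assumed old mounting point
new_cam = (0.5, 2.2)    # assumed new mounting point

print(bearing_range(old_cam, target))  # ~(9.0 deg, 19.2 m)
print(bearing_range(new_cam, target))  # ~(8.0 deg, 18.0 m)
# The coordinate shift is easy; re-teaching a network trained on the old
# viewpoint is the expensive part.
```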
They probably should have waited and released new hardware after FSD is solved, going straight to hardware 5. HW4 might have a long period of non-use.
Your theory is highly flawed. FSD can be used in the X, S, Y, and 3 because the camera positions in 3D space and each camera's focal point and optical magnification are known. The initial steps are to validate forward. The inputs for the cameras are in the Cybertruck, but it takes a quorum of units to reconcile all the way back to the first S datapoint. It's just a quorum-versus-population reconciliation for validation. People's lives are on the line, and the training per new vehicle must be verified by a quorum, likely 5,000 to 10,000 units with x amount of drive time. This is simple engineering and safety standards, and you are thinking their algorithm is flawed when it's your engineering/computational-statistics knowledge that may be slightly lacking. Ask questions in the X discussion rooms on computational analysis of FSD. Better to be knowledgeable and credible than entertaining.
I think the average price of cars in the US is ~$38k.
So most people are buying cars at this price point, so I do not know why you say Tesla needs to go to $25k. They need to go to $25k to reach their mission of selling ~20 million cars a yr. Which is not possible on their own. I think all the BEV OEMs together can sell at least ~20 million per yr by 2030.
I think Tesla will max out at 5-6 million cars per yr - this is ex Robotaxis.
With Robotaxi, I think Tesla maxes out at ~10 million cars per yr globally. This is ~2040.
FSD is an additional price on top of the $25k, as are other add-ons. $25k brings you in, but you don't go out the door at that price.
Evening mate
Waymo does it at Level 5. Are you suffering from amnesia, Fazad?
Don't compare Waymo to FSD lol, that's like saying a paper map is better than Google Maps: maybe in a very specific circumstance and location, but not in totality.
@@mattjenkins2244 Enjoy your dreaming
@@MegaWilderness haha, I'm dreaming, but you think a geofenced POS like Waymo is something noteworthy lol. The levels are for liability, just like everything else in this country. It's not a good metric.
Waymo is not Level 5. Waymo is geofenced to very small areas, and Level 5 does not permit geofencing.
@@BigBen621 Sorry, you're right, I thought full autonomy was Level 5. Anyhow, Level 4 is still way higher than Level 2.
The mistake people are making is looking at the progress as a smooth trajectory up and to the right when it's actually stair steps. The reality has been that each new FSD architecture doesn't really improve much after the initial release. Everyone comes up with all sorts of reasons why the new release is just going to keep getting better. The reality is that hasn't happened. The new release is a bit better than the previous and it doesn't get any better. Or it gets better at some things and worse at others. This has been the actual track record. I don't think this time it will be different.
It never ceases to amaze me how Tesla fanboys get rammed up the behind by Elon and continue to justify the ramming he does to his customers. There's no excuse for why the earlier HW3 model owners were nothing more than test dummies. We all know that as FSD gets closer to Level 4, the computers and cameras in the old cars will not be able to handle the new FSD software. It's no different to the iPhones.
Sorry to have to say this, but: Hans and JD are just brilliantly perceptive about the quirks of AI when it comes to FSD.
The third person (Farzan?) is a perfectly stupid individual.
My God, another clickbait.
FSD HW3 will definitely be L5 capable, or so close that the technicalities are not important. The limitations of the side-view cameras will at worst mean it is extra careful in a few troublesome spots, or avoids them completely by taking an alternate route. And if you are a passenger reading a book or watching a movie, you won't even notice. Some of the "blind corner" type scenarios will be handled with superhuman reflexes & 360 vision: it can inch forward & back much more quickly than a human could. There are also many indirect cues, from sound to shadows & reflections, which it can use to have an idea of whether it can inch further into a "blind corner". And the AI can make that decision millimeter-by-millimeter; it is not an all-or-nothing decision to stick its nose 18 inches further out.
I doubt there will be a "failed trip" in more than 1 out of 10,000 fares, where a customer might get a refund and wait a few minutes for a robotaxi with better camera placement to come pick them up. But in general the ride-share network & logistics AI will just avoid assigning those cars to the trips that can't be completed in the first place.
My 2019 M3 LR with hardware 3 running v12 FSD is quite amazing. Much, much less intervention.
Or you could put a different team on this problem. None of what you say makes any sense
Mercedes and BMW will soon have level 3 on highways. Tesla doesn't.
Is that the Level 3 that's only available on some highways? This is something anyone can do and no one cares. Just lane-keep on simple highways and deactivate when things get slightly complicated.
@@penname4764 They can use the phone. You can't use the phone in a Tesla because of the camera. This is a big quality-of-life improvement. I think Tesla can eventually make it, but it's taking too much time.
Don't know about BMW, but here are the facts about Mercedes Drive Pilot:
● It operates only on geofenced, precision-mapped, limited-access highways
● It operates only at speeds of 40 MPH and below
● It operates only in the same lane (no lane changes)
● It operates only with a lead vehicle within 100 meters ahead
● It operates only in clear weather (no rain or snow)
● It operates only with no direct sun on the cameras
● It operates only if the driver doesn’t avert his gaze from ahead for more than five seconds
Meanwhile, Tesla FSD operates on every street, road, highway or freeway in the U.S. and Canada, at any speed up to 85 MPH, with or without any other traffic on the road, in any weather, in any sun conditions. But compared with Mercedes Drive Pilot, you *do* have to rest your hand on the steering wheel to gain these advantages; so Drive Pilot is clearly way ahead. Right.
@@BigBen621 Driving is not fun for most ppl, especially sitting in traffic. Sitting on your phone in traffic is a nice feature that Tesla doesn't have. Yes, their approach is flawed, but they have a product that ppl want, and Tesla's current product is arguably worse.
@@bogdanivchenko3723 I'm not sure what you mean. If you mean you can't hold your phone in your hand and talk in a Tesla, you're right; but you can't on Mercedes Drive Pilot, either. But of course you can link your phone to Tesla, and use it all day long to make and receive hands free voice calls. Please be more specific about what you think you can't do in a Tesla, which you can do in other ADSs or ADASs, such as Drive Pilot.
Koncrete podcast season 1 Ep 160!
I'm no Tesla fan, but this is total speculation. I think you need to do your research before you post pieces like this.
What I have seen is that Mercedes has the best FSD today
Except in MUCH more limited cases. You obviously don’t know this.
here are the facts about Mercedes Drive Pilot:
● It operates only on geofenced, precision-mapped, limited-access highways
● It operates only at speeds of 40 MPH and below
● It operates only in the same lane (no lane changes)
● It operates only with a lead vehicle within 100 yards ahead
● It operates only in clear weather (no rain or snow)
● It operates only with no direct sun on the cameras
● It operates only if the driver doesn’t avert his gaze from ahead for more than five seconds
Meanwhile, Tesla FSD (Supervised) operates on every street, road, highway or freeway in the U.S. and Canada, at any speed up to 85 MPH, with or without any other traffic on the road, making automatic lane changes for any of several reasons, in almost any weather, in any sun conditions. But compared with Mercedes Drive Pilot, you *do* have to touch the steering wheel once in a while to gain these advantages; so Drive Pilot is clearly way ahead. Right.
@@BigBen621 A Tesla cannot even park itself without a driver
@@stennordenmalm9900 And you're claiming that Mercedes Drive Pilot can? Since Mercedes Drive Pilot will not engage without an alert driver able to take back control within 10 seconds, that's simply *false.* Or if you're claiming something else, what exactly is it that you're claiming?
@@BigBen621 That Tesla FSD is underperforming, as does a lot on Tesla EVs
AI will rule the Planet
So far it can’t even properly rule the windscreen wiper.
Scary stuff, because L3 is worthless. Maybe worth a few thousand per car at purchase. They must be confident that they can do L4+ on hardware 4 at a minimum. Elon is well known for scrapping projects if they are not on the right path. I hope he gets this decision right as well.
Runaway train to oblivion, not self-driving.
people keep pumping Tesla FSD when it is probably the worst autopilot system out there. Waymo is out there picking up rides without drivers. That is true autonomy. You can never have full autonomy without LIDAR. Waymo is a proven technology. What stops them from starting to license their technology to automakers tomorrow?
_You can never have full autonomy without LIDAR._
You are representing as a fact that which is only your opinion. There is no objective evidence to support this assertion.
Humans drive fine without LIDAR.
*@bogdanivchenko3723* _It's not legal in arizona but you can use your phone normally in mercedes_
So lemme get this right. Tesla FSD (Supervised) operates on every street, road, highway or freeway in the U.S. and Canada, at any speed up to 85 MPH, with or without any other traffic on the road, in almost any weather, in any sun conditions, day or night, can make and receive phone calls hands-free on its excellent audio system, and is available on cars costing in the mid-$30 thousands. On the other hand, Mercedes Drive Pilot operates only on precision-mapped limited-access divided highways, only at speeds up to 40 MPH, only with another car within 100 yards in front of it, only in clear weather, only with no direct sun into the cameras, only during daylight hours, and is offered only on cars costing >$100 thousand. But because you claim you can hold up your phone to make calls in the Mercedes, which you acknowledge is illegal anyway, your preference is for Mercedes Drive Pilot. That's about the nuttiest thing I've ever heard on this subject!
Make sure this podcast host is not sued by Tesla for defamation.
Tesla FSD is just dream hype, a wish, and a hope of hopelessness! To get approved for Level 4 it will need geofencing software like Waymo's, as all States will require it for Level 4.
_all States will require [geo fencing software] for level 4_
Where'd you get *that* idea? Do you have any evidence to support it?