I hate to be that guy, but I think you pressing the pedal interfered with FSD response time. I say this because when I use FSD on my Model Y, if I put my foot on the gas, it waits a little bit before it displays "cruise control will not brake". Same for when I cover the cabin camera. The delay before it says something is quite a few seconds behind.
Possibly. There's certainly a lot of times where I press the accelerator, to get it to advance a little further up, and it mistakes this as permission to ignore cross-traffic or something and proceed all the way.
I'm not sure I would call some of those fails. I mean, if I see a small cardboard box on the road, I'm likely to run right over it too, unless I have time to change lanes or something. As for the mannequin, I suspect it just doesn't look human enough, and one criterion it probably uses is movement. A real human moves, and that probably helps it identify them as such.
I’m sorry, but how human does it have to be? It's the height and shape of a child, and dressed like a kid. Does it also have to breathe? Does the CT have a heartbeat monitor? What happens if a kid chases a ball into the street, only to look up, see this pedestrian killer driving at them, and freeze up? That’s on the kid now? Claiming the mannequin isn’t human enough is just a cop-out.
@@Cthulhusreef Right. It shouldn’t be hitting ANYTHING in the road, whether it looks human or not. It could look like a cardboard box to you, but it could be something that will pop your tires and ruin your axles, etc.
That seems to be the case. The Cybertruck's AI seems to know that cardboard boxes and beachballs aren't "vulnerable" and yet the sight of a child's bicycle scares the AI into braking reverently.
The AI models are trained on data from the smaller Tesla cars. I'm sure once data from Cybertrucks becomes more significant in the training dataset, these issues will fizzle out.
lmao! Best video (and most informative). I’m deciding between an X and a Cybertruck, and this video surprised me and has me hesitant about the Cybertruck. The headlights (I do love where it snows), the wiper and tires (rather expensive to maintain, no?), and the cost to recharge on road trips (about 150% of other Teslas, with Superchargers now near $0.50/kWh) are worrying me. Thanks for thinking of doing this video! Food for thought.
Awesome! The wiper replaced by Tesla is $75 so NBD. The tires will be expensive, but X tires are also expensive and staggered which is annoying. Check out our Cybertruck road trip videos, I compare to road tripping with the X. Energy usage is more but not crazy.
@ How did I miss the X/CT energy comparison video? Will check it out tomorrow! Had an overnight CT demo tonight, and man do I love it! Unfortunately my one-car apartment garage isn’t big enough for the CT!!! Noooo! lmao. So I won’t be able to justify it until I have a home charging situation that can accommodate a Cybertruck (I still have 8 months on my apartment lease!). Thanks again for all your content. It’s probably going to come down to the CT or Juniper (depending on what Juniper ends up being).
Yeah, the entire fleet will be driverless taxis next year with "just a software update". My ass. SMH. So much more for the AI model to learn and compute for L3, forget about L4. The model is going to get too big for AI3, if not AI4 as well.
My question is: since it's on a dirt road, does the brown ground affect the visuals? And does dust in the air, small particles, mess with the cameras?
I see from your test that the CT ignored the kid mannequin. What if you set it in the middle of the road forming an X shape, as if he/she were calling for attention, and when the CT is slowing itself down, have someone pull a string to make it fall to the ground, simulating the kid passing out? I’m hoping the truck would stop, not continue and run it over as if the kid had moved off the road.
I was driving on a back road today with FSD, and there's a pedestrian crossing, and it was dark. There was a pedestrian waiting to cross. I didn't see the person. FSD did, and stopped. Love it. It's not perfect, but it does a great job in some cases. It will only get better.
I wonder if it is a lack of training data or an algorithm adjustment to reduce phantom braking and reduce object avoidance. Just wondering. Right now, be extra careful for objects.
Dang! 100% my CT thinks it's bigger than it is. That explains like 75% of my disengages. And the late braking, good lord lol. Still love FSD tho, so much fun!
Early Access build 🤷♂ I also realized it doesn't know how big it is. I had it roll past a stop bar on a high-speed road, just not judging distances the way it should.
The issues with the tow hitch bar snapping the aluminum frame are a bit more troubling to me; I can live without FSD, but a truck that can't tow kinda defeats the purpose.
I've noticed that FSD has stopped rendering many objects that it showed in the past. Things like garbage cans and mailboxes, and many more, such as traffic cones and barrels, have been greatly reduced.
It should not stop or slow down when you're standing off to the side of the road; it would be stopping for everyone walking or getting the mail. You should try walking out into the road.
This is what happens when you remove radar and other sensors and don't want to use lidar. I know, I know, many of you will say these systems aren't needed, but I think you're wrong. I have a '22 Model Y, an Austin build, that at least still has the USS; it was from the batch of cars right before they removed them. I'd be interested in seeing a Model 3 or Y go up against a competitor that has these sensors, to see whether Tesla's approach is actually valid, because at the moment we're all walking around quoting Elon saying that vision is all you need, and I feel that's not the case. I'm not sure why the Cybertruck has a front camera if it's not being used for FSD and collision avoidance, but hopefully a future software update can change that.
Lidar wouldn't have changed anything; it's a machine-learning problem. Tesla's model isn't trained on what to do if a random TV box or ball ends up in the street.
And Tesla's FSD is way better than other systems, as other systems are hard-coded or have WAY too many safety checks. But in the future there could be more companies with FSD-like systems, though if Tesla keeps its "teslacab" promise, it's looking bad for the other companies.
@@DoubleRainbowXT If Tesla gets permission for full FSD on public roads in early 2025 as promised, and it is still hitting stuff like this, Tesla will be sued into oblivion.
You might want to try making the mannequin look more human-like. It seems the FSD software doesn't pick up the mannequin as anything. Perhaps if it had eyes, or glasses, or shoes? It's odd that it has zero issue with the bike, but the plain white plastic mannequin is what throws it for a loop.
My Model 3 seems to be the opposite of this. I've had it attempt to dive into a different lane because, I guess, it thought a McDonald's bag was a solid object.
It's only trained on known patterns, like humans, etc. However, in real life, depending on the angle, even a human can look unlike a human, or just like some arbitrary object, from the camera's perspective. That's why sensors like lidar are so important. The occupancy grid/net also won't help; it only has limited accuracy and cannot resolve centimeter-level differences from ground level at longer distances with enough confidence. The occupancy grid/network also does not work on moving objects.
Could the fact that it's a dirt road have anything to do with this? I imagine it has to be more "forgiving" so as not to brake too much on these kinds of roads.
Very disappointing results. What if that flexible white bucket were a small white dorm-room refrigerator? Why not involve the front camera? I understand it could get dirty or obstructed, but it could serve as a backup for, or collaborate with, the other cameras. And this was in good lighting conditions, no less! 🤦♂️ I'm sure it will improve, but this seems like kinda obvious stuff.
Meanwhile, the Seal 07 is coming to Australia at the same price point as the Model Y, with lidar, 12 ultrasonic sensors, 5 mmWave radars, 11 cameras, and Nvidia Orin N processing with double the power of Hardware 4 at 100 TOPS. Yet Tesla can do it all with vision and no redundancy. Great, gives me lots of confidence.
Chris, try a stationary test with the mannequin. Add a hat, draw some eyes, pull a string to make it move, and so on. What do you need to do for it to detect it as a human being? Also, turn it around, sideways, closer, farther. You can mount a phone/camera watching the display and relay it to your phone (FaceTime?) so you can see it in real time while you make adjustments.
@DirtyTesla Thanks, but I think it would be cool to see what is needed for FSD to recognize an object as a human being. Shapes, sizes, colors, and so on. Just to get a glimpse into the inner workings of FSD. How about pictures of things? A statue? A mobile phone playing a video in front of the camera in the windshield? Lots of fun stuff. 😊
We were shown that FSD uses an 'occupancy map', which basically appeared to be a series of cubes it decides are occupied or not. Presumably this is an attempt to sidestep the problem of having to identify every object it might come across; clearly it's not feasible for it to identify everything. And clearly it doesn't work very well at the moment.
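The 'series of cubes' idea can be sketched as a minimal 2D occupancy grid: the planner asks only whether cells are filled, never what fills them. This is purely illustrative (the grid size, cell size, and helper functions are invented for the sketch, not FSD internals):

```python
# Minimal occupancy-grid sketch (illustrative only): the space ahead of
# the car is divided into cells, each marked occupied or free, and path
# checks ask "is this cell filled?" rather than "what is this object?".

GRID_W, GRID_L = 8, 20          # cells across x cells ahead
CELL_M = 0.5                    # each cell is 0.5 m square

grid = [[0] * GRID_W for _ in range(GRID_L)]

def mark_occupied(x_m, y_m):
    """Mark the cell containing a detected point (x ahead, y lateral)."""
    row, col = int(x_m / CELL_M), int(y_m / CELL_M)
    if 0 <= row < GRID_L and 0 <= col < GRID_W:
        grid[row][col] = 1

def path_clear(col, up_to_m):
    """True if a straight path along one grid column has no occupied cells."""
    return all(grid[r][col] == 0 for r in range(int(up_to_m / CELL_M)))

mark_occupied(5.0, 1.0)          # a box 5 m ahead, 1 m laterally
print(path_clear(2, 8.0))        # that column is blocked -> False
print(path_clear(5, 8.0))        # a clear column        -> True
```

The appeal of this representation is exactly what the comment describes: an unknown object still fills cells even if no classifier has a label for it.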
"Cybertruck seems to brake late some of the time" - my theory is that all the training data is from Model 3/Ys. So the Cybertruck mostly thinks it's a Model 3/Y, and brakes and keeps distance like a 3/Y.
This does not bode well for transference of FSD learning to new models with different camera setups. It may be necessary for them to gather millions of miles of data for each new model. We’ll have to see how quickly this gets updated to match the rest of the fleet.
The underlying issue is probably distinguishing VRUs vs. solid objects vs. lightweight paper/plastic debris vs. discolored or shaded patches of road. The context probably matters here: a dirt road in the country. Some types of debris humans would ignore too (e.g., a paper or plastic bag, bits of cardboard). I don't think it "knows" with complete certainty that these objects were harmless and not VRUs, but I think this version of the neural net is strongly leaning that direction. I do suspect it "knows" the mannequin is not a pedestrian, or at least assigns it a very low probability. What surprises me is that I would expect it to err very much on the side of caution with these objects. But the devs must be doing a balancing act in the neural-net training to avoid "phantom braking". This is ultimately going to be a more-data, more-training issue.
Tesla trains the AI using real-life driving scenarios captured by its customers. I highly doubt the training data has many instances where there's a big box standing in the middle of the road.
@@CodyCha That is one reason why it has trouble deciding what it is and whether to avoid it. That said, I'm sure there are quite a few examples of boxes, paper or plastic bags, and similar debris, lying on the road or floating/blown across it. Just not enough for the AI to fully generalize the concept and distinction of harmless vs not-harmless debris. They collect edge-cases for training, after all. They have billions of miles of driving that they've drawn their training examples from (all cars in fleet in "shadow mode"). That is many thousands of lifetimes worth of driving. You can bet there are all sorts of weird things in their training data.
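The "balancing act" against phantom braking discussed in this thread can be pictured as a confidence threshold on obstacle detections. A toy sketch (the labels, confidence numbers, and thresholds are all made up for illustration; nothing here reflects Tesla's actual pipeline):

```python
# Hypothetical sketch of the phantom-braking tradeoff: brake only when a
# detection's confidence clears a threshold. Raise the threshold and
# false alarms (shadows, bags) drop, but marginal real obstacles are
# ignored. All numbers below are invented.

detections = [
    ("shadow on road",  0.15, False),  # (label, confidence, is_real_obstacle)
    ("plastic bag",     0.35, False),
    ("cardboard box",   0.55, True),
    ("child mannequin", 0.60, True),
    ("cyclist",         0.95, True),
]

def brake_decisions(threshold):
    """Count phantom brakes and missed real obstacles at a given threshold."""
    phantom = sum(1 for _, c, real in detections if c >= threshold and not real)
    missed  = sum(1 for _, c, real in detections if c < threshold and real)
    return phantom, missed

print(brake_decisions(0.3))   # low bar:  1 phantom brake, 0 misses
print(brake_decisions(0.7))   # high bar: 0 phantoms, 2 real obstacles missed
```

There is no threshold in this toy set that gives zero phantoms and zero misses, which is the tension the comments above are pointing at; only better-calibrated confidences (more data, more training) shrink the overlap.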
FSD doesn't recognize the mannequin as a person because it literally doesn't look like a person at all. It's the shape of a person, but it's the same color as lane lines. I'd bet if you painted it a skin tone, painted on a face, and put hair on that same mannequin, it would get registered as a person.
Get 20% off DeleteMe US consumer plans when you go to joindeleteme.com/DIRTYTESLA and use promo code DIRTYTESLA at checkout. DeleteMe International Plans: international.joindeleteme.com/
Do you think the data is a lot worse due to it going on an unmarked dirt road? (And possibly narrow-ish road? I can’t tell) My Tesla in general doesn’t drive nearly as good in neighborhoods that are unmarked vs the main, striped roads. I wonder if it gets confused and lost more easily?
There is a very good reason why FSD is NOT allowed in the EU. It is a KILLER.
The system behaves as if radar were still in use, since it's apparently only the metal object (the bike) that makes the car stop. Personally, I would not trust it, and I think it's a mistake to switch to "camera only", as cameras cannot provide certain information and are left to "guess" what an object is made of based on what they were trained on. Radar can tell you something about the nature of an object, such as how it reflects the radar beam, and combining that with what the camera sees makes the estimate that much more accurate. Whether it should stop for the gymnastics ball is debatable; if it "recognizes" it as a gymnastics ball, the decision must also account for, say, traffic coming from behind.
ruclips.net/video/mPUGh0qAqWA/видео.htmlsi=1DAc_flm-SkzB9I1
I love how unbiased you are. You own multiple Tesla's but you're not a blind fanboy. I appreciate you testing things and calling them as they are and not sugar coating it.
it will not improve if everyone fanboys it
Yes, but he might make it clearer that all the "Fails" are failures of the supervisor (human), not the software. He is testing functionality that has not yet been added to FSD; that's why it is called FSD *_Supervised._* The tests are valid in that they underscore the need for supervision, but it is presented as if FSD is failing, when it is the driver, who is in full control, who is failing.
@@TomTWalker Tesla and Musk are vocally claiming that FSD is almost autonomy-ready, with the miles between interventions getting so large that it's hard to improve further. These tests show that's not the case (as does my personal experience using FSD Supervised: crazy behavior from time to time). His video is entirely fair. No one's denying that it's Supervised. But it is a very basic fail not to avoid such obstacles, let alone the very long tail of less common situations.
@@JoshuaHults I'm pretty sure the engineers know about the issue; my guess is they decreased the threshold for small objects to reduce phantom braking. We might need an additional NN specifically for small objects in the road. It could also be less accurate on dirt roads.
Whole Mars says hold the phone
FSD is just driving like a normal truck driver. No regard for the little things 😂
This!!!! I wish it would dodge the small things in the road.
only a redneck truck driver 😂
Lmfao :-)
😂😂
Yup
I can confirm, a week ago I was speeding down the highway in my Cybertruck using FSD, a car in front of me swerved to avoid a pallet that was in the middle of the road. My truck just plowed through it as if it didn't exist. No damage to the truck, no popped tires. The rear view camera looked like an explosion of splinters lol
Version 12. V12.what? Are you on a general release V12 (12.5.6.3) or something let out early for the blessed ones to find holes in it?
Most of the time plowing ahead is safer for the driver than swerving
@@rogierkraan sure, but a smart system would be able to make a decision with greater accuracy than a human as to the best course of action. The road was mostly clear of traffic when it occurred, so swerving would be justified. I mean the cars in front of me swerved.
If this was a smaller car it would have been a big problem for sure.
@@slowercuber7767 I have only had the truck for 3 weeks, so I don't think I am one of the blessed ones.
Why didn't you intervene? Why would you let your vehicle run something over?
Oh man, that's rough. Thanks for sharing the results though! Respect for still putting this out even though it doesn't look good for FSD.
To be clear, this is not how my hardware 3 2021 model Y behaves. It is very respectful of pedestrians and cyclists.
When Stephanie stands during the experiment, it's safer for her to be 15 feet or more BEFORE the object on the road, maybe more. That way, if FSD attempts to avoid the object, the car does not steer TOWARD her; the risk is removed altogether, since her body is well before the obstacle, if that makes sense. Keep up the good work.
I could put her on the moon and people would still comment on these videos and say she's in danger :)
@@DirtyTesla All discussion about safety is valid discussion. A lot of people see safety talk as something that impinges on progress, or some kind of necessary evil, but more often than not, optimizing for safety results in innovation. I don't know all the comments you get, but since you're a YouTuber, I'm sure you get a lot of negative ones. However, genuine discussion about safety should always be welcomed.
@@DirtyTesla When you put her on the moon, make sure she wears proper PPE.
I agree.
Can you test hanging barriers? I'm thinking you could hold a pool noodle at windshield height to see if FSD will drive through a train crossing, for instance.
Do we need to? It's not even dodging the easy stuff yet ;p
@@catbert7 This is one of those required FSD features that does not seem to be implemented yet. It cannot be allowed to run through train crossing barriers, for instance. Anyplace that has trains, including the commuter trains here in Phoenix, could have FSD slamming into the guards.
cybertruck still coming to terms how thiccc it is
This really speaks to the lack of training data for the CT. I'd be very interested to see this repeated on a paved road. I wonder how the dirt-road backdrop is affecting its behavior.
Impressed with your dedication to FSD testing. :)
My own experience with Cybertruck FSD has been very disappointing even without obstacles in the road. It cuts left turns really badly, and its speed choices are nearly random. The speed limit can drop and the truck won't care; it'll just blaze along at whatever speed it wants. Once my FSD free preview expires, I won't be renewing it. THANK YOU for demonstrating problems I didn't even know existed! I assumed it would at least stop for obstacles!
Further proof that Robotaxi is nowhere near public streets.
Say it again for the people in the back. FSD is utter garbage. My family refuses to even ride in the car with it active because it’s done so many crazy things.
@@CFG39 Utter garbage is crazy
@ it’s the truth. Been waiting years and years for Tesla to deliver what I paid for and they’re nowhere close. Every day I see videos where it is hitting curbs, it doesn’t react to objects in the road (because it can’t see them due to low resolution and poor intelligence) and I’m supposed to believe there’s going to be a magical steering wheel-less robotaxi on the road in 2026? Lmfao.
It’s been 4 years since the beta launched and they haven’t even hit 10 miles per intervention in city driving. Don’t take my word for it, look at AMCI’s 1000 miles of testing with recent FSD releases.
Only if corrupt and/or stupid politicians relax regulations; then we'll have death machines full bore... I mean, we basically have that NOW, it's just that people CAN take over and use the actual steering wheel, etc.
Further proof that most people are incapable of logic.
He said multiple times in the video that the Cybertruck is the only model having these particular issues.
FSD is certainly not near trustworthy enough everywhere, in all situations, but that does not mean it is not nearly ready for the base models, in some areas. You can see for yourself, from various channels, that 3/Y are doing extremely well, in certain places (e.g. CA). It doesn't have to be perfect everywhere and on every model for them to do limited operations, while progress continues.
Full Self-Plowing
It knew the ball and the truck would both be fine. Lol
So smart 😱
Damm just wait till it's a boulder
@dopebossgamerz lol
This is crazy!!! Thanks for doing this test.
That Ganondorf laughter from Ocarina of Time at the end xD
yes xD
It's great
The Cybertruck needs the front bumper cam to be part of FSD decision-making. I suspect this will happen fairly quickly, considering the Cybercab also has a front bumper cam.
Thanks Chris and Steph: 1) Steph, you are one of few wives who qualify for danger pay. 2) The tall box getting clipped is a bit like firing a warning shot over the bow of a ship. 3) I wonder if a bright orange Home Depot bucket would be recognized? 4) The person/mannequin maybe needs to be warm? I wonder if the cameras can 'see' some infrared light? Grok-2 says generally no. (Driver monitoring is a yes.) I do find the night-driving rendering of people to be excellent compared to what my eyes can see. 5) LOL, when your Model Y 'kicked the bucket', the evil laugh after was a hoot! 👺🤣
It NEEDS to see as much IR as possible for vision in fog. If it does not, shame on Tesla.
Yeah, totally agree that FSD thinks the CT is smaller than it is. I feel the same when it turns and crosses the lines into adjacent turn lanes, or clips the median.
Thank you for taking the time to expose these deficiencies in FSD. It’s surprisingly bad for these smaller objects, hopefully the FSD team can resolve this soon.
Shocking results. Tnx!
So interesting to see all the 'retraining' they have to do for Hardware 4, or for different vehicle dimensions, or both... I would have thought more of the generic training, e.g. 'things in the road', would port over quite easily. Makes me a bit worried for all new vehicle types.
LOVE the Zelda reference recordings bro LOL
Thank you so much for these videos; they keep us updated and informed about the abilities of FSD, and I really would NOT rely on it in any form. I don't see any trustworthy behavior from FSD yet. Maybe some more iterations from today's version will change my mind, but... today... NO WAY!!!
Keep up the great work ! I wish you the best, Greetings from Germany !
This is informative and hilarious and supports my long-held theory regarding the disdain team FSD has for nonhuman-shaped, inanimate objects. Thank you. BTW, your wrap looks great.
It seems that this is FSD kindergarten level stuff and things like smoother acceleration is graduate school level... if other improvements and tweaking cause such core functionality to regress or fail, that's very concerning.
I've always thought there should be two similar cameras in both upper corners of the windshield.
That way, a stationary object can be easily distinguished from a shadow through parallax.
With only one camera, an object straight on will usually only grow larger, which can be difficult to detect.
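The parallax idea above boils down to simple triangulation. A minimal sketch, assuming an idealized rectified stereo pair; the focal length and baseline values here are made up for illustration, not Tesla's actual camera parameters:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 800.0,
                         baseline_m: float = 0.30) -> float:
    """Z = f * B / d: distance falls out of the pixel shift (disparity)
    between the two camera views. Small shift -> far away; big shift -> close."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or matching failed")
    return focal_length_px * baseline_m / disparity_px

# A distant object barely shifts between the two views...
far = depth_from_disparity(2.0)    # -> 120.0 m
# ...while a nearby one shifts a lot.
near = depth_from_disparity(40.0)  # -> 6.0 m
```

This is why a stationary obstacle is easy for stereo but hard for a single camera: a flat shadow produces (near) zero disparity at any distance, while a solid object always shifts between the views.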
Great test!! Thank you very much to you and your wife 👏👏👏😁😁😁
Question, do you think the road conditions had an impact? Would a darker blacktop road make the cyber truck see the objects on the road better?
I'm curious to see what would happen when two Teslas are in self driving and on a head-on collision course. Will they brake? Will they swerve? Try that test sometime please. ty
Can you change settings for sensitivity/stopping distance or driving behavior?
That's why the driver should always be paying attention !!! FSD is not perfect !!
“Not perfect” is certainly a word for it. Garbage is another. A non-Tesla with no FSD but radar emergency braking would have 100% stopped for the mannequin of the child - that tech has been shipping in VW Golfs since 2013.
And thanks for doing these tests.
This is actually pretty dangerous. I've been using FSD for over 2 years and I'm even more concerned now when i think of the number of things that could happen
Have you been sending the voice reports to tell Tesla why you stopped it? Needs to send the footage and voice notes to Tesla to teach it.
I think the Cybertruck continued on the second bike test because it calculated a worse brake performance and then basically overshot the brakes. So it corrected by letting off and coming to a gentle stop a bit later. This shows nicely why the Cybertruck is also overall braking so much harder/less smoothly than 3/Y/S/X: the Cybertruck's brake system is more responsive to the same input.
I hate to be that guy, but I think you pressing the pedal interfered with FSD response time. I say this because when I use FSD on my Model Y, if I put my foot on the gas, it waits a little bit before it displays "cruise control will not brake". Same for when I cover the cabin camera. The delay before it says something is quite a few seconds behind.
Or maybe the dirt road?
Possibly. There's certainly a lot of times where I press the accelerator, to get it to advance a little further up, and it mistakes this as permission to ignore cross-traffic or something and proceed all the way.
I'm not sure if I would call some of those fails. I mean, if I see a small cardboard box on the road, I'm likely to run right over it too, unless I have time to change lanes or something. As for the mannequin. I have a suspicion that it just doesn't look human enough, and one criteria it probably uses is movement. A real human is moving and that probably helps it identify it as such.
I’m sorry but how human does it have to be? It being the height and shape of a child as well as dressed like a kid. Does it also have to breathe? Does the CT have a heartbeat monitor? What happens if a kid chases a ball into the street only to look up and see this pedestrian killer driving at them and they freeze up? That’s on the kid now? Claiming the mannequin isn’t human enough is just a cop out.
@@Cthulhusreef right. Like it shouldn't be hitting ANYTHING in the road, whether it looks human or not. It could look like a cardboard box to you, but it could be something that will pop your tires and ruin your axles etc.
They might need to enable object tracking for everything. I think they only have some specific objects for it to stop for. 😂
That seems to be the case. The Cybertruck's AI seems to know that cardboard boxes and beachballs aren't "vulnerable" and yet the sight of a child's bicycle scares the AI into braking reverently.
Did it only pass the ones where you didn't have a camera covered?
The AI models are trained on data from the smaller Tesla cars. I'm sure once data from the Cybertrucks becomes more significant in the training dataset, these issues will fizzle out.
I test drove a Cybertruck and I also noticed that FSD definitely needs work with where it thinks the truck is, and how big it thinks the truck is too.
Reminds me of the old novelty song lyric: "Short people got no reason to live." 😂😂😂
Love the video game sound effects for fail / pass 👍
Why is your front right triangular window not tinted?)
I had to get it replaced and now I gotta go get the tint back but I'm lazy lol
@@DirtyTesla wow, why did you have to replace it?
lmao! Best video (and most informative….) I'm deciding between an X and a Cybertruck, and this video surprised me and has me wondering / hesitant about the Cybertruck.
The headlights (I do love where it snows), the wiper and tires (rather expensive to maintain, no?), and the cost to recharge on road trips (about 150% of other Teslas, with Superchargers now near $0.50/kWh) are worrying me.
Thanks for thinking of doing this video! Food for thought….
Awesome! The wiper replaced by Tesla is $75 so NBD. The tires will be expensive, but X tires are also expensive and staggered which is annoying.
Check out our Cybertruck road trip videos, I compare to road tripping with the X. Energy usage is more but not crazy.
@ How did I miss the X/CT energy comparison video? Will check it out tomorrow!
Had an overnight CT demo tonight and man do I love it! Unfortunately my 1-car apt garage isn't big enough for the CT!!! Noooooooo! lmao.
So I won't be able to justify it until I have a home charging situation that can accommodate a Cybertruck (I still have 8 mos in my apt lease!).
Thanks again for all your content. It's probably going to come down now to CT or Juniper (depending on what Juniper ends up being).
Nothing Like You And The Wife Out Having Fun, Great Video. 👫
Can't find my Capitalize Every Word setting. Very Trumpian, please help!
Yeah, the entire fleet will be driverless taxis next year with "just a software update". My ass. SMH
So much more for the AI model to learn and compute for L3, forget about L4. The model is going to get too big for AI3, if not AI4 as well.
And L3 only in specific regions, and at most in the US.
An AI model will never know or 'learn' the traffic regulations of countries like DE/FR/UK etc.
could you please put the FSD version in the description? Will make it easier to search and compare versions in the future.
My question is: since it's a dirt road, does the ground being brown affect the visuals? And does dust in the air, like small particles, mess with the cameras?
Why is the dash video a frame that's been inserted over the video?
Yesterday, my MY in FSD came to a stop for a small painted plywood cutout of a Santa figure. It was not in the road. It was leaning against a mailbox.
can we see a test without the front camera blocked just to be sure?
I see from your test that the CT ignored the kid mannequin. What if you set it in the middle of the road forming an X shape, as if he/she was calling for attention, and when the CT is slowing itself down, have someone pull on a string to make it fall to the ground, simulating the kid passing out? I'm hoping the truck would stop and not run over it, rather than continuing as if the kid had moved off the road.
I was driving on a back road today with FSD, and there is a pedestrian crossing and it's dark. There was a pedestrian waiting to cross. I didn't see the person. FSD did and stopped. Love it. It's not perfect, but it does a great job in some cases. It will only get better.
I wonder if it is a lack of training data or an algorithm adjustment to reduce phantom braking and reduce object avoidance. Just wondering. Right now, be extra careful for objects.
1:51 The small box was a win: it did not move or hurt the box.
Dang! 100% my CT thinks it's bigger than it is. That explains like 75% of my disengages. And late braking, good lord lol. Still love FSD tho, so much fun!
Early Access Build 🤷♂
I also realized it doesn't know how big it is. I had it roll past a stop bar on a high-speed road, just not judging distances as it should.
I wonder if it would be different on a marked road
The issues with the tow hitch bar snapping the aluminum frame are a bit more troubling to me, can live without FSD but a truck that can't tow kinda defeats the purpose
I noticed that FSD has eliminated many objects that it rendered in the past. Things like garbage cans, mailboxes, traffic cones, and barrels have been greatly reduced.
Love your new little intro lol
I've had it for years I just don't usually use it lol
I'm sorta surprised. I almost don't believe it. I use FSD all the time; it seems really good on my Model Y.
Why didn't you use the button on the steering wheel to activate the cameras?
Chris always going that extra mile. Thanks Dirty Tesla & Wife.
It should not stop or slow down when you're standing off to the side of the road. It'd be stopping for everyone walking or getting the mail. You should try walking out into the road.
You mean like they did in the last tests?
Maybe they are using thermal imaging. Try with a heated vest or jacket
What's up with front right window tint? what % did u tint?
This is what happens when you remove radar and other sensors and don't want to use LIDAR. I know, I know..... many of you will say these systems aren't needed, but I think you're wrong. I took delivery of a '22 Model Y, an Austin build that at least still has the USS. It was from the batch of cars right before they removed them.
I would be interested in seeing a Model 3 or Y go up against a competitor that has these systems, to see if Tesla's approach is actually valid, because at the moment we're all walking around quoting Elon saying that vision is all you need, and I feel that's not the case.
I'm not sure why the Cybertruck has a front camera if it's not being used in FSD and vehicle collision avoidance, but hopefully a future software update can change that.
Lidar wouldn't have changed anything in machine learning; Tesla's training data isn't trained on what to do if a random TV box or ball comes onto the street.
Humans don't have lidar either.
And Tesla's FSD is way better than other systems, as other systems are hard-programmed or have WAY too many safety checks.
But in the future there could be more companies that have FSD, though if Tesla keeps their "Teslacab" promise it's looking bad for other companies.
@@DoubleRainbowXT How does a lack of safety checks and hardcoded rules make it better?
@@DoubleRainbowXT if Tesla gets permission for full FSD on public roads in early 2025 like promised and it is still hitting stuff like this Tesla will be sued into oblivion.
You might want to try and get the mannequin to look more human-like. It seems like the FSD software does not even pick up the mannequin as anything. Perhaps if it had eyes or glasses or shoes? Its odd that it has zero issue with the bike but the plastic white mannequin is what throws it for a loop....
My model 3 seems to be opposite of this. I’ve had it attempt to dive to a different lane because I guess it thought the McDonald’s bag was a solid object
It's only trained on known patterns, like humans etc. However in real-life depending on angle etc. even humans can look unlike a human or just like some arbitrary objects from the camera perspective. That's why sensors like lidar are so important.
The occupancy grid/net also won't help, it only has limited accuracy and cannot make up centimeter differences from ground level in further distances with enough confidence. Also occupancy grid/network does not work on moving objects.
Could the fact that it's a dirt road have anything to do with this? I imagine it has to be more "forgiving" so as to not brake too much on these kinds of roads.
Why cover the bumper camera?
To demonstrate it is not being used with FSD yet. I just covered it for the first object.
My model S, on v12.5.4.2 consistently avoids tree branches or flying plastic bags. Is this issue confined to the cybertruck’s v12.5.5 version?
My Model Y with FSD and AI4 hit a rabbit two weeks ago. Makes me feel soooo bad 😢 They really need to fix this.
Nice hubcaps!
Now you have to test it again.
dude your wrap is dope! what color is that?
Very disappointing results. What if that flexible white bucket was a small white dorm room refrigerator! Why not involve the front camera? I understand it could get dirty or obstructed, but it could be a backup to, or collaborate with, the other cameras used. And this was in good lighting conditions, no less! 🤦♂️
I’m sure it will improve, but this seems kinda obvious stuff.
13:46 CT has same stance as ED-209. it's not gym ball, it's stairs that will stop it.
Meanwhile the Seal 07 is coming to Australia at the same price point as the Model Y, with lidar, 12 ultrasonic sensors, 5 mmWave radars, 11 cameras, and Nvidia Orin N processing with double the power of Hardware 4 at 100 TOPS. Yet Tesla can do it all with vision and no redundancy. Great, gives me lots of confidence.
Chris, try a stationary test with the mannequin. Add a hat, draw some eyes, pull a string to make it move, and so on.
What do you need to do for it to detect it as a human being?
Also, turn it around, sideways, closer, farther ++.
You can mount a phone/camera watching the display and relay it to your phone so you can see it in real time (FaceTime? ) while you do adjustments.
The Y avoided the mannequin in past tests, there are videos on it linked in the description
@DirtyTesla Thanks, but I think it would be cool to see what is needed for FSD to recognize an object as a human being. Shapes, sizes, colors, and so on.
Just to have a glimpse into the inner workings of the FSD.
How about pictures of things? A statue? A mobile phone playing a video in front of the camera in the windshield?
Lots of funny stuff. 😊
Stephanie is adorable. 😊 Congrats Chris. 🤣
We were shown that FSD uses an 'occupancy map', which basically appeared to be a series of cubes that it decides are occupied or not. Presumably this is an attempt to avoid the problem of having to identify what every object is that it might come across. Clearly it's not feasible for it to identify everything, and clearly this doesn't work very well at the moment.
This shows they (probably) shouldn't have removed the radar sensors from Teslas, for things cameras just can't understand or pick up in time.
Our 2022 MX plaid emergency braked yesterday for a bird flying by. 🤦♂️
"Cybertruck seems to brake late some of the time" - my theory is that all the training data is from M3/Ys. So the Cybertruck mostly thinks that it's a model 3/Y so it brakes and keeps distance like a 3/Y.
Don't these departments communicate and share each other's information?
What exact version is it?
I finally got my K.A.R.R. Mode!!! I mean, oh no, it didn't stop, lol
You should throw the mannequin too to see if it can hit superman
Wow this is concerning. I may not use FSD until the next update.
This does not bode well for transference of FSD learning to new models with different camera setups. It may be necessary for them to gather millions of miles of data for each new model. We’ll have to see how quickly this gets updated to match the rest of the fleet.
Dan O'Dowd would love to have this video, LOL.
Make a video testing FSD in winter storm/blizzard and fog conditions, and on an unplowed secondary road.
The underlying issue is probably distinguishing VRUs vs solid objects vs lightweight paper/plastic debris vs discolored/shaded patches of road. The context probably matters here--dirt road in the country. Some types of debris humans would ignore too (e.g. a paper or plastic bag, bits of cardboard). I don't think it "knows" with complete certainty these objects were harmless and not VRUs, but I think this version of the neural net is strongly leaning that direction. I do suspect it "knows" the mannequin is not a pedestrian, at least assigning it very low probability. What surprises me is that I would think it would err very much on the side of caution with these objects. But the devs must be doing a balancing act in the neural net training to avoid "phantom braking". This is ultimately going to be a more-data, more-training issue.
Tesla trains the Ai using real life driving scenarios captured by their customers. I highly doubt the training data would have many instances where there's a big box standing in the middle of the road
@@CodyCha That is one reason why it has trouble deciding what it is and whether to avoid it. That said, I'm sure there are quite a few examples of boxes, paper or plastic bags, and similar debris, lying on the road or floating/blown across it. Just not enough for the AI to fully generalize the concept and distinction of harmless vs not-harmless debris. They collect edge-cases for training, after all. They have billions of miles of driving that they've drawn their training examples from (all cars in fleet in "shadow mode"). That is many thousands of lifetimes worth of driving. You can bet there are all sorts of weird things in their training data.
FSD does not recognize the mannequin as a person because it literally does not look like a person at all. It's in the shape of a person, but it's the same color as lane lines. I'd bet if you painted it a skin tone, painted a face, and put hair on that same mannequin, it gets registered as a person.
@@YungAnt7 I pretty much agree, yes. Did something I said seem like I would not agree with this?
You finally got your wheel cover!
You should have used some movie magic to make it look like the truck hit you 😂