Tesla vs comma.ai approach to machine learning | George Hotz and Lex Fridman

  • Published: 5 Sep 2024
  • Lex Fridman Podcast full episode: • George Hotz: Hacking t...
    Please support this podcast by checking out our sponsors:
    - Four Sigmatic: foursigmatic.c... and use code LexPod to get up to 40% & free shipping
    - Decoding Digital: appdirect.com/...
    - ExpressVPN: expressvpn.com... and use code LexPod to get 3 months free
    PODCAST INFO:
    Podcast website: lexfridman.com...
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com...
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    CONNECT:
    - Subscribe to this YouTube channel
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridmanpage
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Support on Patreon: / lexfridman

Comments • 248

  • @neaorin
    @neaorin 3 года назад +184

    One important advantage Tesla's approach has is explainability. The fact that they identify cars, pedestrians, traffic lanes etc. in separate tasks allows them to display them on the console, and this builds the human passengers' confidence in its ability to see and understand the scene. Also, it helps data scientists better understand why the model fails in some cases. With end-to-end, you get neither. All you have at the other end is actions on steering and acceleration / braking. The passengers don't know what's going on, and the data scientists can't tell if the system failed because it did not identify the lane markings correctly, misread a traffic sign, or what.

    • @sawcbee
      @sawcbee 3 года назад +5

      Is there some way the two approaches can be merged, so that you can extract explainability from an end-to-end solution?

    • @neaorin
      @neaorin 3 года назад +10

      @@sawcbee I'm not sure. To explain a model to data scientists you can produce heatmaps for specific activations ("which areas of this image input were most responsible for the vehicle deciding to brake?"), which in some cases might give you an idea of what the model was focusing on. For non-obvious failure cases this might not work at all ("ok, so it was focusing on the sign, but how do we know whether it identified the restriction correctly or not?"). I also don't see any obvious solution for producing passenger-friendly visualizations of the "thought process" of a model that's been trained end-to-end.

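A minimal sketch of the heatmap idea above, assuming a hypothetical toy end-to-end policy (`TinyPolicy` and its output ordering are made up, not comma.ai's or Tesla's actual networks): gradient-based saliency for the brake output over one input frame.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an end-to-end driving policy: image -> [steer, accel, brake].
class TinyPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # steer, accel, brake

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

policy = TinyPolicy().eval()

# One camera frame (random here); ask which pixels most influenced the brake output.
frame = torch.rand(1, 3, 64, 128, requires_grad=True)
brake = policy(frame)[0, 2]
brake.backward()

# Saliency heatmap: max absolute gradient over color channels, per pixel.
heatmap = frame.grad.abs().max(dim=1).values  # shape (1, 64, 128)
print(heatmap.shape, heatmap.max())
```
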
    • @Supreme_Lobster
      @Supreme_Lobster 3 года назад +19

      @@neaorin just wait. Someone's gonna use segmented tasks just for "explainability" but actually use end-to-end for decision making. That way you make users feel comfortable while getting the benefits of end-to-end. The visualisations users would see would actually be meaningless but oh well.

    • @neaorin
      @neaorin 3 года назад +10

      @@Supreme_Lobster Yeah I thought of that, it seems like you'd be deceiving the user though. Sounds like something Nikola would do :)

    • @Supreme_Lobster
      @Supreme_Lobster 3 года назад +2

      @@neaorin it's definitely deception and I'm not arguing for it, just saying that someone is going to do it

  • @Ryan-xq3kl
    @Ryan-xq3kl 3 года назад +200

    Lex "prefaces every single question with a complete backstory" Friedman

    • @salessiteboost7665
      @salessiteboost7665 3 года назад +2

      bahahaah

    • @myxomatosisification
      @myxomatosisification 3 года назад +2

      Hahahha nice joe rogan reference

    • @royz_1
      @royz_1 3 года назад +4

      101 ways to avoid eye contact

    • @MMATopGs
      @MMATopGs 3 года назад +1

      @@royz_1 eye contact is not needed! it is podcast or radio!

  • @MultiVfc
    @MultiVfc Год назад +6

    He was right Tesla has gone his way

  • @Mark-kt5mh
    @Mark-kt5mh 2 года назад +8

    I agree with George, end-to-end is required for level 5. The hydranet approach has proven itself to be sufficient for L4 and will eventually be used as a validation agent for self-supervised end-to-end learning for L5.

    • @autohmae
      @autohmae 2 года назад +1

      "I'm not a liability guy" this is like saying: "I'm just a technology guy, not a business person." self-drinving is also a people problem (trust), not just technology.

  • @davids2207
    @davids2207 Год назад +6

    Yesterday Elon demoed FSD 12. Bookmarking this clip for nostalgia.

  • @Martinko_Pcik
    @Martinko_Pcik Год назад +4

    End to end approach is how our brain works as well. I agree with George. Also look at how well openpilot works with much less complexity and HW requirements.

  • @JohnnyKReviews
    @JohnnyKReviews 3 года назад +23

    Hotz: “who gives a fqk.”
    Lex: goes on one hour of reasoning the question

  • @markjordan7800
    @markjordan7800 3 года назад +185

    did he just ask a 4 min question lol

  • @mrwhitemantv
    @mrwhitemantv Год назад +4

    Tesla has switched to end-to-end in autopilot v12

  • @Eroenjin
    @Eroenjin 3 года назад +17

    I would also be reluctant to use an E2E approach for self-driving because of the challenges in validating the model(s). In this case it's human lives that are at risk, so a model producing bad output matters far more than in some game of Go. I am working on a machine learning task in another field where the room for mistakes is small and the models need to be able to explain themselves.

    • @andreasreiser8160
      @andreasreiser8160 3 года назад +1

      ​@@johnpatrick7699 exactly. How do you verify a human driver? You can't. But you can check how long he can drive without crashing

    • @Eroenjin
      @Eroenjin 3 года назад

      @@johnpatrick7699 it’s not necessarily one or the other. I have seen and have myself built models that are able to explain to the end user why they came up with some output. This requires some feature engineering, i.e. domain knowledge, and an ML pipeline built using said features (and also some auxiliary inputs, e.g. raw data). Or it can even be done with E2E, but that requires domain knowledge during validation.
      I am not too familiar with self-driving AI development besides owning a Tesla and reading a bit about it, but I would design a simulated validation test suite for a self-driving AI. This could contain ”unit tests” such as performing unit functions, e.g. turning, lane changes, emergency braking etc., where there is a ”right choice” available and even a human can understand what good behavior is. Then on top of that there would need to be ”functional tests” which test combinations of unit functions (e.g. executing some complex maneuver). Then on top of this would be the ”fuzzy tests”, e.g. understanding what is going on with other vehicles and road users. This could be validated by predicting what the other road users are going to do and then comparing these predictions with what really happened. Tesla should have plenty of footage to do this. As for how to operate amongst other road users - that is trickier to validate. Maybe some validation targets could be compiled from those drivers who have the fewest accidents?
      I am most likely missing some obvious points here.
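
A rough sketch of the "unit test" layer described above. The policy interface and scenario numbers are entirely hypothetical (`toy_policy`, `obstacle_distance_m`, `speed_mps` are made-up names), just to show the shape such scenario tests could take.

```python
# Sketch of scenario-based "unit tests" for a driving policy.
# The policy interface (distance/speed in, brake command out) is hypothetical.

def toy_policy(obs: dict) -> dict:
    """Stand-in policy: brake hard when an obstacle is close at speed."""
    time_to_collision = obs["obstacle_distance_m"] / max(obs["speed_mps"], 0.1)
    return {"brake": 1.0 if time_to_collision < 2.0 else 0.0}

def test_emergency_braking():
    # Stopped obstacle 10 m ahead while travelling at 20 m/s: policy must brake.
    obs = {"obstacle_distance_m": 10.0, "speed_mps": 20.0}
    assert toy_policy(obs)["brake"] > 0.9

def test_no_phantom_braking():
    # Clear road far ahead: policy must not brake.
    obs = {"obstacle_distance_m": 200.0, "speed_mps": 20.0}
    assert toy_policy(obs)["brake"] < 0.1

if __name__ == "__main__":
    test_emergency_braking()
    test_no_phantom_braking()
    print("scenario unit tests passed")
```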

    • @pisoiorfan
      @pisoiorfan 3 года назад

      Well, his business model is that they sell the hardware and have the user load whatever free software they like onto their cars. Liability is fully passed on to the human driver, as it is with normal drivers.

  • @mehmetonur7925
    @mehmetonur7925 3 года назад +5

    George hotz will eventually win this game

  • @lars3743
    @lars3743 3 года назад +6

    I disagree that you can reduce or equate the separation of model sub domains to feature engineering. A model trained on all data has to generalize more than perhaps is necessary. You can actually train models on false positives from upstream models and get significant benefits. Another benefit is explanation and failure attribution.
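
One way to read "train models on false positives from upstream models" is hard-negative mining; below is a minimal sketch on synthetic arrays (`upstream_detector` and the data are invented), not any particular company's pipeline.

```python
import numpy as np

# Hard-negative mining sketch: feed the upstream detector's false positives
# back in as extra negatives for the next training round. Data is synthetic.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 8))   # stand-in for crops/embeddings
labels = rng.integers(0, 2, size=1000)  # 1 = real object, 0 = background

def upstream_detector(x: np.ndarray) -> np.ndarray:
    """Stand-in upstream model: a crude threshold on one feature."""
    return (x[:, 0] > 0.0).astype(int)

preds = upstream_detector(features)
false_positive_mask = (preds == 1) & (labels == 0)

# Augment the downstream model's training set with those hard negatives
# (duplicated here to upweight them; reweighting the loss is the other common option).
hard_negatives = features[false_positive_mask]
train_x = np.concatenate([features, hard_negatives])
train_y = np.concatenate([labels, np.zeros(len(hard_negatives), dtype=int)])
print(f"{false_positive_mask.sum()} hard negatives added; new set size: {len(train_x)}")
```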

  • @karlallspach5309
    @karlallspach5309 3 года назад +8

    George Hotz will be a household name. Why does he always seem so pissed off though?

  • @gliese4363
    @gliese4363 3 года назад +5

    I watch this with my morning coffee and joe Rogan when I get high after work

  • @freddyfozzyfilms2688
    @freddyfozzyfilms2688 3 года назад +11

    Plot twist, the stitching of the individual tasks together is just done by another neural network

  • @danielklaussen3054
    @danielklaussen3054 3 года назад +8

    Lex - if you’re talking, you’re not learning.
    The ratio of him-to-you could be better in this clip anyway.
    Love your show.
    Love your willingness to show vulnerability.
    Keep being awesome.

    • @bradharris1459
      @bradharris1459 3 года назад

      I am a first grader trying to understand physics, Lex is my teacher and his guests are guests...I need more of Lex to slowly try his best to explain this mind blowing craziness to me.
      But yes, I do understand your comment. It’s always good to let others explain and talk. However, This guest really seemed to be a “to the point” kind of person.

  • @slmille4
    @slmille4 3 года назад +52

    So just like how MuZero has end-to-end solutions for each game it tackles, is Comma going to have to train a different end-to-end solution for every combination of street signs and driving rules on the planet, while Tesla can just swap out the individual modules? Seems like Tesla has the advantage.

    • @natedammerich1745
      @natedammerich1745 3 года назад +9

      This is a genius point in my opinion. Tesla’s approach would then be more segmented and easier to deploy to 60+ countries whereas Comma will have to have 60 end to end models... that’s an enormous advantage

    • @andreasreiser8160
      @andreasreiser8160 3 года назад +18

      @@natedammerich1745 No, Comma would have one end-to-end model that knows the rules in every country in the world because it has learned from data from all countries... That's like saying MuZero has only learned how to play with the rook but not with the entire set of chess pieces. It learns ALL the rules.

    • @thinkingchanged
      @thinkingchanged 3 года назад +10

      @@andreasreiser8160 This holds true if you’re assuming each rule is different. But in terms of driving, the same “rule” can mean two separate things depending on the place/time. I don’t think one model would work for every place on the planet.

    • @NutsDriver
      @NutsDriver 3 года назад +4

      @@thinkingchanged exactly, I'd like to see how an end-to-end model will decide whether to take a right turn on a red light in Europe... FSD is not some abstract task, it has to comply with statutory regulations, which are FORMAL RULES stipulated in each country's national & regional legislation, and the legislation is constantly being changed all over the world. Which would be the more adaptable FSD concept: the one that requires retraining the whole end-to-end model each time some local rules are changed, or the one that uses specialized models to "produce" higher-level abstractions, which could be utilized by a formal-rules system to make the final decision?
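
A tiny sketch of the hybrid idea above: learned models emit abstractions (traffic-light state, sign presence) and a hand-written, per-region rule table makes the final legal decision, so a rule change means editing the table rather than retraining. The region codes, rule values and perception keys are illustrative assumptions, not real legislation.

```python
# Hybrid sketch: learned models produce abstractions; formal per-region rules decide.
# The rule table below is illustrative only, not a statement of actual law.
RIGHT_ON_RED_ALLOWED = {"US-CA": True, "DE": False, "FR": False}

def may_turn_right(perception: dict, region: str) -> bool:
    """perception: abstractions produced by upstream nets (assumed interface)."""
    if perception["traffic_light"] == "green":
        return True
    if perception["traffic_light"] == "red":
        return (RIGHT_ON_RED_ALLOWED.get(region, False)
                and perception["crosswalk_clear"]
                and not perception["no_turn_on_red_sign"])
    return False

scene = {"traffic_light": "red", "crosswalk_clear": True, "no_turn_on_red_sign": False}
print(may_turn_right(scene, "US-CA"))  # True under this toy rule table
print(may_turn_right(scene, "DE"))     # False; changing the rule needs no retraining
```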

    • @osimmac
      @osimmac 3 года назад +1

      @@NutsDriver they should simply gather and use data from each local environment; it would be dumb to use data from the UK, where cars can turn left on red lights, in the US version of openpilot. It would probably work fine in places with good rules, but in places where cars drive in chaos, the model may need to be very different.

  • @verynice5574
    @verynice5574 Год назад +2

    Just to catch everyone up. Tesla is now doing end to end, it just got there through a long and painful process of overcoming Elon's ego.

  • @sddndsiduae4b-688
    @sddndsiduae4b-688 3 года назад +24

    To use the MuZero approach for car driving, you need to be able to simulate not only full video effects (especially sun, water, snow, and ice) for billions of driving hours, but also human behavior for pedestrians and other drivers (including bikes and other crazy things people use to move around), and don't forget pets and their behavior. With a watching-only approach (i.e. forcing your solution to watch millions of hours of car video recordings), you are missing the dependency on the agent's actions, so the MuZero approach would not work.

    • @andreasreiser8160
      @andreasreiser8160 3 года назад +3

      You don't need simulation if you have real-world data though. You see how humans drive and how other agents react. And the behavior of other agents is exactly why an RL approach like MuZero sounds very promising

    • @thinkingchanged
      @thinkingchanged 3 года назад +2

      @@andreasreiser8160 Yet this is still an incredible amount of data. An amount that only Tesla has/is getting at this point in time. Tesla’s data lead is exponentially increasing, I want to see competition in the autonomy space, but I’m not so confident that comma will be able to catch up in time (data wise).

    • @sddndsiduae4b-688
      @sddndsiduae4b-688 3 года назад

      @@andreasreiser8160 MuZero learns based only on its own actions, i.e. all those millions of simulated or real full games are needed for learning.

    • @sddndsiduae4b-688
      @sddndsiduae4b-688 3 года назад

      I.e. we need another breakthrough here, and if you suggest AlphaStar, then I could show you examples of it misunderstanding what's happening on the battlefield even in its last learning iteration (i.e. it copied behavior without fully exploring the consequences of that behavior).

    • @DjChronokun
      @DjChronokun 3 года назад +2

      I find it weird that people keep suggesting end-to-end RL approaches, but who is investing in building the simulators to make them viable?
      seems like instead of a 'rook guy' or a 'cone guy' they need to be focusing on finding ways to realistically model pedestrians, drivers, animals, etc. for acting as agents in a simulation, as well as getting expert artists and graphics programmers and so on to build the rest of the simulator if they seriously want to pursue this end-to-end approach

  • @alexng4
    @alexng4 3 года назад +8

    With that type of attitude, it's hard to think George will be successful. Most confident, successful people don't need to sound or carry themselves in a cocky manner in order to get their message across. Even someone as smart as Elon sometimes prefixes things with a percentage probability that something might fail.

    • @socrates_the_great6209
      @socrates_the_great6209 3 года назад +2

      Elon is an AI. You can't compare him with a human being. He is a machine. George is a genius and you can't see it.

    • @nuwang2381
      @nuwang2381 3 года назад

      Yet George has been wildly successful in the past. Elon is cocky and has taken countless gambles; if anything, being cocky enough to believe in yourself against the odds is what makes people successful and also makes them fail. A lot of this is high-risk, high-reward behavior. Who comes out on top doesn't matter because at the end of the day we consumers get the best possible product

    • @ahduhm
      @ahduhm 3 года назад +2

      "Will be successful?" Is George Hotz not already successful?

  • @GeoffHou
    @GeoffHou 3 года назад +1

    Assuming that driving and recognising patterns and symbols can be assessed on their own and then stitched together is a bit tricky. Is it important to recognise everything happening around you?
    I would think that the stitching would itself be an AI, which is similar to what comma.ai is doing. So I think George has a point that, in the end, the code and probably the reaction time might be affected by following a modular approach.

  • @incription
    @incription 3 года назад +3

    I love the moire pattern on his chair

  • @spiritusinfinitus
    @spiritusinfinitus 3 года назад +45

    Long term, Tesla's system can learn end-to-end from its own previous foundational rook-guy experience. Rook-guy's job is transitional, not permanent.

    • @kawo666
      @kawo666 3 года назад +3

      Except that when you look at the latest state of the art in reinforcement learning, it's best when the AI learns from a clean slate without the 'rook guy'.

    • @ruslanuchan8880
      @ruslanuchan8880 3 года назад +5

      @@kawo666 What's this state of the art in RL? Can you link the paper?

    • @MichaelZenkay
      @MichaelZenkay 3 года назад

      rook guy a single neuron in the 6 neuron network we know as chess

  • @e1nste1in
    @e1nste1in 3 года назад +3

    What they didn't mention is that the end-to-end approach comes with the benefit that you don't even have to identify the subtasks (and then weight them).
    There might be some visual clues to driving that we are not even consciously aware of ...

  • @lukem8420
    @lukem8420 5 месяцев назад

    3:50 George turned out to be 100% correct

  • @chizaram7517
    @chizaram7517 3 года назад +7

    I watch both chess videos and Lex Fridman; I watched a few of those a few hours ago. When I saw @Black Rose's chess comment, I thought the comment section had frozen. Then I watched the video and heard the chess references, and saw some other chess comments, and I realized, oh, this is why YouTube recommended this. It felt like a reality glitch for me at first.

  • @KonradBogen
    @KonradBogen 2 года назад

    How do you spell out the name of the paper George mentions?

  • @Ryan-xq3kl
    @Ryan-xq3kl 3 года назад +8

    George might be a geek but he knows what he's talking about for sure

    • @socrates_the_great6209
      @socrates_the_great6209 3 года назад +3

      Not just a geek, a genius.

    • @Ryan-xq3kl
      @Ryan-xq3kl 3 года назад +1

      @@socrates_the_great6209 He does represent lots of those genius like qualities, he even says stuff that sounds dumb sometimes but the way he says it sounds smart lol

  • @markp2381
    @markp2381 3 года назад +1

    Can someone explain to me how comma.ai's end-to-end approach works? For instance, how do they test it?

    • @santishorts
      @santishorts 3 года назад

      They have access to user provided drive data on which they train their models.

  • @jaredwu1194
    @jaredwu1194 3 года назад

    I think there’s no overall advantage for either end-to-end or multitasking. Self-driving capability with end-to-end learning may be really helpful in developing nations, considering their daunting traffic conditions, where lane lines are ornaments only, with hardly any meaning. So in that case, “where to drive” is way more important than “how to drive”. But I do think Tesla FSD has some hint of end-to-end in it, given it still handles roads with no lane lines or other references pretty well, and the same applies to Comma. It’s not fair to put the two companies’ vision learning algorithms under such a strict dichotomous classification.

  • @markmraven
    @markmraven 3 месяца назад

    I like Tesla's way. He's right, they'll obviously have to switch to end-to-end at some point, but learning tasks is how humans learn to drive. That's what driver's ed is - teaching people what to expect. Obviously, going through driver's ed doesn't make you a good driver, but it develops the fundamental skills needed to be an autonomous driver. In relevant terms, this means putting out a functional product on a shorter time frame and growing a larger user base, which speeds up the next step of analyzing what causes users to take control (more users experiencing fewer problems that need to be solved)

  • @dave2132
    @dave2132 Год назад

    I love an interview where the interviewer loves to hear himself talk for three minutes before the guest utters a word.

  • @blanamaxima
    @blanamaxima 3 года назад +1

    The problem comes once you have to give guarantees... I have some strong doubts that in the next 10 years we will be able to give strong theoretical bounds on performance. The current bounds are so bad that I do not see how on earth we can solve the problem without decomposition.

  • @daveoatway6126
    @daveoatway6126 3 года назад +3

    Lex, I like your perspective. I would rather have the car safely stop and transfer control than almost be right. Just like people, systems can only concentrate on one thing at a time (even though they can switch between tasks quickly). As with people, learning is never done for machines. Losing a game is not of the same magnitude as crashing a car or hitting a person.

  • @RyanLasek
    @RyanLasek 3 года назад +13

    This podcast made me appreciate just how palatable Lex's voice is

  • @misteratoz
    @misteratoz 3 года назад

    Theory and speculation about what's "best" aside, I posit that it's ridiculous to say the company that is the clear leader in machine learning AI for self-driving, based on all available metrics, is just wrong and you, an individual, are correct. An approach can work even if we don't understand its individual pieces. I'd argue that understanding how the individual pieces work for a complex task like FSD is valuable from the perspective of debugging. How can you diagnose issues in end-to-end? The system does... but you can't figure out why.

  • @noisypl
    @noisypl 3 года назад +4

    What was the name of this algorithm/new approach? MuZero? Anyone?

  • @hippopotamus86
    @hippopotamus86 3 года назад

    Skip to 2:57

  • @hobojo153alt4
    @hobojo153alt4 3 года назад +11

    I think the more likely correct solution is to eventually put a "master AI" over all the small "task nets" that Tesla currently has. Basically, the answer to which is better is "yes".

    • @ciarfah
      @ciarfah 3 года назад +2

      Yes

    • @bighands69
      @bighands69 3 года назад

      @Hobojo153 alt
      But then the question becomes: how do you get to the master AI system with the modular approach?
      Does it necessarily develop in that linear fashion?

    • @hobojo153alt4
      @hobojo153alt4 3 года назад

      @@bighands69 You train it the same way you train all the existing ones. The only difference is that this one's input data is the output of all the existing nets, and its output is steering and acceleration
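
A minimal sketch of that "master net over task nets" idea: a small network whose input is the concatenated outputs of the existing task nets and whose output is steering and acceleration. The task names and output sizes are invented; the real heads and their training targets are unknown here.

```python
import torch
import torch.nn as nn

# Sketch: a "master" network consuming the concatenated outputs of existing
# task nets (detections, lanes, signs...) and emitting control commands.
# The task-net output sizes below are invented for illustration.
TASK_OUTPUT_SIZES = {"objects": 64, "lanes": 32, "signs": 16, "ego_state": 8}

class MasterPolicy(nn.Module):
    def __init__(self, task_sizes: dict):
        super().__init__()
        in_dim = sum(task_sizes.values())
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # steering angle, acceleration (negative = braking)
        )

    def forward(self, task_outputs: dict) -> torch.Tensor:
        # Concatenate task-net outputs in a fixed key order, then map to controls.
        x = torch.cat([task_outputs[k] for k in sorted(task_outputs)], dim=-1)
        return self.net(x)

master = MasterPolicy(TASK_OUTPUT_SIZES)
fake_outputs = {k: torch.randn(1, n) for k, n in TASK_OUTPUT_SIZES.items()}
print(master(fake_outputs).shape)  # torch.Size([1, 2])
```

Such a head could, in principle, be trained on logged human steering and acceleration the same way the task nets were trained on their labels.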

  • @lambo199721
    @lambo199721 4 месяца назад

    Looks like he was right… FSD 12 (supervised) is GREAT! 😄

  • @Chakabuka
    @Chakabuka 3 года назад +8

    What's the paper George is talking about?

    • @wimsmets4286
      @wimsmets4286 3 года назад +2

      Look up MuZero

    • @Spreadlove5683
      @Spreadlove5683 3 года назад +2

      A MuZero paper apparently. I haven't tried, but just Google MuZero paper and I'm sure it will come up. It's probably where they publish details about how it works and the results they are getting.

    • @mattsattackss
      @mattsattackss 3 года назад +9

      deepmind.com/research/publications/Mastering-Atari-Go-Chess-and-Shogi-by-Planning-with-a-Learned-Model

    • @andreasreiser8160
      @andreasreiser8160 3 года назад +2

      He said the three key papers are MuZero by DeepMind, "Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic" by Mikael Henaff, Alfredo Canziani, and Yann LeCun, and the Value Prediction Network paper. I haven't read the last 2 yet but I plan to when I have more time. Hope this was informative.

    • @sheevys
      @sheevys 3 года назад

      MuZero is just like AlphaZero, but instead of being given the complete description of the dynamics of the environment, you learn it. So going back to the chess example, AlphaZero knew the rules of chess, while MuZero learnt them while playing. This is good, because in real-world applications you don't know the rules that govern the world, so you need to learn to approximate them and have your own internal model of the world.
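
A heavily simplified sketch of the structure described above: a representation network h, a learned dynamics network g, and a prediction network f, rolled out in latent space without ever querying the real rules. This omits MCTS, the training losses, and everything else the actual MuZero paper specifies.

```python
import torch
import torch.nn as nn

# MuZero-style skeleton: instead of being given the environment's rules,
# the agent learns h (representation), g (dynamics), and f (prediction).
OBS_DIM, ACTION_DIM, LATENT_DIM = 16, 4, 32

h = nn.Sequential(nn.Linear(OBS_DIM, LATENT_DIM), nn.ReLU())                  # obs -> latent state
g = nn.Sequential(nn.Linear(LATENT_DIM + ACTION_DIM, LATENT_DIM), nn.ReLU())  # (state, action) -> next state
f = nn.Linear(LATENT_DIM, ACTION_DIM + 1)                                     # state -> policy logits + value

obs = torch.randn(1, OBS_DIM)
state = h(obs)

# Plan a few steps purely inside the learned model, never consulting real rules.
for _ in range(3):
    out = f(state)
    policy_logits, value = out[:, :ACTION_DIM], out[:, ACTION_DIM]
    action = torch.zeros(1, ACTION_DIM)
    action[0, policy_logits.argmax()] = 1.0        # greedy one-hot action
    state = g(torch.cat([state, action], dim=-1))  # imagined next latent state

print(policy_logits.shape, value.shape)  # torch.Size([1, 4]) torch.Size([1])
```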

  • @ThePixelize
    @ThePixelize 3 года назад

    3:46 high-pitched "harder!"

  • @bradstewart7007
    @bradstewart7007 7 месяцев назад +1

    FSD Beta 12: George was right. Again.

  • @danalogrippo
    @danalogrippo 3 года назад

    Love this channel

  • @DavidFregoli
    @DavidFregoli 3 года назад +8

    he should have just worked with Elon, Karpathy and the team honestly, Comma is interesting but a waste of his talent

    • @conduit242
      @conduit242 3 года назад +2

      What talent? What’s he done meaningfully? He speaks like a noob who sold dumb money a crock of shite and his lack of funding proves it.

    • @seewhyaneyesee
      @seewhyaneyesee 3 года назад

      @@conduit242 Done more than your lilly silly butt complaining? Maybe? What have you done, sir?

    • @conduit242
      @conduit242 3 года назад

      @@seewhyaneyesee *chortle*

    • @w1z4rd9
      @w1z4rd9 3 года назад +1

      @@conduit242 He jailbroke the PS3 and was the first to do it on an iPhone, so yeah....

    • @conduit242
      @conduit242 3 года назад +2

      @@w1z4rd9 Ah ha, clearly an AI expert then 😂

  • @sambarratt1349
    @sambarratt1349 3 года назад +33

    "I'm not a liability guy. I'm not going to take liability. Level two forever."
    Yeah, and that's why I wouldn't buy from him. Why would I have confidence in his product if he doesn't have confidence in his product?

    • @CarlosSpicyWiener111
      @CarlosSpicyWiener111 3 года назад +4

      I understand your perspective but I would personally give it a shot. Even if you have liability for a level 5 car that's truly level 5, the probability of an accident should be as low as, or perhaps even lower than, that of human driving. Given this, insurance companies will have fewer claims and can thus incentivize new customers with lower premiums but maybe higher deductibles. I don't think we should expect car manufacturers or software manufacturers to take liability, but I'd love to entertain a counterargument to this.

    • @pisoiorfan
      @pisoiorfan 3 года назад +3

      It's not a matter of not having confidence in their solution; it's about bypassing certification costs that a small company like theirs cannot afford.

    • @CarlosSpicyWiener111
      @CarlosSpicyWiener111 3 года назад +2

      @@pisoiorfan Who could/would certify a level 5 vehicle if it came out today? I feel like simple disengagements per distance or distance driven with no accidents would be sufficient. People's willingness to use it and the actuarial cost that insurance companies calculate should be a good proxy for safety too, right?

    • @actraveler8309
      @actraveler8309 3 года назад

      That is true. There needs to be enough confidence in the product for when you go into higher volumes of production so that your warranty/incident rate doesn’t shut your company down.

    • @ThurstanHethorn
      @ThurstanHethorn 3 года назад

      Although I respect Hotz’s open source approach, his otherwise cavalier attitude is not something appreciated in work so tightly coupled with potential death or disability. Perhaps he should focus on security flaws

  • @brianli1212
    @brianli1212 3 года назад +5

    To be fair, there is no proof that today's artificial neural networks can fully express human-brain-level complexity, which means relying on deep learning to train a perfect end-to-end model that matches human capability is oversimplifying the problem. By the way, board games and simple Atari video games are totally different from the self-driving task. In board games, everything is observable, but in self-driving the real environment's observability is always going to be limited by sensors. Last but not least, building a simulator that simulates a real driving environment to generate all the edge cases needed to train an end-to-end model is as hard a problem, if not harder, than building a self-driving system.

    • @krogan3760
      @krogan3760 3 года назад

      The attempt to get alpha zero playing sc2 was a massive failure and never got close to top pros.

    • @OMGitsjustperfect
      @OMGitsjustperfect Год назад

      it's impossible to create a simulation that is an exact replica of the real world. You have to simplify in many ways. There is just too much information in the real world.

  • @marqueswatson7878
    @marqueswatson7878 3 года назад +162

    I cant wait to see how this arrogance ages.

    • @garrettjones1161
      @garrettjones1161 3 года назад +18

      sounds healthy

    • @natedammerich1745
      @natedammerich1745 3 года назад +62

      I think GeoHotz is going to go down as another one of those geniuses who was just a little bit of a dick. Kinda like almost all the geniuses that we revere today. It’s almost like it’s a necessary characteristic of a genius...

    • @ciarfah
      @ciarfah 3 года назад +19

      @@ken-mb5cp But at the same time, he's better off going all-in in what he believes will work. There are plenty of people doing things differently to him

    • @wheelofcheese100
      @wheelofcheese100 3 года назад +2

      Same. I think the people disagreeing with you are Tesla haters (there are a lot). No real Tesla investor would side with this goober.

    • @wheelofcheese100
      @wheelofcheese100 3 года назад +1

      @@doooofus So being smart and making claims with no big fleet of cars this is tested on is ok I guess. Geezus, no wonder our society is going downhill. A race to the bottom.

  • @descreate1923
    @descreate1923 3 года назад +2

    My heart tells me comma.ai's way is the answer, but my brain says Tesla's way is the answer.

  • @Gabriel-fb6et
    @Gabriel-fb6et 3 года назад +3

    What about a general(ish) AI that builds on the outputs of specialized recognition engines?

    • @infinitelink
      @infinitelink 3 года назад

      Human brain.

    • @nonconsensualopinion
      @nonconsensualopinion 3 года назад

      That's essentially what a convolutional neural net is. It learns kernels, which are small filters that recognize very basic features. Their outputs are combined and the process is repeated until a suitable output is obtained.
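
A minimal sketch of that idea: stacked convolutions whose small learned kernels detect simple local patterns, with later layers combining them into higher-level features. The layer sizes and the 5-class head are arbitrary toy choices.

```python
import torch
import torch.nn as nn

# Stacked convolutions: each layer's small kernels detect simple local patterns,
# and later layers combine those feature maps into more abstract features.
cnn = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   nn.ReLU(),  # edges, color blobs
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  nn.ReLU(),  # corners, textures
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # object parts
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),                                        # e.g. 5 object classes
)

image = torch.rand(1, 3, 64, 64)
print(cnn(image).shape)  # torch.Size([1, 5])
```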

  • @donatheanx8660
    @donatheanx8660 3 года назад +2

    After watching Tesla AI Day, George was right. Tesla has moved to an end-to-end ML self-driving approach. In Tesla’s defense, I think they had decided to do that when they decided to build Dojo.

    • @shashimalsenarath4146
      @shashimalsenarath4146 3 года назад

      Exactly

    • @ddud4966
      @ddud4966 2 года назад +1

      I didn't hear anything about end-to-end learning at AI Day; it was all about feature engineering. They spent 30 minutes talking about a path planner that's essentially just a numeric solver, for Christ's sake. It's all throwaway work.

  • @bigfactsbigstacks6261
    @bigfactsbigstacks6261 3 года назад +2

    Tesla’s approach: Turn reality into a video game then traverse.
    George: Teach the car to drive.
    Which one makes more sense? In my mind the ceiling is on Tesla.

  • @danhunters8226
    @danhunters8226 11 месяцев назад

    The more I read and watch about AI, the more convinced I become that AI will blow past the capability to drive cars so fast that we won't know what happened. It has happened over and over again with tasks that used to seem really difficult: playing chess, image recognition, natural language, and image generation.

  • @routine8
    @routine8 3 года назад +26

    Tesla's approach is much more akin to the human brain.

    • @lukeno4143
      @lukeno4143 3 года назад +6

      The human brain could never waste energy when evolving; it's local optimisation. There are no animals trying to eat our AI. We can do better.

    • @MrKurumuz
      @MrKurumuz 3 года назад +5

      No wtf, you're not 3D-labelling objects in your brain and then basically doing planning over a chess board. It's all one global computation

    • @lukeno4143
      @lukeno4143 3 года назад

      @@MrKurumuz I'm referring to how the brain developed via evolution; your reply refers to how it processes data.

    • @StatsMass
      @StatsMass 3 года назад +1

      @@MrKurumuz isn't it possible that your brain is taking a more task-based approach, but that you can't see those individual tasks because they're happening subconsciously (95% of brain activity)?

  • @NotFinancialAdvice
    @NotFinancialAdvice 3 года назад +38

    My money is on Elon.

    • @NotFinancialAdvice
      @NotFinancialAdvice 3 года назад

      @s__n_Ghs_w_J_g_r_v_ Correct. My money is on Elon.

    • @bigfactsbigstacks6261
      @bigfactsbigstacks6261 3 года назад +1

      Elon the programmer lolol

    • @Clickbait86
      @Clickbait86 3 года назад

      I love Elon, but 10,000 dollars for Autopilot is a lil too much, and Hotz was the original autopilot guy for Tesla until he quit. Tesla follows comma ai

    • @NotFinancialAdvice
      @NotFinancialAdvice 3 года назад

      @@Clickbait86 Great thing about the free market... you don't have to buy it... if you can find something as good for less, then buy that.

    • @Clickbait86
      @Clickbait86 3 года назад

      @@NotFinancialAdvice that’s what I’m saying. Y is ur money on elon if his autopilot tech is worth tops 200-500 bucks. Ohh ur a shareholder?

  • @eddies8452
    @eddies8452 2 года назад +2

    The fact that openpilot is able to do what it does with just one camera on a cellphone is proof that its model is more efficient than and superior to Tesla's approach.

  • @dvanrooyen1434
    @dvanrooyen1434 3 года назад +4

    One big issue here: "I can’t give timelines, but this is the solution." The world does not work this way. Spoken like a true engineer. Good luck.

  • @trinocerous
    @trinocerous Год назад

    Scrum told us we don't need rook specialists.

  • @macberry4048
    @macberry4048 3 года назад +1

    Level 2 forever

  • @SevenDeMagnus
    @SevenDeMagnus Год назад

    Coolness

  • @bassamakasheh
    @bassamakasheh 3 года назад +3

    There is NO way you can compare Tesla to Comma. They shouldn’t be mentioned in the same sentence. TOTALLY different. Apples vs oranges

    • @osimmac
      @osimmac 3 года назад +3

      they both drive cars lol

  • @BlackRose-cy9xy
    @BlackRose-cy9xy 3 года назад +1

    Rooks are tools to win; you don’t win with rooks, but you use the rooks to win in the end

  • @bighands69
    @bighands69 3 года назад

    The human brain is not a single system with an input and an output. So the idea that you will need a complete central model to produce driverless cars is not the only way.
    The end game is to produce a system that is capable of driving a car to human-level expectations. How that is achieved is irrelevant to the style or type of system that is used.

  • @BackwardsR3LLIK
    @BackwardsR3LLIK 3 года назад

    Wait, was that a question? I thought he was answering 😂

  • @davidwhamilton8559
    @davidwhamilton8559 3 года назад +11

    I would not bet against Elon Musk. The man launched a Tesla into outer space.
    That’s all I have to say

    • @pisoiorfan
      @pisoiorfan 3 года назад +1

      Which traveled 200MMs (Million Miles) without a single autopilot failure

  • @lukeno4143
    @lukeno4143 3 года назад +14

    George is right though. Most algorithms are piecewise, not end-to-end. It's because they don't know how to optimise any other way. There is no cutting edge here that I can see. Reinforcement learning is a primitive way to optimise; there are much better ways to do it, but I've never seen people publicly talk about them. These guys get paid, what, millions for their shit machine learning techniques that have been around for decades. All they do is add a few tricks and a lot of horsepower. That's the deep secret: machine learning is just brute forcing. It's like string theory. Just add parameters and thank god for GPUs and the like.

    • @thinkingchanged
      @thinkingchanged 3 года назад +1

      I would agree for most things, but the complexity of the real world and the role it plays in driving, as well as human behaviour, provides an incomprehensible level of variation that is not well suited for end to end at this time.

  • @adityavarshney6690
    @adityavarshney6690 3 года назад +1

    Isn't the difference between L2 and L5 how much you believe in your solution? LOL, saying it's "just liability" is a cop-out

  • @KhaledKimboo4
    @KhaledKimboo4 3 года назад

    Tesla is not talking much about self-driving cars now because they realized, after the hype was gone, that it's too early and that focusing on it now will turn them into the new Nokia. They removed all "self-driving car and autopilot" mentions from their website.

  • @CookiePepper
    @CookiePepper 3 года назад +4

    I think there is a good probability that Tesla screwed up full self-driving and Comma surpasses them. The new FSD beta is probably another local maximum which does not reach level 5.
    Elon said it is 4D, but it is not. 3.2D, I would call it.

    • @zebra7462
      @zebra7462 3 года назад +9

      Elon said 4.20D if I recall correctly

    • @natedammerich1745
      @natedammerich1745 3 года назад +1

      I think the clips show some enormous potential for this early build. Most importantly, it appears they’ve solved vision. Bugs lay within the policies that drive the car based on that vision, but these are fixable

    • @thinkingchanged
      @thinkingchanged 3 года назад +6

      How can you say the FSD beta is “probably” another local maximum when it has been out for less than a week? We aren't the engineers on the team, so we don't even really know what's going on behind the scenes of the software.

    • @scotttisdale2773
      @scotttisdale2773 3 года назад

      Even George contests this. If Tesla realizes their approach needs to change to be more end-to-end, then it will be an easy change. For now they are collecting a huge amount of data. Comma is unlikely to surpass them simply because of hardware and data. That doesn't mean his approach is wrong though.

  • @bikerfreak714
    @bikerfreak714 3 года назад +19

    He's so smug it's hard to get through the entire video

    • @metroidM1A1
      @metroidM1A1 3 года назад +13

      Your name is "tesla_dave" 😂😂😂

    • @bikerfreak714
      @bikerfreak714 3 года назад +1

      @@metroidM1A1 LOL I actually like his E2E approach, but just wish he didn't always talk like he's the smartest human being and the rest of the world is below him

    • @Paooul13
      @Paooul13 3 года назад +4

      I half agree. I think he's enjoyably smug.

    • @omarc606
      @omarc606 3 года назад

      Yes. I’m glad Lex is interviewing him. There’s not a prey in the room.

    • @krogan3760
      @krogan3760 3 года назад

      @The Greedy Merchant that's just saying he's always been like this.

  • @CarlosSpicyWiener111
    @CarlosSpicyWiener111 3 года назад +5

    Wow I feel like I'm in the minority here. When I first heard this I was like "exactly, who would get a rook guy, that seems absurd for solving chess as a whole". I enjoy the other perspectives though so it is quite entertaining to hear the other side (much of which I agree with).

    • @julianriise5618
      @julianriise5618 3 года назад

      Maybe chess was not a very good example, since there won't be a hurricane or snow interfering with the game, which there could be on the streets with AVs :)

    • @nocare
      @nocare 3 года назад

      I think both camps are wrong.
      I think it should take an approach closer to the human brain. Your brain has a "rook" guy for motion planning. Your brain has a "bishop" guy for object recognition.
      Build fast, and verifiable "rook" and "bishop" guys to do certain tasks that are fundamentally done best as their own task.
      Then have a chess player that utilizes these to be more effective and benefit from the emergence that is prevalent when you integrate simpler systems into one more complex system.

    • @CarlosSpicyWiener111
      @CarlosSpicyWiener111 3 года назад

      @@nocare I feel like I understand what you mean by the human brain has a "rook" guy but I would like to know what you mean more specifically. If I understand correctly, you're suggesting that even in human driving, we're ultimately doing discrete tasks such as motion planning and object recognition which (should) work in conjunction to produce a singular complex task of driving. Is that a fair characterization? If so, I would disagree. This idea of compartmentalizing object recognition and motion planning isn't how the brain, or at least how the brain handles driving, seems to work. The two seem inherently entangled. If you were stationary and saw a cone versus moving and saw a cone, the output of the driving model should be different and it wouldn't necessarily be different if you compartmentalized object recognition and motion planning.
      Also how exactly is the thing you're saying different from Tesla's approach?
      Lastly, you say "...benefit from the emergence that is prevalent when you integrate simpler systems into one more complex system." I feel like this is the entirety of the self-driving problem isn't it? How can we just count on emergence to connect the dots? Isn't the thing that's synthesizing the inputs the model that is actually driving the car? Related to this is Eliezer Yudkowsky's blog post on the futility of emergence: www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence

    • @nocare
      @nocare 3 года назад

      ​@@CarlosSpicyWiener111 So although a good read, that post mischaracterized emergence in terms of defining it in a useless way.
      Yes emergence can be defined as such and thus is accurately dismissed based upon the way the author defined it.
      I am defining emergence as the property of intelligent behaviour that arrives from the interaction of simple rules; where in said intelligent behaviour could also be described by a definite set of more complex rules.
      To avoid just using analogies, the predictive power gained by saying something is emergent is that you are attempting to gain functionality for free instead of specifying it yourself.
      It's writing giant if-else trees vs writing interactive rules; it's doing an actual integral for a function vs summing a million small areas to get a good enough approximation.
      Now it's well known that the brain has various levels of compartmentalization; that's why we have a motor cortex and a frontal cortex.
      The difference is Tesla isn't doing what I was suggesting.
      Tesla is saying we are going to compartmentalize an entire driving task and every aspect of it from the ground up.
      The stop sign example is a really good one I think. A "rook guy" for everything stop signs is bad.
      A "rook guy" for recognizing stop signs vs traffic lights vs cars is good.
      We do have a dedicated part of the brain to taking images from the eyes and slicing that image up into objects based upon the focus director coming from the frontal cortex.
      An example of how independent our brain can be is when you look at a cup your motor cortex generates the pathing to grab it even if you aren't planning to grab it.
      Your frontal cortex just has to say yeah ok go for it.
      Does that make it clearer?

  • @adityavarshney6690
    @adityavarshney6690 3 года назад +5

    Murder of bad agents HAHAHA

  • @ericervin2513
    @ericervin2513 3 года назад

    Lex!!! Not only the largest Redbull ever produced but a coffee chaser. I thought I was bad. :) You did say it was Monday morning so...

    • @raji08xd68
      @raji08xd68 3 года назад

      Regarding Monday morning, if you're referring to what Lex said at 3:10, then what he said was 'Monday morning quarterbacking' which is actually a figure of speech

  • @strange6973
    @strange6973 3 года назад

    I think he gets to one of the better points at the end - who takes liability. The legal challenge will need to be addressed to actually get cars onto the road. Also, wow, this guy is unpleasant to listen to. Also, he's very arrogant for someone whose entire idea rides on the back of someone else's solution.

  • @retrocdtv
    @retrocdtv 3 года назад +2

    It's super scary to make a black-box AI because it makes investigation difficult, especially if someone dies in a crash due to a self-driving error. This guy is providing self-driving capability without accountability for possible deaths; that's the reason why people will lean toward Tesla's Autopilot. The biggest concern customers have about a self-driving car is trust in the company that made it. Also, Tesla's FSD beta is out, so I think Tesla has already won the race :)

  • @Jsmith1611
    @Jsmith1611 3 года назад +1

    George Hotz is making a fatal assumption that we need the "best" driver. We don't; we need a driver that can beat 90% of average humans, and chess engines pre-AlphaZero were able to clear this hurdle. Now, if we're talking about writing a song, then yes, you want to be in the top 0.01 percent of musicians because no one cares about the rest. But for plain old driving, being in the top 10% is more than enough.

    • @bossbondan5054
      @bossbondan5054 2 года назад

      and a driver depending 100% on technology is just stupid..

  • @travis3371
    @travis3371 3 года назад

    It’s called modular programming. Not exactly ground breaking

  • @alexai6648
    @alexai6648 3 года назад +2

    If such an arrogant engineer with a "teenager level of thinking" worked at Tesla --> self-driving for sure will not come from Tesla.

    • @Supreme_Lobster
      @Supreme_Lobster 3 года назад +2

      he doesn't work for Tesla tho, what are you on about?

  • @ThurstanHethorn
    @ThurstanHethorn 3 года назад +1

    George Hotz comes off as an insufferable ass. I would hazard he is aware and could change his behaviour, but chooses not to, which makes it all the worse.
    Hotz’s ‘I know better’ attitude grates and makes this hard to watch. He doesn’t even begin to explain how his solution has advantages, so it also lacks any depth.

    • @ThurstanHethorn
      @ThurstanHethorn 3 года назад

      @@dingdong3021 No. I have no problem with intelligence in its various forms. I like to hear from people smarter than me. It really is about attitude.

  • @timbehrens9678
    @timbehrens9678 3 года назад

    Level 2 forever? Bwahaha! Every second Indian automotive IT-services company already has Level 2, and they have been selling it for years.

  • @pauldreyer6111
    @pauldreyer6111 3 года назад

    George def doesn’t believe in humility

  • @johannesdolch
    @johannesdolch Год назад +1

    No offense, but if that guy gets to AGI before Elon does, the world is in deep trouble.

  • @susheelkumarpippera7877
    @susheelkumarpippera7877 3 года назад

    Is he behaving like a nerd or a pro? I can't figure it out, even with his suit. Now I really understand what "never judge a book by its cover" means.

  • @ManAcadie
    @ManAcadie 3 года назад

    I like Hotz. Seems like a super smart dude obviously. I just have a hard time believing he genuinely thinks he is outwitting Elon Musk....about ANYTHING.

  • @alexandrodisla6285
    @alexandrodisla6285 3 года назад +1

    400 iq on this one video.

  • @ashlynnneumann5063
    @ashlynnneumann5063 3 года назад +1

    👍🏽👍🏽👍🏽

  • @increasemaximumlifespan2502
    @increasemaximumlifespan2502 3 года назад

    The early part of the 2020s is when full self-driving will occur

    • @increasemaximumlifespan2502
      @increasemaximumlifespan2502 3 года назад

      Law of accelerating returns

    • @pandatobi5897
      @pandatobi5897 3 года назад +2

      @@increasemaximumlifespan2502 if all you have to cite is "muh law of accelerating returns", then you clearly know nothing about the subject at hand here.

    • @increasemaximumlifespan2502
      @increasemaximumlifespan2502 3 года назад

      @@pandatobi5897 lol what do you find wrong with the law of accelerating returns?

    • @Ryan-xq3kl
      @Ryan-xq3kl 3 года назад +1

      Wow, it's almost as if that's the part of the 2020s that we're in right now :|

    • @increasemaximumlifespan2502
      @increasemaximumlifespan2502 3 года назад

      @@Ryan-xq3kl If I remember right, Ray Kurzweil said 2019 is when we will have full self-driving capabilities. And there has been a 2-3 year margin of error in his predictions so far. So ya... (8

  • @AmitKumar-yx6ne
    @AmitKumar-yx6ne Год назад

    Lex just talks to himself. The guests come only to listen to him.

  • @NeverTalkToCops1
    @NeverTalkToCops1 3 года назад

    Elon and this George bloke don't get it. Remove the steering function; make the environment the car is in steer the car. Don't make the car's computer interpret the environment and map it. How silly. Don't believe me? Go look at a high-speed passenger train. Not much computer power needed there, especially not AI. The point is that these folks are using off-the-shelf parts and off-the-shelf thinking.

  • @socrates_the_great6209
    @socrates_the_great6209 3 года назад +1

    What about letting the genius talk? First 3 min Lex is talking non stop...

  • @MegaMijit
    @MegaMijit Год назад

    WHY DOES GEOHOT SOUND LIKE A WOMAN OR A THEY/THEM?!? WTF IS UP W THAT???

    • @ipurelike
      @ipurelike 9 месяцев назад

      lol... where did you get that vibe man?