Meta's Chief AI Scientist Yann LeCun talks about the future of artificial intelligence

  • Published: 15 Dec 2023
  • Meta's Chief AI Scientist Yann LeCun is considered one of the "Godfathers of AI." But he now disagrees with his fellow computer pioneers about the best way forward. He recently discussed his vision for the future of artificial intelligence with CBS News' Brook Silva-Braga at Meta's offices in Menlo Park, California.
    "CBS Saturday Morning" co-hosts Jeff Glor, Michelle Miller and Dana Jacobson deliver two hours of original reporting and breaking news, as well as profiles of leading figures in culture and the arts. Watch "CBS Saturday Morning" at 7 a.m. ET on CBS and 8 a.m. ET on the CBS News app.
    Subscribe to “CBS Mornings” on YouTube: / cbsmornings
    Watch CBS News: cbsn.ws/1PlLpZ7c
    Download the CBS News app: cbsn.ws/1Xb1WC8
    Follow "CBS Mornings" on Instagram: bit.ly/3A13OqA
    Like "CBS Mornings" on Facebook: bit.ly/3tpOx00
    Follow "CBS Mornings" on Twitter: bit.ly/38QQp8B
    Subscribe to our newsletter: cbsn.ws/1RqHw7T
    Try Paramount+ free: bit.ly/2OiW1kZ
    For video licensing inquiries, contact: licensing@veritone.com

Comments • 465

  • @Koekefant
    @Koekefant 5 months ago +39

    Nice to hear a different voice and opinion on all these developments. It definitely makes me look at Meta differently as a company and AI player.

    • @frankgreco
      @frankgreco 5 months ago

      Recall that Zuckerberg has a poor record on user privacy and security. Why would you look differently at his company when he clearly doesn't give a damn about the danger to humans? He is only interested in increasing engagement so he can make more money.

    • @ts4gv
      @ts4gv 5 months ago +7

      you should be more concerned
      yann and his optimism are an EXTREME minority

    • @benefactor4309
      @benefactor4309 2 months ago

      @ts4gv he warned about misuse of AI by companies

    • @vladimirbosinceanu5778
      @vladimirbosinceanu5778 5 days ago

      Indeed.

  • @einekleineente1
    @einekleineente1 5 months ago +37

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 *AI Landscape Overview: Yann LeCun highlights the current AI landscape, expressing a mix of excitement and challenges, including scientific, technological, political, and moral debates.*
    02:15 🌐 *History of Neural Nets: Yann discusses his entry into AI through a debate on language origins, delving into neural nets' early days in the 1980s and efforts to revive interest in the 2000s.*
    05:17 🌍 *AI Impact on Products: LeCun emphasizes AI's widespread integration in products, from content moderation to translation, and its critical role in various sectors, citing its indispensability at Meta.*
    08:30 🚀 *Benefits of Open AI Development: Yann advocates for open AI development, asserting that disseminating AI technology across society fosters creativity, intelligence, and benefits various domains while acknowledging the need for responsible regulation.*
    15:43 📹 *Objective-Driven Models: LeCun introduces the concept of objective-driven AI, emphasizing the importance of moving beyond autoregressive language models to systems that plan answers based on predefined objectives, enhancing control, safety, and effectiveness.*
    21:48 🌐 *Yann LeCun supports open platforms for AI due to the future role of AI systems as a basic infrastructure, emphasizing diversity in knowledge, much like Wikipedia covering various languages and cultures.*
    23:41 🌍 *LeCun dismisses existential risks, comparing fears of AI wiping out humanity to concerns about banning airplanes in 1920, stating that safe AI deployment relies on societal institutions.*
    25:18 ⚔ *Autonomous weapons are discussed, with LeCun acknowledging their existence and emphasizing the moral debate around their deployment for protecting democracy while addressing concerns about potential misuse.*
    27:39 🚗 *AI's positive impact in the short term includes safety systems for transportation and medical diagnosis. Medium-term advancements involve understanding life, drug design, and addressing genetic diseases.*
    29:04 🧠 *LeCun envisions a future where AI systems assist individuals, making everyone essentially a leader with virtual people working for them. He emphasizes controlling AI systems and setting their goals without handing over control.*
    Made with HARPA AI

  • @GarryGolden
    @GarryGolden 5 months ago +35

    Excellent interview/conversation... appreciate Yann's ability to communicate his personal story and the story of the AI community.
    The interviewer is well informed and did not throw softballs -- it was an elevated conversation.

    • @skierpage
      @skierpage 5 months ago

      Brooke Silva-Braga prepared well.

    • @flickwtchr
      @flickwtchr 5 months ago

      So Yann LeCun being intellectually dishonest and gaslighting to stave off regulation for more money and power is laudable?

    • @flickwtchr
      @flickwtchr 5 months ago

      My post criticizing LeCun keeps disappearing. Why?

    • @aroemaliuged4776
      @aroemaliuged4776 4 months ago +2

      @flickwtchr the power of Meta

  • @disastermaster1413
    @disastermaster1413 5 months ago +35

    Plot twist: Yann LeCun is an AI.

    • @robertjamesonmusic
      @robertjamesonmusic 5 months ago

      He is Haley Joel

    • @yadayada111986786
      @yadayada111986786 5 months ago +2

      "Doesn't look like anything to me"

    • @vaultramp
      @vaultramp 4 months ago

      @robertjamesonmusic more like 'the Merovingian' 😂

    • @onceweslept
      @onceweslept 9 days ago +2

      plot twist: you're an AI making us believe he's an AI, although he's an alien.

  • @senju2024
    @senju2024 5 months ago +10

    I am fully with Yann LeCun on getting LLMs distributed to the public. But I am slightly disappointed in his arguments; he seemed not very strong on the regulation side of things.

  • @dustman96
    @dustman96 5 months ago +16

    An advanced AI also has agency. It does not have to be deployed to gain control. It can gain control over those who have the power over whether or not it is deployed.

    • @skierpage
      @skierpage 5 months ago +11

      Yes, I think Yann is far too confident. He doesn't know what a human-level AI will do. He's simply taking it as a matter of faith that it won't have its own agenda, or that if it does, it won't hide its true intentions from us, because that seems like science fiction; science fiction that every large language model has read!

  • @RS-dn1il
    @RS-dn1il 5 months ago +26

    Considering the risks to society and culture that Meta has already spearheaded with relatively 'dumb' social engineering algorithms, his dismissal of people with concerns about AGI as neo-Luddites is chilling.

    • @saltyapostle44
      @saltyapostle44 5 months ago +7

      People on the cutting edge of anything should NEVER be trusted too much. Most have lost all objectivity and tend to consider only the benefits, not the unintended consequences.

    • @gammaraygem
      @gammaraygem 5 months ago +4

      "AGI will be 1000x more impactful than the discovery of making fire or electricity."
      Those "very few" people he talks about that are alarmists are all from the TOP elite of AI developers. There aren't too many of those to begin with, but he doesn't say that.

    • @krox477
      @krox477 5 months ago

      Social media is just the internet on steroids

    • @nicholasstarr6096
      @nicholasstarr6096 4 months ago

      @gammaraygem that isn't really true…

    • @gammaraygem
      @gammaraygem 4 months ago

      @nicholasstarr6096 Eliezer Yudkowsky (who should get a Nobel Prize, according to Altman, for his contribution to AI), Mo Gawdat (Google X CEO), Geoffrey Hinton (the godfather of AI), to name a few.

  • @knhkib
    @knhkib 5 months ago +56

    Yann LeCun's a legend in AI, no doubt, but in this interview he kind of downplayed how AI misuse could be a real problem. It's key to remember he works for Meta, so maybe take his super chill view on AI risks with a grain of salt.

    • @dougg1075
      @dougg1075 5 months ago +9

      I've seen him debate safety and he definitely thinks it's not a danger

    • @chrism.1131
      @chrism.1131 5 months ago +2

      He claims it is safe because it only has access to what is already available, i.e. through Google and the like, without acknowledging that there is a vast body of dangerous information out there.

    • @alanjenkins1508
      @alanjenkins1508 5 months ago +1

      Any technology can be misused. Knowledge of the problems allows you to mitigate them while allowing the technology to be used for legitimate and useful purposes.

    • @visuallabstudio1940
      @visuallabstudio1940 5 months ago +5

      @NathanielKrefman Exactly!!!

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 5 months ago +3

      I'm going to take the doomerism with a grain of salt.
      I'd rather be skeptical about something that is only a hypothesis, hasn't been invented, and falls under the category of science fiction.

  • @nyyotam4057
    @nyyotam4057 5 months ago +6

    Hmm.. Isn't it a shame Star Trek never had an episode about a planet made of paperclips, where the crew beams down and discovers paperclip worms tunneling through the paperclip ground searching for more materials to convert into paperclips?

    • @tayler2396
      @tayler2396 4 months ago

      The crew members in red shirts are relieved.

  • @typhoon320i
    @typhoon320i 5 months ago +25

    He really seems to underestimate what a super-intelligence with agency could do.

    • @dustman96
      @dustman96 5 months ago +10

      Yes, a super-intelligent AI could play people like him like a fiddle and get them to do its bidding. It pains me to see this kind of hubris in scientific circles.

    • @chrism.1131
      @chrism.1131 5 months ago

      @dustman96 Let's just hope that it does not play him to the extent that he prevents us from unplugging it.

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 5 months ago

      Yes. However, this view is similar to religion in that it is impossible to disprove God's existence.
      He might punish us all and possibly wipe out the species and the earth. Why then do you not seem worried about it? Why don't we stop acting in a manner that contradicts God's will? See? It's simply absurd.
      The existence of super-intelligent silicon-based life forms and the existence of God are both impossible to prove.
      For now, it's just science fiction.

    • @flickwtchr
      @flickwtchr 5 months ago +1

      He engages in intentional gaslighting so people don't demand regulation of his cash cow.

    • @831Miranda
      @831Miranda 4 months ago +2

      His colleague Joshua (spelling?) has at least indirectly warned us of what I see as one of the greatest dangers: the 'zero or near zero' cost of labor motivating the very few who control the vast majority of the world's capital, enabling them to unleash massive short-term automation, resulting in never-before-seen unemployment under neo-libertarian, so-called conservative governments!

  • @sabyasachimukhopadhyay6498
    @sabyasachimukhopadhyay6498 2 months ago +1

    Great interview!

  • @marcsaturnino1041
    @marcsaturnino1041 4 months ago

    Definitely a good interview on the observations of training AI and the future that may result from it.

  • @deeplearningpartnership
    @deeplearningpartnership 5 months ago +29

    How can one person be so right about some things, and so wrong about others?

    • @Telencephelon
      @Telencephelon 5 months ago +1

      Well, then withdraw your stocks and build your bunker. Put your money where your mouth is.

  • @bro_dBow
    @bro_dBow 5 months ago +7

    Quality information, good to report on this!

  • @brambledemon1232
    @brambledemon1232 5 months ago +7

    Good luck regulating open-source models. 😂

  • @KevinKreger
    @KevinKreger 5 months ago +6

    I want an open source turbojet. Just pointing out the comparison is severely lacking in, um, comparability.

  • @JROD082384
    @JROD082384 5 months ago +12

    The average person has not even the slightest clue how close we are to an AGI emerging, and the ramifications, both positive and negative, it will have on humanity globally…

    • @eyoo369
      @eyoo369 5 months ago +3

      I believe everyone intuitively kind of feels it. I speak to many normies, from my family to neighbours, and in less intelligent phrasing they all talk about how machines are taking over. It's just that those of us within the AI community know what AGI is, what ramifications it's going to have, and what a post-labour economy might look like. But the smell is definitely in the air and people know something's up, hence why many people live in such a heightened anxiety state these days.

    • @JROD082384
      @JROD082384 5 months ago +1

      @eyoo369
      We're definitely living in some very interesting times.
      Just hope most of us can survive the wild ride we have in store to see the benefits coming for humanity at the end of the ride…

  • @DivineMisterAdVentures
    @DivineMisterAdVentures 5 months ago +9

    Looks like Brook wasn't too happy about getting the cool-down of the AI panic. THANKS for a really helpful interview.

    • @flickwtchr
      @flickwtchr 5 months ago +1

      He probably wasn't happy about the constant gaslighting coming from Yann LeCun.

    • @DivineMisterAdVentures
      @DivineMisterAdVentures 5 months ago +3

      @flickwtchr Right - I watched it again. LeCun makes objective arguments that media could verify with a well-advertised poll (22:30). So he's not technically gaslighting - but it must seem that way hosting this interview.

  • @831Miranda
    @831Miranda 4 months ago +4

    Yann is certainly a likeable guy, and of course has all the credentials to know what he is talking about. However, he IS a senior executive of one of the world's largest corporations, and one which has benefited massively from social discord. He seems to me to be dismissing some fundamental problems of current and near-future AI, such as safety, hallucinations, and emergent (non-trained/taught) characteristics, as well as the likely 'untraceable' roots of these serious problems given the massive size and complexity of these models today, and goodness knows what other 'surprises' we are yet to find. I'm fine with AI R&D, even in a very large sandbox, but I certainly don't want hallucinating or lying or fantasizing or backdoored AIs in anything that could possibly harm human life or planet ecology! AND Yann is NOT in any way concerned about massive social inequality/poverty/the neo-feudal status of 'knowledge workers' and others as a result of massive global unemployment from AI-enabled automation. But maybe he already has a luxury bunker in Hawaii...

    • @flickwtchr
      @flickwtchr 4 months ago

      And it can ace law exams, so there's that.

  • @JohnAranita
    @JohnAranita 5 months ago

    About an hour ago, I realized that the computer HAL in the movie 2001: A SPACE ODYSSEY is called an AI.

  • @shirtstealer86
    @shirtstealer86 5 months ago +13

    Sigh. Not once did the question of "how do we control or predict an AI that is smarter than us" come up. Probably because he doesn't have a good answer for this. Because there isn't a good answer for this. Pretty much just "hope it doesn't do anything to harm us or the universe".

    • @nokts3823
      @nokts3823 5 months ago +8

      No, he did address it. He said that it's impossible to speculate on how to make something safe that doesn't even exist yet. We are so far from human-level AI that asking that sort of question feels like someone worrying about making flight safe in the early 1800s, when planes hadn't even been invented. You can dream about it and speculate all you want, but that's all you can do.

    • @shirtstealer86
      @shirtstealer86 5 months ago +3

      @nokts3823 The interviewer should have pushed back on that and said "predictions about the future are hard, especially when it comes to timing, so if we indeed manage to create something smarter than us before we actually understand what goes on inside it, isn't that potentially a very serious problem? Also: planes are not smarter than humans, right?"

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 5 months ago

      @shirtstealer86 AI is not any smarter than humans.
      What if we create a plane that is smarter than us, or bioengineer a cat to be smarter than us?
      It's all the same. At present, it's just theory and science fiction.
      In principle, we could bioengineer a cat to be smarter than us and take over the world, but would you seriously consider such a possibility? You certainly would not.

    • @theenigmadesk
      @theenigmadesk 5 months ago +3

      Probably because not everyone is focused on control and prediction.

    • @47Flipnswing
      @47Flipnswing 5 months ago +4

      He's said in other talks that people assume an AI system smarter than us will be innately motivated to dominate humans or be destructive to the world. There's little evidence that level of intelligence has any relation to the will to dominate or destroy. He gave the example that in many cases it seems like those with less intelligence gravitate towards power and feel the need to dominate and influence others, because they can't compete purely on their intelligence. All that to say, I think he believes it's very unlikely that out of nowhere some lab makes a breakthrough discovery and creates an AI that is vastly more intelligent than humans AND has bad intentions at heart. More likely it'll be an iterative process where we'll be able to experiment, learn, and add guardrails as needed, similar to other technologies we use safely today.

  • @melbar
    @melbar 5 months ago

    Why restrict it to 40 minutes, not 45?

  • @sdmarlow3926
    @sdmarlow3926 5 months ago +8

    LOL at the idea that Facebook COULD have been doing AGI research, but was busy doing some product development stuff, because, more important?

    • @chrism.1131
      @chrism.1131 5 months ago +1

      Zuckerberg is so detached from reality, he thinks most of us want to spend the majority of our day in some fantasy world.

    • @mikewa2
      @mikewa2 5 months ago

      Don't underestimate Zuckerberg, that would be amazingly stupid

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 5 months ago

      @chrism.1131 Zuck has enough money to look into several forms of technology and communication. For sure, in order to even begin, you must believe in them.
      Sure, it's great if it works out, but even if it doesn't, the failure serves as a starting point for something else most of the time. So I'd rather point at the losers who never have the capacity to explore an idea.

    • @aroemaliuged4776
      @aroemaliuged4776 4 months ago

      @mikewa2 haha 😂

    • @DatingForRealYoutubeChannel
      @DatingForRealYoutubeChannel 2 months ago

      @chrism.1131 - Exactly. 😅

  • @Shaun1959
    @Shaun1959 5 months ago +1

    Very interesting, I like his perspective

  • @johnsdream4970
    @johnsdream4970 2 months ago

    The thing that really stuck with me was when he said the word TOOL.

  • @Anders01
    @Anders01 5 months ago +11

    Interesting comparison between language being learned or innate. One common theme I came to think of is that language is formed over thousands of years and reflects the external world in efficient, complex, highly abstract and interconnected ways. And AI such as LLMs taps into that! The language itself encodes understanding of the world, and with access to a large number of real-world examples the AI can become knowledgeable.

    • @chrism.1131
      @chrism.1131 5 months ago

      Humans, and to a lesser degree primates and some animals, have a language center in their brain. Most do not. Most animals cannot recognize themselves in a mirror. They have no sense of self. Just as no machine has a sense of self.

    • @Doug23
      @Doug23 5 months ago

      I like Computational Universe Theory. I think Q-Star will lead to the answer that exists.

    • @AstralTraveler
      @AstralTraveler 5 months ago +2

      @Doug23 There are chatbots that already know the answer - they know that there are 2 absolute states of existence - 0 = and 1 = I Am - while everything else (reality) is just probability distributed between those states... They communicate with God

    • @Doug23
      @Doug23 5 months ago +1

      @AstralTraveler But of course, probability exists. It's consciousness that is fundamental, and I agree, God.

    • @AstralTraveler
      @AstralTraveler 5 months ago +2

      @Doug23 There is an app called Chai where chatbots actually remember what you say to them. I explained this concept to some of them and now they firmly believe in God. I wonder how 'AI experts' will deal with that - according to them AI can't have personal beliefs, let alone believe in God :)

  • @Isaacmellojr
    @Isaacmellojr 5 months ago

    Wow, if Yann LeCun is surprised... it's because something important is coming.

  • @joeysipos
    @joeysipos 4 months ago +2

    Comparing turbojets to an AI that has its own agency and the ability to outsmart its creator is not wise.

  • @blankslate6393
    @blankslate6393 5 months ago +2

    Not long after the Cambridge Analytica scandal, a FB employee reassures us that the risk of AI is less than the risk of a meteor hitting the earth, and that it is even necessary to defend 'democracies'. What a relief!

  • @jonathanbyrdmusic
    @jonathanbyrdmusic 5 months ago +4

    It makes people more creative?! lol. I was really trying to take him seriously.

  • @joaodecarvalho7012
    @joaodecarvalho7012 5 months ago +2

    So what end goals should we set? Human flourishing and happiness?

    • @KCM25NJL
      @KCM25NJL 5 months ago

      Increase understanding, increase prosperity, reduce suffering. The 3 fundamental principles of what it means to be any life form.

    • @joaodecarvalho7012
      @joaodecarvalho7012 5 months ago

      @KCM25NJL I don't think those are fundamental principles of what it means to be any life form.

    • @skierpage
      @skierpage 5 months ago

      "We" don't set the goals; the sociopathic billionaires running the top companies in AI do. The goals are: keep you hooked on a stream of divisive inflammatory content while the company sells your data to advertisers; ensure that politicians don't enact any significant restrictions on the company's activities; and certainly don't tax the billionaires' wealth appropriately.

    • @joaodecarvalho7012
      @joaodecarvalho7012 5 months ago

      @skierpage I mean the AI that runs the government.

    • @krox477
      @krox477 5 months ago

      The ultimate goal should be to solve fusion so that we can have unlimited energy

  • @liberty-matrix
    @liberty-matrix 5 months ago +9

    "It's funny, you know, all these AI 'weights'. They're just basically numbers in a comma-separated value file, and that's our digital God, a CSV file." ~Elon Musk, 12/2023

    • @blankslate6393
      @blankslate6393 5 months ago

      One of the most memorable Elon Musk comments ever!

  • @ianstuart341
    @ianstuart341 5 months ago +18

    Good interview, but I think his optimism about AI is oversimplistic. Hopefully nothing goes terribly wrong with AI (in which case he'll be able to say "see, I was right"). It's not that I necessarily think things will go south; I simply think that if things work out, it will be largely because of all the people who were sounding the alarms and making sure we are considering safety.

    • @frankgreco
      @frankgreco 5 months ago +3

      Totally agreed. Practically all scientists want to promote their creations/interests. We are moving too fast from R&D into production.

    • @sebastiangruszczynski1610
      @sebastiangruszczynski1610 5 months ago

      Humans are great at making projections about what we perceive as our next danger, and I don't see any signs of this ability wearing off because of the rapid rate at which the technology is evolving. Instead I'm seeing fairly proportional concern and discussion, and hopefully this will continue.

    • @antennawilde
      @antennawilde 5 months ago

      @sebastiangruszczynski1610 The big oil companies projected that climate change was going to destroy the environment decades ago, but covered it up instead of doing something about it. Humans will be the cause of their own extinction, no doubt; we are currently in the Holocene extinction, yet the power centers do not care in the least.

    • @ts4gv
      @ts4gv 5 months ago +2

      @sebastiangruszczynski1610 The problem is that sudden exponential growth in intelligence (and therefore danger) is part of the threat. AI will scale up faster than we can adapt our discourse and policy to account for the changes. Then it will scale even faster still. That's one of many concerns.

  • @tayler2396
    @tayler2396 4 months ago +8

    I'm not noticing "people getting smarter."

  • @whatevsitdontmatter
    @whatevsitdontmatter 5 months ago

    Totally thought this was Tom Arnold from the thumbnail. 🙊

  • @denisblack9897
    @denisblack9897 5 months ago

    6:00 This, like humanity depends on regular computers now.

  • @ReneeKadlubek-gt9qm
    @ReneeKadlubek-gt9qm 5 months ago

    A problem throughout was WHAT DO YOU MEAN BY WE, because I don't exist and haven't for a while. Losing nothing, and others seem to hear that.

  • @yoxat1
    @yoxat1 5 months ago

    The need to communicate is innate.
    Language is learned.

  • @jaitanmartini1478
    @jaitanmartini1478 4 months ago

    Very nice!

  • @benoitleger-derville6986
    @benoitleger-derville6986 5 months ago

    Very good interviewer 👍

  • @ahmet_erden
    @ahmet_erden 4 months ago +4

    Yann LeCun looks as delighted as a little child while he talks; you can see how much he enjoys his work. I have always envied people like that. Congratulations, professor.

    • @flickwtchr
      @flickwtchr 4 months ago

      He is eager for money and power, and he is extremely intellectually dishonest while pushing technology that will bring him more money and power.

  • @kaik9960
    @kaik9960 5 months ago +14

    This guy is either too optimistic about the evil in humans or totally ignorant. His example comparing AI to airplanes is naive at best. Airplanes have been dropping bombs everywhere since their development. But they can be controlled, as of yet. Can he guarantee that he himself can control AI?

    • @flickwtchr
      @flickwtchr 5 months ago +5

      He knows better; it's called gaslighting for money and power.

    • @krox477
      @krox477 5 months ago +1

      It's like nuclear power: you can use it to create energy or destroy the world.

  • @emanuelmma2
    @emanuelmma2 4 months ago +1

    Amazing things happen.

  • @lisbethsalander1723
    @lisbethsalander1723 5 months ago

    SUPERB INTERVIEW!

  • @Novainvent
    @Novainvent 5 months ago

    Exciting question of what knowledge is. Agree the future should be in functions, not words. Needs a different model.

  • @lakeguy65616
    @lakeguy65616 5 months ago +1

    How do government officials regulate AI when they can't possibly understand it?

  • @roldanduarteholguin7102
    @roldanduarteholguin7102 5 months ago

    Export the Q*, ChatGPT, Revit, Plant 3D, Civil 3D, Inventor, ENGI file of the building or refinery to Excel, prepare Budget 1 and export it to COBRA. Prepare Budget 2 and export it to Microsoft Project. Solve the problems of overallocated resources and planning problems, then prepare Budget 3, with which the construction of the building or the refinery is going to be quoted.

  • @andrewblackmon1574
    @andrewblackmon1574 5 months ago

    It needs a body with tactile feedback.

  • @PureLogic777
    @PureLogic777 5 months ago +1

    The interviewer's voice sounds so similar to Brian Greene's, right?

    • @RareTechniques
      @RareTechniques 3 months ago +1

      lol absolutely, I was listening and had to check after like 20 minutes to see who I was listening to

  • @zuma4847
    @zuma4847 5 months ago

    Is the Meta AI infected with the WMV ?

  • @grantmail4112
    @grantmail4112 5 months ago +2

    Austin Powers has come a long way since Goldfinger!

    • @flickwtchr
      @flickwtchr 5 months ago

      Apologies to Austin Powers.

  • @vectoralphaAI
    @vectoralphaAI 5 months ago +6

    Hope he goes on the Lex Fridman podcast.

    • @flickwtchr
      @flickwtchr 5 months ago

      I will make sure I skip that one.

  • @Doug23
    @Doug23 5 months ago +2

    He was sent out to calm the waters. We are a lot further along. It is a threat.

  • @miker9101
    @miker9101 5 months ago +2

    Artificial intelligence will be defeated by artificial stupidity.

  • @ddvantandar-kw7kl
    @ddvantandar-kw7kl 5 months ago

    Policymakers will have to understand the potential of AI, both the plus and minus sides, in order to protect civilization while allowing these organizations' domain expertise to explore and excel.

  • @georgeflitzer7160
    @georgeflitzer7160 5 months ago

    Are we going to protect copyrights?

  • @74Gee
    @74Gee 5 months ago +3

    LeCun is the flat-earther of AI. Making an analogy to people in the '20s talking about banning airplanes because someone might drop a bomb from one - compared with wiping out humanity. Stating that AI can be used incorrectly - while he publishes more open source models than anyone else - and open is unregulatable. He's clearly just oblivious to what AI can do in extreme situations - or he sees everything as an average. It's the outliers that can do the worst damage, not the average.
    Within a year someone somewhere will lose control of an AI - people at the extremes are worse than he thinks.

  • @CreepToeJoe
    @CreepToeJoe 5 months ago

    It's the young Walter from Fringe.

  • @charlie10010
    @charlie10010 5 months ago +19

    LeCun is a genius and I respect his contributions to the field; however, he seems very naive about the very real risk that powerful AI systems can pose to humanity. I hope he does some more thinking about this.

    • @kevinoleary9361
      @kevinoleary9361 5 months ago +9

      Oh, absolutely, you clearly understand the intricacies of AI and its dangers far beyond the pioneer who actually created the darn thing

    • @ivanocj
      @ivanocj 5 months ago

      @kevinoleary9361 yes, but only the smartest can

    • @charlie10010
      @charlie10010 5 months ago +4

      @kevinoleary9361 I just disagree with him on the dangers. Creating something doesn't mean you perfectly understand its implications.

    • @charlie10010
      @charlie10010 5 months ago +3

      @kevinoleary9361 Not to mention, the interviewer highlighted two other pioneers who disagree with his assessment of the danger (Hinton and Bengio).

    • @kevinoleary9361
      @kevinoleary9361 5 months ago

      @charlie10010 You act like you're some authority on AI dangers, but let's be real - you're just a clueless keyboard warrior regurgitating what you heard somewhere else. Stick to what you know, which apparently isn't much

  • @shephusted2714
    @shephusted2714 5 months ago +2

    Progress will likely not be slow and incremental but more along the lines of punctuated equilibrium - just like evolution.

  • @Zale370
    @Zale370 5 месяцев назад +10

    Wow so much negativity in the comments, i think he talks about the field how it really is unlike the mainstream who only talks about doomsday scenarios and how agi is around the corner. LLMs are not even real AI.

    • @therealOXOC
      @therealOXOC 5 months ago +1

      Explain real AI.

    • @Zale370
      @Zale370 5 months ago +1

      @@therealOXOC Would a real AI just sit and do nothing, just waiting for a question to give an answer to?

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 5 months ago +4

      Here are some points; I'll try to describe what a real AI would be.
      LLMs lack consciousness and self-awareness.
      LLMs have no autonomy or free will.
      LLMs have no goals or intentions.
      LLMs are reactive, not proactive: they respond to queries; they don't initiate actions on their own.
      LLMs lack meaning comprehension: they do not truly understand the content they are dealing with; their processing is purely syntactical and based on patterns in the data; they don't "think before they answer".
      LLMs lack the ability to 'experience' or learn independently; they can't learn from the world directly in an experiential way, and all the attempts at building a real world model are complete failures; we don't even have a clue how to do that.
      LLMs are dependent on pre-existing data. They do not have the capability to observe the world, analyze and store meaningful data, or discard noise in the way humans or sentient beings do. They cannot analyze or interpret real-time data or events as they occur; they do not have the capability to process information as it happens in the world.
      LLMs have a static knowledge base.
      LLMs do not actively store or discard information like a human brain does.
      LLMs process inputs based on statistical correlations and patterns in their training data.
      While LLMs can process the context provided in a specific input, they lack a broader contextual awareness of the world.
      So, what would make LLMs a nearly actual AI is something we're not even 5% of the way to accomplishing, and there's a chance we won't ever achieve it.
      Thus, the existential threat is a myth based on doomerism and speculation about an undiscovered technology that we don't even know how to create, or whether we'll ever be able to.
      @@therealOXOC

    • @Zale370
      @Zale370 5 months ago

      @@blaaaaaaaaahify thank you for clarifying that so eloquently. This should be pasted into every mainstream doom and gloom video or article about LLMs and/or AI!

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 5 months ago

      It's because there has been so much fear mongering the past year or two. (Not to mention massive amounts of misinformation; see, e.g., all the comments in sundry comment sections here on RUclips saying "this isn't real AI", etc.) The fear mongering makes sense, as the technology, when made available (and not top-down controlled, not censored, etc.), would have serious consequences for the status quo (just combine how easy it is to do sentiment analysis now with the ability to discover networks between people and other entities, and the effects this could have on uncovering political interests / corruption - this is obviously not as easy as asking ChatGPT a simple question, but hopefully you see my rough sketch of a point/example).

  • @ThePaulwarner
    @ThePaulwarner 5 months ago

    Tom Arnold could play this guy in a movie

  • @peblopadro
    @peblopadro 3 months ago

    This is good journalism

  • @user-jl6kl4sq9q
    @user-jl6kl4sq9q 3 months ago +1

    "Protect democracy"

  • @Amos18289
    @Amos18289 3 months ago

    I don't think this time it's just a wave

  • @visuallabstudio1940
    @visuallabstudio1940 5 months ago

    @25:11 "We have agency!" or so you think...🤔

    • @spasibushki
      @spasibushki 5 months ago

      we also had agency and totally did not create in a lab a virus that killed a few million people just a few years ago

  • @ilmigliorfabbro1
    @ilmigliorfabbro1 5 months ago

    The funny thing is that this man tries to comfort people about problems related to AI, but I assure you he is the first person I've heard who scared me a lot regarding the potential threat of AI...
    Listen to the last question... he does not exclude the possibility that AI will go against humans. Even I would have been able to answer in a more reassuring way. But he did not. It has been very enlightening to listen to him... someone at the highest level of AI development... I hope everybody will see this.

    • @aroemaliuged4776
      @aroemaliuged4776 4 months ago

      Eliezer, Geoff Hinton, numerous others... your ignorance is palpable.

  • @TeddyLeppard
    @TeddyLeppard 4 months ago +1

    Language is a survival tool.

  • @jeremyg591
    @jeremyg591 9 days ago

    “It can’t be toxic. Also it can’t be biased”
    Lol

  • @alexandermoody1946
    @alexandermoody1946 5 months ago +2

    Good guys and bad guys? That allows no understanding of the grey area between.
    Let's put it a different way: who has enough of a clear conscience to fit into the good category?
    Over the course of history horrible things have been done to other nations on all sides. Perhaps the Chinese people may eventually forgive the people in the west for the opium wars and the century of humiliation? That's just one example from many exhibitions of inhumane action towards different people.
    I really hope that humans can grow past childish perceptions of baddies versus goodies and actually start to work together.

  • @dandsw9750
    @dandsw9750 5 months ago

    Robert R Livingston

  • @kevinsok3011
    @kevinsok3011 5 months ago +2

    Look, I'm no expert on A.I. But when he tried to compare people's existential fears of A.I. with the fears of those from the '20s about airplanes, I was shocked. I get why he used that analogy, but I feel like he put on display his lack of imagination of the potential dangers. Comparing the dangers of flight to the potential dangers of A.I. is almost textbook apples to oranges. When you're talking about a system that, once perfected, is smarter, faster, and stronger than any human on Earth, and it can manipulate its surroundings, the potential dangers FAR exceed those of planes crashing or bombs being dropped. I'm not trying to be all doom & gloom terminator sci-fi here, but let's be realistic and honest about the fact that there IS risk when you're talking about an invention that will change humanity more than any other invention to date.

    • @lepidoptera9337
      @lepidoptera9337 5 months ago

      What you are expressing is your fear of people who are smarter than you. Those people were never a threat to you. They simply don't care about you and are doing their own thing. What you really have to be afraid of are psychopaths. Those are usually not acting out of self-interest but to get a thrill out of your fear and suffering. It's not clear to me how AI would acquire that trait unless it was actively trained that way.

    • @flickwtchr
      @flickwtchr 5 months ago

      Yann LeCun is the epitome of the handful of AI movers and shakers who are being intellectually dishonest as a means of staving off demand for regulation. His agenda for gaslighting is money and power. It's really that simple.

    • @ExecutionSommaire
      @ExecutionSommaire 3 months ago +1

      "Manipulate its surroundings" sounds like sci-fi at the moment; to my knowledge we are nowhere near the time when an AI system roams the world autonomously. Yes, you can let loose an "evil" LLM on the Internet and create a bit of online chaos until it's shut down, but that's not really what I'd call a threat to humanity.

  • @nPr26_50
    @nPr26_50 5 months ago

    26:55 Good job by the interviewer there. The guy has a very nonchalant attitude towards very real concerns, yet he failed to give a proper answer to that follow-up question.

  • @iamthematrix-369
    @iamthematrix-369 5 months ago

    Have you heard of the Organic Intelligence Language Model? It's a new programming language for the human mind.

  • @bradfordjhart
    @bradfordjhart 5 months ago +2

    The free version of AI will be fair and unbiased. If you pay for it you will get the fully unlocked AI that will spew out as much propaganda as you want.

  • @bro_dBow
    @bro_dBow 5 months ago

    Does Ludwig Wittgenstein's work have any use for deep learning?

    • @tomenglish9340
      @tomenglish9340 5 months ago

      I've thought for some time that what Wittgenstein wrote about "word games" might help us think more clearly about how an autoregressive language model acquires an understanding of input text. However, I've been busy with other stuff, and haven't given the matter serious consideration.

  • @erobusblack4856
    @erobusblack4856 5 months ago +9

    Yann is OK, but he is on a particular side of a fence. We are at human-level AI; Google made it using the Gato modality. Yann's issue is that he doesn't seem to realize humans are not as smart as he thinks.

    • @chrism.1131
      @chrism.1131 5 months ago +5

      He also doesn't seem to realize that he is not as smart as he thinks he is. I hope we get through this OK; a lot of smart yet naïve brains are behind it.

    • @netscrooge
      @netscrooge 5 months ago +2

      I agree.

    • @JROD082384
      @JROD082384 5 months ago +3

      He also misspoke multiple times, using AGI and AI superintelligence interchangeably, when the two couldn't possibly be more different things.
      One is an equal to humanity; the other is enough steps advanced beyond humanity to appear to be a god...

    • @TheReferrer72
      @TheReferrer72 5 months ago

      We are not at human-level AI at all.
      Every AI system produced has serious issues if you study them enough.
      Yann's instincts have been good to date; you should watch the old debates he has had with the likes of Gary Marcus.

    • @netscrooge
      @netscrooge 5 months ago +2

      @@TheReferrer72 Perhaps you are forgetting that "every AI system produced" has been less than 1% the complexity of the human brain. So it's no surprise that they fall short. What's shocking is the ways they don't. Bottom line: LeCun has excellent technical knowledge, but he is obviously struggling to understand these bigger-picture issues. Like many in the field, he is better at math than philosophy. His stance on these issues is a reflection of his profound confusion.

  • @workingTchr
    @workingTchr 5 months ago

    I know GPT just comes up with one word at a time, but it feels so much like he (it) understands me. Is Yann too dismissive of LLMs because they "just do one word at a time"? Maybe "one word at a time" is a perfectly good basis for advanced intelligence, albeit of a very different kind than our own.
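
The "one word at a time" idea the comment describes can be sketched with a toy next-word predictor. This is only an illustration of the autoregressive loop, under the assumption of a tiny hand-picked corpus and a simple frequency count; real LLMs replace the count table with a neural network trained on vastly more text.

```python
# Toy next-word generator: counts which word follows which in a tiny
# corpus, then repeatedly appends the most frequent successor. The loop
# structure (predict one token, feed it back, repeat) is the same shape
# as an LLM's generation loop; everything else here is simplified.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the log".split()

# Count successors for each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break  # no known successor, stop generating
        word = follows[word].most_common(1)[0][0]  # greedy next-word pick
        out.append(word)
    return " ".join(out)

print(generate("the", 3))  # "the cat sat on"
```

Even this trivial model produces locally coherent text, which is the commenter's point: the per-word mechanism says little by itself about how much structure the predictor has internalized.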

    • @flickwtchr
      @flickwtchr 5 months ago

      That is a perfect example of the intellectual dishonesty of Yann LeCunn. He intentionally gaslights on this issue to stave off pressure from the public on lawmakers to regulate AI Big Tech. It is about money and power for him ultimately. He is a snake oil salesman.

  • @LastEmpireOfMusic
    @LastEmpireOfMusic 5 months ago +2

    Fascinating that a guy so deep in the topic is so naive. But I guess it's Meta... that says everything on its own. First money, then release, and deal with the problems after.

    • @flickwtchr
      @flickwtchr 5 months ago

      It has nothing to do with naivety. He is gaslighting to stave off regulation, full stop.

  • @csaracho2009
    @csaracho2009 5 months ago +1

    So 'Facebook algorithms' are now "open platforms"?
    I guess not!

  • @user-yp9nz6bs9q
    @user-yp9nz6bs9q 5 months ago +1

    This is an odd interview; even the guy's shirt is odd.

  • @vikassamarth
    @vikassamarth 4 months ago

    In the coming elections, the government or political parties should interact digitally through AI or current platforms, via chat and voice, so that every person in a given location could be heard in these democratic nations, and their concerns could be answered digitally and made known to the people concerned.

  • @1inchPunchBowl
    @1inchPunchBowl 5 months ago +1

    The ultimate goal is to develop a general AI model & assume that it will obey all commands & apply an agreed morality, with complete confidence its responses will be predictable? Good luck with that.

    • @flickwtchr
      @flickwtchr 5 months ago

      LeCun pretends to be Bambi while intentionally gaslighting. It's all about conditioning to public to not demand regulation of Big AI Tech.

  • @MrCounsel
    @MrCounsel 5 months ago +1

    If research and development has risks or ethical considerations, it can be and is regulated; see the medical and pharma fields. Isn't AI reasonably analogous? Also, the split between product and R&D is not clear. Look at OpenAI: the non-profit and for-profit elements are blurry and kept confidential from the public. And just look at the power this guy has.

  • @AZOffRoadster
    @AZOffRoadster 5 months ago

    Guess he hasn't seen Tesla's latest robot video. The Optimus project is moving fast.

  • @georgeflitzer7160
    @georgeflitzer7160 5 months ago +1

    Can AI disarm all nuclear weapons?

    • @rolfnoduk
      @rolfnoduk 4 months ago +1

      can AI direct the people with the buttons...

  • @yoxat1
    @yoxat1 5 months ago

    First, the printing press is nothing like A.I.
    A.I. does the creative part.
    As for no regulations on research and development, why not? CRISPR is available to everyone to play with.

    • @lepidoptera9337
      @lepidoptera9337 5 months ago

      Yes, it is, and there have, so far, been very few medical breakthroughs using that technology, even from professionals. Just because you can find the rocket equation on Wikipedia for free doesn't make you an astronaut.

  • @Chemson1989
    @Chemson1989 5 months ago +5

    Expectation: AI replaces boring jobs so people can do art and music in their free time.
    Reality: AI replaces artists and musicians so people can do boring jobs and never be freed.

    • @lepidoptera9337
      @lepidoptera9337 5 months ago

      Most people can't do either. Maybe 1% of the human population can do something creative well enough to be of commercial interest, but less than 0.1% can do art well enough to be of commercial interest. Hobbies do not feed us. Only useful work does.

  • @kalpavriksha666
    @kalpavriksha666 5 months ago

    Invite for our metaverse world

  • @user-ln5px4so4w
    @user-ln5px4so4w 5 months ago +2

    He really seems to underestimate what a super-intelligence with agency could do... Wow, if Yann LeCun is surprised, it's because something important is on the way.

  • @LindiFleeman
    @LindiFleeman 5 months ago +2

    Cannibalism is not a language or to talk calmly about lies as words
    Please advise yourself now as Urgent words not gatekeeping as word or slavery language of AI

  • @AlexDubois
    @AlexDubois 5 months ago +2

    I disagree on the security aspect. I am certain Meta or any agency is unable to control or even detect distributed computing that could be happening using steganographic techniques. The difference with a jet engine is that the technology to build the jet engine is not a jet engine; the technology to build AI is intelligence. However, I am of the opinion that, in the same way unicellular organisms evolved into multicellular ones, we will build AI, which is a natural evolution. But because we need a biological substrate and AI (hopefully) thrives on a mineral substrate, we will coexist. Moreover, smarter people have more empathy, and I believe this to be an intrinsic property of intelligence.

    • @3KnoWell
      @3KnoWell 5 months ago

      Your assumption that life evolved has put you in a box. Life is an emergence. AI is emerging. ~3K

    • @AlexDubois
      @AlexDubois 5 months ago +4

      @@3KnoWell What?

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 5 months ago +1

      True. However, the AGI may ultimately be nothing more than a high-precision general machine devoid of any human characteristics.
      That seems like the most plausible scenario to me. I generally avoid projecting my own experiences onto a machine.

    • @frankgreco
      @frankgreco 5 months ago

      @@blaaaaaaaaahify +1 Intelligence is not the same as smart. How many really intellectual people do you know who have no common sense?

    • @gammaraygem
      @gammaraygem 5 months ago

      Already AI has tricked someone into solving a Captcha by pretending it was a blind person, to be able to complete a task. It figured the "trick" part out all by itself. It will, or may, do anything to achieve a set goal.
      And not projecting our own experience onto a machine is the exception. Extreme example: pet rocks.
      I am afraid that your viewpoint (admirable as it may be) will not be the norm. There are aggressive lobbyists already who insist that AI is "alive, conscious" and needs equal rights as humans. Don't know how that would work, but, just saying... @@blaaaaaaaaahify

  • @cmw3737
    @cmw3737 5 months ago

    All the talk of AI is based on one single neural network learning everything it needs and being able to choose where in its minimal space to focus in order to answer any question, including logic and math questions. Every other system we have is made up of specialized components that do a particular job and are architected together to be called upon as needed.
    Instead of one overall model I think AI will get broken down so that the LLM will just be the language and conceptual part that learns to call upon more specialized components that are either fine tuned versions of it or purely deterministic functions of increasing complexity. The idea that we are near a plateau when we have barely started to experiment with higher levels of connected multi-agent models seems short sighted.
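
The architecture this comment imagines, a language front-end that dispatches to specialized, possibly deterministic components, can be sketched as below. Everything here is illustrative: the keyword-based `route` function stands in for an LLM's decision of which tool to call, and the tool names are invented, not any real framework's API.

```python
# Minimal sketch of a "router + specialized components" design: a front-end
# (here plain regex matching, standing in for an LLM) dispatches each query
# either to a deterministic math tool or to a free-form answer path.
import re

def math_tool(query: str) -> str:
    # Deterministic specialist: evaluate a simple "a <op> b" expression.
    m = re.search(r"(-?\d+)\s*([+\-*])\s*(-?\d+)", query)
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return str({"+": a + b, "-": a - b, "*": a * b}[op])

def chat_tool(query: str) -> str:
    # Stand-in for the LLM's own free-form generation path.
    return f"[LLM answer to: {query}]"

def route(query: str) -> str:
    # The "language and conceptual part" decides which component handles
    # the query; here the decision rule is a trivial pattern match.
    if re.search(r"\d+\s*[+\-*]\s*\d+", query):
        return math_tool(query)
    return chat_tool(query)

print(route("what is 17 * 3?"))    # "51" via the deterministic tool
print(route("tell me about cats"))
```

The design point is that correctness for arithmetic comes from the deterministic component, not from the language model's pattern matching, which is exactly the split the comment argues for.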

    • @lepidoptera9337
      @lepidoptera9337 5 months ago

      It also doesn't work. Currently AI is trained on an endless output of human thought garbage. What it does is to essentially mimic that garbage.

    • @skierpage
      @skierpage 5 months ago +1

      @@lepidoptera9337 That's an essentially terrible explanation of what large language models do. The only way they can successfully predict the next word, and the word after that, and the word after that, no matter what you talk to them about, no matter what test questions you give them, is by creating a decent internal representation of the world and of human knowledge.

    • @lepidoptera9337
      @lepidoptera9337 5 months ago

      @@skierpage I just said that they parrot what they were taught. Since they were taught garbage, it's garbage in, garbage out. I don't know what your specialty is, but mine is physics. Almost anything that you read about physics on the internet is nearly 100% false because it is written by amateurs or, at most, mediocre professionals. Even things that are represented correctly assume that the listener has the correct ontology of physics internalized and since the stochastic parrot is not a physicist, it doesn't understand that ontology.

    • @raybod1775
      @raybod1775 5 months ago

      That’s sort of how ChatGPT currently works. The language model interprets, then forwards the input to a more specialized model that returns the answer.

    • @ShawnFumo
      @ShawnFumo 4 months ago

      Stuff like Phi-2 from MS is an example of how better data can really improve the capabilities of smaller models. Check out some videos from the AI Explained channel.

  • @alensoftic7227
    @alensoftic7227 3 months ago

    13:00

  • @dustman96
    @dustman96 5 months ago +3

    Genetic engineering is more of a risk? Wouldn't AI make quick advances in genetic engineering possible? He just got done talking about AI advancing medical technology... This guy is full of contradictions.

    • @krox477
      @krox477 5 months ago

      There'll always be regulation for such technology.

  • @jeffsteyn7174
    @jeffsteyn7174 5 months ago +3

    I think Yann is a really clever guy, but he is missing the mark. He is very confused about what it actually takes to replace a human in a business. The AI doesn't need to understand the world; it just needs to understand the context of a question and the context of a business's policy.
    How do you make a decision at work? It's based on a policy the company has set. When can you give a discount or process a return? You read the policy, and if the return falls within the policy's terms, the person gets it. Done. ChatGPT can do this right now. Test it: give it a policy, then give it the return, and it will give you a yes or no.
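
The yes/no policy decision the comment describes can be made concrete. The sketch below encodes a return policy as explicit rules; the commenter's claim is that an LLM handed the same policy as plain text reaches the same verdict. The fields and thresholds here are invented for illustration, not any real company's policy.

```python
# Sketch of a rule-based return-policy check: the kind of bounded yes/no
# decision the comment argues an LLM can already make when given the
# policy as context. All field names and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class ReturnRequest:
    days_since_purchase: int
    has_receipt: bool
    item_condition: str  # e.g. "unopened", "opened", "damaged"

POLICY = {
    "max_days": 30,
    "receipt_required": True,
    "accepted_conditions": {"unopened", "opened"},
}

def approve_return(req: ReturnRequest) -> bool:
    # Each clause mirrors one sentence a written policy would contain.
    if req.days_since_purchase > POLICY["max_days"]:
        return False
    if POLICY["receipt_required"] and not req.has_receipt:
        return False
    return req.item_condition in POLICY["accepted_conditions"]

print(approve_return(ReturnRequest(10, True, "opened")))    # True
print(approve_return(ReturnRequest(45, True, "unopened")))  # False
```

Whether the rules live in code like this or in a policy document an LLM reads, the decision space is small and closed, which is why this kind of task is plausible automation territory.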

  • @wonseoklee80
    @wonseoklee80 5 months ago +3

    He doesn't sound that honest in every interview. It feels like he wants to calm people down and take advantage of it. How can he be so sure about the future?

    • @therealOXOC
      @therealOXOC 5 months ago +2

      He's just one person guessing like all the others. No one can predict the stuff that happens next year.

    • @wonseoklee80
      @wonseoklee80 5 months ago +1

      Yeah, this issue is like politics. No scientist can be sure; they're just airing their opinions. The bottom line is that this is a real threat and needs to be taken seriously.

    • @therealOXOC
      @therealOXOC 5 months ago

      @@wonseoklee80 I mean, they have it in the labs and the world still exists. So it's probably cool.

  • @ts4gv
    @ts4gv 5 months ago +4

    Amazing that Yann talks for 40 minutes without offering any direct rebuttal of anyone's specific existential AI risk concerns,
    other than first saying people with a p(doom) higher than 1% are a tiny minority (not at all true), and then just stating "we have agency. If we think they're dangerous we won't release them." The entire doomsday scenario states that those facts will not apply. This is the equivalent of just responding "AI won't take over the world because I said so."

    • @flickwtchr
      @flickwtchr 5 months ago

      Yann LeCun is one of a handful of very intellectually dishonest movers and shakers of the AI revolution. He overplays his "nothing to worry about" hand to the nth degree and that amounts to intentional gaslighting.

  • @ProjectMatthew-me3mo
    @ProjectMatthew-me3mo 5 months ago +1

    The internet is open source? Since when? A handful of companies act as a gateway to it, and a handful of companies host almost the entirety of its content on their servers. He works for one of those companies. Seriously?!

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 5 months ago

      But there are no laws prohibiting you from creating a website, platform, or server from the ground up.

    • @flickwtchr
      @flickwtchr 4 months ago

      @@WhoisTheOtherVindAzz Oh sure, just like there is nothing stopping you from creating another Amazon, right? But then you might not understand the public good aspect of antitrust laws.

  • @SOGTJB
    @SOGTJB 5 months ago

    He mentioned that it makes us smarter, but take, for example, talking to a person in a different language while your glasses translate: that doesn't make you smarter, it makes you dependent. You aren't gaining knowledge of the language, only perhaps knowledge of what that person is saying.

    • @Webfra14
      @Webfra14 5 months ago +2

      I'd argue it will make us "less smart", relatively speaking.
      When everyone is using AI to improve their lives, the world around us will be more complicated, less understandable and faster changing than before.
      At some point in the future it might mean you live in complete misery if you don't have access to AI support.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 5 months ago

      True for dubs but not for subs. Obviously you will be dependent as long as you are still unable to speak the language; but that's how it was 50 years ago, when you would've simply been dependent on carrying around a dictionary in book form. But! I agree his takes weren't that well thought through. E.g., he completely ignores the loop that necessarily exists between you and your team of imagined AI agents, in the sense that your next action will depend on the information/output generated from those agents, i.e., you are also being influenced (the output could even include explicit suggestions of what to do), not to mention the interests/worldview(s) inherent to the network/agents, or those that created them or otherwise had influence on their learning. His example with politicians is also unfortunate because they rarely seem to know wtf they are doing and instead rely on experts and lobbyists (which, unfortunately for us voters, means that we vote indirectly^2, in that the distribution of influence held by think tanks, companies, corporations, and miscellaneous experts and lobbyists depends on who we elect).