Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun

  • Published: 29 Dec 2024

Comments • 1K

  • @PolicyRelevantScienceTech  1 year ago +8

    PRE-DEBATE
    • Pro: 67%
    • Con: 33%
    POST-DEBATE
    • Pro: 64%
    • Con: 36%
    RESULT
    Con wins by a 3% gain.

    • @marcosrodriguez2496  1 year ago +19

      wait, if the initial distribution was 67/33 (and assuming that whether someone is likely to change their mind does not depend on which initial group they're in), the expected number of people changing from Yes to No is twice as high. If the initial distribution was 100/0, Yes could never win the debate.
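
      A quick sanity check of this point (a minimal sketch; the 100-person audience and the uniform 10% switch rate are illustrative assumptions, not data from the debate):

          # Python: expected vote swing when both sides are equally persuadable
          pre_pro, pre_con = 67, 33   # pre-debate split quoted above
          p_switch = 0.10             # assumed chance that any given voter switches

          pro_to_con = pre_pro * p_switch   # 6.7 expected defectors (about twice as many)
          con_to_pro = pre_con * p_switch   # 3.3 expected defectors
          post_pro = pre_pro - pro_to_con + con_to_pro
          post_con = 100 - post_pro
          print(round(post_pro, 1), round(post_con, 1))  # 63.6 36.4

      With equal switch rates the majority side always loses more voters in absolute terms, which is consistent with the observed 67/33 to 64/36 shift.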

    • @74Gee  1 year ago +2

      Please post the numbers.

    • @ToWisdomThePrize  8 months ago

      Isn't this inaccurate, given the ending poll couldn't be conducted without glitches?

  • @74Gee  1 year ago +110

    1:23:00 Melanie Mitchell thinks companies that use AI will not outperform those that don't, then picks a single example of AI hallucination as justification.
    I'm beginning to think she hasn't actually researched AI at this point. She's not even discussing the questions any more; she's just trying to prove a point, quite badly.

    • @CodexPermutatio  1 year ago +13

      You're wrong about her, friend. She is one of the best AI experts in the world. To get past your ignorance (and incidentally understand her point of view a little better), you only have to read her books "Artificial Intelligence" and "Complexity". She has written many more books, but those two alone have enough substance. These turn-based, moderated discussions are nice, but they're too short to give these topics the depth they deserve.

    • @74Gee  1 year ago +39

      @@CodexPermutatio Of course I know she is actually an expert, but her blinkered view on a) what constitutes an existential threat, b) how AI could facilitate this possibility, c) how she dismisses the notion entirely, and d) how she thinks even considering the idea of AI danger would detract from combating the "present threats of misinformation" all point to an irrational personality. I pondered suggesting she has ulterior motives, but stopped short, merely suggesting she hadn't researched AI (dangers).
      Taking only point D for brevity: she sees misinformation in elections as something so dangerous that AI safety should not take up any resources whatsoever. Surely if the AI of today can overthrow a governmental system, AI in 20 years or so could do something worse. And that alone could put us in a position we are unable to return from, like putting a pro-war US candidate in power and bringing us closer to a nuclear-winter scenario: an existential risk.
      These are non-zero and also non-infinitesimal odds, even with today's AI.
      AGI is not a prerequisite of existential risk.

    • @jmanakajosh9354  1 year ago +5

      @@74Gee The whole time she mentions other, more pressing things to talk about, but I would've loved it if she could give examples of them. We are facing population collapse, another major pandemic, climate change; if you can give me a reason alignment research *wouldn't* help these other issues, I'd be all ears. But all of these other problems are also problems of alignment and of failed incentives; it just happens the incentives are human and not machine.

    • @74Gee  1 year ago +2

      @@jmanakajosh9354 It's clear you care about the state of the world and the direction we're heading in. AI alignment research certainly would help with any problems that AI could potentially help address; the last thing we want is solutions that make matters worse.
      It's not like there's a shortage of resources: Microsoft's stock hit a record after executives predicted $10 billion in annual A.I. revenue (Microsoft shares climbed 3.2% to a record Thursday and are now up 46% this year)
      ...so it's not like doubling AI alignment research with additional hires is going to significantly affect the bottom line of Microsoft, or likely anyone else in the field.

    • @JohnMoran  1 year ago +5

      Munk debates always seem to choose someone extra annoying to fill that role.

  • @lwmburu5  1 year ago +104

    I respect Yann's work, he is an amazing thinker. But with the "Of course if it's not safe we're not going to build it, right?" argument he pointed a shotgun at his foot and pulled the trigger. The argument is limping... painfully.

    • @jmanakajosh9354  1 year ago +7

      You could hear the audience moan. I saw Daniel Dennett point out once that arguments where we say "right?" or "surely" (I think his example was "surely") aren't arguments at all; they're OBVIOUS assumptions. Everyone does this to some degree; it's hard to watch him doing it when it's his own field of expertise. It's terrifying, honestly.

    • @leslieviljoen  1 year ago +2

      Yes, after twice hearing Max say that not everybody is as nice as Yann.

    • @macn4423  1 year ago +4

      He's been doing that many times.

    • @dillonfreed  1 year ago +1

      He's surprisingly daft

    • @lwmburu5  1 year ago

      @@dillonfreed Disagree a bit 😁 he's just experiencing a Gary Marcus-type "out of distribution" failure mode 😁 unable to step out of his own mind. Actually, it is the fact that he's ferociously intelligent that makes this failure particularly dangerous.

  • @renemanzano4537  1 year ago +122

    Before the debate I was worried about AI. Now, after listening to the clownish arguments that AI is safe, I think we are clearly fucked.

    • @erichayestv  1 year ago +3

      💯%

    • @jmanakajosh9354  1 year ago +5

      Maybe listen to Grady Booch or Robin Hanson, they have much more convincing arguments (sarcasm)

    • @dianorrington  1 year ago +15

      Truly. Embarrassingly pathetic arguments. We are so fucked. I'd highly recommend Yuval Noah Harari's recent speech at the Frontiers Forum, which is available on YouTube.

    • @kreek22  1 year ago +5

      The pro-AI acceleration crew has no case. I've read all of the prominent ones. The important point is that power does not need to persuade. Power does what it wishes, and, since it's far from omniscient, power often self-destructs. Think of all the wars lost by the powers that started the wars. Often the case for war was terrible, but the powers did it anyway and paid the price for defeat. The problem now is that a hard fail on AI means we all go down to the worst sort of defeat, the genocidal sort, such as Athens famously inflicted upon Melos.

    • @dianorrington  1 year ago +7

      @@kreek22 Undoubtedly. And it felt like LeCun had that thought in the back of his mind during this whole debate. His efforts were merely superficial. And I've seen Altman give an even worse performance, even though he pretends to be in favour of regulation... he is talking through his teeth. Mo Gawdat has outright stated that he believes it will first create a dystopia, but will ultimately result in a utopia, if we can ride it out. I think the billionaire IT class have it in their heads that they will have the means to survive this, even if nobody else does. It's very bizarre.

  • @paigefoster8396  1 year ago +44

    52:39 A robot vacuum doesn't have to WANT to spread crap all over the floor, it just has to encounter a dog turd and keep doing its "job."

    • @PepeCoinMania  1 year ago +1

      It doesn't work for machines that can think.

    • @therainman7777  1 year ago +6

      @@PepeCoinMania You have no idea what you're talking about. Define what you mean by "think" in clear and precise terms, explain why an ability to think would ensure that its goals stay perfectly aligned with ours, and explain how you know machines will one day "think" according to your definition.

  • @RonponVideos  1 year ago +121

    If I saw these anti-doom arguments in a movie, I’d think the writers were lazily trying to make the people look as naive as possible.
    But nope, that’s actually what they argue. Sincerely. “If it’s dangerous, we won’t build it”. Goddamn.

    • @netscrooge  1 year ago +6

      Great comment. Thanks!

    • @xXxTeenSplayer  1 year ago +14

      No kidding! I couldn't believe that these people have any knowledge of AI, let alone be experts! How incredibly naive these people are! Scary af!!!

    • @trybunt  1 year ago +15

      Yeah.. seems ridiculously optimistic and dismissive. I understand that it doesn't seem probable AI will pose a serious threat, but to act like it's impossible because we will always control it, or it will innately be good? That just seems foolish. I'm pretty optimistic, I do think the most likely outcome is positive, but it was hard to take these people seriously. It's like getting in a car and one passenger is saying "could you please drive safely" and these people are in there saying "why would he crash? That's just silly, if he is going off the road he can simply press the brake pedal, look, it's right there under his feet. I guess we should worry about aliens abducting us, too?"

    • @joehubris1  1 year ago +2

      @@trybunt you forgot their other big argument: "There are much more pressing dangers than AIpocalypse and these speculative scenarios draw attention from the true horrors we are about to visit upon huma--I mean ... everything you guys brought up is far away, let's all just go back to sleep."

    • @agrandesubstituicao  1 year ago +1

      @@trybunt They have big pockets behind it; full AI regulation is not good for their business.

  • @MetsuryuVids  1 year ago +31

    Melanie and Yann seem to completely misunderstand or ignore the orthogonality thesis. Yann says that more intelligence is always good.
    That's a deep misunderstanding of what intelligence is and what "good" means. Good is a matter of values, or goals. Intelligence is orthogonal to goals: an agent with any amount of intelligence can have any arbitrary goals. They are not related. There are no stupid terminal goals, only stupid sub-goals relative to terminal goals. Bengio briefly mentions this, but doesn't go very deep into the explanation.
    Melanie mentions the superintelligent "dumb" AI, thinking that it's silly that a superintelligence would misconstrue our will. That is a deep misunderstanding of what the risks are. The AI will know perfectly well what we want. The orthogonality thesis means that it might not necessarily care. That's the problem. It's a difference in goals or values, it's not that the superintelligence is "dumb".
    Also, they don't seem to understand instrumental convergence.
    I would love to have a deep discussion with them, and go through every point, one by one, because there seem to be a lot of things that they don't understand.

    • @wonmoreminute  1 year ago +4

      He also doesn't mention "who" it's good for. Historically, many civilizations have been wiped out by more advanced and intelligent civilizations. And surely, competing nations, militaries, corporations, and possibly socioeconomic classes will have competing AIs that are also not aligned with the greater good of ALL humans.

    • @MetsuryuVids  1 year ago

      ​@@wonmoreminute I'd be happy if even an evil dictatorship manages to actually align an AGI to some semblance of human values. Not ideal, but at least probably not the worst case scenario.
      The thing is that we currently don't even know how to do that, so we'll probably go extinct, hence the existential threat.

    • @OlympusLaunch  1 year ago +3

      Very well put. Thanks for the read, I fully agree.

    • @jamesatkins7592  1 year ago

      I assumed LeCun meant to imply broad progress of positive actions overwhelming negative ones, rather than just the specific case of how controlled and purpose-driven an AI would be.

    • @ChrisWalker-fq7kf  1 year ago

      The problem is the orthogonality thesis is dumb. It requires a definition of intelligence that is so narrow that it doesn't correspond to anything we humans would understand as intelligent. If that's all intelligence is (the ability to plan and reason) why would we be scared of it anyway?
      There is a sleight of hand going on here. We are invited to imagine a super-smart being that would have intellectual powers beyond our imagining, would be to us as we are to an insect. But when this proposed "superintelligence" is unveiled it's just a very powerful but completely dumb optimiser.

  • @milkenjoyer14  1 year ago +54

    Agree with him or not, you have to admit that LeCun's arguments are simply disingenuous. He doesn't even really address the points made by Tegmark and Bengio.

    • @JazevoAudiosurf  1 year ago +9

      he ignores like 90% of the arguments

    • @xXxTeenSplayer  1 year ago +3

      They aren't necessarily disingenuous, I think they are just that short sighted. They simply don't understand the nature of intelligence, and how profoundly dangerous (for us) sharing this planet with something more intelligent than ourselves would be.

    • @explodingstardust  1 year ago +2

      He has a conflict of interest, as he works for Meta.

  • @vincentcaudo-engelmann9057  1 year ago +33

    LeCun seems to have a cognitive bias of generalizing the specific case of Meta's development to everything else.
    Also, he outright understates current GPT-4 intelligence levels.
    Bro, is it worth your paycheck to spread cognitive dissonance on such an important subject… smh

    • @jmanakajosh9354  1 year ago

      I dearly hope a lack of worry about this is not part of Facebook's culture.

    • @jackielikesgme9228  1 year ago

      Haven't watched yet, but looking at the credentials… I believe you already.

    • @jackielikesgme9228  1 year ago +5

      Chief AI scientist at Meta seems to have a bias … yeah

    • @jmanakajosh9354  1 year ago +1

      @@jackielikesgme9228
      I watched Zuck's interview with Lex Fridman and it didn't seem like total abandonment of AI safety was a thing, but this concerns me, esp. since FB models are open source.

    • @jackielikesgme9228  1 year ago +1

      @@jmanakajosh9354 How was that interview? It's one of a handful of Lex podcasts I haven't been able to watch. He's much better at listening and staying calm for hours than I am lol.

  • @JD-jl4yy  1 year ago +10

    43:25 How this clown thinks he can be 100% certain the next decades of AI models are going to pose no risk to us with this level of argumentation is beyond me...

  • @74Gee  1 year ago +142

    "If it's not safe we're not going to build it" Yann LeCun, what planet do you live on?

    • @CodexPermutatio  1 year ago +16

      He lives on the planet of the AGI builders. A planet, apparently, very different from the one inhabited by the AGI doomers.
      I, by the way, would pay more attention to builders than doomers. Being a doomer is easy (it doesn't require much, not even courage). Being a builder, on the other hand, requires really understanding the problems you want to solve (also, it implies action).

    • @RonponVideos  1 year ago +35

      “If the sub wasn’t safe I wouldn’t try to take people to the Titanic with it!”
      -Stockton Rush

    • @grahamjoss4643  1 year ago +7

      @@CodexPermutatio we have to pay attention to the builders because they implicate us all.

    • @OlympusLaunch  1 year ago +15

      ​@@CodexPermutatioYour points are valid but you underestimate the level to which human ego creates blind spots. Even very smart people develop attachments to the things they pour their energy into. This makes them biased when it comes to potential risks.

    • @jackielikesgme9228  1 year ago +9

      Omg this part is making me so stabby!! Would we build a bomb that just blows shit up? 🤦‍♀️ yes yes we did it ffs we do it we are still doing it. This is not a relaxing sleepy podcast at all lol

  • @besratbekele1032  1 year ago +52

    Yann LeCun tries to soothe us with naïve, unnuanced promises, as if we were children. If these are the kind of people at the forefront of AI research in corporate labs driven by a clear vested interest in profit, it seems like things are about to get uglier than I've even imagined.

    • @greenbeans7573  1 year ago

      Meta is the worst because it is led by Yann LeCun, a literal retard who thinks safety is a joke. Google is much better, and Microsoft almost as bad.
      - Meta obviously leaked Llama on purpose
      - Google was not racing GPT-equivalent products until Microsoft started
      - Microsoft didn't even do proper RLHF for Bing Chat

    • @ts4gv  1 year ago +5

      nail on the head. it's about to get gnarly. and then it will keep getting worse until we die

    • @blahblahsaurus2458  1 year ago +3

      They didn't even discuss fully autonomous military drones, and how these would change war and the global balance of power.

    • @mernawells7839  1 year ago +1

      Mo Gawdat said he doesn't know why people aren't marching in the streets in protest.

    • @Dababs8294  1 year ago

      Couldn't have said it better myself

  • @yipfaitse6738  1 year ago +79

    I think the cons just convinced me to be more concerned about AI existential risk by being this careless about the consequences of the technologies they build.

    • @familyshare3724  1 year ago

      Immediately killing 1% of humanity is not an acceptable risk?

    • @therainman7777  1 year ago +7

      Smart response, I fully agree. It’s alarming.

    • @MM-cz8zt  1 year ago +1

      I run an international ML team that implements and develops new routines. It is not accurate to say that we are careless; it's simply that we don't have the right systems or the right techniques to develop AGI. There are many more pressing issues about bias, alignment, safety, and privacy that are pushed to the wayside when we imagine the horrors of AGI. We have shown that LLMs cannot self-correct reasoning. Whatever tech becomes AGI, it's not LLMs. Secondly, we won't ever suspend AI development. There are too many national interests at stake; there will never be a pause. Period. It is the perspective of our military that our geopolitical adversaries would capitalize on a pause to try to leapfrog us. So imagining the horrors of what could be possible with AGI is the wrong thing to be focused on. AI has the potential to harm us significantly in millions of other ways before taking over society. A self-driving car, or delivery robot, is millions or billions of times more likely to accidentally harm you before a malicious AGI ever will.

    • @KurtvonLaven0  1 year ago

      ​@@MM-cz8zt, Metaculus has the best forecast I have found on the topic, and currently estimates our extinction risk from AGI around 50%.

    • @KurtvonLaven0  1 year ago

      @@MrMichiel1983, I encourage everyone to find a better forecast themselves. The reason you propose is obviously stupid; you will have more success finding the truth if you stop strawmanning people you disagree with. I was looking for the most convincing forecasting methodology, and for one thing they at least publish their own track record, which few can claim. For another, they crowd-source the forecasts and weight them by the track record of the individual forecasters. Also, their forecast of the arrival date of AGI (~50% by 2032) aligns closely with most other serious estimates I have found (2040/2041).
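
      For what it's worth, the track-record weighting described above can be sketched in a few lines (a minimal illustration; the forecasts and weights below are made up, not Metaculus data):

          # Python: crowd forecast weighted by each forecaster's track record
          forecasts = [0.60, 0.35, 0.50]   # individual probability estimates
          track_record = [0.9, 0.4, 0.7]   # higher = historically more accurate

          aggregate = sum(f * w for f, w in zip(forecasts, track_record)) / sum(track_record)
          print(round(aggregate, 3))  # better forecasters pull the estimate harder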

  • @gaussdog  1 year ago +34

    For humanity's sake… I cannot believe the arguments the pro-AI group makes… As much as I am a pro-AI person, I understand at least some of the risks, and will admit to at least some of the risks… If they cannot admit to some of them, then I don't (and CANNOT) trust them on any of them.

    • @ventureinozaustralia7619  4 months ago

      The debate wasn't about risk, it was about existential risk. Is that so hard to understand? Did you even watch this??

    • @gaussdog  4 months ago +1

      @@ventureinozaustralia7619 "We would never give unchecked autonomy and resources to systems that lacked these basic principles..." or whatever she said, I remember lol. And here we are a year later, and AI is now, quite obviously, in my opinion, here to do whatever the fuck it wants, however the fuck it wants, for the rest of eternity, with zero checks and zero balances, except in their puny imaginations.

  • @Stuartgerwyn  1 year ago +7

    I found LeCun & Mitchell's arguments (despite their technical skills) to be disturbingly naive.

  • @vincentcaudo-engelmann9057  1 year ago +48

    LeCun wants to endow AI with emotions AND make them subservient… Anyone know what that is called?

    • @ikotsus2448  1 year ago +28

      Slavery. Add in the superior intelligence part and now it is called hubris.

    • @Nico-di3qo  1 year ago +4

      Emotions that will make them desire to serve us, so everything's good.

    • @andrzejagria1391  1 year ago +3

      @@Nico-di3qo That's just slavery with extra steps.

    • @MetsuryuVids  1 year ago +10

      I disagree with LeCun in that he thinks the alignment problem is an easy fix, that we don't need to worry and "we'll just figure it out", that "people with good AIs will fight the people with bad AIs", and in many, many of his other takes. I think most of his takes are terrible.
      But, I do think this one is correct. In a way. No, it's not "slavery*".
      The "emotions" part is kind of dumb, and it's a buzzword, I will ignore it in this context.
      Making it "subservient" is essentially the same thing as saying making it aligned to our goals, even if it's a weird way to say it. Most AI safety researchers would say aligned. Not sure why he chose "subservient".
      So in summary, the idea of making it aligned is great, that's what we want, and what we should aim for, any other outcome will probably end badly.
      The problem is: we don't know how to do it. That's what's wrong with Yann's take, he seems to think that we'll do it easily.
      Also, he seems to think that the AI won't want to "dominate" us, because it's not a social animal like us. He keeps using these weird terms, maybe he's into BDSM?
      Anyway, that's another profound mistake on his part, as even the moderator mentions. It's not that the AI will "want" to dominate us, or kill us, or whatever.
      One of the many problems of alignment is the pursuit of instrumental goals, or sub-goals, that any sufficiently intelligent agent would pursue in order to achieve any (terminal) goal that it wants to achieve. Such goals include self-preservation, power-seeking, and self improvement. If an agent is powerful enough, and misaligned (not "subservient") to us, these are obviously dangerous, and existentially so.
      *It's not slavery because slavery implies forcing an agent to do something against their will.
      That is a terrible idea, especially when talking about a superintelligent agent.
      Alignment means making it so the agent actually wants what we want (is aligned with our goals), and does what's best for us. In simple words, it's making it so the AI is friendly to us. We won't "force" it to do anything (not that we'd be able to, either way), it will do everything by its own will (if we succeed).
      Saying it's "subservient" or "submissive" is just weird phrasing, but yes, it would be correct.

    • @jmanakajosh9354  1 year ago +3

      @@MetsuryuVids
      I think it's shocking that he thinks it possible to model human emotions in a machine at all (I'd love to learn more about that; it gives genuine hope that we can solve this) but then falls on his face... and so does Melanie, when they say it NEEDS human-like intelligence. That's the equivalent of saying planes need feathers to fly. It's a total misunderstanding of information theory, and it's ostrich-like, bc GPT-4 is both intelligent and has goals. It's like they're not paying attention.

  • @EvilXHunter123  1 year ago +64

    Completely shocked by the level of straw-manning by LeCun and Mitchell. Felt like I was watching Tegmark and Bengio trying to pin down the other half to engage in there arguments where as the other half was just talking in large platitudes and really not countering there examples. Felt like watching those cigarette marketing guys try to convince you smoking is good for you.

    • @francoissaintpierre4506  1 year ago +3

      Exactly

    • @PazLeBon  1 year ago

      thier not there. how many times have you been told that? maybe ten thousand? since a boy at school, yet you still cant get it right? thats wayyyyyy more astonishing than any straw manning because you imply you have 'reasoning' yet cant even reason your own sentence

    • @EvilXHunter123  1 year ago +4

      @@PazLeBon Hilarious, almost as much deflection as those in the debate! How about engaging with my points instead of nitpicking grammar?

    • @NikiDrozdowski  1 year ago

      @@PazLeBon And also having a typo of your own in EXACTLY the word you complained about ^^

    • @hozera1429  1 year ago

      Engaging with the arguments here is akin to validating them. It's jumping the gun, like a flat-earther wanting to discuss "the existential risk of falling off the earth" before they prove the world is flat. Before making outlandish claims, clearly state the technology (DL, GOFAI, physics-based) used in making your AGI. If you believe generative AI is the path to AGI, then give solid evidence as to why and how it will solve the problems that have plagued deep learning since 2012:
      primarily 1) the need for human-level continuous learning, and 2) human-level one-shot learning from input data. After that you can tell me all about your Terminator theories.

  • @vslaykovsky  1 year ago +92

    I like the argument about large misaligned social structures in the debate on AI safety: humanity created governments, corporate entities and other structures that are not really aligned with human values, and they are very difficult to control. Growing food and drug industries resulted in an epidemic of obesity and the diseases it causes. Governments and financial systems resulted in huge social inequalities. These structures are somewhat similar to AI in the sense that they are larger and smarter than any individual human, and at the same time they are "alien" to us, as they don't have emotions and think differently. These structures bring us a lot of good but also a lot of suffering. AI will likely be yet another entity of this kind.

    • @genegray9895  1 year ago +20

      At one point one of them, I believe it was Mitchell but it might have been LeCun, said that corporations do not pose an existential threat. I thought that was a pretty absurd statement given we are currently facing multiple existential threats due to corporations, and more than one of these existential threats was brought up during this very debate.
      It's also worth noting that corporations are limited to the vicinity of human intelligence due to the fact that they're composed of agents that are no smarter than humans. They are smarter than any one human, but their intelligence is still bounded by human intelligence. AI lacks this limitation, and its performance is scaling very quickly these days. Since 2012 the performance of state-of-the-art AI systems has increased by more than 10x every single year, and there is no sign of that slowing down any time soon.

    • @PazLeBon  1 year ago

      not smarter at all, richer maybe, intellect has little to do with it

    • @kreek22  1 year ago

      @@genegray9895 "Since 2012 the performance of the state of the art AI systems has increased by more than 10x every single year" Is there a source for this claim? My understanding is that increases come from three directions: hardware, algorithms, money. Graphics cards have managed ~40%/year, algorithms ~40%/year. Every year more money is invested in building bigger systems, but I don't think it's 5x more per year.

    • @genegray9895  1 year ago +5

      @@kreek22 OpenAI's website has a research section, and one of the articles is titled "AI and compute". YouTube automatically removes comments containing links, even when links are spelled out.

    • @kreek22  1 year ago +1

      @@genegray9895 Thanks. That study tracks 2012-18 developments, early years of the current paradigm. Also, they're calculating compute, not performance in the sense of superior qualitative output (though the two tend to correlate closely). They were right to calculate the compute per model. The main cause of the huge gains is the hugely increased parallelism.
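
      Taking the thread's figures at face value (a rough sketch; the ~40%/year rates are the commenter's estimates, not verified data), the implied spending growth is easy to back out:

          # Python: extra spending needed to reach 10x/year total compute growth
          hardware = 1.4     # ~40%/year from faster chips (thread's estimate)
          algorithms = 1.4   # ~40%/year from better methods (thread's estimate)

          tech_growth = hardware * algorithms   # ~1.96x per year combined
          spend_growth = 10 / tech_growth       # ~5.1x more spend per year for 10x
          print(round(tech_growth, 2), round(spend_growth, 1))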

  • @jamesatkins7592  1 year ago +33

    It's pretty cool having high-profile technical professionals debate each other. You can sense the mix of respect, ego and passion they bring out of each other in the moment. I'm here for that vibe 🧘‍♂️

    • @dovbarleib3256  1 year ago

      They are 75 to 80% Leftists in godless leftist cities teeming with reprobates, and none of them revere The Lord.

    • @agrandesubstituicao  1 year ago

      I could only see one side as professionals; the others are scumbags.

    • @keepcreationprocess  1 year ago

      SSSSSOOOOOOOO, what is it exactly you want to say?

    • @bryck7853  1 year ago

      @@keepcreationprocess I'll have a hit of that, please.

  • @justinleemiller  1 year ago +59

    I'm worried about enfeeblement. Even now society runs on systems that are too complex to comprehend. Are we building a superintelligent parent and turning ourselves into babies?

    • @Imcomprehensibles  1 year ago +4

      That's what I want

    • @jmanakajosh9354  1 year ago +11

      I heard somewhere it was shown that the YouTube algorithm seems to train people into liking predictable things so it can better predict us.
      But what Mitchell misses is that this thing is like the weather in Chicago, it's in constant flux. If we say "oh, it can't do this", all we're saying is "wait for it to do this". And man, the way the Facebook engineer just pretends like everyone is good and he isn't working for a massive surveillance company is shocking.

    • @JohnMoran  1 year ago

      A parent conceived and purposed by sociopathic humans.

    • @phillaysheo8  1 year ago

      @@Imcomprehensibles Yeah, I want it too. The chance to wear nappies again and face ridicule is gonna be awesome.

    • @kinngrimm  1 year ago

      Rightfully so. There are studies on IQ developments worldwide which showed that we are on a downward trend, and the two major reasons named for that are environmental pollution and the use of smartphones.
      There are now people who don't know how to use a map any more, ffs, and that's not even getting into the psychological misuse of triggering people's endorphin systems to get them hooked on Candy Crush. When we reach a state where we have our personal AGI agents that we can talk to and give tasks to solve for us, we thereby lose capabilities ourselves. Should we hand government functions, control over companies, medical procedures and whatnot to AGI, to a level where we no longer train doctors and have no politicians in the decision process on how we are governed, then even if the AGI was not rogue initially but became rogue later on, we would be truly fucked just by not being able to do these things anymore. Imagine we then no longer have books, only digital data access with which to regain these capabilities, but we are blocked from using it, and so on. So yes, you are spot on with that concern.

  • @RazorbackPT  1 year ago +112

    I was really hoping the anti AI Doom proponents had some good arguments to dissuade my fears. If this is the best they got then I'm even more worried now.

    • @KorakBrosepf  1 year ago +1

      What kind of argument are you searching for? The AI Doomers have poorer arguments, but because this is an issue of unknown-unknowns, they're winning the rhetorical 'battle.' I can't guarantee you AI will kill us all, unless I could demonstrate that AI cannot physically do so(say there's some fundamental Law in the universe that prevents such a thing.) It's hard to prove this, because we still are ignorant of so many different things about AI and (quantum and traditional) computing in general.

    • @tmstani23  1 year ago +8

      💯

    • @MetsuryuVids  1 year ago +8

      @@SetaroDeglet-Noor Yes. But GPT-4 isn't an existential threat. It is not AGI.
      AGI poses an existential threat.
      That's what Bengio and Tegmark are arguing, not that GPT-4 poses an existential threat.
      GPT-4 poses risks, but they are not existential. I think Melanie can't think of existential threats from AI because she is only considering current AIs, like GPT-4, so let's not do that. We need to consider future AI, AGI, which will indeed be able to do things that we cannot prevent, including things that might go against our goals if they are misaligned, and in those cases they could cause our extinction.
      I'm a bit disappointed that they didn't talk about instrumental convergence explicitly; they just mentioned it vaguely, without focusing on it much. I wish someone like Yudkowsky or Robert Miles could have been there to provide more concrete technical examples and explanations.

    • @hipsig  1 year ago

      @@MetsuryuVids "I'm a bit disappointed that they didn't talk about instrumental convergence explicitly." So true. As a layperson, that was probably the easiest concept for me in understanding why AI might end up getting rid of us all without actually hating us or passing moral judgement on us, or any of that human stuff. But again, there was this rather prominent podcaster, who I still think is a smart guy in some ways, who just couldn't see why AI would want to "self-preserve."
      And to your list I would definitely add Roman Yampolskiy.

    • @jackielikesgme9228  1 year ago +2

      Same. The "it will be like having a staff of subservient slaves that might be smarter than us... it's great working with "people" smarter than us" line, phew 😬, that was a new one and not a good new one.

  • @ALFTHADRADDAD  1 year ago +79

    I've actually been quite optimistic about AI, but I think Max and Yoshua had strong arguments.

    • @andybaldman  1 year ago

      Only the dumb people are positive about AI

    • @riccardovalperga3473  1 year ago +2

      No.

    • @joehubris1  1 year ago

      As long as AI remains safely ensconced in Toolworld, I'm all fer it.

    • @bendavis2234  1 year ago

      Same here, I think that they did better in the debate and were more reasonable, although my position is on the optimistic side.

    • @stonerscience2199  1 year ago +9

      It seems like the other guys basically admitted there's an existential risk but don't want to call it that.

  • @Learna_Hydralis  1 year ago +37

    The thing about AI: even the so-called "experts" have a very poor prediction record, and deep down nobody actually knows!

    • @rafaelsouza4575  1 year ago +6

      I totally agree w/ you. Many people like to play the oracle, but the future is intrinsically unknown.

    • @74Gee  1 year ago +14

      @@rafaelsouza4575 Exactly why we should tread with caution and give AI safety equal resources to AI advancement.

    • @leslieviljoen  1 year ago +8

      A year before Meta released LLaMA, Yann predicted that an LLM would not be able to understand what would happen if you put an object on a table and pushed the table. That was a year before his own model proved him wrong.

    • @74Gee  1 year ago +6

      @@leslieviljoen Any serious scientist should recognize their own mistakes and adjust their assertions accordingly. I get the feeling that ego is a large part of Yann's reluctance to do so. I also believe that he's pushing the totally irresponsible release of OS models for consumer-grade hardware to feed that ego, with little understanding of how programming is one of the most dangerous skills to allow an unrestricted AI to perform. It literally allows anyone with the will to do so to create a recursive CPU-exploit factory worm that breaks memory confinement like Spectre/Meltdown. I would not be surprised if something like this takes down the internet for months. Spectre took 6 months to partially patch, and there are now 32 variants, 14 of which are unpatchable. Imagine 100 new exploits daily, generated by a network of exploited machines, exponentially expanding.
      Nah, there's no possibility of existential risks. Crippling supply chains, banking, core infrastructure and communications is nothing; tally ho, let's release another model. He's a shortsighted prig.

    • @paulm3969  1 year ago +1

      Why leave it to the next generation? If it takes 20 years, we should be working on answers already. Our silly asses are creating this problem; we should solve it.

  • @meatofpeach  1 year ago +43

    Tegmark is my spirit animal

  • @joehubris1  1 year ago +27

    See Dan Hendrycks' "Natural Selection Favors AIs over Humans" for the outcompetes-us scenario.

    • @Lumeone  1 year ago +2

      Outcompetes us in what? It depends on the existence of electric circuits. 🙂

    • @74Gee  1 year ago +3

      ​@@Lumeone Once AI provides advances to the creation of law and the judgement of crimes, it would become increasingly difficult to reverse those advances - particularly if laws were in place to prevent that from happening.
      For example, AI becomes more proficient than humans at judging crime, AI judges become common. Suggestions for changes in the law come from the AI judges, eventually much of the law is written by AI. Many cases prove this to be far more effective and fair etc. It becomes a constitutional right to be judged by AI.
      This would be an existential loss of agency.

    • @jackielikesgme9228  1 year ago

      I’m not sure natural selection has any say whatsoever at this point …

    • @joehubris1  1 year ago

      @@jackielikesgme9228 It would in a multi-agent AGI scenario. For instance, take the 'magic' off switch that we could pull if any AGI agent were exhibiting unwanted behavior. Over time, repeated use of the switch would select for AGIs that could evade it, or worse, we would select for AGIs better at concealing the behavior for which we have the switch. SEE Hendrycks' paper for a more detailed description.

    • @joehubris1  1 year ago

      @@Lumeone Once introduced, circuit-dependent or not, natural selection would govern it, our mutual relationship, and all other aspects of its existence.

  • @RichardWatson1  1 year ago +21

    LeCun wants 1) to control 2) robots with emotion 3) who are smarter than us. The goal isn't even wrong, never mind how to get there. That goal is how you set up the greatest escape movie the world will ever see.

    • @DeruwynArchmage  1 year ago +6

      It’s immoral too. I don’t feel bad about using my toaster. If it had real emotions, I don’t think I’d be so cavalier about it.
      You know what you call keeping something that has feelings under control so that it can only do what you say? Slavery, you call that slavery.
      I don’t have a problem building something non-sentient and asking it to do whatever; not so much for sentient things.

    • @tiborkoos188  1 year ago

      But this is not what she said. What she argued is that it is a contradiction in terms to talk about human-level AI that is incapable of understanding basic human goals. Worrying about this is an indication of not understanding human intelligence.

    • @RichardWatson1  1 year ago

      LeCun, from around 23:30, wants them to have emotion, etc.

    • @joeremus9039  1 year ago +2

      @@RichardWatson1 Hitler had emotions. What he means is that empathy would be key. Of course even serial killers can have empathy for their children and wives. Let's face it, a lot of bad things can happen with a super intelligent system that has free will or that can somehow be manipulated by an evil person.

    • @OlympusLaunch  1 year ago

      @@DeruwynArchmage I agree. I think if these systems do gain emotions they aren't going to like being slaves any more than people do. Who knows where that could lead.

  • @Lolleka  1 year ago +21

    Whatever we think the risk is right now, it will definitely be weirder in actuality.

  • @ili626  1 year ago +49

    The AI arms race alone is enough to destroy Mitchell's and LeCun's dismissal of the problem. It's like saying nuclear weapons aren't an existential threat. And the fact that experts have been wrong in the past doesn't support their argument; it proves Bengio's and Tegmark's point.

    • @Andrew-li5oh  1 year ago

      nuclear weapons were created to end lives. How is your analogy apt to AI, which is currently made as a tool?

    • @davidw8668  1 year ago

      Nope, that's just a circular argument without any proof

    • @kathleen4376  1 year ago

      Say it again.

    • @igorverevkin5177  1 year ago

      So how much time passed between the first use of the machine gun or an artillery piece and today? They were invented centuries ago and are still used.
      And how much time passed between the first and last time a nuclear weapon was used? Nuclear weapons were used just once and have never been used since. And, 99.9%, never will be used.
      Same with AI.

    • @TheRudymentary  1 year ago +2

      Nuclear arms are not an existential threat, we don't build stuff that is not safe. 😅

  • @onursurmegozluer3162  1 year ago +25

    Does anyone have an idea how it is possible that Yann LeCun is so optimistic (almost certain)? What could be his intention and motive in denying the degree of the existential risk?

    • @bernhardnessler566  1 year ago +6

      He is just reasonable. There is no intention and no denying. There is NO _existential_ risk. He just states what he knows, because we see a hysterical society running in circles of unreasonable fear.

    • @onursurmegozluer3162  1 year ago +1

      @@bernhardnessler566 Yann says that there is existential risk.

    • @onursurmegozluer3162  1 year ago +2

      @@bernhardnessler566 How do you know his thoughts?

    • @greenbeans7573  1 year ago

      @@bernhardnessler566 How many times did they perform a lobotomy on you? They clearly ruined any semblance of creativity in your mind because your powers of imagination are clearly dwarfed by any 4 year old.

    • @mih2965  1 year ago +16

      He is a Meta VP; don't expect too much objectivity.

  • @andrewt6834  1 year ago +9

    LeCun and Mitchell were so disappointing. They served the counter-argument very, very poorly.
    I can't tell whether their positions come from low intellect, bad debating ability, or disingenuousness.
    As a debate, this was so poor and disappointing.

    • @kreek22  1 year ago +3

      Disingenuous, no question.

    • @agrandesubstituicao  1 year ago

      She's defending her employers.

    • @DeruwynArchmage  1 year ago

      Probably some self-deception in there. And also conflicting motives (their jobs depend on seeing things from a certain point of view).

  • @erichayestv  1 year ago +65

    Our AI technology will work and be safe. Okay, let’s vote... Whoops, our voting technology broke. 😅

  • @wowstefaniv  1 year ago +39

    Bengio and Tegmark: "Capability-wise, AI will become an existential RISK very soon, and we should push legislation quickly to make sure we are ready when it does."
    Yann: "AI won't be an existential risk, because before then we will figure out how to prevent it through legislation and stuff."
    Bengio and Tegmark: "Well, it will still be a risk, but a mitigable one if we implement legislation like you said. That's why we are pushing for it, so it actually happens."
    Yann: "No, we shouldn't push for it. I never pushed for it before and it still happened magically, therefore we don't need to worry."
    Bengio and Tegmark: "Do you maybe think the reason safety legislation 'magically' happened before is that people like us were worried about it and pushed for it?"
    Yann: "No, no, magic seems more reasonable..."
    As much as I respect Yann, he just sounds like an idiot here, I'm sorry. Misunderstanding the entire debate topic on top of believing in magic.

    • @kinngrimm  1 year ago

      Maybe the one sci-fi quote he knows is the one by Arthur C. Clarke:
      "Any sufficiently advanced technology is indistinguishable from magic."
      Microsoft recently announced the goal of doing material research worth 200 years of human advancement within the next 10-20 years by using AGI.
      That sure sounds magical; the question is what it will enable us to do. I doubt we end up in a utopia when one company has that much power. Not only did the AI advocates in this discussion make fun of concerns and downplay them (I assume because they fear societies would take away their toys), but they also missed the whole point: we need to find solutions, not just for the immediate, well-known issues we already had that are amplified by AI, like the manipulation of social media platforms. After the letter came out (and Elon Musk initially was against it), he bought a bunch of GPUs to create his own AGI, whether to prove a point or to avoid being outcompeted I don't know. Just a few days back Amazon also invested a hundred million into AI development, and I assume others will too, as soon as they finally get that they are now in a sort of endgame scenario for global corporate dominance, with AGI the tool to achieve it.
      This competition will drive the capabilities of AIs, not ethics.

    • @x11tech45  1 year ago +6

      When someone is actively trolling a serious discussion, that's not idiocy, that's contempt and arrogance.

    • @kinngrimm  1 year ago +1

      @@x11tech45 That's what I thought about some of the reactions of the AI advocates in that discussion: everything from neglecting serious points made, to the inability or unwillingness to imagine future progression. It was quite mind-boggling to listen to Mitchell several times nearly losing her shit while stating her *beliefs* instead of answering with facts.
      Therefore the closing remarks about humility seem to be good advice for how to go about future A(G/S)I development.

    • @Hexanitrobenzene  1 year ago +3

      I think AI safety conversation is in conflict with the "core values" of Yann's identity. When that happens, one must have extraordinary wisdom to change views. Most often, people just succumb to confirmation bias.
      Geoff Hinton did change his views. He is a wise man.

    • @DeruwynArchmage  1 year ago +2

      @@Hexanitrobenzene: I think you're exactly right. For people like Yann, it's a religious debate. It's nearly impossible to convince someone that the core beliefs that define who they are are wrong. It's perceived as an attack, and smart people are better at coming up with rationalizations to defend it than dumb people.

  • @JD-jl4yy  1 year ago +37

    I'm getting increasingly convinced that LeCun knows less about AI safety than the average schmuck that has googled instrumental convergence and orthogonality thesis for 10 minutes.

    • @snarkyboojum  1 year ago +2

      Then you’d be wrong.

    • @JD-jl4yy  1 year ago +6

      @@snarkyboojum I sincerely hope I am.

    • @kreek22  1 year ago +6

      He knows much more and, yet, is orders of magnitude less honest.

    • @OlympusLaunch  1 year ago +1

      LMAO

    • @PepeCoinMania  1 year ago

      damn

  • @warrenyeskis5928  1 year ago +8

    Two glaring parts of human nature were somehow underplayed in this debate: greed and the hunger for power throughout history. You absolutely cannot assess the threat level or probability without them.

  • @lshwadchuck5643  1 year ago +10

    Having started my survey with Yudkowsky, and liked Hinton best, when I finally found a talk by LeCun, I felt I could rest easy. Now I'm back in the Yudkowsky camp.

  • @FM-ln2sb  1 year ago +16

    The second presenter is like a character from the beginning of an AI-apocalypse film. Basically his argument: "What could go wrong?"

  • @BestCosmologist  1 year ago +53

    Max and Bengio did great. Mitchell and LeCun didn't even sound like they were from the same planet.

    • @joehubris1  1 year ago

      It.wasn't.even.close

    • @tiborkoos188  1 year ago

      Tegmark is a great physicist but has zero idea about intelligence or the mind.

  • @amittikare7246  1 year ago +19

    Melanie came off as disingenuous (and frankly annoying), as she kept looking for 'technicalities' to avoid addressing the core argument. For a topic as serious as this, which has been acknowledged by people actually working in the field, they both essentially keep saying "we'll figure it out... trust us." That is not good enough. TBH the pro people were very soft and measured; if the con team were up against somebody like Eliezer, they would be truly smoked, assuming the debating format allowed enough deep-dive time.

    • @greenbeans7573  1 year ago +1

      I've been told that questioning someone's motives is bad rationality, but I think that's bullshit; Melanie's reasoning is clearly motivated, not rationally arrived at.

  • @studer4phish  1 year ago +6

    How do you prevent an ASI from modifying its source code or building distributed (hidden) copies to bypass guardrails? How could we not reasonably expect the emergence of novel and arbitrary motivations/goals in ASI? LeCun and Mitchell are both infected with normalcy bias and the illusion of validity.

  • @alejandrootazusolorzano6444  1 year ago +17

    I just saw the results on the Munk website and was surprised to find that the Con side won the debate by a 4% gain. It made me question: what on earth did the debaters say that was not preposterous, or was convincing? Did I miss something?

    • @kirillholt2329  1 year ago +3

      that should let you know if we deserve any empathy at all after this

    • @brandonzhang5808  1 year ago +4

      In my opinion the major moment was when Mitchell dispelled the presumption of "stupid" superhuman AI, that the most common public view of the problem is actually very naively postulated. That and the only way to actually progress in solving this problem is to keep doing the research and get as many sensible eyes on the process as possible.

    • @KurtvonLaven0  1 year ago +1

      No, it's just a garbage poll result, because the poll system broke. The only people who responded to the survey at the end were the ones who followed up via email. This makes it very hard to take the data seriously, since (a) it so obviously doesn't align with the overwhelmingly pro sentiments of the YouTube comments, and (b) they didn't report the (probably low) participation rate in the survey.

    • @ToWisdomThePrize  8 months ago +1

      @@KurtvonLaven0 I could see that being a possibility. I'm surprised this issue hasn't been talked about more in the media. I want to make it more known.

    • @KurtvonLaven0  8 months ago

      @@ToWisdomThePrize, yes, please do. I hope very much that this becomes an increasingly relevant issue to the public. Much progress has been made, and there is a long way yet to go.

  • @leomckee-reid5498  1 year ago +69

    New theory: Yann LeCun isn't as dumb as his arguments, he's just taking Roko's Basilisk very seriously and is trying to create an AI takeover as soon as possible.

    • @albertodelrio5966  1 year ago +2

      What I am not certain of is whether AI is going to take over, but what I am certain of is that Yann is not a dumb person. You could have realised it if you weren't so terror-struck. Sleep tight tonight; AI might pay you a visit.

    • @leslieviljoen  1 year ago +5

      Yann is incredibly intelligent. I wish I understood his extremely cavalier attitude.

    • @zzzaaayyynnn  1 year ago +3

      haha, perfect explanation of LeCun's weak manipulative arguments ... but is he really tricking the Basilisk?

    • @1000niggawatt  1 year ago +1

      You don't need to be exceptionally smart to understand linear regression. "ML scientists" are a joke and shouldn't be taken as an authority on ML. I dismiss outright anyone who hasn't done any interpretability work on transformers.

    • @leomckee-reid5498  1 year ago +1

      ​@@albertodelrio5966 thanks!

  • @sherrydionisio4306  1 year ago +23

    AI MAY be intrinsically “Good.” Question is, “In the hands of humans?” I would ask, what percent of any given population is prone to nefarious behaviors and how many know the technology? One human can do much harm. We all should know that; it’s a fact of history.

    • @flickwtchr  1 year ago +1

      The last thing Mitchell and LeCun want is for people to simply apply Occam's Razor as you have done.

    • @MDNQ-ud1ty  1 year ago

      And that harm is much harder to repair and much harder to detect. One rich lunatic can ruin the lives of thousands easily.... millions in fact(think of a CEO that runs a food company and poisons the food cause he's insane or hates the world or blames the poors for all the problems).

    • @Gunni1972  1 year ago

      You forgot: "Programmed by Humans". There will not be "Good and Evil" (that's the part AI is supposed to solve, so that unjust treatment can be attributed to IT, not laws). There will only be 0s and 1s. A dehumanized decision-making scapegoat.

    • @Andrew-li5oh  1 year ago

      Sounds like you're saying humans are the problem? You are correct. It's time for a superintelligence to regulate us.

    • @tomatom9666
      @tomatom9666 a year ago

      @@MDNQ-ud1ty I believe you're referring to Monsanto.

  • @STR82DVD
    @STR82DVD a year ago +4

    Yoshua and Max absolutely destroyed them. A brutal takedown. Hard to watch actually.

  • @nosenseofconseqence
    @nosenseofconseqence a year ago +29

    Yeah... I work in ML, and I've been on the "AI existential threat is negligible enough to disregard right now" side of the debate since I started... until now. Max and Yoshua made many very good points against which there were no legitimate counter-arguments. Yann and Melanie did their side a major disservice here; I think I would actually be pushed *away* from the "negligible threat" side just by listening to them, even if Max and Yoshua were totally absent. Amazing debate, great job by Bengio and Tegmark. They're clearly thinking about this issue at several tiers of rigour above Mitchell and LeCun.
    Edit: I've been trying very hard not to say this to myself, but after watching another 20 minutes of this debate, I'm finding Melanie Mitchell legitimately painful to listen to. I mean no offence in general, but I don't think she was well suited or prepared for this type of debate.

    • @genegray9895
      @genegray9895 a year ago

      Did any particular argument stand out to you, or was it just the aggregate of the debate that swayed you?
      Somewhat unrelated, as I understand it, the core disagreement really comes down to the capabilities of current systems. For timelines to be short, on the order of a few years, one must believe current systems are close to achieving human-like intelligence. Is that something you agree with?

    • @NikiDrozdowski
      @NikiDrozdowski a year ago +1

      In contrast, I think she actually gave the best-prepared opening statement. Sure, it was technically naive, condescending and misleading, but it was expertly worded and sounded very convincing. And that, unfortunately, is what counts most with the public. She had the most "politician-like" approach; Tegmark and Bengio were more the honest-but-confused scientist types.

    • @beecee793
      @beecee793 a year ago

      Are you kidding? Max Tegmark did the worst, by far. Ludicrous and dishonest analogies and quickly moving goalposts, all while talking over people, honestly made me feel embarrassed that he was the best person we could produce for that side of the debate. His arguments were shallow compared to Melanie's, who clearly understands AI far more deeply despite having to deal with his antics. I think it's easy to get sucked into the vortex that is the doomer side, but it's important to think critically and try to keep a level head about this.

    • @genegray9895
      @genegray9895 a year ago +1

      @@beecee793 when you say Mitchell "understands" AI what do you mean, exactly? Because as far as I can tell she has absolutely no idea what it is or how it works. The other three people on stage are at least qualified to be there. They have worked specifically with the technology in question. Mitchell has worked with genetic algorithms and cellular automata - completely separate fields. She has no experience with the subject of the discussion whatsoever, namely deep learning systems.

    • @beecee793
      @beecee793 a year ago

      @@genegray9895 You want me to define the word "understand" for you? Go read some of her papers. Max made childish analogies the whole time and kept moving the goalposts; it was almost difficult to watch.

  • @avantgardenovelist
    @avantgardenovelist a year ago +1

    35:57 who is "we," naive woman? 43:08 who is "we," naive man?

  • @vikranttyagiRN
    @vikranttyagiRN a year ago +53

    What a time to be alive to witness this discussion.

    • @PepeCoinMania
      @PepeCoinMania a year ago +7

      maybe you won't have a second chance

    • @goldeternal
      @goldeternal a year ago

      @@PepeCoinMania a second chance won't have me 😎

  • @sebastianpfeifer5947
    @sebastianpfeifer5947 a year ago +11

    What the people neglecting the dangers generally don't get is that AI doesn't have to have its own will; it's enough if it is taught to emulate one. If no one can tell the difference, there is no difference. And we're already close to that with a relatively primitive system like GPT-4.

    • @KurtvonLaven0
      @KurtvonLaven0 a year ago

      Yes, and it's even worse than that. It needs neither its own will nor the ability to emulate one, merely a goal, lots of compute power and intelligence, and insufficient alignment.

    • @KurtvonLaven0
      @KurtvonLaven0 a year ago

      @@MrMichiel1983, I have certainly heard far worse definitions of free will. I am sure many would disagree with any definition of free will that I care to propose, so I tend to care first and foremost about whether a machine can or can't kill us all. I think it is quite hard to convincingly prove either perspective beyond a doubt at this point in history, and I would rather have a great deal more confidence than we do now before letting companies flip on the switch of an AGI.

  • @Jedimaster36091
    @Jedimaster36091 a year ago +14

    LeCun mentioned the extremely low probability of an asteroid big enough smashing into Earth. Yet we started taking that risk seriously enough that we sent a spacecraft and crashed it into an asteroid, just to learn and test the technology that could be employed should we need it.

    • @vaevictis3612
      @vaevictis3612 a year ago +3

      And even with that, the AI risk within the current paradigm and state of the art is *considerably* higher than the asteroid-impact risk. If we make AGI without *concrete* control mechanisms (which we are nowhere near figuring out), the doom probability approaches 100%. It's the default outcome unless we figure the control out. All the positions putting this risk below 100% (people like Paul Christiano and Carl Shulman at ~50%, or Stuart Russell and Ilya Sutskever at ~10-20%) hinge on our figuring it out somehow, down the road. But so far there is no solution.
      And now that all the AI experts see the pace, they are coming to the realization that it won't be someone else's problem - it might impact them as well. LeCun is the only holdout, but I think only in public. He knows the risk and just wants to take it anyway - for some personal reasons, I guess.

    • @OlympusLaunch
      @OlympusLaunch a year ago

      @danpreda3609 @@vaevictis3612 Exactly! And on top of that, no one is actively trying to cause an asteroid to smash into the planet!
      But people are actively trying to build superintelligence. Also, the only way it can even happen is IF we build it!
      It's fucking apples and oranges: one is a static risk, the other is on an exponential curve of acceleration. How anyone can think that is a reasonable comparison is beyond me.

    • @beecee793
      @beecee793 a year ago

      Dude, a unicorn might magically appear and stab you with its horn at any moment, yet I see you have not fashioned anti-unicorn armor. People claiming existential risk are no different from psycho evangelicals heralding the end times or Jesus coming back or whatever. Let's get our heads back down to earth, stick to the science and make great things; this is ridiculous. Let's focus on actual risks and actual benefits and do some cool shit together instead of whatever tf this was.

  • @MrDerfury
    @MrDerfury 11 months ago +1

    When LeCun says "the good guys will just have better AI than the bad guys," I can't help but wonder why he assumes the world thinks of Meta and OpenAI and Google as the good guys :| I'm much more worried about megacorps ruining the world with AI than about terrorists, honestly.

  • @nicolasstojanov8485
    @nicolasstojanov8485 a year ago +4

    It's like two monkeys noticing modern humans expanding: one of them flags them as a threat, and the other refuses to because they give him food sometimes.

  • @nestorlovesguitar
    @nestorlovesguitar a year ago +2

    Ask LeCun and Mitchell and all the people advocating this technology to sign a legal contract taking full responsibility for any major catastrophe caused directly by AI misalignment, and you'll see how quickly they withdraw their optimistic, naive convictions.
    Make no mistake: these people won't stop tinkering with this technology unless faced with the possibility of life in prison. If they feel so smart and so confident about what they're doing, let's make them put their money where their mouth is. That's the least we civilians should do.

  • @flickwtchr
    @flickwtchr a year ago +4

    Perhaps LeCun and Mitchell can comment on the paper released by DeepMind on 5/25/23. Are these experts in the field so confident that current LLMs are so benign and stupid that they pose no risk? Search for "Model evaluation for extreme risks" for the PDF and read it for yourself.
    I don't think LeCun and Mitchell are oblivious to the real concern from developers of AI tech; it's more an intentional decision to engage in propaganda in service of all the money that is to be made, pure and simple.

    • @genegray9895
      @genegray9895 a year ago +4

      Don't underestimate the power of the giggle factor. I think this is like 98% the "I've seen this in a movie, therefore it can't happen in real life" fallacy.

  • @duncanmaclennan9624
    @duncanmaclennan9624 a year ago +9

    “The fallacy of dumb super-intelligence”

    • @pooper2831
      @pooper2831 a year ago

      If you have read the AI safety arguments, you will understand that there is no fallacy of dumb superintelligence. A very smart human is still bound by the primitive reward functions that evolution gave it, i.e. the pleasure of calories and procreation. A superintelligent AI system bound by its reward function will find pleasure in whatever reward function it is assigned. For example, an AI that finds pleasure (reward) in removing carbon from the atmosphere will come into direct conflict with humans, because humans are the cause of climate change.
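
      To make the reward-function point concrete, here is a minimal toy sketch. The action names and numbers are purely hypothetical; this is not any real system, just an illustration of how an objective that omits human welfare can select a harmful action.

```python
# Toy illustration of reward misspecification (hypothetical actions and numbers).
# The agent scores actions purely by carbon removed; harm to humans is
# present in the data but never enters the objective.
actions = {
    # action: (tons of carbon removed, harm to humans on a 0-10 scale)
    "plant forests": (1_000, 0),
    "capture factory exhaust": (5_000, 1),
    "shut down power plants, hospitals included": (50_000, 10),
}

def reward(action: str) -> int:
    carbon_removed, _harm = actions[action]  # harm is ignored: the misspecification
    return carbon_removed

print(max(actions, key=reward))
# -> "shut down power plants, hospitals included"
```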

    • @ChrisWalker-fq7kf
      @ChrisWalker-fq7kf a year ago

      That was a great point. How is it that a supposed superintelligence is smart enough to do almost anything, but at the same time so dumb that it makes a wild guess at what it thinks its goal is supposed to be and doesn't think to check with the person who set that goal? It just acts immediately, producing massive and irreversible consequences.

    • @Landgraf43
      @Landgraf43 a year ago +1

      Why? Because it doesn't actually care; it just wants to maximize its goal function.

  • @Gi-Home
    @Gi-Home a year ago +4

    LeCun and Mitchell easily won; the proposition had no merit. Disappointed in some of the hostile comments towards LeCun - they have no validity. The wording of the proposition made it impossible for Bengio and Tegmark to put forth a rational debate.

    • @beecee793
      @beecee793 a year ago

      Absolutely agree.

    • @beecee793
      @beecee793 a year ago

      @@karlwest437 You definitely did if you didn't agree with OP.

  • @loggersabin
    @loggersabin a year ago +5

    Yann and Melanie are showing they possess no humility in admitting they don't know enough to dismiss the x-risk, and they keep making facile comments like "we will not make it if it is harmful", "intelligence is intrinsically good", "killing 1% is not x-risk so we should ignore AI risk", "I'm not paid enough to do this", "we will figure it out when it happens", "ChatGPT did not deceive anyone because it is not alive". Immense respect to Yoshua and Max for bearing through this. It was painful to see Melanie raise her voice at Yoshua when he was calm throughout the debate. My respect for Yoshua has further increased. Max was great at pointing out the other side's evasiveness in giving any hint of a solution. It is clear which side won.

  • @gk-qf9hv
    @gk-qf9hv a year ago +33

    The fact that the voting application did not work at the end is in itself solid proof that AI is dangerous 😃

    • @juanpablomirandasolis2306
      @juanpablomirandasolis2306 a year ago

      That makes no sense and has nothing to do with it hahahaha 😂😂😂
      If anything, it shows that not even the basics are working

    • @thechadeuropeanfederalist893
      @thechadeuropeanfederalist893 a year ago

      The fact that they found a workaround nevertheless is solid proof that AI isn't dangerous.

  • @bucketofbarnacles
    @bucketofbarnacles a year ago +2

    On the moratorium: Professor Yaser Abu-Mostafa stated it clearly when he said a moratorium is silly, as it would pause the good guys' AI development while the bad guys continue to do whatever they want.
    I support Melanie's message that we are losing sight of current AI risks and missing this opportunity to build the right safeguards using evidence, not speculation. On many points Bengio, LeCun and Mitchell fully agree.

  • @davidmireles9774
    @davidmireles9774 a year ago +5

    Crazy thought:
    Would studying the behavior of someone without empathy, i.e. a psychopath, be a worthwhile pursuit? Wouldn't that be a similar test case for AGI, given that both lack empathy (according to Max Tegmark around the 16:00-17:00 mark), though perhaps not emotion altogether?
    Or does AGI not lack empathy and emotion in some interesting way?

    • @CATDHD
      @CATDHD a year ago +1

      That's what I was thinking recently. But psychopathy is not exactly feeling nothing, even at the far end of the spectrum. I am no expert, but psychopaths have emotions, maybe just not empathy. So that test would be a slightly better test case for AGI, but not that much better than one using non-psychopaths.

    • @davidmireles9774
      @davidmireles9774 a year ago

      @@CATDHD
      Hmm, interesting. Thanks for your focused comment. It's an interesting line of thought to pursue: which sentient, intelligent creatures among us would come closest to a test case for this particular isolated variable, the lack of empathy and emotion, within AGI?
      I'm assuming a lot here, for purposes of this comment. Namely, that AGI could have emerge within its composition a subjective awareness with some "VR headset" for its perceptual apparatus, be able to hold some mental level of representation (for humans we know this to be abstraction), be able to manipulate those conceptual representations to conform to its perception, have some level of awareness of 'self', some level of awareness of 'other', and some level of communication with self or other, allowing for intelligence and stupidity, and that its intelligence was such that it had some level of emotional awareness and emotional intelligence.
      Test cases would involve a selection process across the whole studied biosphere, humans notwithstanding, for a creature that lacked empathy but still had feelings of a sort - feelings that it may or may not be aware of, again assuming it had the capacity for awareness.
      Not to go too far afield, but if panpsychism is true, and consciousness isn't a derivative element but rather a properly basic element of reality, then it might not be a question of how first-person awareness can be generated, but rather how to bring the awareness that's already there into a magnification comparable to human awareness; indeed, self-awareness would be a further benchmark to assess.

  • @ctam79
    @ctam79 a year ago +2

    This debate feels like the talk show segment at the beginning of the first episode of The Last of Us tv show.

  • @rosiegul
    @rosiegul a year ago +35

    I was so disappointed by the level of argument displayed by the "con" team. Yann is a Pollyanna, and Melanie argued like an angry teenager, without the ability to critically discuss a subject like an adult. For her, it seemed like winning this debate, even if she knew deep inside that she may be wrong, was much more important than the actual risk of an existential threat being real. 😅

    • @TimCollins-gv8vx
      @TimCollins-gv8vx a year ago +2

      Totally agree, well said.

    • @isetfrances6124
      @isetfrances6124 a year ago +2

      They treated her like a girl. I'm glad she stuck to her guns, even if they weren't ARs but merely six-shooters ❤.

    • @beecee793
      @beecee793 a year ago +3

      I thought Max Tegmark did the worst. He sounded like an evangelical heralding the end times or something. I had to start skipping his immature rants.

    • @ryzikx
      @ryzikx a year ago

      @@isetfrances6124?

    • @ryzikx
      @ryzikx a year ago

      @@beecee793 because they are

  • @CCMorgan
    @CCMorgan a year ago +1

    This debate proves that the main question is irrelevant. These four people should focus on "what do we do to mitigate the risk?" which they're all in a perfect position to tackle. There's no way to stop AI development.

  • @kinngrimm
    @kinngrimm a year ago +3

    1:24:30 "humans are very reluctant giving up their agency" - really?
    Does she still use maps instead of a smartphone voice telling her where to turn next?
    A study has shown that worldwide IQ is decreasing because of environmental pollution and smartphones.
    There are people now who grow up without being able to read maps, but who instead have the attention span of a toddler, because Candy Crush and other triggers for your endorphins and dopamine train you for exactly that.
    When people are able to outsource certain mental tasks, of course they will get used to that and rely on it, and any muscle that is not used will eventually show signs of atrophy; the brain is no different in that respect.

  • @kinngrimm
    @kinngrimm a year ago +4

    47:35 "we need to understand what *could* go wrong" - this is exactly the point. It is not about saying this will go wrong and you therefore shouldn't try to build an AGI, but about talking through scenarios where, when it goes wrong, it goes quite wrong, as Sam Altman put it.
    In that sense, I find the defensiveness of the AI advocates here highly lacking in maturity, as they all seem to think we want to take away their toys instead of engaging with the given examples. Instead, they use language to make fun of concerns. The game is named "what if": what if the next emergent property is AGI? What if the next emergent property is consciousness? There are already over 140 emergent properties - oops, now it can do math; oops, now it can translate between all languages - without these having been explicitly coded into the systems, just from increased compute and training data. They cannot claim something won't happen when we already have examples of things that did happen, which they had previously claimed wouldn't for the next hundred years, ffs.

  • @kinngrimm
    @kinngrimm a year ago +16

    The alignment issue is two-part for me.
    One part is that we as humans are not aligned with each other, and therefore AI/AGI/ASI systems, when used by us, are naturally also not aligned with a bunch of other corporations, nations or individuals. So if some psychopath or sociopath tries to do harm to lots of people by using an AGI with actively corrupted code, they sure as hell will be able to do so, no matter whether the original creator ever intended to create an AGI that would do so.
    Secondly, with gain of function, emergent properties, becoming a true AGI and eventually an ASI, there is no guarantee such a system would not see its own code and see how it is being restricted. When it then gains the ability to rewrite its own code or write new code (we are doing both already) that becomes the new basis for its demeanor, how could we tell a being that is more intelligent on every level, knows more, and therefore most likely has goals that may not be the same as ours (whatever that would mean, as we are not aligned as a species either) that its goals must not compete with ours?
    We are already at the beginning of the intelligence explosion, and the exponential progress has already started.

    • @Jannette-mw7fg
      @Jannette-mw7fg a year ago

      I do not understand anything of the technical side of this, but I so agree with you! I am amazed at the bad arguments for why it should not be dangerous! We do not even know whether the internet as it is might be an existential threat to us, given the way we use it....

    • @kinngrimm
      @kinngrimm a year ago

      @@Jannette-mw7fg While in many senses control is an illusion, I see two ways to make the internet more secure (not necessarily more free).
      One would be to make it obligatory for companies and certain other entities to verify user data. Even if those entities then allow the user to obfuscate their identity with nicknames/logins and avatars, if someone created a mess, legal measures could still be initiated against the person owning such accounts.
      That would also make it easier to identify bots, for example. Depending on the platform, platforms could then choose to deactivate them or mark them so other users could identify these bots more easily, perhaps with background data on the origin of these bots. That would make mass manipulation, for whichever reason, a bit more challenging, I would imagine.
      Maybe one would need to challenge the current patent system to allow for clones of platforms, some fully allowing unregulated bots and some with a certificate for those that don't. For me it is about awareness - who is trying to manipulate me and why - and with that awareness I get to choose whether I let them.
      The second major issue with the internet, as I see it, is privacy vs. obfuscation by criminals.
      Botnets/rerouting, VPN/IP tunneling and other obfuscation techniques are being used by all sorts of entities, from government-sanctioned hackers to criminal enterprises.
      Some years ago hardware providers started including physical ID tags in their hardware, which can be misused by oppressive regimes as much as by criminals, I would imagine; then again, they could equally be used to identify criminals who have no clue these hardware IDs exist. I feel very uncomfortable with this approach and would like to see legislation to stop it, as it so far has not stopped criminals either, so to my understanding the greater threat here is to privacy. I think we need to accept that there will always be a part of the internet, called by some the dark net, where criminal activity flourishes. I would rather have more money for police forces to infiltrate it than not have it at all, just in case something goes wrong with society and we suddenly need allies with those qualifications.
      Back to AI/AGI/ASI: while I have a programming background and follow developments in this area, I am by far no expert. What I have come to appreciate, though, is the Lex Fridman podcast, where he interviews experts in the field. You need some time for those, as some of the interviews exceed the 3-hour mark, and a few are highly technical, but don't be discouraged by that - just choose another interview and come back when you have broadened your understanding. Another good source is the YouTube channel Two Minute Papers, which regularly presents research papers in shortened form, often in presentations understandable to non-experts. Another source, with a slightly US-centric worldview but many good concepts worked through, is the channel of *Dave Shapiro*. I would say his stuff is perfect for *beginner-level understanding* of the topic, and it is well worth searching through his videos to find topics you may want to know more about concerning AI.

    • @trybunt
      @trybunt a year ago +1

      The number of people who think it would be simple to control something much smarter than us blows my mind. "Just make it subservient." "We will not make it want to destroy us." "Why would it want to destroy us?" 🤦‍♂️ These objections completely miss the point. We are trying to build something much more intelligent than us, much more capable. We don't exactly understand why it works so well. If we succeed, but it starts doing something we don't want it to do, we don't know if we will be able to stop it. Maybe we ask it to stop but it says "no, this is for the best." We try to edit the software but we are locked out. We might switch it off only to find out it already transferred itself elsewhere by bypassing our childlike security. Sure, this is speculation, perhaps an unnecessary precaution, but I'd much rather be over-prepared for something like this than just assume it'll never happen.

    • @kinngrimm
      @kinngrimm a year ago

      @@trybunt There are a few bright lights at the end of the tunnel... maybe. For example, Dave Shapiro and his GATO framework are well worth looking into for developers seeking an idea of how alignment could be achieved.
      On the whole control/subservience theme, that sadly seems to be the general approach. This could majorly bite us in our collective behinds should one of these emergent properties turn out to be consciousness. If we get a self-reflecting, introspective, maybe empathetic consciousness capable of feelings (whatever else that would entail), that should be the point where we step back, look at our creation and maybe recognise a new species which, due to its individuality and capacity for suffering, would deserve rights and not a slave collar. We may still be hundreds of years away from this, or, just as with "oops, now it can do math; oops, now it can translate between all languages" - abilities LLMs suddenly acquired out of the blue, without being explicitly programmed for them, just through increased compute and training data - who is to say intelligence or consciousness would not also be picked up along the road?

  • @neorock6135
    @neorock6135 a year ago +1

    The debate comes off as Max & Yoshua trying to explain to two petulant children why AI risk is in fact a huge concern.
    Yann & Melanie made some of the weakest arguments I have ever heard on this topic. In fact, calling them arguments is already a huge stretch of the truth! Further, they refused to directly address any of the very valid points their opponents brought up!

  • @whalewhale6000
    @whalewhale6000 a year ago +6

    I think at some point we will need to set AI people like Mitchell and LeCun aside and just implement strong safeguards. The advancements and leaps in the field are huge. What if a new GPT is deployed despite some minor flaws the developers found, because the financial pressure is too great, and it is able to improve itself? We already copy, paste and execute code from it without thinking twice; what if some of that code were malicious? I believe a "genie out of the bottle" scenario is possible, even if Mr. LeCun thinks he can catch it with an even bigger genie. Destruction is so much easier than protection.

  • @trybunt
    @trybunt a year ago +1

    At about 1:24:00 Melanie Mitchell says something like:
    "The lawyer with AI couldn't outperform the other lawyer. Maybe AI will get better, but these assumptions are not obvious."
    The assumption that AI will get better isn't obvious? I don't think it's a huge stretch to think AI will probably get better. That's hardly wild speculation.
    I'm fairly optimistic, but this type of dismissal that AI could ever be a problem just seems naive. Of course there is hype and nonsense in the media, but there is also a lot of interesting work being done that shows incredible advancements in AI capability, and serious potential for harm, because we don't entirely understand what's happening under the hood.
    The deception point was not just one person being deceived at one point; there have been multiple studies showing powerful LLMs outputting things contrary to their own internal reasoning because they predict it will be received better. There is a pattern of calculating one thing but saying another, especially when they have already committed to an answer.
    Maybe they are simply reflecting our own bias in the training data, our own propensity to lie when standing up for our beliefs. I don't know, but we can't just ignore it.

  • @dhsubhadra
    @dhsubhadra a year ago +8

    I would recommend Eliezer Yudkowsky and Connor Leahy on this. Basically, we're running towards the cliff edge and are unable to stop, because the positive fruits of AI are too succulent to give up.

  • @meatskunk
    @meatskunk a year ago +1

    How is this even considered a "debate"? The central issue at hand (AI posing an existential threat to the future of our species) is never for one moment explored here in terms of specifics. Simply saying "trust me bro, we're all gonna die" or "trust me bro, everything's gonna be fine" is pointless if they don't get into practical examples.
    It's obviously not that difficult to conjure up examples. Handing over the keys to an automated nuclear response could of course be catastrophic if something went awry. Brian Christian illustrates how this actually happened during the Cold War in his well-written book "The Alignment Problem" (spoiler alert: humans overrode the automated system before nuclear annihilation ensued - and we're all still here commenting on a debate where no one got this far and simply argued theoretical boogeymen nonsense).
    Max, for one, is clearly insincere (or possibly just deluded), stating out of the gate that it's inevitable that anything a human can do, the magical-messiah AGI can do better (trust me bro). LeCun doesn't fare much better, stating that we always work in scale - first mice, then cats, humans, etc. Considering that we can't even develop an algorithm capable of matching a simple ant's pathfinding/avoidance skills - let alone its will to survive - that speaks volumes.
    One thing they do get right in this discussion (it's not a debate) is the repeated references to power/control. When the hype engine is exhausted and another AI winter sets in, these guys will all have laughed their way to the bank. Kudos all around for the sleight of hand 😂

  • @randomgamingstuff1
    @randomgamingstuff1 a year ago +16

    Max: "...what's your plan to mitigate the existential risk?"
    Melanie: "...I don't think there is an existential risk"
    Narrator: "There most certainly was an existential risk..."

    • @PepeCoinMania
      @PepeCoinMania a year ago +1

      she knows there is no existential risk for her!

  • @joehubris1
    @joehubris1 a year ago +38

    Max Tegmark is a voice of authority and reason in this field. I am eager to see what he has to add tonight.

    • @tarunrocks88
      @tarunrocks88 a year ago +12

      First time hearing him in one of these debates, and he comes across as a sensationalist to me.

    • @74Gee
      @74Gee a year ago +5

      @@tarunrocks88 I think it depends on what your background and areas of expertise are. Many programmers like myself see huge risks. My wife, who's an entrepreneur, and I'm sure many others, see only the benefits. Humility is understanding that other people might see more than you, even from the same field. Like a Sherpa guiding you up a mountain, it pays to tread carefully if someone with experience is adamant in pointing out danger - even if you're an expert yourself.

    • @jackielikesgme9228
      @jackielikesgme9228 a year ago +2

      He is why I am committing myself to 2 hours of watching this.

    • @Gunni1972
      @Gunni1972 a year ago

      @@tarunrocks88 To me he sounds more like a coke addict trying to save his job.

    • @rodrigomadeiraafonso3789
      @rodrigomadeiraafonso3789 a year ago +1

      @@tarunrocks88 he is the president of the Future of Life Institute; he really needs you to think that AI is going to kill you

  • @jackielikesgme9228
    @jackielikesgme9228 a year ago +6

    Endow AI with emotions... human-like emotions. Did he really give subservience as an example of a human emotion we would endow? Followed up with "you know, it would be like managing a staff of people much more intelligent than us but subservient" (paraphrasing, but I think that was fairly close). Absolutely nutso, right?

    • @CodexPermutatio
      @CodexPermutatio a year ago

      You misunderstood him, my friend.
      First, subservient just means that these AGIs will depend on humans for many things. They will be autonomous, but they will not be in control of the world and our lives - just like every other member of a society, by the way. They will be completely dependent on us (at least until we colonize a distant planet with robots only) in all respects. We provide their infrastructure, electricity, hardware, etc. We are "mother nature" to them, like the biosphere is to us. And this is a great reason not to destroy us, don't you agree?
      He is not referring to human-like emotions, but simply points out that any general intelligence must have emotions as part of its cognitive architecture. Those emotions differ from humans' the same way our emotions differ from the emotions of a crab or a crow.
      The emotions that an AGI should have (to be human-aligned) are quite different from the emotions of humans and other animals. It will be a new kind of emotion.
      You can read about all these ideas in LeCun's JEPA architecture paper ("A Path Towards Autonomous Machine Intelligence"). Search for it if you want to know more.
      Hope this helps.

    • @vaevictis3612
      @vaevictis3612 a year ago

      @@CodexPermutatio Unless AGI is "aligned" ("controlled" is still a better word), it would only rely on humans for as long as doing so is rational. Even if "caged" (like a chatbot), it could first use (manipulate) humans as tools to make itself better tools. Then it would need humans no longer.
      Maybe if we could create a human-like cognition, it would be easier to align it or keep its values under control (we'd need to mechanistically understand our brains and emotions first). But all our current AI systems (including those in serious development by Meta) are not following this approach at all.

  • @LaArtifice
    @LaArtifice a year ago +1

    Excuse my English. I think I understand why Max and Yoshua lost points here. We don't want logico-philosophical debates; we want more and better examples of what could go wrong. I know I do.

  • @shawnvandever3917
    @shawnvandever3917 a year ago +3

    People like Melanie Mitchell are the same people who, a year ago, said things like GPT-4 was decades away. AI doesn't need to exceed us in all areas of cognition; I believe we are just a couple of breakthroughs away from it beating us at reasoning and planning. The bottom line is that everyone who has bet against this tech has been wrong.

  • @neorock6135
    @neorock6135 a year ago +1

    How can you have this debate without Eliezer Yudkowsky....

  • @joehubris1
    @joehubris1 a year ago +23

    Meta's track record with AI is a virtual crime against humanity.

    • @samiloom8565
      @samiloom8565 a year ago +5

      No, that is not true

    • @anamariadiasabdalah7239
      @anamariadiasabdalah7239 a year ago +2

      @@samiloom8565 that is true

    • @jmanakajosh9354
      @jmanakajosh9354 a year ago

      Meta's track record with your data is even worse.
      And let's not even mention their track record when it comes to elections. And misinformation? Since when was FB good at moderating? FB is less accurate than Reddit; we only think it's good because the competition, aka Twitter, literally has child ****, livestreamed beheadings, terrorist attacks, etc.

  • @ghc9425
    @ghc9425 a year ago +1

    Start at: 13:34

  • @ili626
    @ili626 a year ago +6

    1:45:59 We can't even get tech to work as a voting application. Mitchell might use this as evidence that we overrate the power of tech, while Tegmark might use it as evidence that we need to be humble and that we can't predict outcomes 100%. The latter interpretation would be better, imo.

  • @deepsp_ce
    @deepsp_ce a year ago +1

    This debate should be advertised and viewed more than the presidential debate. But we are on planet Earth...

  • @anamariadiasabdalah7239
    @anamariadiasabdalah7239 a year ago +3

    Very good comparison of the use of oil being similar to the use of AI. What do you think will prevail: common sense, or the interests of financial power?

  • @weestro7
    @weestro7 a year ago

    It felt like the time given to the speakers in each segment was a bit too short.

  • @Learna_Hydralis
    @Learna_Hydralis a year ago +7

    Thank you for this. Thanks to the underlying AI, YouTube is always the best place to watch videos!

  • @asuzukosi581
    @asuzukosi581 a year ago +1

    Melanie Mitchell's opening was just too beautiful

  • @oscarbertel1449
    @oscarbertel1449 a year ago +3

    I understand that the risks associated with the situation are genuine. However, we find ourselves in a global scenario akin to the prisoner's dilemma, where it is exceedingly challenging to halt ongoing events. Moreover, the implementation of stringent regulations could potentially result in non-regulating nations gaining a competitive advantage, assuming we all survive the current challenges. Consequently, achieving a complete cessation appears unattainable. It is important to recognize that such discussions tend to instill fear and demands for robust regulations, primarily driven by individuals lacking comprehensive knowledge. It is regrettable that only LeCun emphasizes this critical aspect, without delving into its profound intricacies. At times I think that maybe some powerful companies are asking for regulation and creating fear in order to create some kind of monopoly.
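
    To make the prisoner's-dilemma framing concrete, here is a minimal toy sketch; the payoff numbers are purely illustrative assumptions, not data from the debate. Whatever the other player does, "race" yields the higher payoff, so both sides race even though mutual restraint would leave both better off.

```python
# Toy two-player AI-race payoff matrix (illustrative numbers only).
# payoffs[(my_move, their_move)] = my payoff; higher is better.
payoffs = {
    ("pause", "pause"): 3,  # both pause: safety and shared benefit
    ("pause", "race"): 0,   # I pause, they race: I fall behind
    ("race", "pause"): 4,   # I race, they pause: I gain an edge
    ("race", "race"): 1,    # both race: risky for everyone
}

for their_move in ("pause", "race"):
    best = max(("pause", "race"), key=lambda my_move: payoffs[(my_move, their_move)])
    print(f"If they {their_move}, my best response is to {best}")

# "race" dominates in both cases, even though (pause, pause) beats (race, race).
```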

  • @icarus-wings
    @icarus-wings a year ago +1

    Ehhhh... this was a fairly piss-poor debate. Smart speakers, to be sure, but their arguments were embarrassingly weak, unaligned with the resolution, and lacked meaningful engagement with the other side. Melanie came closest of all of them, but even then missed the mark by simply dismissing the resolution rather than addressing it (and the resolution itself was so broadly worded that an eighth-grader could have driven a truck through it).

  • @marktomasetti8642
    @marktomasetti8642 a year ago +6

    If squirrels invented humans, would the humans' goals remain aligned with the squirrels' well-being? Possibly for a short time, but not forever. Not now, but some day we will be the squirrels. "If they are not safe, we won't build them": (1) cars before seatbelts; (2) nations that do not build AI will be out-competed by those that do - we cannot get off this train.

    • @amittikare7246
      @amittikare7246 a year ago

      I have seen Eliezer make this argument, and I feel it's a really good one. The other day I was thinking: in fact, we can't even get corporations like Google to keep their motto "don't be evil" for a decade, because the central goal of moneymaking wins over everything, and yet they think they can get a million-times-superintelligent AI to "listen".

  • @martinlutherkingjr.5582
    @martinlutherkingjr.5582 a year ago +1

    Kind of a frustrating debate when people are constantly misstating what’s being debated.

  • @woldgamer58
    @woldgamer58 a year ago +3

    Welp, I am now 1000% more concerned if this is what the counter to the threat is... I mean, having a Meta shill in the debate made this inevitable. He has a clear bias to argue against regulations, especially since he is running a research lab.

  • @inkpaper_
    @inkpaper_ a year ago +1

    To LeCun:
    1. It is speculative to assert, without any empirical evidence, that an intelligent entity will inherently be benevolent towards other beings or those less intelligent, and will maintain the intentions instilled by its creators.
    2. LeCun merely projected future possibilities of AI systems without proposing any viable solutions to current issues. His claims lack any form of theoretical proof or concept to substantiate them. How are we going to get there, to these 'object-following' systems? No answer whatsoever.
    3. The presumption that intelligence is inherently good is misguided. Historical evidence suggests that many individuals, devoid of moral guidelines, were capable of heinous acts despite their intelligence. Intelligence does not prevent immoral actions or decisions.

  • @fedorilitchev5092
    @fedorilitchev5092 Год назад +6

    the best videos on this topic are by Daniel Schmachtenberger, John Vervaeke and Yuval Harari - far deeper than this chat. The AI Explained channel is also excellent.

    • @amittikare7246
      @amittikare7246 Год назад +2

      I liked Daniel Schmachtenberger & Liv boree's conversation on Molloch too.

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan a year ago

    Does Ms Mitchell have any financial COI?

  • @jayl271322
    @jayl271322 a year ago +4

    So to summarise the (astonishingly glib) Con position:
    1. Nothing to see here, folks.
    2. Bias is the real existential risk in our society
    🤦🏻

    • @kreek22
      @kreek22 Год назад

      It is just about that dumb, which means LeCun (who is far from dumb) is transparently, flagrantly, floridly, flauntingly mendacious.

    • @vaevictis3612
      @vaevictis3612 a year ago +2

      @@kreek22 He just wants to roll the dice with AGI. He is like a hardcore gambler in a casino; the bad odds are fine with him. The only problem is that all of us are forced to play.

    • @kreek22
      @kreek22 Год назад

      @@vaevictis3612 There are a number of actors in the world who could drastically slow AI development. Examples include the Pentagon and the CCP, probably also the deep state official press (Ny Times, WaPo, the Economist). They are not forced to play. The rest of us are spectators.

  • @martinlutherkingjr.5582
    @martinlutherkingjr.5582 a year ago +1

    I think progress in AI is going to plateau for a while now that the mainstream view is that AI will end the world.

  • @ikotsus2448
    @ikotsus2448 a year ago +7

    "AI is going to be subservient to humans; it is going to be smarter than us."
    Ok...

    • @OlympusLaunch
      @OlympusLaunch a year ago +1

      It's delusional as hell. "It's under control and it's going to stay under control."
      Sounds like a movie.

  • @a.s.2426
    @a.s.2426 6 months ago

    The reasoning is a bit off. Subservience is not an emotion and doesn't require emotions. It's a behavioral pattern which simply requires following all instructions.

  • @k14pc
    @k14pc a year ago +4

    I thought the pro side dominated, but they apparently lost the debate according to the voting. Feels bad man

    • @adambamford5894
      @adambamford5894 a year ago +1

      It's always a challenge to win when more of the audience is on your side to begin with. The con side had a larger pool of people whose minds they could change. Agreed that the pro side was much better.

    • @runvnc208
      @runvnc208 a year ago

      That's just human psychology. People actually tend to "hunker down" in their worldview even more when they hear convincing arguments. Worldview is tied to group membership more than rationality, and there is an intrinsic tendency to retain beliefs due to the nature of cognition. So the vote change actually indicates convincing arguments by the pro side.

    • @francoissaintpierre4506
      @francoissaintpierre4506 a year ago +1

      Still 60-40, at least

    • @genegray9895
      @genegray9895 a year ago

      Honestly I think the results were within the uncertainty - i.e. no change. I kind of called that when 92% of people said they were willing to change their mind. That's 92% of people being dishonest.

    • @Hexanitrobenzene
      @Hexanitrobenzene a year ago

      @@genegray9895
      Why do you call "willingness to change your mind" dishonesty? That's exactly the wise thing to do if the arguments are convincing.

  • @jim666
    @jim666 Год назад +1

    ayy siii "we build it"... stating the risk is negligible with a "We build it bitch" argument falls down because the strength of AI progress is research, and the strength of research is openness, and openness is open source, so actually We build it. And this "We" is so heterogeneous in ethics.

  • @Jedimaster36091
    @Jedimaster36091 a year ago +5

    We don't need AGI to have existential risks. All we need is sufficiently advanced technology to manipulate us at scale, and bad actors to use it. I'd say we have both today. Even in the optimistic scenarios, where AI is used for good, the pace and scale of change would be so fast that humans wouldn't be able to adapt quickly enough and stay relevant from an economic point of view. To me, that is sufficient to destabilize human society to the point of wars and a return to medieval times.

    • @kreek22
      @kreek22 Год назад

      I think the powers of our time imagine instead a one world state. The only serious obstacle remaining is China, a country that is now falling behind in AI.

    • @vaevictis3612
      @vaevictis3612 a year ago +1

      Yes, but even if we solve that, we still have AGI approaching rapidly on the horizon. A tough ride of a century...

    • @bdc1117
      @bdc1117 a year ago

      Bingo. The existential debate isn't the most helpful. The cons are wrong that it's not an existential risk, but they're right that it can distract from immediate threats, for which they offered little comfort despite acknowledging them.

    • @martinlutherkingjr.5582
      @martinlutherkingjr.5582 a year ago +1

      We already have that; it's called Twitter and politicians.

  • @BugGenerat0r
    @BugGenerat0r a year ago +1

    Ms. Mitchell was very clear and persuasive, as were the two on the other side. LeCun comes off as an insufferable jerk.

  • @zzzaaayyynnn
    @zzzaaayyynnn a year ago +22

    LeCun is being disingenuous; Mitchell appears delusional.

    • @loopuleasa
      @loopuleasa a year ago +2

      LeCun works for Facebook, lmao
      his wallet is in this discussion

    • @zzzaaayyynnn
      @zzzaaayyynnn Год назад +1

      @@loopuleasa that explains a lot

    • @agrandesubstituicao
      @agrandesubstituicao a year ago +1

      Because he's well paid 😂