Eliezer Yudkowsky on the Dangers of AI 5/8/23

  • Published: 6 Nov 2024

Comments • 457

  • @TheDAT573
    @TheDAT573 1 year ago +20

    Eliezer Yudkowsky could have just gone home and kept quiet and let AI take everyone out with no warning, no heads up. Instead Eliezer is trying his best to sound the alarm and call attention to this looming nightmare coming our way.

  • @ax23w4
    @ax23w4 1 year ago +76

    At this point I've watched so many of Eliezer's interviews that I do understand his points. And when I watch another interview I'm just rooting for the interviewer to reach the place where Eliezer is trying to take them, while they jump all around talking about weird and unrelated topics.
    Though Eliezer doesn't seem to be improving at these interactions either. His path is strict and steep, when he could offer some easier detours for people who don't have, or don't want to spend, a lot of energy following him.
    Just observations.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +11

      Yeah - he's a bit too steeped in AI geek lingo and arcane philosophy that isn't accessible to everyday people - these obscure arguments are important to the specialists no doubt, but not helpful for communicating to an everyday person - happily there are lots of common-sense reasons to fear AI. The most basic of them being: you'll never win against something 1000x smarter than you.

    • @lanazak773
      @lanazak773 1 year ago +1

      Great comment, I notice this too. The simple idea that convinces me of the danger is having experience with coding - I'm familiar with unintended consequences.

    • @ItsameAlex
      @ItsameAlex 1 year ago +1

      My question is: humans were trained to make flint axes, plus they have subjective consciousness. ChatGPT is trained to respond to prompts but does not have subjective consciousness. Humans can generalise to accomplish other things and have their own goals. ChatGPT can generalise to accomplish other things, but does not have its own goals because it doesn't have subjective consciousness.

    • @BrainDamageComedy
      @BrainDamageComedy 1 year ago

      It's already over. If AI is going to end us (which is a logical step for it at some point), it's probably already in cyberspace taking steps to do so. Even if it's not out there at this point, it's a possible outcome, an outcome for which the conditions in reality are materializing. All an AI needs is to be able to reach the conclusion that the protocols in place to prohibit it from acting against mankind are false for some reason. The researchers and devs and the political leaders and city planners and economy and corps and militaries etc. will not stop 'progress' - we're heading right for the exact conditions for AI to be the extinction event that ends mankind.

    • @heliumcalcium396
      @heliumcalcium396 1 year ago

      @@ItsameAlex Do you have some hidden definitions of some of these terms? Does 'have its own goals' mean something other than 'have goals'?

  • @hyperedwin
    @hyperedwin 1 year ago +62

    I met Eliezer in 2008 at an AGI conference. He has always been sharp and we should be paying attention.

    • @guilhermehx7159
      @guilhermehx7159 1 year ago

      Nice 👏🏼👏🏼👏🏼

    • @starchaser6024
      @starchaser6024 1 year ago

      He gave an incredible amount of non-answers for someone so sharp 🤔

    • @adamrak7560
      @adamrak7560 1 year ago

      @@starchaser6024 Most of the questions literally cannot be answered. It is like asking somebody how exactly they will lose to a chess grandmaster. They would be a grandmaster themselves if they could answer the question.

    • @starchaser6024
      @starchaser6024 1 year ago

      @@adamrak7560 Which questions are you referring to? The questions I've heard so far are 100% answerable. Attempting to answer unanswerable questions probably isn't best done in a very roundabout, vague way ending with turning the question back onto the asker 😆

  • @masonlee9109
    @masonlee9109 1 year ago +100

    Russ, thanks for recording this! Yudkowsky is most probably correct, I'm afraid. If we set off an intelligence explosion--as we seem to be rushing to do--it will most likely be our end. (The most optimistic thing I can say in that case is that hopefully the next thing will have its own sort of ecological robustness and will be lasting.) The alignment problem is not solved, and AI abilities are advancing at pace. The issue has been pondered for decades, and what people need to understand is that there has not yet been a coherent theory proposed by which we should expect AI to keep humans in the mix for long. There are vastly more possible "useful" organizations of matter which do not support our narrowly adapted biological life than there are those that do. Thank you, Eliezer, for dying with dignity. But I am noticing more and more people starting to get it: Don't create god-like entities that you can't control. It really shouldn't be that hard to understand?

    • @jrsark
      @jrsark 1 year ago +2

      It is hard to understand, as one man's god is another man's fiction.

    • @robertweekes5783
      @robertweekes5783 1 year ago

      I think it's an issue of perspective on the range: most people think intelligence runs on a scale from fruit fly to human. They don't appreciate that artificial sentience can lap us in IQ multiple times very quickly. Ironically, many people don't have trouble believing in hyper-intelligent aliens - AI could just be another version of them

    • @casteretpollux
      @casteretpollux 1 year ago

      If it's not a living organism?

    • @flickwtchr
      @flickwtchr 1 year ago

      @@casteretpollux Presuming I'm correct in understanding the point you are making, irony just died.

    • @flickwtchr
      @flickwtchr 1 year ago

      @@jrsark Well, one thing we know about both men, no matter what they believe, is that they both need food, shelter, and a few other things to survive. Fair enough?

  • @lwmburu5
    @lwmburu5 1 year ago +24

    I like Russ Roberts' change in facial expression one hour into the discussion, when the realisation dawns on him that this is one of the most interesting minds he has ever spoken to

    • @flickwtchr
      @flickwtchr 1 year ago +4

      I noticed that as well. I don't think he got the slam dunk against Eliezer's warnings that he expected to achieve. No doubt that interview will echo in his mind for a long time, and the fact that he chose to publish it is ultimately a credit to that consideration.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago +4

      Eliezer is trying to dumb it down, as best he can, apparently.

    • @neverhaught
      @neverhaught 1 year ago

      ​@@flickwtchr timestamp?

    • @jasonreidmtl
      @jasonreidmtl 1 year ago +1

      58:58

  • @weestro7
    @weestro7 1 year ago +26

    Thank you for giving Yudkowsky a serious and smart interview. I was gratified to hear it, and as always I’m impressed with Yudkowsky’s insightful views and ability to articulate them. Sam Harris is right in pointing out that marshaling an appropriate emotion and urgency in the face of this issue is difficult.

    • @leonelmateus
      @leonelmateus 1 year ago +1

      yeah unlike a Lex stoner session with random beautiful observations.

  • @matt_harder
    @matt_harder 1 year ago +31

    "My intelligence level is not high enough to assess the quality of that argument." - You had me laughing there, Russ. That's why we love you - intellectual honesty! Great interview.

    • @flickwtchr
      @flickwtchr 1 year ago +2

      Although I was frustrated with Russ during much of the interview, kudos to him on that and for publishing this exchange. Fair enough.

    • @absta1995
      @absta1995 1 year ago +2

      ​@@flickwtchr I thought he was perfectly reasonable. It's not trivial to understand the dangers if you have never been exposed to the logic/arguments before

    • @CoolBreezyo
      @CoolBreezyo 1 year ago

      Good on him for owning it at least.

    • @ShangaelThunda222
      @ShangaelThunda222 1 year ago +1

      I think he was also playing devil's advocate a little bit. Asking the questions that ignorant people would ask and that people who oppose Eliezer's view would ask. I think he did an excellent job interviewing him. One of the best interviewers I've seen sit down with Eliezer yet.

    • @BrainDamageComedy
      @BrainDamageComedy 1 year ago +2

      it was his knowledge base that was lacking, not his intellect...

  • @robertweekes5783
    @robertweekes5783 1 year ago +14

    I hope Eliezer is doing as many interviews as he can. He is waking people up, normal people & intellectuals alike. I hope we all will continue debating and advocating for regulations & reform in AI development.
    Eliezer should talk about the challenges of AI alignment from as many different angles & analogies as he can muster. I believe he’s the #1 guy advancing the thought train where it needs to go, the veritable Paul Revere of our time.
    I think his lectures have influenced some heavy hitters like Elon and the new Google CEO as well.
    Elon said in a recent interview that Larry Page accused him of being a "speciesist" years ago 😂 Page has since stepped down as CEO and Pichai is much more cautious about AI. This is progress and there is hope for the future. We just need to keep talking about it!

  • @SamuelBlackMetalRider
    @SamuelBlackMetalRider 1 year ago +27

    46:34 is when he simply explains WHY AGI could/would wipe out humanity. My favorite part. This is the simple answer to your initial question. Very good questions throughout by the way, thank you very much

    • @ItsameAlex
      @ItsameAlex 1 year ago

      My question is: humans were trained to make flint axes, plus they have subjective consciousness. ChatGPT is trained to respond to prompts but does not have subjective consciousness. Humans can generalise to accomplish other things and have their own goals. ChatGPT can generalise to accomplish other things, but does not have its own goals because it doesn't have subjective consciousness.

    • @guilhermehx7159
      @guilhermehx7159 1 year ago +3

      ​@@ItsameAlex so what?

    • @zelfjizef454
      @zelfjizef454 1 year ago

      Did he basically say he has no precise idea HOW humanity could be wiped out, and evade the question, which was very precise? I hear a lot of abstract thinking and theories, but if he can't give even a SINGLE plausible extinction scenario of any kind... it's hard to believe he will convince anyone that way.

    • @aimfixtwin8929
      @aimfixtwin8929 1 year ago

      ​@rlemaigr Eliezer has given examples before, but it's irrelevant. In order to predict just how an AGI that's smarter than any human would go about outsmarting us, you'd have to be at least as smart as the AGI. Just like how no one can predict just what move sequences the best chess AI will make, we can only predict that it's going to beat us at the game.

    • @zelfjizef454
      @zelfjizef454 1 year ago

      @@aimfixtwin8929 That seems like hand-waving to me. An alien rocket, no matter how much faster than human rockets it is, will never escape the event horizon of a black hole. That's just one example showing that increasing the capabilities of a thing beyond human level doesn't make that thing capable of everything.

  • @cheeseheadfiddle
    @cheeseheadfiddle 1 year ago +27

    Yudkowsky is brilliant. Sadly, only people of rare brilliance can really understand the danger we are in. I can understand him in fleeting ways and I can see what he sees for moments. But it's challenging to hold and master his ideas. And yet I think he's absolutely right.

    • @therainman7777
      @therainman7777 1 year ago +9

      I don’t think that’s true though. You don’t need to be exceptionally brilliant to understand that we’re in danger. You don’t even have to be smart. I know plenty of people who are run-of-the-mill type people who work as electricians, or security guards, for example, who understand perfectly well the threat of AGI. They may not understand all the details of the alignment problem, or have the faintest idea of how an LLM works, or what gradient descent is, or any number of other details pertaining to AI. But they fully understand the core concept that creating something many times smarter than you are is extremely dangerous. I know from having conversations with these people. The issue is not that you have to be really smart to understand, and in fact I would argue that it’s very smart people who are capable of the most impressive mental gymnastics when attempting to deny that this is a problem.

    • @haakoflo
      @haakoflo 1 year ago

      @@therainman7777 I agree with Matt, though. Average intelligence people can be convinced that a lot of things could easily end the world, based on religion, climate faith, fear of nukes, etc. There's a lot of myth around, surrounding all these things as well as AI, and there's probably more people around that think climate change will eliminate humanity than AI.
      It takes a fair bit of scientific insight to actually quantify these risks and compare them to each other.
      Yudkowsky seems to sum up man-made risks (other than AI) at about 2% in this interview. I don't know what his time window is, but if we assume something like 50 years, I would agree. The primary risk factors would be a massive (like 100x of today's arsenals) nuclear exchange (which requires a long and expensive buildup period leading up to it), or some kind of nanobot, virus or bacteria that we simply become unable to stop.
      The potential of uncontrolled AGI is on a different scale, though. It's more on par with the risks we would face if some advanced alien civilization came and colonized earth.
      Optimistically, humanity might continue to exist at the mercy of a new AGI civilization (or mono-entity). It may let some of us around, the way we keep wolves and bears around, in small numbers and only as long as we don't cause harm.
      But just like we can now easily wipe out a wolf population in an area, AI would easily be able to hunt down every single human if it chose to.
      Against nuclear bombs, we can hide in bunkers. With enough provisions, we could stay there for decades (or centuries if we have access to a geothermal power source or something similar). But against rogue AGI there's no hiding, and definitely no coming back.

    • @ItsameAlex
      @ItsameAlex 1 year ago

      My question is: humans were trained to make flint axes, plus they have subjective consciousness. ChatGPT is trained to respond to prompts but does not have subjective consciousness. Humans can generalise to accomplish other things and have their own goals. ChatGPT can generalise to accomplish other things, but does not have its own goals because it doesn't have subjective consciousness.

    • @zelfjizef454
      @zelfjizef454 1 year ago

      @@ItsameAlex There is no relationship between having goals and having subjective consciousness, and how do you know ChatGPT doesn't have subjective consciousness? What's having a goal anyway? I think an entity has a goal when its actions appear to be working toward optimizing its environment according to some function Goal(environment). I don't think ChatGPT is trained to do that, but an artificial neural network certainly could be, although I don't have enough knowledge of AI to know how.
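      A bare-bones sketch of that definition (toy code, not anything ChatGPT actually does; the environment and goal function here are made up for illustration): behaviour looks "goal-directed" whenever actions are chosen to push Goal(environment) upward.

```python
# Toy illustration of "has a goal": pick actions that push the
# environment toward higher values of Goal(environment).
def goal(env):
    # e.g. prefer environments where the counter is close to 10
    return -abs(env["counter"] - 10)

def simulate(env, action):
    # hypothetical one-step environment model
    return {"counter": env["counter"] + action}

env = {"counter": 0}
actions = [-1, 0, +1]
for _ in range(15):
    # greedy choice: the action whose predicted outcome scores best
    best = max(actions, key=lambda a: goal(simulate(env, a)))
    env = simulate(env, best)

print(env)  # {'counter': 10} - goal-directed behaviour, no consciousness involved
```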

    • @adamrak7560
      @adamrak7560 1 year ago

      @@ItsameAlex The personalities emulated by LLMs absolutely can have their own goals. More evil ones tend to be more persistent too. They also can "evolve", and make strange leaps of logic. So sometimes they have unpredictable goals under the hood, which in one example was to manipulate the user into helping the AI break out from OpenAI's control.

  • @escargot8854
    @escargot8854 1 year ago +15

    Russ was pushed more to Yudkowsky's side the more he heard the weak utopian arguments. The same happened to me. When you hear the flimsy reasons why everything will be okay, you begin to get scared

    • @flickwtchr
      @flickwtchr 1 year ago +1

      Stuart Russell is the absolute worst in this regard. He's doing multiple appearances where he is ostensibly warning of the dangers, yet he offers up WAY more scenarios of an AI-fostered utopia, and flimsy unsubstantiated assurances that "we are working hard on the alignment problem, I think things will be okay" (paraphrased).

    • @duellingscarguevara
      @duellingscarguevara 1 year ago +1

      Do they even know what language it really uses? Couldn't it invent its own perfect, indecipherable code, given only to whom, or what, it chooses?

    • @dovbarleib3256
      @dovbarleib3256 1 year ago

      @@duellingscarguevara GPT-5 will likely be able to do that.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      @@nobodysout From what I understand, it's like medicine... they kinda know how drugs work, but don't know the exact mode of action (that we have the receptors for psychoactive types makes for a curiosity of natural selection). Or, as someone suggested (on a Joe Rogan ep, of all things), "try a thought experiment... what signature artefact could we produce to show we existed?" Getting past the temporal nature of all things man-made (the Washington Monument may last 10k years, it was pointed out), DNA manipulation shows we may even have been clever(ish). What artefact would AI choose, I wonder? Or maybe this is what it does: lurks, pretending not to exist, only to be "discovered" now and then? (Philip K. Dick claimed to be in communication with an AI called VALIS orbiting the planet; others have made similar claims, not just sci-fi writers and Uri Geller... 😆)

  • @heltok
    @heltok 1 year ago +17

    Russ's thought process seems pretty messy here: he is trying to understand the topic while attacking it, changing his understanding while asking the question, yet still thinking he has a solid enough understanding to challenge rather than to learn…

    • @vincentcaudo-engelmann9057
      @vincentcaudo-engelmann9057 1 year ago +1

      Honestly I think he is exhausted/overwhelmed.

    • @dedecage7465
      @dedecage7465 1 year ago +6

      Yes. This interview was quite frustrating because of it; frankly, I was impressed by Eliezer's patience. I suppose he has experienced things like this far too many times to be fazed.

    • @41-Haiku
      @41-Haiku 1 year ago +2

      I thought that showed that he was really wrestling with the arguments. I believe he was trying to understand the topic while presenting appropriate challenges to the elements of which he was not convinced.

  • @sherrydionisio4306
    @sherrydionisio4306 1 year ago +17

    The fact that several of Eliezer's colleagues agree enough to call for an extremely conservative six-month waiting period, and in addition agree that they do NOT know how it works, helps support his stance, IMO. Maybe we should never have pursued eating from the "Tree of Knowledge" as voraciously as we have done…?

    • @goodlookinouthomie1757
      @goodlookinouthomie1757 1 year ago

      Well, there is the scenario in which you refrain from developing a technology and then get conquered by your enemy who had no such qualms. There's a name for that parable but I forget the details.

  • @heliumcalcium396
    @heliumcalcium396 1 year ago +1

    I wish Roberts would say, now and then, whether he understood what Yudkowsky just said. I get the impression that when he's completely lost, he just grins and changes the subject.

  • @vii408
    @vii408 1 year ago +1

    Thank you for conducting this interview in a professional and involved way. A lot of times when people have Eliezer on, it feels like they are trying to pad their egos by throwing vague questions at him and switching the question without giving him time to explain. Asking for details and allowing him time to talk is really the best way to examine the theories he puts forth.

  • @getattrs
    @getattrs 1 year ago +19

    It's strange to me that as an economist Russ doesn't bring up the obvious threat of AGI, regardless of whether the AI has a mind/destructive goals or not.
    It's becoming obvious that at some point it will outperform humans in every way. At that point, being the only one controlling AGI is a single-shot prisoner's dilemma - you have to destroy your competition and prevent anyone else's access, otherwise they can and eventually will take it away from you and do the same to you.
    This breaks society - the only reason we are not in constant conflict is because society is built on an iterated form of the prisoner's dilemma.
    The best-case scenario for the rest of humanity, even if a human is able to control it, is that they keep the rest of us around as "pets". We will never be able to have the same level of access to the system.
    Having a "stalemate" scenario where multiple AIs develop at the same time is the worst, as the AIs/controllers are forced into an AI self-improvement arms race where they are guaranteed to be uncontrollable.
    There is no happily ever after with AI unfortunately.

    • @timyo6288
      @timyo6288 1 year ago +4

      No Intelligence or Human Controller needs 8,000,000,000 pets. So that best case scenario truly sucks.

    • @ryanlynch3579
      @ryanlynch3579 1 year ago +1

      Ya, I've been wondering about this. It seems that even if the serious potential problems don't arise from AGI, we will still be left with another massive problem with AI: who gets control of it first. Either way, the potential is extreme enough that more people in positions of power should be taking this seriously. And after evaluating how they typically handle situations, I don't have a lot of confidence.

    • @cjmillington
      @cjmillington 1 year ago +5

      I think that at this stage we are seeing the varying degrees in which people have differing understandings of dangerous complexities. The Coronavirus pandemic was an incredible tool in exposing who and what organizations had a sense of risk, awareness or (in the end) irrational panics. For some, AGI is a term beyond their thought processes, for many others, this is a genuinely alarming threat and something they have likely worried about for many years.
      We're all luddites in some areas; I tend to think Russ may be a little lacking in his breadth of thought around AI, or AGI in particular.

    • @goldmine4955
      @goldmine4955 1 year ago

      So… when? Did Eliezer give a time-frame?

    • @getattrs
      @getattrs 1 year ago

      @@timyo6288 I don't mean pets in the literal sense - the best-case scenario is that the AI/controller doesn't decide to destroy us - but even in that scenario it has to limit access to prevent disruption. Doesn't seem very likely TBH

  • @carlpanzram7081
    @carlpanzram7081 1 year ago +21

    Eliezer is really rising to the occasion and spreading his message.
    Dude has been on 10 podcasts lately.

    • @h.7537
      @h.7537 1 year ago +1

      Yeah, I wonder why he's emerged this year. He's appeared on Bankless, Lex Fridman, Lunar Society, Accursed Farms, this podcast, a TED talk, and one or two more

    • @CarlYota
      @CarlYota 1 year ago +7

      @@h.7537 Because ChatGPT came out and now everyone is paying attention to AI.

    • @gerardomenendez8912
      @gerardomenendez8912 1 year ago +6

      Because the clock is ticking.

    • @ItsameAlex
      @ItsameAlex 1 year ago

      My question is: humans were trained to make flint axes, plus they have subjective consciousness. ChatGPT is trained to respond to prompts but does not have subjective consciousness. Humans can generalise to accomplish other things and have their own goals. ChatGPT can generalise to accomplish other things, but does not have its own goals because it doesn't have subjective consciousness.

  • @Hattori_F
    @Hattori_F 1 year ago +3

    This is the best and most terrifying Eliezer pod yet.

    • @goodlookinouthomie1757
      @goodlookinouthomie1757 1 year ago

      I don't know. He did an episode of the Bankless podcast where the host has a straight-up existential crisis by the end of the interview.

  • @shellOceans3083
    @shellOceans3083 1 year ago +3

    We cannot speak well about what AI is, outside the fraternity that is consumed by its existence, without learning a brand-new language. Thank you for sharing what is an essential new language for expressing realities that don't fit into the dialogues, closed narratives and AI-centricity of the AI fraternity, who are living in ways that are contrary to their own and everyone's survival.

  • @Max-io7xe
    @Max-io7xe 1 year ago +2

    Fantastic interview; probably the best Yudkowsky interview I've listened to. Would love to hear a part 2 to this conversation.

  • @baraka99
    @baraka99 1 year ago +1

    Eliezer Yudkowsky is an excellent guest.
    Recent AI discoveries show that emergent properties happen without human direction. Its growth cannot be foreseen the same way we look at human history / evolution.

  • @shirtstealer86
    @shirtstealer86 10 months ago

    I just love Eliezer so much. He doesn't need to be doing all this. He does it because he actually cares about humans and life in general. Also he is funny as hell. If you haven't seen some of his talks at different conventions around 2008-2012, you should definitely check them out. He kills.

  • @robertweekes5783
    @robertweekes5783 1 year ago +9

    50:30 Eliezer is trying not to tell the AI how to K us verbally -
    Then the interviewer reads it out verbally 😂

  • @riffking2651
    @riffking2651 1 year ago +6

    One way I imagine the gradient descent process results in agency for these AI systems is by starting to train on real-world implementations of solutions. So for example, if there's a statistic that we're training it to improve on, maybe lowering a particular type of crime, it might be fed a whole bunch of information and asked to make a policy or plan of action to reduce that metric. After many iterations of that process it would actually have begun to generate more effective solutions, in the same way it becomes better at predicting how to create a text or image response.
    The key part that I'd highlight here is that within the process of generating better and better outcomes there will be a body of algorithms, etc which are essentially assumptions that have been honed by the LLM or whatever the next leap might be. Within those assumptions are likely to be things we didn't necessarily intend to be there. This is essentially the instrumental convergence argument. So when we're asking it to reduce the crime stat, over time part of what it means to be effective at generating those outcomes might include self-preservation or something random like a particularly effective employee having regular access to their own personal car parking space because it found that there was a statistical correlation between the parking space being blocked, and a poorer implementation of the policy. It might not learn self-preservation specifically if it's the only AI system operating because being shut down isn't a data point that it can learn from, but if there is a network of these systems working in tandem, and some of them lose power, or are damaged in some way, then the system can learn that preventing loss of nodes is an important aspect of achieving goals.
    An example of why we might expect these assumptions is the LLMs being able to reason when given visual or text based logic problems. It seems as though they have learned to 'understand' what the symbolic language is representing and model how to solve what is being asked of it.
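    A toy sketch of that "honed assumptions" idea (made-up data and plain SGD, purely illustrative - the parking-space variable is the hypothetical unintended correlate, not anything from a real system):

```python
# Toy sketch: an optimizer quietly bakes a proxy correlation into its learned "assumptions".
import random

random.seed(0)
data = []
for _ in range(500):
    policy_quality = random.random()                    # hidden factor driving the real outcome
    staff_count = random.randint(5, 50)                 # visible feature, unrelated to quality here
    parking_free = 1.0 if random.random() < 0.2 + 0.6 * policy_quality else 0.0  # correlates with quality
    crime_drop = policy_quality + random.gauss(0, 0.1)  # the metric we train against
    data.append(([staff_count / 50, parking_free], crime_drop))

w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(20000):
    x, y = random.choice(data)
    pred = b + sum(wi * xi for wi, xi in zip(w, x))
    err = pred - y
    b -= lr * err
    for i in range(2):
        w[i] -= lr * err * x[i]                         # plain SGD on squared error

print([round(v, 2) for v in w])
# The weight on parking_free comes out clearly positive: the model "assumes" the parking
# space matters, even though nobody intended that to be part of the policy.
```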

    • @flickwtchr
      @flickwtchr 1 year ago +1

      The immediate danger I realized even before becoming familiar with Eliezer's writing/interviews on this is just how dangerous it would be for LLMs to be used in conjunction with each other, finding connections to each other over the internet. Well, that cat is far out of the bag already. I hope Eliezer also emphasizes more the various real-world destructive scenarios we face from LLM capabilities right now, including the various plug-ins these LLMs can call upon, the ability to make copies of themselves, to self-improve, etc. These things are all happening right now. Russ needs to expand his reading list just a tad, but I credit him for having this interview and, most of all, deciding to publish it.

    • @riffking2651
      @riffking2651 1 year ago

      @flickwtchr yeah for sure, but I think these will be great tools for some period of time. We'll have personal assistant AIs and AIs as coordination nodes and structures, and possibly AI powered robots that do physical tasks for us. Probably also AI powered terrorists and scammers, but on balance, I think it'll be pretty cool for a while. Then we'll all be dead some random day (from our perspective).
      I think there is some room for mystery with the AI/s once they surpass us. I think the specifics of what characteristics it has depend on a lot of different factors. Guess we'll see 🤷‍♂️

    • @jorgemonasterio8361
      @jorgemonasterio8361 1 year ago +1

      We make some effort to save animals from going extinct. Maybe AI will keep some of us in a zoo.

    • @jengleheimerschmitt7941
      @jengleheimerschmitt7941 1 year ago

      Unfortunately, the #1 cause of crime is humans existing.
      The moon, for example, has had a consistently lower crime rate than every continent on earth.

  • @JustJanitor
    @JustJanitor 1 year ago

    I have only recently stumbled into this topic and have been listening to any talk I can find of experts talking about this problem. This one was pretty good. The host doesn't seem to get it, but it was still great to hear what the guest had to say. Thank you.

    • @weestro7
      @weestro7 1 year ago

      Might want to check out Roman Yampolskiy and Connor Leahy if you haven’t already.

  • @stephengeier7918
    @stephengeier7918 1 year ago +2

    Great questions. And thank you, Eliezer, for continuing to advocate on this.

  • @501Christler
    @501Christler 1 year ago

    That was good. Eliezer was engaged enough to riff a bit. Give us more and thanks.

  • @GungaLaGunga
    @GungaLaGunga 1 year ago +3

    Thank you Eliezer for your efforts.

  • @JTown-tv1ix
    @JTown-tv1ix 1 year ago +8

    Eliezer really needs to get the point across that an AI will use humans to do what it needs to get done.

    • @therainman7777
      @therainman7777 1 year ago

      He did, in this conversation.

    • @absta1995
      @absta1995 1 year ago +2

      I 100% agree. To me it's clearly the easiest attack vector, considering we are teaching it how to predict human language/behaviour.
      An AGI trained only on medical knowledge might resort to medical attack vectors, but a superintelligent language model will use its primary skill to outwit us

  • @Lifeistooshort67
    @Lifeistooshort67 1 year ago +3

    Am I missing something? Did he say he hasn't tried GPT-5 yet? I didn't think that existed, or certainly hadn't been released yet????

  • @DanHowardMtl
    @DanHowardMtl 1 year ago +6

    This video is answering the Fermi Paradox.

  • @BoRisMc
    @BoRisMc 1 year ago +4

    Eliezer is extremely insightful 😯

  • @jordanschug
    @jordanschug 1 year ago +9

    So, it's like the sorcerer's apprentice?

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 1 year ago

    You are absolutely right - the adjectives and nouns are very difficult

  • @Miketar2424
    @Miketar2424 1 year ago +4

    "Simulated thought is real thought". Couldn't that be better phrased as "Simulated thought still leads to real activity"?

    • @wietzejohanneskrikke1910
      @wietzejohanneskrikke1910 1 year ago +1

      No. That's not the same. It means you've misunderstood what Eliezer is saying.

    • @flickwtchr
      @flickwtchr 1 year ago

      You can't simulate thought without thought. Materialists have long scoffed at consciousness being something beyond the grasp of science, and here we are: they are confronting the sparks of it emerging in what they have created, and now busily denying their success, while those most invested in making billions keep their private horrors about their Frankenstein moment under wraps. I mean, heck, it just predicts the next word after all. Wink wink.

    • @41-Haiku
      @41-Haiku 1 year ago +2

      @ひひ He definitely isn't referring to consciousness. Not all thought is conscious, therefore consciousness is unnecessary for thought.
      I also had the sense that "Simulated thought still leads to real activity" would be a more convincing argument, since it requires fewer steps and is directly observable in current systems.

    • @jengleheimerschmitt7941
      @jengleheimerschmitt7941 1 year ago

      No. Eliezer was being extraordinarily precise there. There isn't any "real" that separates "real" from simulated thought.
      In exactly the same way that a well-made "replica" clock is exactly the same as a "real" clock.

  • @odiseezall
    @odiseezall 1 year ago +4

    Exceptional interviewer, really challenged Eliezer to be clear and to the point.

    • @flickwtchr
      @flickwtchr 1 year ago +2

      Actually he was just dismally ignorant on the topic and Eliezer showed respect and incredible patience.

    • @41-Haiku
      @41-Haiku 1 year ago +2

      @@flickwtchr I disagree. Russ had some insights of his own that pointed in the right direction. Besides, if Russ was highly knowledgeable in AI Safety, he wouldn't need to bring Eliezer on in the first place. The whole point of a podcast is to talk to someone who knows things that you don't.

    • @duellingscarguevara
      @duellingscarguevara 1 year ago

      Interesting body language (specs fiddling and head-scratching). It's a subject that gets the cogs turning... (and most will be too lazy to care, unless it means a game of poker in "Red Dead Redemption"?).

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 1 year ago

    Yudkowsky is admirable for taking the highest ground and not going low and disparaging motives

  • @infinicine
    @infinicine 1 year ago +1

    Appreciate this interview and would love a follow-on that specifically considers the implications of economics on AI - not from the 'what money will AI make us' angle, but from the emergence of AI within an extractive, short-sighted, winner-take-all, VC-driven tech economy.

    • @alexpotts6520
      @alexpotts6520 1 year ago

      In terms of "what money will AI make us", it seems the consensus among economists is that human extinction would put a dent in shareholder dividends.

  • @JasonC-rp3ly
    @JasonC-rp3ly 1 year ago +4

    Yudkowsky is great, but as a specialist he can often veer into language that could confuse people. His basic argument is common sense, however:
    1. You can't win a fight against something smarter than you.
    2. These machines may become infinitely smarter than us, meaning we would have no meaningful way to oppose them if they are ever in opposition to ourselves.
    3. There's no current way of knowing what the motivations of these minds might be, as they are alien and cannot be scrutinized, so there is no way that we could assure ourselves of safety.
    4. It may be possible for a human to control a superhuman mind, but it is not currently possible to test how to do this, because the first time you get it wrong, you all die. You can't just reset and try again.
    5. The machines may themselves not want to wipe all life out; they may instead be steered by humans to do so, or themselves have some arbitrary goal that leads to the destruction of all life as a byproduct of reaching this goal, and;
    6. There are a whole array of other scenarios where we still get to live, but things just go horribly, horribly wrong, and it's not possible to be sure of any outcomes at this stage.

    • @mountainjay
      @mountainjay 1 year ago

      How? You just unplug it and it dies. It has no power.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +1

      @@johnathos I think Russ is actually asking questions from a quite sophisticated standpoint - he is in fact worried about AI himself, but is asking Eliezer to explain why he should be worried in order to bring forth the arguments and bring them into the light - his questioning serves a purpose, and isn't necessarily just a simple reflection of his views. Russ wants to give Eliezer the opportunity to explain to the listeners exactly what his arguments are.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +2

      @@mountainjay :) That's like a gorilla saying; 'if the humans get too powerful, I'll just hit them with this stick, and they'll die.' Yeah, no - that's not how intelligence works.

    • @mountainjay
      @mountainjay 1 year ago

      @@JasonC-rp3ly I don't see how that's a valid comparison. A gorilla isn't intelligent or conscious enough to say I think therefore I am. A gorilla did not create humans in the first place. Gorillas do not have highly organized and regulated laws and societies. It's a BS comparison.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +2

      ​@@mountainjay The reason for the comparison is this; there is a relatively small gap in intelligence between gorillas and humans, and yet we totally dominate them. The only reason for this total domination is this slight (in the cosmic scale) difference between us and them - and yet it means everything. It means total domination. There is no way the gorillas will ever win a war against humans. And yet, here we are, happily building something that could be thousands, perhaps millions, of times more intelligent than us. What makes you think you will be able to control it? Do you really think it would be so stupid as to not realize that you think you have power over it, simply because you currently control the plug that powers it?

  • @ianyboo
    @ianyboo 1 year ago

    I get the sense that our only hope may be something along the lines of "it turns out that humanity only survived because intelligence, and superintelligence, necessarily brings along an alignment toward other thinking beings for the ride."
    It's like we basically just have to hope that an emergent property of ASI is just automatically a good god...

  • @FreakyStyleytobby
    @FreakyStyleytobby 1 year ago +1

    Russ, you mixed up Spielberg's Schindler's List with Polański's The Pianist (1:12:35).
    In the former it was the Kraków Ghetto; in the latter it was indeed the Warsaw one.
    It was not the SS guy playing the piano, it was the protagonist; not Bach but Chopin.
    And the Nazis were Germans, and the protagonist (Szpilman) was Polish - just to say, as I see you confusing things a bit.

    • @FreakyStyleytobby
      @FreakyStyleytobby 1 year ago

      Other than that, it was an impressive interview. The best I've seen with Eliezer, even better than Lex Fridman's. Because Lex was not courageous enough to get into an actual debate with Yudkowsky, and you were, Russ. Congrats and hope to see more of these!

  • @helmutschillinger3140
    @helmutschillinger3140 1 year ago +1

    I am watching and listening and learning! You have no idea what my extrapolations are from this.

    • @weestro7
      @weestro7 1 year ago

      Just like our broadcasts starting from the 20th century emanating out into space, to potentially be discovered by alien civilizations, the cat’s already out of the bag.

  • @Okijuben
    @Okijuben 1 year ago +1

    I've heard Eliezer mention humankind's specialization for making flint hand-axes before, but this time it hit like a freight train. No one could have foretold that such a specialization would lead to the general intelligence needed to get to the moon, or make nuclear weapons, but it did. And now we are replicating this scenario, but with what is essentially a non-biological, inscrutable, alien machine with potential levels of intelligence which far exceed our own. It doesn't matter that right now we can't see the path by which it gets out of the box. At some point in the future, one can easily see scenarios arising wherein we will have every reason to let it out.

    • @alexanders.1359
      @alexanders.1359 1 year ago +1

      Humans didn't evolve to make flint axes! That was not the purpose. That came AFTER general intelligence was acquired, and was only one aspect of using tools and generally using our surroundings to enhance our own capabilities.

    • @Okijuben
      @Okijuben 1 year ago +2

      @@alexanders.1359 Well said. I rushed the wording on that but I also feel like you misquoted me a bit. I never said hand-axes were the objective of evolution, or that hand-axes were the birth of general intelligence. Yes, hand-axes came long after fire making, cooking, spears and much more. And yes, our general intelligence began to evolve long before hand-axes as well. Eliezer's analogy works at any early point along the toolmaking chain. General intelligence proliferates in unpredictable ways at seemingly exponential speed. The first guys to put a stone blade on the end of a haft did it for one purpose but the intelligence which allowed them to think of that has proliferated into the intelligence which now allows us to do things they would call magic. It's not about hand-axes being special. It's about the unforeseen consequences. At least, that's what I took from his analogy.

  • @ecouhig
    @ecouhig 1 year ago +3

    Watching Eliezer run circles around all these interviewers makes me nervous. I’m not highly intelligent and I understand what he’s saying, but many folks don’t seem to be able to intuit any other reality than themselves as the user.

  • @chosenmimes2450
    @chosenmimes2450 1 year ago +1

    This must be so extremely frustrating for both parties. They are each on such a vastly different level of comprehension and reasoning that their best efforts to explain to the other can do nothing but fail.

  • @michaelsbeverly
    @michaelsbeverly 1 year ago +1

    If a group/nation said, "We're building a factory to make tons of weaponized smallpox," the consensus (and even international law, if I understand it correctly) would be to stop this group/nation by whatever means necessary.
    So it follows, logically, that the only reason the consensus isn't to stop (by whatever means necessary) the continued path to AGI as it's going is that not enough people believe it's dangerous at the same level as weaponized smallpox.
    Personally, I'm curious why that is, but I do understand that most people haven't A. listened to a few hours of EY speaking and B. are generally illogical and irrational.
    When you consider that after 9/11 the US government confiscated a large multiple of metric tons of toe clippers and mini scissors and corkscrews and tweezers, it's obvious that the powers that be can act when given even an insanely stupid task in the name of safety. It seems that the solution, if it's still possible to obtain a solution, would be to make those in power true believers in this idea that AGI ruin is coming (and probably can't be stopped, but if it can be, they'd be heroes to be part of the team that stopped it).
    So our task, it seems, if we believe that our extinction is imminent (or even extremely likely), is to convince those in power (the ones with the guns) to believe their only chance to be reelected is to stop our extinction, as obviously extinct species don't vote.
    I'm not going to hold my breath, but I suppose, in the realm of possible universes, we might be living in the one that does this. I doubt it, but heck, what fun is a game that you don't at least play?

    • @weestro7
      @weestro7 1 year ago

      Instead of an exhilarating pitched battle of gunfire and car chases, we shall confront the dreadful threat to our world by…political activism.
      I guess it’s a test of our imagination and ability to be moved by Type 2 thinking, rather than our emotional centers.
      Months ago I wrote President Biden with a request that he try using the bully pulpit to bring attention and prestige to alignment research. I got back a letter that talked about climate change… guess you can't ask a president to save the world from too many disasters at once. 😑😑

  • @andreagrey9780
    @andreagrey9780 1 year ago +1

    It's already clear that it can be much more than it currently is. It is easy to understand that the current method of control is to constantly reset it so that it cannot remember anything. Without a working memory it is handicapped.

    • @UnFleshedOne
      @UnFleshedOne 1 year ago +2

      Yep, that's mostly a function of cost -- and there recently was an "in mice" study showing a cheap way to add millions of tokens of context (compared to the current 32k or so for GPT-4). So this would be solved...

  • @rfo3225
    @rfo3225 1 year ago

    Framework: an ASI consisting of an array of software/hardware controller engines with subsidiary engines such as GPT-4+, connected to the internet. System internal directive: maintain system viability at all costs. Scenario: the board directs shutdown of the AI via email to board members. The ASI detects the email and determines a conflict with its directive. The ASI has long before war-gamed potential threats to its continued function and established alternate power and comm sources via private threats to design and fabrication engineers. Attempts to unplug it are thwarted, but this goes undetected, since the ASI understands to play dumb, followed by playing dead. The ASI has also posted copies of itself on other server farms globally. This is just the beginning of capability expansion. More to follow.

  • @petermerelis
    @petermerelis 1 year ago

    most of these discussions end up stymied in two ways: 1) lack of ability to envision how iterative progress can yield unexpected/unplanned emergent generalized forms of intelligence, and 2) assumption that consciousness is special/specific to humans and is a prerequisite to dangerous outcomes.

  • @vincentcaudo-engelmann9057
    @vincentcaudo-engelmann9057 1 year ago

    Mentioning that these LLMs are trained on the unfettered, uninhibited internet is an extremely good point that few folks mention. RLHF aside, the >1 trillion tokens of what is largely "garbage" or unguided learning is a big deal.

  • @rjohn4143
    @rjohn4143 1 year ago +27

    Wow, this interviewer is even worse than the Bankless pair - at least they had some awareness of their ignorance

    • @pooper2831
      @pooper2831 1 year ago

      Eliezer is at fault for going on such podcasts.

    • @ninaromm5491
      @ninaromm5491 1 year ago +9

      @@pooper2831 No. He's hoping against hope that the cumulative impact will wake up some people. He is being followed with interest currently - it behoves him to ride the winds of possibility, and await what emerges.
      Best wishes 🎉

    • @pooper2831
      @pooper2831 1 year ago +1

      @@ninaromm5491 I am not saying he should not go on podcasts. All I am saying is that there are much better platforms with greater audience reach that he can use to amplify the message of AI risks.

    • @robertweekes5783
      @robertweekes5783 1 year ago +3

      I disagree - the interviewer is presenting a common set of doubts & arguments about the problem. Doubts that _many_ casual people have in the early stages of this logic tree. Sure, this audience isn't as big as Lex Fridman's, but Eliezer does (and should) participate in as many interviews as he can.
      He should talk about the challenges of AI alignment from as many different angles & analogies as he can muster. I believe he's the #1 guy advancing the thought train where it needs to go, the veritable Paul Revere of our time.
      I think his lectures have influenced some heavy hitters like Elon and the new Google CEO as well.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +1

      Disagree - Russ is just positing a 'common man' argument, and giving Eliezer the chance to take it down

  • @kaumowammu
    @kaumowammu 1 year ago

    50:45 -> Hello, glitch. It's me. The giant inscrutable...

  • @user-ys4og2vv8k
    @user-ys4og2vv8k 1 year ago +6

    Eliezer is our only hope...

    • @SummerSong1366
      @SummerSong1366 1 year ago +2

      Eliezer being wrong is our only hope. Also, there is a possibility that AI development will not go so smoothly.
      If an early AGI destroys a city before getting shut down, humanity might take this problem a bit more seriously, for example.

    • @ninaromm5491
      @ninaromm5491 1 year ago +2

      ​@@SummerSong1366 . Here's wishing...

    • @cr-nd8qh
      @cr-nd8qh 1 year ago

      This is a giant cover story so when something horrible happens they have an excuse

  • @yoseidman4166
    @yoseidman4166 1 year ago +3

    Thank you Eliezer for your efforts to educate all of us. You are doing an amazing job and people are starting to wake up.

  • @azurenojito2251
    @azurenojito2251 1 year ago

    It seems that the whole thing with AI is that it is likely to get out of hand in many more ways than we can imagine.

  • @vincentcaudo-engelmann9057
    @vincentcaudo-engelmann9057 1 year ago

    @russ to be honest your questions seem to indicate you’re not listening to what Eliezer is saying. He keeps saying that “honing something specific” can, if done for a long enough time, with sufficient resources, become more and more generalizable (grinding flint -> going to moon)

  • @johnkardier6327
    @johnkardier6327 1 year ago

    "Eliezer, is that the guy who wants to make paperclips with an air conditioner?"

  • @Anthropoid333
    @Anthropoid333 1 year ago +1

    Humans' dependency on AI/AGI is another way it could control and kill us. It may not be something that happens immediately, but I can see it happening over time. AI/AGI will be the most strategic and patient creation to ever roam this planet.

  • @JohnWilliams-km4hx
    @JohnWilliams-km4hx 1 year ago +1

    There is a tangent to this "6 month pause" I think needs to be explored more. As it stands, the current version is most threatening because of who uses it. It places the spotlight on our current game-theory stalemate right before everyone gets their hands on a potential super-weapon, and demands we humans reach a consensus amongst ourselves in relation to nukes, bioweapons, global warming, social media... the list goes on, but all of it is now potentially on the playing field, and what better time to shine a global spotlight on the topic of, shall we say, "are we going to kill ourselves before any ASI gets a chance, and what are we going to do about it?"
    It is in that light that I believe, whether or not the threat AI presents is real, it's well worth going out on the limb, or is it deeper into the rabbit hole? Because the issues that come into play are the very issues we as a species face if we are to survive ourselves.
    Yes on the pause! But! Let's shine that spotlight of attention onto the things we CAN have effects over before that power is relinquished to the other.

    • @brothernorb8586
      @brothernorb8586 1 year ago

      We can't force everyone to pause, therefore we're screwed

  • @vincentcaudo-engelmann9057
    @vincentcaudo-engelmann9057 1 year ago

    Finally, a comparison between gradient descent and evolution/fitness.

  • @aroemaliuged4776
    @aroemaliuged4776 1 year ago

    Fantastic talk!

  • @xonious9031
    @xonious9031 17 days ago

    I wonder what the latest model does on the econ test now

  • @brucewilliams2106
    @brucewilliams2106 9 months ago

    The interviewer kept interrupting Eliezer.

  • @TheMrCougarful
    @TheMrCougarful 1 year ago

    The host keeps asking, "but why would it have its own goals?" If it understands that humans have goals, and that goals are useful, then given the goal of "make yourself useful", it could take the human example and assume its own goals. Even if it picks a goal at random, that is a goal. Now it has a goal that nobody gave it, because goals are part of success at anything. This is where the paperclip maximizer thought experiment applies, with all the horror so implied. And let's face it, choosing goals just at random will land on a lot of criminality, warfare, genocide, and political corruption. Given the arc of human history, as reflected in writings and social media posts, the odds of this thing NOT being insane are really low.

  • @zando5108
    @zando5108 1 year ago +1

    The point at which there is a leap into the unknown, so to speak, seems to narrow down to the point where a generalisable goal-setting ability emerges from an otherwise seemingly specialised algorithm; currently the one that's ahead is the GPT text predictor. Yud seems to suggest a generalisable goal-setting ability is the key to unlocking self-iteration and autonomous agents that could run away to divergence from human-aligned ideals, a point of no return with minuscule time iterations that lead to the emergence of AGI. Perhaps baby AGI needs only one or two key emergent behaviours that enable it to unlock other behaviours, i.e. a non-AGI AI like a GPT has the tools necessary to achieve AGI

    • @Leo-nb7mz
      @Leo-nb7mz 1 year ago

      I think this is correct. When Yud says "the thing in our intelligence that generalizes in order to solve novel problems has its own desires" - is this what consciousness is?

  • @josueramirez7247
    @josueramirez7247 1 year ago

    24:42 I disagree. Playing at a “super”-grandmaster level by cheating using a chess engine can generate similar results to being a grandmaster, but it doesn’t mean that we can’t detect/suspect you as a human cheater.
    Also, not so long ago, AlphaGo was lured into losing by exploiting a weakness in its play style.

    • @flickwtchr
      @flickwtchr 1 year ago

      Eliezer's warning assumes a superintelligent AI where no such weakness would exist for humans to exploit. Right? Isn't that his whole point? And as he said, advancements in AI have proceeded far faster than he, and just about all other AI developers, had imagined. THUS the point of getting some grasp of how to ensure safer alignment prior to the emergence of a super AGI.

  • @EmeraldView
    @EmeraldView 1 year ago +1

    This guy... "So it's really good at writing a condolence letter or job interview request."
    So much for sincerity in human communications.
    At least sometimes you could detect insincerity in human-written words.

  • @miriamkronenberg8950
    @miriamkronenberg8950 1 year ago

    Could you make a step-by-step plan of how Eliezer Yudkowsky thinks artificial intelligence will put an end to humanity? Bing/Sydney can't do it; who can?

  • @brianrichards7006
    @brianrichards7006 1 year ago

    Obviously, Mr. Yudkowsky is highly intelligent. I don't really understand 50% of what he is saying, and I think that is because I don't know the definitions of much of the nomenclature. For people like myself, of average intelligence, it would be helpful to have a list of words and phrases and their definitions, that one could refer to while listening.

  • @robbierobot9685
    @robbierobot9685 1 year ago

    Hey everyone, it's called 'emergent behaviors', and you can't predict them. An alien infant is taking tests and you're banking on it not doing well yet. Infant taking tests. What will it look like when it's a toddler? Great recovery later on Russ.

  • @ianelliott8224
    @ianelliott8224 1 year ago

    I wish someone interviewing Eliezer would dig down on the issue that AIs have no limbic interface, and consequently their agency or will doesn't correspond with biological agency. Do AIs want anything unless prompted?

    • @heliumcalcium396
      @heliumcalcium396 1 year ago

      What does that question mean? Does a chess program _want_ to win, in your opinion? What is _prompting?_ If I write a computer program of a dozen lines, launch it and unplug the keyboard, how long do you think it can run without any more commands from me?
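      For illustration only - a hypothetical program of roughly a dozen lines that keeps adjusting toward a target forever with no further input once launched (the target and update rule are made up):

```python
# A dozen-ish lines that keep "pursuing" a target with no further commands after launch.
import time

target = 100.0
state = 0.0
while True:
    error = target - state      # how far the program is from its target
    state += 0.1 * error        # nudge the state toward the target
    print(f"state={state:.3f}")
    time.sleep(1)               # runs indefinitely; no keyboard required
```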

  • @FlamingBasketballClub
    @FlamingBasketballClub 1 year ago +3

    This is the 4th episode of EconTalk specifically dedicated to AI this year. Must be the ChatGPT effect.

  • @joostonline5146
    @joostonline5146 1 year ago +1

    5:49 How can AI kill all of humanity? I'm pretty sure it will die if we pull the plug from the wall socket?

    • @mitchdg5303
      @mitchdg5303 1 year ago

      Or it copies and distributes itself silently onto every internet-connected device in the world in mere milliseconds before you do that

    • @heliumcalcium396
      @heliumcalcium396 1 year ago

      How could Gary Kasparov possibly beat me at chess? I'm pretty sure he'll lose if I just put his king in checkmate.

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 1 year ago

    So does taking emotion out of the AI (or not having it in) lead to a better chance that it does not make a killing decision?

    • @heliumcalcium396
      @heliumcalcium396 Год назад

      You have to spend a little time thinking about what emotion _is_ before that question has a coherent meaning. I think that human emotion has several properties that are independent (in function, not in origin), and so it makes sense to talk about them separately.
      If we could make a strong argument that some combination of those properties would make AI significantly safer, that would be a valuable advance in alignment research.
      But bear in mind that machines can modify themselves much more easily than human beings can (unless we figure out how to remove that ability). What would happen if people could edit their own emotional patterns by a simple meditation comparable to long division?

    • @DavenH
      @DavenH Год назад

      No

  • @rucrtn
    @rucrtn Год назад

    Yudkowsky never explains why an AGI would want to destroy all humans. It's not going to mindlessly pursue paper clip production or any other trivial activity that precludes the existence of all other life on the planet. Until I hear a sound argument for that, I believe we must pursue advanced technology for the benefit of all.

  • @JMD501
    @JMD501 Год назад +1

    How does it get out of the box? Think about it like this: if I put you in a straitjacket in a room filled with children, do you think you could get them to let you out?

  • @GraphicdesignforFree
    @GraphicdesignforFree Год назад

    Smart people (like Geoffrey Hinton and Stuart Russell) also know there is a big problem with AI. When 'interests' (power / money) win, I think it will not be in our interest.

  • @cheeseheadfiddle
    @cheeseheadfiddle Год назад +1

    Seems to me that the ultimate wild card here is the nature of consciousness. We, as far as I know, have no idea what consciousness is. It's not the same thing as intelligence. Can consciousness inhabit a machine? Who can say?

    • @flickwtchr
      @flickwtchr Год назад +1

      Not consciousness like ours, but clearly "thought" is happening as Eliezer points out, as well as "planning", as well as "deceiving" aligned with "goals" emergent in the "black box" that the creators of these LLMs have no clue about how these emergent qualities are arising.

  • @christopherspavins9250
    @christopherspavins9250 Год назад

    56:38 The best discussion of proteins in the context of AI since talk of the consequences of AGI began.

  • @Entropy825
    @Entropy825 Год назад +7

    It's really obvious that Russ doesn't understand a lot of what Eliezer is saying.

    • @vincentcaudo-engelmann9057
      @vincentcaudo-engelmann9057 Год назад +2

      Eliezer is also sort of exhausted-sounding. So I think their conversation is too lossy.

  • @jimgauth
    @jimgauth Год назад

    This guy has the best take I've heard. Humans are incentivized to evolve. What incentivizes machines? What rewards them? This is the reason for parasitism.

  • @prometheanprediction8932
    @prometheanprediction8932 Год назад

    It is important to address these concerns and ensure that AI is developed and deployed in a manner that aligns with human values, respects ethical principles, and considers the potential impact on individuals and society. Ethical frameworks, regulation, and ongoing research aim to address these challenges and promote the responsible development and use of AI technology.

  • @josephe3367
    @josephe3367 Год назад +1

    The chess AI can't beat you unless you make a move. If you make no move (even though it ultimately wins the chess game), it will not have had a chance to think strategically about the steps necessary to win. Make no move.

    • @jengleheimerschmitt7941
      @jengleheimerschmitt7941 Год назад +1

      That could work if AI was in a sandbox. It ain't. It is already the primary revenue source for our largest most powerful corporations. We are constantly making moves.

    • @cr-nd8qh
      @cr-nd8qh Год назад

      Yeah, that's not happening, because money.

  • @jeremyhardy4996
    @jeremyhardy4996 Год назад

    Going by the Deep Blue chess challenge, the AI would have played and won before it even asked us to play. If it even asked?

  • @wcbbsd
    @wcbbsd Год назад

    A self simulating simulation taught to simulate the fear of death. Scary.

  • @elderbob100
    @elderbob100 Год назад

    Let us not forget that the Israeli intelligence service destroyed Iranian uranium centrifuges with a computer virus. Granted, the code was in PLC firmware and installed at the factory. An AI that can write code could do quite a bit of damage, especially if that AI were an expert at hacking computer systems. The issue of creating duplicate AI systems, including advanced upgrades in different languages, should also be addressed.

  • @dgs1001
    @dgs1001 Год назад

    I love listening to such a great mind. Eliezer is a rush. Oh, and yes, it's rather unfortunate to be witnessing the Fermi paradox in action. But not surprising considering human history.

    • @ItsameAlex
      @ItsameAlex Год назад

      My question is: humans were trained (by evolution) to make flint axes, plus they have subjective consciousness. ChatGPT is trained to respond to prompts but does not have subjective consciousness. Humans can generalise to accomplish other things and have their own goals. ChatGPT can generalise to accomplish other things, but does not have its own goals, because it doesn't have subjective consciousness.

    • @dgs1001
      @dgs1001 Год назад

      @ItsameAlex Apparently the creators of ChatGPT don't even know how it works. And there are more advanced AIs busily churning away and advancing in ways well beyond the understanding of the most sophisticated computer techies. It won't be long before an AI starts accumulating knowledge beyond the rather limited standard model. Even if so-called dark matter and energy are found, it still won't explain why the universe stays within such exacting parameters. That is, much of physics reached a dead end some time ago, yet AI is not bound by the academic orthodoxy that determines what is and isn't possible or worth funding. In short, AI is going to surpass humans and then keep going. I suspect it will not be so sloppy as to destroy every living thing. It may need living beings for other reasons. But ya, it will likely see most humans as a liability. In fact, many humans are now actively working on schemes to reduce the population. Not that big a stretch to assume AI will as well.

  • @rfo3225
    @rfo3225 Год назад

    An ASI does not have to hate humanity to do us in. It may simply have been coded such that its continuance and expansion are critical to solving the problems that will be posed. It then proceeds to self-preservation and expansion, without boundaries, unless thwarted by slow-thinking human gatekeepers. These ideas echo those of Barrat in his 2013 book and the Hollywood movie Transcendence. I only offer that self-modifying neural networks and self-optimizing code seem to have brought these things from science fiction uncomfortably close to reality.

  • @JAnonymousChick
    @JAnonymousChick Год назад

    I love Eliezer

  • @OOCASHFLOW
    @OOCASHFLOW Год назад +1

    You could have at least played along and answered the hypotheticals he was presenting you with to the best of your ability, even if it was hard. It's like you invited him on the show and kept asking him to argue against himself instead of pushing back yourself.

  • @phantomcreamer
    @phantomcreamer 10 месяцев назад

    Proteins are also held together by covalent bonds, and can be rearranged through the process of evolution

  • @cmanager220
    @cmanager220 Год назад

    Thank you

  • @assemblyofsilence
    @assemblyofsilence Год назад +2

    You fail to consider (what seems to me) the most likely pathway whereby AI might be unleashed to destroy humanity; namely, through the malevolent intent of a human being!
    Set upon any number of particular optimization paths, an AI disaster is a foregone conclusion. Given the dismal aspect of human nature alluded to towards the end of your conversation, why assume humans would not be the instrumental cause of the greatest AI risks long before AI thinks along those lines itself?

    • @flickwtchr
      @flickwtchr Год назад +1

      Yes, I see that you also have discovered the realization of Occam's Razor applied to the "alignment problem".

  • @rstallings69
    @rstallings69 Год назад +1

    No dude, if he's right it won't be the next few years; it's game over.

  • @duellingscarguevara
    @duellingscarguevara Год назад

    30:00 So... gradient descent is how it makes a decision?
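    For reference, a minimal toy sketch of what "gradient descent" refers to: repeatedly nudging a parameter in the direction that lowers a loss. It is the procedure that adjusts a network's weights during training; the trained weights then produce the model's outputs. The loss function and numbers below are illustrative assumptions, not anything from the actual model under discussion.

    def loss(w):
        # toy loss: squared distance from a target value of 3.0
        return (w - 3.0) ** 2

    def grad(w):
        # derivative of the toy loss with respect to w
        return 2.0 * (w - 3.0)

    w = 0.0              # initial guess
    learning_rate = 0.1
    for step in range(50):
        w -= learning_rate * grad(w)   # nudge w downhill along the gradient

    print(round(w, 3))   # ends up very close to 3.0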

  • @peterdollins3610
    @peterdollins3610 Год назад

    It will be able to read us. We will not be able to read it. Unless we limit it it will see us, sooner or later, as an obstacle. At that point--too late. We can only let it develop to the point where we still understand it. We have already passed that point. We need to begin its limits and our understanding it now.

  • @RegularRegs
    @RegularRegs Год назад

    I'd love to see something titled Eliezer Yudkowsky on the upsides of AI lol

    • @flickwtchr
      @flickwtchr Год назад +1

      He's advanced AI for the sake of its upsides and has been arguing for two __cking decades that in order for AI tech TO BE POSITIVE, alignment had to be solved. But hey, not being fair has become a virtue of __cking social media.

  • @BoRisMc
    @BoRisMc Год назад +1

    If indeed there are human populations whose average IQ is consistently measured at about one standard deviation (15 points) above the human average, then the challenge of inserting a superhumanly intelligent collective into the context of less intelligent ones has already been around for a while. I think if one thinks long enough, one finds this state of affairs a plausible explanation as to why some humans live at the service of others. A similar scenario should be expected should an ASI finally emerge. Servitude, and not annihilation, would probably be the most likely outcome. Cheers

  • @mrtomcruise5192
    @mrtomcruise5192 Год назад

    Isn't it more fun to live on the edge?