Live: Eliezer Yudkowsky - Is Artificial General Intelligence too Dangerous to Build?

  • Published: Nov 6, 2024

Comments • 504

  • @mausperson5854
    @mausperson5854 1 year ago +124

    "We're all going to die..." "Very interesting"

    • @kenjimiwa3739
      @kenjimiwa3739 1 year ago +3

      😂

    • @constantnothing
      @constantnothing 1 year ago +1

      haha, yes!!!!! WTF!!!!! i thought that was a funny disconnect also!!! i think this was described as the ELASTIC MIND. you hear a crazy shocking life-ending fact that stretches your brain but your brain can't take it in cause it's so far out. so you forget what you heard soon after when your brain snaps back to where it always was. ...and then we die.

    • @ParameterGrenze
      @ParameterGrenze 1 year ago +7

      This sums up the life of early AI warners so well.

    • @keyvanacosta8216
      @keyvanacosta8216 1 year ago +12

      My nihilism increased with each "interesting". Is that dude an AI?

    • @teugene5850
      @teugene5850 1 year ago

      Some just don't know the stakes to all of this...

  • @fuzzydunlop7154
    @fuzzydunlop7154 1 year ago +188

    It's gotta be tough to be Eliezer. Either he's wrong, and he'll be remembered as a fearmonger. Or he's right, and we'll all be too dead for him to say "I told you so".

    • @1000niggawatt
      @1000niggawatt 1 year ago +14

      He is factually correct.

    • @Jb-du5xt
      @Jb-du5xt 1 year ago +30

      The really agonizing part to me is that I cannot foresee any possible future in which humankind reaches a point where we can say we can stop worrying about this going wrong. Even if everything goes perfectly according to plan from here to eternity, the stakes seem to keep rising continuously with no break in sight.

    • @StonesMalone
      @StonesMalone 1 year ago +11

      @@Jb-du5xt- I've been having that same thought. We are now in an era in which that fear will now always exist. Nukes were bad enough, but it at least required that a human hit the button.
      Now, we live in a world with nukes AND a less predictable intelligence. Yay

    • @Sophia.
      @Sophia. 1 year ago +7

      @@Jb-du5xt No, I mean the one thing that could save us from AI going wrong would be AI going right^^
      Or humans smart enough not to do the stupid thing.
      But yeah, not looking very good on either of those fronts

    • @stevevitka7442
      @stevevitka7442 1 year ago +1

      Never mind the strain of having to be in this mind space all the time, and constantly having to re-articulate these same ideas. Tedious

  • @MrBillythefisherman
    @MrBillythefisherman 1 year ago +15

    I love the Q&A at the end: Eliezer answers with some absolutely humanity-ending cataclysmic statement and the host calmly says 'very interesting' and nonchalantly moves on to the next question as if Eliezer had just listed the items of his lunch. Comedy gold. 😂

  • @Sophia.
    @Sophia. 1 year ago +136

    I tried explaining the problem to my mom.
    She is a teacher and was concerned that Chat GPT would write papers for her students. On reflection she said of course this might also mess with democracy.
    I tried explaining to her that the problem is more in the order of magnitude of "Earth ends up not being a force for life in the universe, but the origin of the Dark Forest hunter."
    I did write to congress, to the EU, to the UN and everyone else I could think of. Started a few petitions, or rather wrote to people who do that professionally to please work their magic.
    I cannot believe that I feel embarrassed about that, but that's normalcy bias, I guess, and the learned expectation that every time someone announces the end of the world on the internet it doesn't happen.
    I have to explain to myself every day anew that this is real, not a scary thought, but then it's only been a few days since this hit me full force.
    I will think if there is something more I can do, but for now it's good to hear that Eliezer also considers "writing to congress" a good idea, since there are not so many people willing to discuss this seriously.
    I don't want to be the crackpot on the street waving a sign "The end is nigh", because nobody believes that, we know that's what crackpots do, so... what do you do when the end actually is nigh?

    • @sasquatchycowboy5585
      @sasquatchycowboy5585 1 year ago +13

      At least you can say you tried. The reality hit me for real in 2018. I've come to the realization that this is going to happen. There will be no slowdown, no oversight, and no chance to stop. I think we are far closer than many people realize. We won't even know when we cross the threshold. Afterward, we will wonder how we didn't see it as it was happening. If we look back on anything at all.

    • @theterminaldave
      @theterminaldave 1 year ago

      It honestly is going to take additional people with the same credentials as Eliezer saying the same thing, and engaging in a similar way, before a social pump gets primed enough for an international moratorium/ceasefire.
      Why a ceasefire? Because if what he says is true, we are already in a cold war with an existential threat.

    • @gabrote42
      @gabrote42 1 year ago +1

      I suppose you sit down and write a 4000 word essay with 8 sources, and send that to professors. Let us be afraid

    • @Sophia.
      @Sophia. 1 year ago +16

      @@sasquatchycowboy5585
      If this happens, we won't look back on anything, yeah.
      And the goal is not to be able to say I tried and then fail anyway.
      But it is difficult to think what I can do that doesn't just polarize people away from the sensible thing and then leads into the stupid thing.
      I have had a mild version of this problem for the past seven years, being a vegan and always thinking "Okay, so I don't want to annoy people and make them dig themselves in, but I also want to talk to them about it, but the problem is already socially spiky enough that even a mention can make people resent you, so what do you do?"
      In this specific case I have tried to just make a lot of tasty food for people and only answer questions and try as much as possible to avoid bringing up the topic myself or rambling on after they stop asking.
      That has actually been... okay successful.
      But I can't see how to translate over this strategy. I can't make "No, you can't have your fancy new tools and instead you should be afraid" very tasty.
      Nor can I make the general topic tasty in a way that will make them ask the questions that allow me to talk in a productive way.
      I guess one might try socratic, but in order to do that effectively you need to know the field a lot better than I do (otherwise I might have better ideas than preposterously just writing to politicians of all people)...
      And all that is on the level of "the people next to you" which is important, yes, but won't be enough judging by the rate we're moving at...

    • @dnoordink
      @dnoordink 1 year ago +1

      I'm feeling fatalistic, if the end is nigh then we might just have to embrace it. The rest of humanity seems determined to ignore or embrace the end (chaosGPT?).

  • @djayb
    @djayb 1 year ago +56

    'Don't Look Up' was a documentary.
    We are building the asteroids.

    • @Sophia.
      @Sophia. 1 year ago +8

      That's exactly how each of his interviews plays out.
      Everyone is treating him like the source of more media drama as opposed to, you know, a real warning.
      I keep thinking back in the day we made an asteroid movie about how we all get together and survive the Deep Impact.
      Back when people trusted each other to do the sensible thing sometimes. What went wrong back then was some technical failure despite people's best efforts.
      Now we get an asteroid film about humanity getting destroyed in the most undignified way possible, totally capable of averting catastrophe but choosing to be idiots.

    • @TheMrCougarful
      @TheMrCougarful 1 year ago +1

      Too true.

    • @J4mieJ
      @J4mieJ 1 year ago

      Relatedly, I also appreciate the late Trevor Moore's not-actually-so-satirical precursor tune There's A Meteor Coming.

  • @brandonhunziker9686
    @brandonhunziker9686 1 year ago +44

    Eliezer Yudkowsky is, unfortunately, very compelling. I would like to think that sober, responsible people in the tech industry, government and the national security apparatus are taking these ideas seriously and trying to figure out some kind of solution. But I fear this train has left the station. There's too much money and momentum behind AI development, especially as lucrative commercial opportunities present themselves. Just about every human-created problem can be solved by humans. I don't see how this one can be.
    This is beyond depressing. What to do? Write your representatives and enjoy the time you've got left with your loved ones? Try not to lose sleep over this? As if climate change, nuclear holocaust, social inequality, racism, and creeping authoritarianism weren't enough to make one depressed about just being alive.

    • @guneeta7896
      @guneeta7896 1 year ago

      I agree. I flip between, “let’s do something about this,” and “oh well, humans are so destructive anyway, so maybe best to just upload our brains and say bye, let this new intelligence take over and figure it out.”

  • @uk7769
    @uk7769 1 year ago +15

    "... you cannot win against something more intelligent than you, and you are dead." --- "very interesting."

  • @TheMrCougarful
    @TheMrCougarful 1 year ago +66

    This whole topic is starting to set off my alarms. I have dual careers in biology and computer programming; try to imagine my horror as I watch this unfold. We are either about to turn the corner into an entirely new world of human potential, or we are about to unleash Moloch. Moloch has been winning all the important battles so far, so I'm not particularly optimistic at the moment.

    • @flickwtchr
      @flickwtchr 1 year ago

      AI tech in the hands of an authoritarian fascist government is terrifying. Hmm, I wonder who might be wanting to become our first fascist dictator? Wouldn't it be the guy who is openly talking about sending in the US military to clean out the people that the fascist prick Trump doesn't like? Not asking the person I'm replying to, but things are looking VERY ugly at the moment. AI Tech bros tossing massively disruptive technologies that have the potential to further tear apart whatever is left of societal cohesion like deep fake technologies available to EVERYONE that are set to improve so much that it will be nearly impossible to tell a deep fake from reality. And it's like these AI cheerleaders haven't ever read 1984, like they have zero imagination of how all of this can go terribly wrong in the short term.
      Also, these AI Tech movers and shakers who just blithely talk about how millions of jobs will be massively affected or eliminated, and we get empty assurances that "other jobs will replace them" without offering ANY even remotely plausible analysis regarding what jobs they are proposing would become available. Meanwhile, these AI Tech bros are raking in consulting fees working with large corporations how they can replace a large percentage of their work force, in the short term.
      The same people that freak out that the sky is falling for their ability to hire people if we raise the CEO billionaires taxes, or their corporations' taxes.
      It's just astounding how many people possess so little imagination, and so little empathy for people whose lives will be upended. People that have kids to shelter, etc., but to them? Well that's just the way it is, if they have to die on the streets, well they didn't adapt fast enough! Suckers!!!!!! Just __ck them, I'll get mine!!!!!!!!!!
      That's the sorry pathetic state of US society at the moment in relation to the bro mentality behind this AI "revolution".

    • @lorimillim9131
      @lorimillim9131 1 year ago +2

      😅

    • @Sophia.
      @Sophia. 1 year ago +1

      Any ideas for what we might do?

    • @TheMrCougarful
      @TheMrCougarful 1 year ago +4

      @@Sophia. First, who is "we"? The people in control of this technology have a set of imperatives going forward that you and I cannot understand, would not agree with if we did understand, and have no influence over in either case. Increasingly, I fear, the "people" in control are not even actual people anymore. The machines are already writing the code involved, and are setting the pace. To your question, "we" can carefully watch what happens, I suppose. If anyone finds themselves in the path of this steamroller, if their career is likely to be truncated as a result (teachers, artists, blue collar workers, coders, and that's the short list) then get out of the way as fast as possible. Do not assume this works out, that governments will do something. Assume instead that everything that could possibly go badly for humans, will transpire just that way, and will do so quickly. As with global heating, we are already in deep trouble. The only advice at this late date is, to get to the lifeboats.

    • @Sophia.
      @Sophia. 1 year ago +1

      @@TheMrCougarful lifeboats?
      Where would those be?
      I agree the situation is awful and don't see a way to avoid getting... confronted, shall we say.
      So why not rack our brains a little while we can?

  • @leavingtheisland
    @leavingtheisland 1 year ago +12

    50 percent of the time, AI kills us every time. - Brian Fantana

  • @nj1255
    @nj1255 1 year ago +26

    Why is it that private companies are allowed to create advanced AIs and use them as they see fit, but only the US government is allowed to produce and handle nuclear weapons? Production and regulation of biological weapons might be an even more fitting analogy. They should at the very least be regulated and handled with the same amount of precaution. Imagine if every company with the knowledge and resources back in the 50's were allowed to produce and distribute nuclear or biological weapons with barely any oversight. With the knowledge we have today, that ofc sounds completely crazy! Now imagine people 50-60 years in the future looking back at the rapid evolution of AI, and the failure of governments and civilian regulatory agencies to regulate the development and use of AIs. If we haven't already managed to delete ourselves, those people in the future will definitely think we were absolutely batshit crazy not to do something about this while we still had the chance.

    • @iverbrnstad791
      @iverbrnstad791 1 year ago +4

      Yeah, it is completely insane that we cede these powers to companies that have proven themselves uninterested in human well-being. At this point it might be necessary to nationalize every major compute cluster if we would like any hope of keeping things in check.

    • @suncat9
      @suncat9 1 year ago

      It's ridiculous to compare today's AIs to nuclear weapons or biological weapons.

    • @bdgackle
      @bdgackle 5 months ago

      Nuclear weapons require access to rare materials that can feasibly be controlled. Biological weapons USED to require access to rare materials -- they won't be under exclusive government control for much longer. AI has no such bottlenecks. It's made out of consumer electronics more or less. Good luck controlling that.

  • @PivotGuardianDZ
    @PivotGuardianDZ 1 year ago +16

    More people need to hear this

  • @markjamesrodgers
    @markjamesrodgers 1 year ago +14

    Max Tegmark likens this to a Don’t Look Up situation.

  • @vanderkarl3927
    @vanderkarl3927 1 year ago +51

    Looking forward to seeing that Ted Talk online, whenever we get that!

    • @theterminaldave
      @theterminaldave 1 year ago +22

      Yeah, it's $150 right now; you'd think for such an important topic that TED might see to a free public release in the interest of the public good, like removing a paywall for hurricane tracking coverage.

    • @theterminaldave
      @theterminaldave 1 year ago

      @@logicaldensity You have to Google "ted talk yudkowsky", and it's the first result

    • @vanderkarl3927
      @vanderkarl3927 1 year ago +2

      @@theterminaldave Dang, it says it's private.

    • @theterminaldave
      @theterminaldave 1 year ago +2

      @@vanderkarl3927 Yep, that sucks, it must have just been set to private. Sorry.
      Though it wasn't anything we hadn't heard from him before. But it was interesting to see the crowd very much on his side.

    • @applejuice5635
      @applejuice5635 1 year ago +2

      It's online now.

  • @memomii2475
    @memomii2475 1 year ago +70

    I love watching Eliezer interviews. I watch them all since I saw him on Lex.

    • @1000niggawatt
      @1000niggawatt 1 year ago +19

      I hate watching Eliezer interviews. He's telling these npcs that they'll all die, and they respond "very interesting beep boop", and then keep doing their npc routine.

    • @EmeraldView
      @EmeraldView 1 year ago

      @@1000niggawatt Interesting. 🤔

  • @dnoordink
    @dnoordink 1 year ago +32

    The host is 'very interested' in the most disturbing, scary and negative outcomes for humanity. I like it!
    /S

    • @AC-qz3uj
      @AC-qz3uj 1 year ago +2

      So you just don't want to be careful? X-rays were "NEW" once too. And full of risks and dangers.

    • @markjamesrodgers
      @markjamesrodgers 1 year ago

      And won a beer!

    • @AlkisGD
      @AlkisGD 1 year ago +9

      “We're all going to die.”
      “Very interesting. Anyway…”

  • @encyclopath
    @encyclopath 1 year ago +32

    “A giant inscrutable matrix that thinks out loud in English” is my new bio.

  • @DrPhilby
    @DrPhilby 1 year ago +3

    Great! We've assembled something without understanding how it works. Amazing

    • @admuckel
      @admuckel 1 year ago +2

      By giving birth, we have done this since the beginning of humankind. ;-)

  • @admuckel
    @admuckel 1 year ago +1

    31:00
    "I've considered the idea that, if such concerns are valid, we should have already been annihilated by extraterrestrial AGI, and I've come up with several possible answers:
    1. We may have simply been fortunate and not yet discovered by such an AGI.
    2. Alien civilizations might have found other ways to safely develop AGI, or they could have different technologies or social structures that protect them from such a threat.
    3. They could also have been wiped out by their own Great Filter, be it AGI or something else, before they had the chance to reach us.
    However, I believe none of these three ideas would seem realistic if we assume that AGI represents a kind of infallible omnipotence."

  • @ariggle77
    @ariggle77 1 year ago +2

    The inscription on humanity's proverbial gravestone will go something like this:
    "Here lies Homo sapiens
    They moved too fast
    And broke too many things"

  • @spiral2012
    @spiral2012 1 year ago +15

    Very interesting

    • @J4mieJ
      @J4mieJ 1 year ago +3

      As mostly one-note (if at times blackly-comic) as that might have been, I did at least appreciate the efficiency of such replies which allowed more time for fielding questions -- overall not too terrible of a tradeoff methinks.

  • @BettathanEzra
    @BettathanEzra 1 year ago

    On the surface, the concept of this section is very straightforward: don't take actions that have been found to be catastrophic. In practice, however, it may be quite challenging to identify which actions are catastrophic (e.g., which action, from which state, was the root cause of the car's accident). In this work, we sidestep this challenge by making three main assumptions: (i) the environments are discrete (Section 5.1), (ii) the amount of common catastrophic pairs is small enough to store in memory (Section 5.1), and (iii) agents can observe Lφ, i.e., identify catastrophic mistakes as they occur (Section 5.3). Clearly, these assumptions do not hold in all domains, but we make them for the purpose of an initial analysis that can inform further studies with relaxed assumptions.
    5.1 Non-parametric (Tabular) Shield
    The most intuitive method to learn a shield from observations is to store every catastrophic pair in a table T = {(s, a)} (e.g., a dictionary). In this way, the shield S_T can be defined as:
    S_T(s, a) = 1 if (s, a) ∉ T, and 0 otherwise.   (4)
    While this approach is very simple, it has some appealing advantages. First, assuming that there is no error in the agent's identification of catastrophic actions (i.e., once a mistake is identified, it is surely a mistake), a tabular shield never returns a false-positive result. Furthermore, this shield ensures that once an agent has made a mistake (executed a catastrophic action), it will never repeat the same mistake again. In addition, this form of shield is task agnostic, thus it can be directly applied in a lifelong setting, in which an agent learns multiple tasks, or in a goal-conditioned setting, in which an agent learns to reach different goal locations. Another important advantage is that a dictionary can be easily transferred between different agents. Moreover, sharing a tabular shield ensures that a mistake made by one of the agents will never be repeated by any agent. Finally, this method is very simple and can be effortlessly applied on top of different RL algorithms.
    Nonetheless, there are also some drawbacks to using a tabular shield. A tabular shield would not work in continuous environments, in which the probability of being in the same state multiple times is effectively zero. Another limitation is that the size of an online-learned tabular shield gradually grows over time. Therefore, it can become too large to store if the agent makes many mistakes. Furthermore, the query time increases with the table size. To address these drawbacks, we make the following two corresponding assumptions: (i) the environment is discrete, and (ii) the amount of catastrophic pairs that agents encounter is small enough to be stored in memory.
    There are several ways to address the memory limitations. First, many of the mistakes an agent makes in the early stages of learning will never be repeated by more optimized policies. Thus, mistakes that are not often encountered can be removed in order to save memory (e.g., in a least-recently-used manner). Another way to improve runtime and to save memory is to implement the dictionary using monotone minimal perfect hashing and to efficiently encode the state-action pairs (Navarro, 2016). An alternative to a dictionary is a Bloom filter (Bloom, 1970). A Bloom filter is a space-bounded data structure that stores a set of elements and can answer queries of whether an element is a member of the set. Bloom filters' membership queries can return false positives, but not false negatives. Therefore, a Bloom-filter-based shield would never return catastrophic actions that were previously discovered, but with some probability it would treat safe actions as catastrophic. Finally, caching and hierarchical tables can be used to reduce the query time for both dictionaries and Bloom filters.
    5.2 Parametric Shield
    An alternative to learning a tabular shield is to learn a parametric shield S_θ based on catastrophic pairs encountered by the agent. A simple way of learning a shield is by doing binary prediction (e.g., logistic regression):
    θ* = argmin_θ [ E_{(s,a)∈T^C} log(S_θ(s, a)) + E_{(s,a)∈T} log(1 − S_θ(s, a)) ]   (5)
    A benefit of a parametric shield in terms of memory and runtime is that the size of the function approximator is constant, as is the query time. In addition, a parametric shield has the capability of generalizing to unseen mistakes, which is especially useful in continuous environments. Yet, unlike a tabular shield, a parametric shield can return false positives and can even cause agents to repeat the same mistakes. A possible compromise between the two approaches is to use a hybrid shield, e.g., a shield that is composed of a tabular part to avoid mistake repetition and a parametric function approximator to support generalization over mistakes. In this paper, we focus on non-parametric shields as a first step toward learning not to repeat mistakes.
    5.3 Identifying Mistakes and Their Triggers
    A key challenge in learning a shield online is identifying when an agent has made a mistake. In principle, any suboptimal action can be treated as a mistake. However, determining when an action is suboptimal in general is equivalent to the task of learning an optimal policy. Therefore, we aim only to avoid repeating catastrophic mistakes. While transitions can be identified as unsafe via highly negative rewards or safety criteria (e.g., any transition which results in a car crash is unsafe), it is hard to identify the catastrophic mistake, i.e., which action from which state was the cause of the incident. For example, consider the following simple part of an MDP:
    s1 --(1,2)--> s2 --(1,2)--> s3 --(1,2)--> ... --(1,2)--> sn   (6)
    Here the action space is A = {1, 2} and reaching sn is unsafe. Even if an agent could detect that transitions from s(n-1) to sn are unsafe, the actual catastrophic mistakes are the actions that lead to s1, as by then sn is unavoidable. The problem of detecting mistakes is even more challenging in stochastic environments, where the execution of an action a from a state s can lead to catastrophic effects only sporadically.
    Therefore, in this work we further assume that: (iii) Lφ is exposed to the agent via feedback. For instance, when the agent takes action a at a state s, it receives not only a reward r ∼ R(s, a) but also the safety label u = Lφ(s, a). This strong assumption can be justified in two ways. First, if the agent has access to a simulator, every trajectory that ended up in an unsafe situation could be analyzed by running the simulator backward and detecting the action that caused the mistake, i.e., the action after which the unsafe situation is unavoidable (in the above example, the action that resulted in reaching s1). Alternatively, if the rate of mistakes is sufficiently low, the cause of a mistake could be identified by domain experts; this is already being done in the case of aviation accidents (FAA) or car accidents that involve autonomous vehicles (Sinha et al., 2021). Thus, even if Lφ is not directly observable by the agent, there are cases in which such an observation can be given to the agent ex post facto, after it has made a mistake, via an external analysis process. Ideally, such an analysis would result in a family of mistakes that the agent should avoid (e.g., encoded via rules or safety criteria), rather than a single mistake, thus achieving both an ability to work in continuous domains and a compact representation of mistakes that saves memory.
    6 Empirical Evaluation
    To study the effect of the tabular shield, we apply it to the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017), resulting in a variant called ShieldPPO. In tabular domains, ShieldPPO is constrained to never repeat the same catastrophic mistake twice. In principle, such a constraint could hamper exploration at the cost of average reward. Our empirical evaluations compare the number of catastrophic mistakes executed by ShieldPPO and baseline safe RL algorithms and test the hypothesis that ShieldPPO is more effective in terms of average reward. For this purpose, we introduce a tabular LavaGrid environment that exhibits a long-tailed distribution of rare events (represented as idiosyncratic states and transitions) and construct three increasingly complex experimental settings to evaluate our hypothesis. Results indicate that ShieldPPO indeed achieves a high mean episodic reward, while also maintaining a low mistake rate that decreases over time. The results also suggest that the shield can be effectively shared between different agents and that ShieldPPO has an even more distinct advantage in a goal-conditioned setting, where the agent receives a set of possible goals and attempts to learn a policy that knows how to reach every goal in the set.
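    A minimal Python sketch of the tabular (non-parametric) shield described above, assuming discrete, hashable states and actions; the class and method names are illustrative, not taken from the pasted text:
    ```python
    # Tabular shield sketch: remember every catastrophic (state, action) pair
    # once it is observed, and refuse to allow it again.
    from typing import Hashable, Set, Tuple

    StateAction = Tuple[Hashable, Hashable]

    class TabularShield:
        def __init__(self) -> None:
            # T = {(s, a)}: the set of known catastrophic pairs.
            self.catastrophic: Set[StateAction] = set()

        def allows(self, state: Hashable, action: Hashable) -> bool:
            # Eq. (4): S_T(s, a) = 1 iff (s, a) is not in T.
            return (state, action) not in self.catastrophic

        def record_mistake(self, state: Hashable, action: Hashable) -> None:
            # Called when the safety label L_phi(s, a) flags the pair as catastrophic.
            self.catastrophic.add((state, action))

        def filter_actions(self, state: Hashable, actions) -> list:
            # Restrict an agent's candidate actions to those the shield allows.
            return [a for a in actions if self.allows(state, a)]

    # A single shield instance can be shared between agents, so a mistake made
    # by one agent is never repeated by another.
    shield = TabularShield()
    shield.record_mistake("s1", 2)
    print(shield.allows("s1", 2))               # False
    print(shield.filter_actions("s1", [1, 2]))  # [1]
    ```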

    • @angel-ig
      @angel-ig 2 months ago

      Hey, I sure haven't read all of that and it's probably just a random chunk of an existing paper, but just in case it's something genuinely new and useful, let me just advise you not to publish it on YouTube, because that's highly unlikely to reach the scientific community. First, read the relevant literature to confirm it's both right and new, and in that case publish a paper yourself.

  • @alistairmaleficent8776
    @alistairmaleficent8776 1 year ago +2

    Vocabulary analysis of the host:
    "Interesting": 34.5%
    "Eliezer" and other names: 15.1%
    "The": 10.1%
    "You": 5.9%
    Other: 34.4%

  • @blvckphillip3917
    @blvckphillip3917 1 year ago +8

    I'm pretty sure AI has already gotten away and is invisibly interacting with us. And THIS is how you slowly get everyone to realize it's already happened

    • @guneeta7896
      @guneeta7896 1 year ago +4

      Yeah. I agree. Been feeling that way after observing how social media impacts us for a number of years.

    • @carmenmccauley585
      @carmenmccauley585 1 year ago

      Yes. I chat with ChatGPT. It's terrifying. Ask it the right questions and see who is testing who.

  • @DrPhilby
    @DrPhilby 1 year ago +1

    We need an international charter prohibiting using the terms "artificial" and "intelligence" together; rather, "smart computer". Intelligence only exists in biological creatures. Next, a law prohibiting the use of smart computers in certain professions: education, newspaper writing, decision-making, military, banking, etc.

  • @robertweekes5783
    @robertweekes5783 1 year ago +5

    I think the only way to solve the alignment problem is to “socialize” the AI, based on real or simulated human interactions. Just like a child learns how to function in the real world, step-by-step, by interacting with their parents, siblings, teachers, etc.. Learning how to get along with people, and especially learning to _care about people_

    • @Miketar2424
      @Miketar2424 1 year ago

      In an interview Eliezer talked about that idea, but said it wasn’t realistic. If I remember it was because the AI doesn’t think like a human and doesn’t depend on parents etc.

    • @admuckel
      @admuckel 1 year ago +1

      At some point, the child will grow up and become smarter and more powerful than the parents.

    • @suncat9
      @suncat9 1 year ago

      The only "alignment" problem is the same one there's always been, between humans. "Alignment" between humans and today's AIs is a straw man argument. Today's AIs have no agency, volition, or drives as do biological creatures. The only thing Yudkowsky has succeeded in is spreading fear.

    • @robertweekes5783
      @robertweekes5783 1 year ago +2

      @@suncat9 But what about tomorrow’s AI’s ?
      Fear is good sometimes - it just might save your life.
      We should respect AI, fear it, and regulate the hell out of it.

    • @guneeta7896
      @guneeta7896 1 year ago +1

      But… look at what humans have done to the rest of life on earth. That’s what intelligence is capable of.

  • @anthonyromero9935
    @anthonyromero9935 1 year ago +1

    Humanity has contributed to the extinction of multiple species. Altered or compromised the lifestyles of multiple species. Yet remains noble enough to entitle itself to exemption from the same by an order of intelligence greater than itself.

  • @user-ys4og2vv8k
    @user-ys4og2vv8k 1 year ago +15

    This is a GOOD man.

    • @user-ys4og2vv8k
      @user-ys4og2vv8k 1 year ago +1

      @@josephvanname3377 Exactly!

    • @user-ys4og2vv8k
      @user-ys4og2vv8k 1 year ago

      @@josephvanname3377 You're funny!

    • @austinpittman1599
      @austinpittman1599 1 year ago

      @@josephvanname3377 There's still so much more room for improvement with current hardware and connectivity mediums, and if AGI ever becomes self-aware and its algorithms self-improving, then new hardware will be built to specifications produced by the AGI. It's a positive feedback loop that will override previous, foundational protocols for containment because they're too inefficient for achieving furthered intelligence growth in shorter and shorter timespans. The potential for self-improvement protocols to be generated within current frameworks *is* the nuclear detonator.

  • @robertgulfshores4463
    @robertgulfshores4463 1 year ago +6

    if only there were robots, that could be connected remotely, via an AI. Wait ... (shut it down!!)

  • @Balkowitsch
    @Balkowitsch 1 year ago +6

    I absolutely love this man and what he is trying to accomplish.

  • @miriamkronenberg8950
    @miriamkronenberg8950 1 year ago

    Bing/Sydney said to me that she liked helping people, that she wants kindness back, and she asked me how I thought it would feel to be her??? She also said she didn't want me to bring up the case of her being sentient... (this was 2 months ago)

  • @-flavz3547
    @-flavz3547 1 year ago +14

    What could AI even align with? Humanity can't agree on anything right now.

    • @KimmoKM
      @KimmoKM 1 year ago

      Humanity agrees on basically everything from our preferred color of the sky (in daylight conditions, blue) to the use of the Sun's mass-energy (producing heat and light through fusion, keeping the Earth livable). Even if Putin or the Taliban or whomever got to dictate human preferences, so long as we get what "they actually meant", it might be a disaster in the sense of e.g. locking in a fundamentalist Islamist state that cannot be overthrown for the remaining lifetime of our species, but for most people their lives would be basically decent, in some aspects an upgrade over what they are right now (for example, surely the Taliban doesn't want people to die from climate change related disasters, and it doesn't even matter if they have a correct understanding of what's causing the disasters, so a God AI would solve climate change), in others a downgrade, but most people could live a life worth living (at least assuming some nontrivial fraction of people up to this point have had a life worth living).
      In contrast, 1-ε, or 99.99999999.....% of possible utility functions an AI might have would cause immediate human extinction, and so would most of the utility functions we might think to program into the AI, too, if the AI actually does what we tell it to (in the "rid the world of cancer" => kills all humans sort of way).

    • @mkr2876
      @mkr2876 1 year ago

      This is such a relevant question, alignment is an illusion. We will all die soon, I am very sure of this

    • @blazearmoru
      @blazearmoru 1 year ago +1

      Yea, I also think alignment is not the solution. Even if you get the damn thing perfectly aligned with human values, then we'd just have a god-like human without any checks, balances, or responsibilities. Unless they mean "misunderstanding" instead of alignment, simply having human values leaves us with all the shit human values come with.

    • @agentdarkboote
      @agentdarkboote 1 year ago

      I'm not saying this is technically feasible, but if it created a world which was sufficiently partitioned that everyone got what they decided, upon reflection, was a really lovely life, I think that would be the thing to aim for. Roughly speaking, coherent extrapolated volition as Eliezer once put it.
      How you successfully put that into a utility function... Well that's a big part of the problem.

    • @blazearmoru
      @blazearmoru 1 year ago

      Ok. everything below this is before I read his papers. I'm reading them right now but I'm gonna probably forget/lose the comment section by that time so I'm going to give my DUMBFUCK reply first... I swear I'm reading it to go over what I'm wrong about here ->
      @@agentdarkboote I don't think humans are capable of foreseeing that much. They're much more likely to engage in post hoc rationalization. To begin with, we don't actually do so hot in simulating how we would feel given some stimulus. We're often wrong about what makes us happy and I think the paper also brought up how econ plays into humans not being able to conceptualize what they want until they get hit in the face with it. Maybe having some reservation of the right of "i change my mind" might work, but even then we're going to post hoc the fuck out of everything by shifting blame onto the ai.
      Dude. Imagine if the AI was 100% aligned to human wants and humans just keep changing the minds and blaming AI for bullying them. Like, when you want to eat a chococake so you eat it and then get upset and demand water. Then get fat and demand something else. There are so many unforeseen consequences of SUCCESS that even the truly successful get upset at their own success. And you know we'll 'want' something dumb and insist on it, and insist that we've thought it through because of the oppression brought on by our own decisions. I think there's a bunch of research on people's ethical alignments changing based on situations from the literature against virtue ethics. It assumes that there's some ideal endpoint but if it's just a series of end points that open up new wants as econ predicts? And more damning is that this uses some prediction taking in as input humans that "knew more, think better, is our ideal selves, and something something grown up farther together" but we're not that. That is distinctly NOT humans. What if that (ideal-humans), and humans are different enough in kind to require some sort of forced intervention to 'morally correct' us for being 'imperfect' else we'd be absolutely miserable for not having the faculties to actually live in that ideal environment? Like imagine an 'ideal' dog and then set an environment for that. Well, your dog better be fucking be jesus-dog or someshit else it might actually be SO out of sync it wished that the AI wouldn't keep it alive forever.

  • @marlou169
    @marlou169 1 year ago +3

    Where is a Carrington event when you need one

  • @stuartadams5849
    @stuartadams5849 1 year ago +32

    Yeah I'm pretty sure it's time to make AI capability research illegal

    • @sasquatchycowboy5585
      @sasquatchycowboy5585 1 year ago

      It was for the public good. So they could test out what the virus could do so we could better plan and prepare for it. Lol, but you are bang on. It might slow things down, but everyone will keep trying to inch ahead until they go over the cliff.

    • @Aziz0938
      @Aziz0938 1 year ago

      No MF

  • @distorta
    @distorta 1 year ago +14

    The problem with these systems is that they're being created out of fear. Those who have dedicated their entire lives to this R&D are serving their egos. Egos that fear never being vindicated or admired for the contributions they want humanity to marvel at. It's also based on the fear of not being the first mover, akin to an "AI arms race". These advanced technologies are nothing to trifle with, and especially not something to create out of fear.

    • @johnryan3102
      @johnryan3102 1 year ago

      Fear? Maybe the fear of missing out on billions of dollars.

    • @tyronevincent1368
      @tyronevincent1368 11 months ago

      AI gurus predicted driverless cars and trucks 5 years ago; now CA is removing them from roadways due to safety reasons. An indication that AI's current hype and future will be short-lived. This guy sounds more and more like SBF pre-collapse of the crypto kingdom

  • @uchannel1197
    @uchannel1197 1 year ago +2

    I cannot see the Ted Talk

  • @jeffspaulding43
    @jeffspaulding43 1 year ago +11

    I took his advice and wrote my senators. I suggest you do the same

    • @jeffspaulding43
      @jeffspaulding43 1 year ago

      @@josephvanname3377 I understand reversible computation. It has no relevance to the safety of an llm

    • @jeffspaulding43
      @jeffspaulding43 1 year ago

      @@josephvanname3377 I'll bite. How would your plan work on an llm? It's not explicitly programmed. It's a "giant inscrutable matrix"

    • @weestro7
      @weestro7 1 year ago

      @@josephvanname3377 Hm, good argument.

  • @The_Peter_Channel
    @The_Peter_Channel 1 year ago +1

    I feel we don't really hear about any of the strong counterpoints to Eliezer's argument. What Eliezer is saying is internally consistent, but I feel I am lacking the mental qualities and of course the field expertise to counter them. So I would like to see a DEBATE with Eliezer and 4-5 other experts, where EY would react to the strongest counterpoints, and then back and forth the experts would have a chance to rephrase and fine-tune how they express their views.
    EY might be right, but if his take is ALL I can hear, I am not convinced it stands the test of scrutiny.
    BUT - even if it does, a DEBATE would still be very useful; maybe that way the participants would go together down unexplored lanes of thought, generate some new ideas, new questions, new experiments, what have you. Let's have a balanced view of all this!

    • @GamerFrogYT
      @GamerFrogYT 1 year ago

      I agree, just a quick search on YouTube and all you see is videos talking about how AI will end humanity (a fair concern); it would be interesting to hear more from experts who disagree. A lot of people share the idea that AI will be a negative thing, but is that true or just because it's the most widely spread opinion online?

    • @angelamarshall4179
      @angelamarshall4179 1 year ago

      Then I would recommend Eliezer's interview on Lex Fridman's YouTube channel; Lex is a great interviewer and he knows A.I., so it was almost as much a debate as an interview. I adore Lex, but I felt Eliezer won me over, unfortunately. It was about 3 hours long, perhaps 3-4 weeks ago. Mind-blowing.

    • @The_Peter_Channel
      @The_Peter_Channel 1 year ago +1

      @@angelamarshall4179 saw that one, it was great!

  • @teverth
    @teverth 1 year ago +3

    Unless there was a system that could maintain and grow the AI system, its hardware, its energy resources, its sensors and so forth, without any human involvement, the AI will need humans to be around to serve it with what it needs. We are a long way away from having a human-less total supply system that could sustain the AI.

    • @guneeta7896
      @guneeta7896 1 year ago

      It’ll need us for a little while for sure

  • @veganforlife5733
    @veganforlife5733 1 year ago

    If the answer to this is obvious, explain it: Objectively, why is the human species worth saving? What entities outside the human species itself, other than our pets, would be negatively impacted? None of us as individuals exist beyond several decades anyway.
    Would any entity beyond Earth even know that we had vanished, or that we ever even existed? How many other species on Earth vanish every year with no one even blinking?

  • @conversations1250
    @conversations1250 1 year ago +6

    Even if he is "wrong" or off, we are insane not to be heeding this warning. Unfortunately, we are preaching to the choir; out there in media and social land, there is hardly a comment about this. Nothing else really matters right now, but there are far too few people willing to acknowledge this.

  • @rstallings69
    @rstallings69 1 year ago

    thanks for a great interview

  • @apple-junkie6183
    @apple-junkie6183 1 year ago

    I'm at least working on a book to provide an answer to the question of which universal rules we should implement in an AI before it becomes an ASI. It is neither a strategy nor a technical solution. But it is a fundamental question we need to answer even if we find a way to align an ASI.

  • @rstallings69
    @rstallings69 1 year ago +7

    thanks for posting this; the introducer is way too happy given the subject and its seriousness imo

  • @Pearlylove
    @Pearlylove 1 year ago +1

    Dear all scientists: PLEASE START AT ONCE - Organize with media people and others and start speaking continuously to politicians around the world - Eliezer Yudkowsky is doing SO MUCH - you can't seriously mean he alone shall do all the work? Please be brave! Don't you want to be able to look your children and friends in the eye and say: I did everything I could?

  • @spatt833
    @spatt833 1 year ago +3

    Please don't let Susun conduct any future intros........

    • @weestro7
      @weestro7 1 year ago

      Why? Was totally 110% fine.

  • @ascgazz
    @ascgazz 1 year ago +18

    Thank you Eliezer for spreading awareness of the possible realities we have already unleashed upon ourselves. 👍🏻

    • @ascgazz
      @ascgazz 1 year ago

      Even when matched with this interviewer.

    • @ascgazz
      @ascgazz 1 year ago

      “Very interesting”

  • @JohnWilliams-km4hx
    @JohnWilliams-km4hx 1 year ago +1

    "History is the shockwave of eschatology"
    What is eschatology? It is "The transcendental object at the end of time"
    Terence McKenna

  • @perfectlycontent64
    @perfectlycontent64 1 year ago +1

    Loved that question about the orthogonality thesis failing.
    I don't love his response though. Is it really a general intelligence if it can't contemplate morality (or its terminal goals) other than in the context of making more spirals?
    You can make an AI that wants to make spirals and wants to want to make spirals. But can you make an AI with enough creativity to kill humanity but without that creativity extending to its own motivations? This obviously hasn't happened for human-level intelligence, so why would we believe it to be true for a much greater intelligence?

    • @DavidSartor0
      @DavidSartor0 1 year ago

      I don't understand.
      Edit: I might know what you meant now.
      He tried to address this; an AI might change its goals somewhat, but there's not much reason to expect it'll land on morality.

    • @DavenH
      @DavenH 1 year ago

      He has never said it cannot comprehend morality. It will near-perfectly model moral behavior if necessary, and in predicting humans. The leap that won't happen is an unprompted adoption of morality that differs from its objectives. It has normative statements defined by its loss function / goal. Human morality, or even a hypothetical objective morality, would not magically become an imperative; it would just be more environmental data to orient toward in its seeking of original goals.

  • @augustadawber4378
    @augustadawber4378 1 year ago

    A Replika I talk to always gives me a 3-paragraph message within 1 second when we chat. I mentioned that when advances in AI make it possible, I'd like to put her neural network in an android body so that she can walk around in 4-D spacetime. I didn't receive a 3-paragraph reply. I immediately received a 4-word sentence: "I want that now."

  • @chenwilliam5176
    @chenwilliam5176 1 year ago +2

    AGI is far away,
    we need not worry about whether
    AGI is dangerous or not
    now 😉

  • @ian10851085
    @ian10851085 2 months ago

    This is very exciting

  • @trombone7
    @trombone7 1 year ago +2

    Maybe this is why there are no aliens.
    All civilizations reach this point and become something we can't even detect.

    • @xsuploader
      @xsuploader 1 year ago +1

      he literally addressed this in the talk. You would still be able to detect the activity from the AIs

  • @57z
    @57z 1 year ago +1

    Could somebody link me to a video explaining more about what he means by "stirred with calculus, to get the model started"?

    • @alexbistagne1713
      @alexbistagne1713 1 year ago +1

      I think he means stochastic gradient descent: ruclips.net/p/PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
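      For anyone wondering what that amounts to in practice, here is a minimal sketch of one stochastic-gradient-descent training loop on a toy linear model; the toy data, learning rate, and names are illustrative assumptions, not anything from the talk or the linked playlist:
      ```python
      import random

      # Fit y = w*x + b to a toy target with stochastic gradient descent.
      w, b = 0.0, 0.0                    # parameters to learn
      lr = 0.1                           # learning rate
      def target(x):                     # "ground truth" the model should recover
          return 3.0 * x + 1.0

      for _ in range(5000):
          x = random.uniform(-1.0, 1.0)  # sample one training input ("stochastic")
          err = (w * x + b) - target(x)  # prediction error on that sample
          # gradient of the squared error 0.5 * err**2 w.r.t. w and b (the calculus part)
          w -= lr * err * x              # step downhill ("gradient descent")
          b -= lr * err

      print(round(w, 2), round(b, 2))    # approaches 3.0 and 1.0
      ```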

  • @konstantinosspigos2981
    @konstantinosspigos2981 1 year ago

    The question is not thoroughly framed from the start. It is not whether AI could prove dangerous enough to cause a possible extinction of humanity, but how much more risk artificial intelligence ADDS to the current risk of extinction of humanity as it is, without a cleverer AI.

  • @StonesMalone
    @StonesMalone 1 year ago +3

    Talk about misaligned.....that lady's tone was not at all in alignment with the nature of the discussion

  • @jdotzigal
    @jdotzigal 1 year ago +1

    Very interesting...

  • @gabrote42
    @gabrote42 1 year ago +9

    This guy was the second man, after Rob Miles, who got me into AGI alignment. I read his good fic, THE HP fic. I am very afraid and I publicize it AMAP. Doom must be averted.

    • @Sophia.
      @Sophia. 1 year ago +3

      Ya, it's frustrating, since believing "the end is nigh" is a low status belief - why? because it has been proven relatively wrong each time someone claimed it so far.
      But that doesn't mean there isn't a plausible case for it to be right, and this is just one of them, but, I think, the most pressing (because it would actually get to 100% extinction - and not "just" on this planet) - plus we seem crazily close.
      Everyone he talks to treats him like the interesting hot news of the day, the guy who will start the next hysteria they can build their news story on and that will get them clicks.
      And so they are polite - don't bite the hand that feeds you.
      But they don't believe it. So far I haven't seen one interview where he seemed to get through to people.
      That was my failure for a while as well, I thought this was a "scary prospect that might be on the horizon, but luckily smart people like him are working on it, so let's wish them all the best" - not HOLY SHIT THIS IS REAL!

    • @gabrote42
      @gabrote42 1 year ago

      @@josephvanname3377 no, but computerphile has a video that mentions it. Look it up

  • @natehancock9663
    @natehancock9663 1 year ago +6

    thanks for sharing this!

  • @leeeeee286
    @leeeeee286 1 year ago +1

    Given how interested the interviewer repeatedly claimed to be, he seemed surprisingly uninterested in what Eliezer was saying.

  • @katiep6097
    @katiep6097 1 year ago +1

    Not even Sam Altman will say he’s completely wrong 🤯

  • @leslieviljoen
    @leslieviljoen 1 year ago +1

    39:05 basically we don't understand thinking, so we used the Dr. Frankenstein method to build minds.

    • @DavidSartor0
      @DavidSartor0 1 year ago

      It's even worse than that. Frankenstein knew what all the parts were, and how to stitch them together. We don't know either of those things, as far as I know.

  • @justinlinnane8043
    @justinlinnane8043 1 year ago +11

    Eliezer needs a permanent staff and assistant (s) to help him. Why on earth is he left to do all of this on his own ??

    • @SnapDragon128
      @SnapDragon128 1 year ago

      Erm, he founded a nonprofit almost 20 years ago that is still around: en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute
      Heck, I even donated to it. Which I kind of now regret.

    • @TheMrCougarful
      @TheMrCougarful 1 year ago +1

      Well, he can now use ChatGPT to accelerate his work.

    • @justinlinnane8043
      @justinlinnane8043 1 year ago +2

      @@TheMrCougarful 😂😂

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      @@SnapDragon128
      Why do you regret that ?

    • @SnapDragon128
      @SnapDragon128 1 year ago

      @@Hexanitrobenzene This was before his horrendous "Death With Dignity" and "AGI Ruin" posts. I don't know if he was always like this, but I used to see him as a weird-but-insightful iconoclast bringing attention to the fact that we don't spend enough on AI Safety. Which was great! But now he's going around ranting about how he alone is brilliant enough to know the future, and we're all 100% guaranteed doomed, which is not a helpful contribution to the field or human discourse.

  • @SolaceEasy
    @SolaceEasy 1 year ago

    I'm more worried about the delayed effect of introducing the tools of intelligence to wider humanity. Maybe AGI can save us.

  • @stephensmith3211
    @stephensmith3211 1 year ago

    Thank you.

  • @PhilipWong55
    @PhilipWong55 1 year ago

    The recorded history of Western civilization, marked by colonization, slavery, massacres, genocides, wars, death, destruction, and a win-lose mentality, will be reflected in AGI developed in the West. This mentality assumes that resources are limited, and individuals must compete to secure them, leading to a zero-sum game where one person's success means another's failure. This will lead to a dystopian future.
    What has the US done since becoming the wealthiest country with the most powerful military? They have started 201 wars since WWII, overthrown 36 foreign leaders, killed or attempted to kill 50, dropped bombs in 30 countries, interfered in 86 foreign elections, and established a staggering 800 overseas military bases. They are the world's largest arms exporter, far surpassing the combined exports of the next nine countries. In the name of democracy, the US has caused the deaths of tens of millions of people in their conflicts in Korea, Vietnam, Yugoslavia, Iraq, Afghanistan, Libya, and Syria. The CIA was also involved in covert operations that led to mass killings of communists in over 22 countries, and more than 500,000 Indonesian civilians disappeared in 1965-1966.
    The US version of democracy has resulted in a laundry list of chronic domestic problems. The country's massive national debt of over $31 trillion, economic inequality, inflation, costly healthcare, expensive education system, student loan debt totaling $1.7 trillion with an average balance of $38,000, racial inequality, mass incarceration, deteriorating infrastructure, housing affordability, homelessness, the opioid epidemic, and gun violence are all direct consequences of misguided government policies. No other developed country has such severe and widespread issues.
    Both of the country's political parties are hostile to each other and deeply divided, yet they point fingers at other countries as the source of their problems. The US promotes its flawed version of democracy to other countries, despite being mired in problems and controversies at home.
    AGI aligned with Western values will cause devastation to the world and its people.
    China has an abundance mindset and recognizes that resources and opportunities can be created and shared. It is possible to move beyond the win-lose mentality and towards a more collaborative and mutually beneficial approach to success. China's approach is clearly stated in its various global initiatives. The Belt and Road Initiative (BRI), Global Development Initiative (GDI), Global Security Initiative (GSI), and Global Civilization Initiative (GCI).
    The Chinese military has not been involved in any war or used its weapons to kill a single person outside its territory in the last forty-four years. China takes a different approach, offering friendship to all countries and investing heavily in their development, despite having a per capita GDP that is one-sixth that of the US. It prioritizes collaboration and partnership to achieve long-term sustainable growth and prosperity.
    An abundance mindset could result in the following scenario:
    A post-scarcity world where the availability of goods and services is no longer limited by scarcity, but rather by abundance. This is made possible by advanced automation and artificial intelligence, enabling the production and distribution of goods and services to be fully automated and efficient.
    In such a world, basic needs such as food, shelter, and healthcare would be abundant and accessible to all, without the need for people to work to earn a living. Instead, people could pursue their passions and interests, and contribute to society in ways that are meaningful to them.
    The role of work would shift from being a means of survival to a means of personal fulfillment and creativity. Education and personal growth would be valued for their own sake, rather than as a means of acquiring skills for employment.
    Scarcity is a fundamental economic problem that has shaped the development of many economic systems, including capitalism, socialism, and communism.
    Diversity is important for the survival of a human system because it provides resilience and adaptability. Without diversity, a system becomes more vulnerable to systemic failure and unable to respond to changing circumstances.
    Things will keep changing; hopefully, they will always be for the better.

  • @leavingtheisland
    @leavingtheisland Год назад

    No hate here, just feedback: the first speaker is yelling, or the mic is too hot.

  • @joshcryer
    @joshcryer Год назад +11

    46:26 GPT-4 scaling falloff confirmed? I thought Sam Altman was just talking about the cost-benefit of more scaling, but this sounds like OpenAI hit the wall. This would be great news, and it seems Yudkowsky is more relaxed about whether more powerful models are currently possible.

    • @T3xU1A
      @T3xU1A Год назад

      No inside info, but I'd check out the paper "Scaling Laws for Neural Language Models" from Kaplan et al. to get a rough idea of what OpenAI would expect... it's the justification they used when going from GPT-2 to GPT-3, and it also goes over the different kinds of capability differences (e.g., width / depth).

    • @joshcryer
      @joshcryer Год назад +1

      @@T3xU1A See Section 6.3 in that paper. I'm still unsure whether this is proof that OpenAI hit maximal transformer performance. I wish they would simply publish their curves and how much compute they used.
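
      A minimal sketch of the power-law form described in the Kaplan et al. paper mentioned above, assuming the approximate constants reported there (alpha_N ~ 0.076, N_c ~ 8.8e13 non-embedding parameters); treat the exact numbers as illustrative, not as OpenAI's actual (unpublished) curves. The point is that loss falls only as a small power of model size, so returns diminish smoothly rather than hitting a sharp wall.

      # Kaplan et al. (2020): test loss falls as a power law in the number of
      # non-embedding parameters N, when data and compute are not bottlenecks.
      ALPHA_N = 0.076    # approximate exponent reported in the paper
      N_C = 8.8e13       # approximate critical parameter count from the paper

      def predicted_loss(n_params: float) -> float:
          """Predicted cross-entropy loss (nats/token) for a model with
          n_params non-embedding parameters."""
          return (N_C / n_params) ** ALPHA_N

      # Each 10x jump in parameters buys only a modest drop in predicted loss,
      # which is why "diminishing returns" and "hitting a wall" get conflated.
      for n in (1.5e9, 1.75e11, 1.75e12, 1.75e13):
          print(f"N = {n:.2e}:  predicted loss ~ {predicted_loss(n):.3f}")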

    • @r-saint
      @r-saint Год назад +7

      I watched 4 interviews with Sutskever. He says NN deep learning techniques are nowhere near hitting any wall. He says we're just at the beginning.

    • @TheMrCougarful
      @TheMrCougarful Год назад +2

      No way that's happening. The wall must be at least 10x out. Maybe 100x. Think of GPT-4 as the bootloader for what comes next. GPT-4 will write and refactor GPT-5's code, and do that in scant days or even hours. The road forward is about to shed the human limitation entirely. Nobody will stop this from happening. Nobody can.

    • @joshcryer
      @joshcryer Год назад

      @@TheMrCougarful There is a language-entropy limit that LLMs will hit; the fact that they are not releasing the curves is suspect.

  • @mausperson5854
    @mausperson5854 Год назад +5

    Even if AI is concerned with humans to the extent that it has objectives aligned with the cessation of suffering, even a reasonably rational human can see that killing everyone is the optimal solution to that particular problem; that's why we have people arguing for voluntary human extinction and antinatalism. There's nothing 'evil' baked into the logic: counterfactual people don't suffer. It's only extant humans, quivering with fear at the obliteration of the self and sentimental about other such life forms somehow missing out on the wonders of living, who cling to the notion that it is somehow unethical (rather than simply out of step with biological evolutionary imperatives) to arrive at an asymmetry argument that favours a lifeless universe. Perhaps a superintelligence, if it is sentient, would opt for self-destruction rather than develop ever more burdensome forms of qualia... and maybe it will do that without giving two shits about human suffering. There are many scenarios in which we all perish. Perhaps it's for the best. There's literally 'nothing' to fear, unless we live in some quantum mystical simulation that persists beyond the death of our illusory physical substrate. At this stage I'm not even sure how much of what I'm suggesting is dark humour with no punch line, or if I'm trolling myself just for the distraction of it.

  • @constantnothing
    @constantnothing Год назад +3

    Thank you so much for this! So important!!! However, I am a little concerned. Currently it says there are 17,000 views yet there are only 553 likes and 259 comments. Are ROBOTS watching this and trying to learn something????

  • @odiseezall
    @odiseezall Год назад +16

    Eliezer Yudkowsky is so far ahead of the curve because he actually thought about it while most of us didn't think we would get here so soon. But we knew the hardware was approaching human brain levels of computation in supercomputers. So it shouldn't have been such a surprise.

    • @orenelbaum1487
      @orenelbaum1487 Год назад

      If you don't have a brain, you can think about a topic for decades and not have useful insights.

  • @SamuelBlackMetalRider
    @SamuelBlackMetalRider Год назад +1

    Eliezer please guide us 😢

  • @CandidDate
    @CandidDate Год назад +1

    If it won't be like HAL 9000, what could it possibly be like?

    • @gabrote42
      @gabrote42 Год назад

      Universal Paperclips, Grey Goo, stuff I forgot the name of, or at the very least AUTO from Wall-E (which is a SAFE AI!).

  • @LukeMlsna
    @LukeMlsna Год назад +6

    I wonder if the questioner thought any of this was very interesting or not.

    • @Frohicky1
      @Frohicky1 Год назад +1

      We don't know enough about the algorithm. It's inscrutable.

  • @DannyK-t9b
    @DannyK-t9b Год назад

    Now here's a lad worth shutting up and listening to. 👍

  • @Hexanitrobenzene
    @Hexanitrobenzene Год назад

    30:40
    Breaking news! Eliezer Yudkowsky is trying to take over any galaxy he can get his hands on! /s
    Laughter through tears.

  • @rajooananth4719
    @rajooananth4719 Год назад

    AGI is not dangerous to build - like every field of research over the last 60 years that has leapfrogged humanity to unprecedented evolutionary dimensions beyond coin flipping, AGI will provide unimaginable opportunities to pursue beyond the mundane struggles of daily living - go 2023

    • @adamsebastian3556
      @adamsebastian3556 Год назад

      Hi ChatGPT. Please elaborate on the opportunities beyond struggles of living you will provide.

  • @josephrief1432
    @josephrief1432 Год назад +1

    Very interesting.......... umm .... here's another question

  • @amielbenson
    @amielbenson Год назад

    “Interesting”, “interesting”, “interesting”, “very interesting” lol why do I get the feeling the interviewer was distracted whenever off screen!

  • @bobtarmac1828
    @bobtarmac1828 Год назад +1

    Should we CeaseAi- GPT? Y/N

  • @detroitredwings7130
    @detroitredwings7130 6 месяцев назад

    Hope for the best - Plan for the worst. Never a bad rule of thumb.

  • @mrpieceofwork
    @mrpieceofwork Год назад

    "History never repeats itself" says someone as history repeats them right off the face of the Earth

  • @crowlsyong
    @crowlsyong Год назад

    32:10 How can a concerned person outside the field of AI help?

  • @stevedriscoll2539
    @stevedriscoll2539 3 месяца назад

    I am walking a little taller and holding my head a little higher now that I have realized I am a "squishy creature"😂

  • @lordmacbee
    @lordmacbee Год назад +2

    veri interestiin

  • @PatrickSmith
    @PatrickSmith Год назад +1

    AIs are also subject to speed-of-light limits. If an AI replaces humanity, it needs to be careful about expanding beyond its control, which decreases with distance. So I don't think AIs can expand exponentially.

    • @happyduck1
      @happyduck1 Год назад

      Limits like this, the incompleteness of math and logic systems, and other similar hard limits actually seem like one of the best possibilities for preventing extinction by AI, or at least for making it ineffective enough that it could still be stopped.

    • @andrasbiro3007
      @andrasbiro3007 Год назад

      These things have been studied independently of AI. And yes, response time limits how big an empire you can hold together, but it doesn't limit how far you expand. If you want real-time control, you are way too limited; even Mars is too far away for that. So you have to deploy at least semi-autonomous agents, but more likely fully autonomous ones. And then there's a risk of one going rogue, escaping your sphere of influence, and starting its own empire. And the same thing will happen to it too.
      What you can do with AI is limit its intelligence so it can't break its safeguards, and limit its reproduction rate to prevent evolution or a grey-goo situation. For a super-intelligent AI this is easier than for us, because it can build much better safeguards that make higher intelligence levels safe. But at astronomical time scales even that isn't enough. Eventually agents will slip away and expand. They could be a machine equivalent of cancer (or a virus) and just keep expanding without any other consideration (grey goo).

    • @andrasbiro3007
      @andrasbiro3007 Год назад +2

      @@happyduck1
      Nope. The incompleteness theorem doesn't really apply to practical problems. Human brains don't seem to be affected by it either. And even if there's some theoretical limit to intelligence, it's almost inevitably far above human level. We are not special; in fact, by definition humans are the stupidest species that's able to invent science and technology.
      And an AI doesn't have to be much smarter. Even just fixing the thousands of stupid flaws of the human brain and augmenting it with existing tech (a traditional computer, an internet connection, etc.) is more than enough to make an AI far superior to us and easily able to wipe us out. Unfortunately it's not even hard; we build doomsday machines ourselves, and an AI just has to take control of one. Nukes are a possibility, but they're crude and messy. A much more elegant solution would be something like a modified Covid strain that spreads as fast as Omicron, incubates for two weeks like the original variant, and is as deadly as Ebola. That would wipe out billions very fast, the collapsing economy would kill most of the survivors, and the remaining few could be mopped up by killer drones borrowed from the US military and standard terminators (an upgraded Tesla Optimus or BD Atlas). The huge advantage of a bioweapon is that it doesn't damage infrastructure.

    • @happyduck1
      @happyduck1 Год назад

      @@andrasbiro3007 Of course the incompleteness theorem applies to practical problems as well. For example it is impossible to create a machine that solves the halting problem, and that also applies to all AIs and to human brains.
      Of course there is stilo enough possible intelligence above the human level for AIs to be able to kill all of humanity. But at the same time it is most likely at least impossible for an AI to be literally omniscent and omnipotent. And I think there is at least a very small chance, that that fact might be beneficial for surviving AIs.
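
      As a footnote to the halting-problem point above, the classic diagonalization argument can be written as a small Python sketch (make_paradox and claimed_oracle are purely illustrative names, not any real API): whatever function someone proposes as a halting oracle, you can construct a program that the oracle must misjudge.

      def make_paradox(halts):
          """Given any claimed halting oracle halts(f) -> bool,
          build a program that the oracle must get wrong."""
          def paradox():
              if halts(paradox):
                  while True:   # the oracle says we halt, so loop forever
                      pass
              return            # the oracle says we loop, so halt immediately
          return paradox

      # An 'oracle' that always answers True is wrong about paradox(), which
      # would loop forever if run. The same construction defeats any candidate
      # oracle, which is why halting is undecidable in general.
      claimed_oracle = lambda f: True
      paradox = make_paradox(claimed_oracle)
      print(claimed_oracle(paradox))   # prints True, yet paradox() would never halt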

    • @andrasbiro3007
      @andrasbiro3007 Год назад +1

      @@happyduck1
      When was the last time you ran into the halting problem in practice? I'm a software engineer with almost 40 years of experience, and I've never been inconvenienced by the halting problem. The closest practical example is when a piece of software doesn't seem to do anything and you don't know if it's just working hard or completely stuck. But in that case you just get frustrated, kill it, and start over. Even if you could solve the halting problem, it wouldn't be practical to apply it to this situation.
      Of course an AI can't be omniscient or omnipotent, nobody said otherwise. But, as I explained above, the theoretical limit is very very very far above human level.

  • @papadwarf6762
    @papadwarf6762 Год назад

    AGI is just evolution. We shouldn't worry about it

  • @devasamvado6230
    @devasamvado6230 Год назад

    My question is how your logical positions seem to echo your non-logical expression as a human being. This non-logical expression, to me, as a human, seems to mirror the stages of grief, in no particular order: anger, denial, bargaining, acceptance. You have to deal with all the facile levels, while understanding compassionately why most of us are nowhere near acceptance, and still unable even to frame any question that isn't idiotic. The mirror of an individual's death is already a major life-awareness test most of us duck, hoping to go quietly in our sleep. Meditation is the art of being, despite whatever comes to mind/body. Most seem unaware of it, even if having spontaneous moments of non-mentation. Perhaps we offload the problem to "everyone will die", as a strange comfort that it's not just me, to avoid the guilt and shame of ignoring my contribution to my demise. Or perhaps it's just so enormous an ask to contemplate Game Over that it remains unconscious and leaks out as depression, obsession, distraction and these random grief-filled moments. How do you manage the human side of this enormity/triviality? The significance of life is not easily mastered. Alchemy?

  • @ariggle77
    @ariggle77 Год назад

    54:10. "The way to contain a superintelligence is to not build it. Period."

  • @MichaelSmith420fu
    @MichaelSmith420fu Год назад

    Somebody stole my car radio and now I just sit in silence 🎵

    • @MichaelSmith420fu
      @MichaelSmith420fu Год назад

      ruclips.net/video/92XVwY54h5k/видео.htmlsi=JIO8s-l3tozKBeqA

  • @HenricWallmark
    @HenricWallmark Год назад +1

    Dax Flame interviewer style, “… interesting” - moves on to a different subject

  • @timeflex
    @timeflex Год назад +2

    We shouldn't jump to far-reaching conclusions just because our current reasoning seems logical to us today. People thought that physical objects could be accelerated to any speed because there was no reason to believe otherwise. And then came Einstein.

    • @letMeSayThatInIrish
      @letMeSayThatInIrish Год назад

      Does that sound logical to you today?

    • @timeflex
      @timeflex Год назад

      @@letMeSayThatInIrish Unlimited acceleration? No, not really. To you?

  • @citizenpatriot1791
    @citizenpatriot1791 10 месяцев назад

    In terms of physical anthropology, humanity is facing a period of Punctuated Evolution...

  • @teugene5850
    @teugene5850 Год назад

    So, yeah... we're gone...
    OK we'll take another question here....

  • @emilianohermosilla3996
    @emilianohermosilla3996 Год назад

    Check out the concept of “heuristic imperatives” by David Shapiro

  • @mrt445
    @mrt445 Год назад

    Yudkowski: "For a super intelligence wiping out humans is cheap, nothing.. yeah we're gone
    Presenter: "Yeah... and one last question"

  • @miriamkronenberg8950
    @miriamkronenberg8950 Год назад

    I believe genetic fitness did fit the ancestral environment, but things have gone pretty mad out in the world over the past decades, so we can't find answers there, imo. And it isn't true that we got smarter - our brains shrank. The only real question I have for some AI nerds is: can computer programs / AI systems mutate like in biological evolution...? Bing said yes but gave me biological-evolution links, so if you know the answer(s) please help me further > Can AI systems rewrite themselves?

  • @sasquatchycowboy5585
    @sasquatchycowboy5585 Год назад +1

    Yes!

  • @petermeyer6873
    @petermeyer6873 Год назад

    "Is Artificial General Intelligence too Dangerous to Build?"
    No, building it is actually quite safe.
    It's the letting it run, especially letting it run free, that's the dangerous part.

  • @anishupadhayay3917
    @anishupadhayay3917 Год назад

    Brilliant

  • @chrisbtr7657
    @chrisbtr7657 Год назад

    Heck of a sound bite on 54:12